NCSC Warns of Rising Security Threats from Chatbot ‘Prompt Injection’ Attacks

The UK’s National Cyber Security Centre (NCSC) has raised alarms about the heightened risk of chatbots being exploited by hackers through “prompt injection” attacks. Such attacks involve individuals crafting specific inputs to manipulate the responses of the underlying language models of chatbots.

Chatbots, integral to many online platforms like banking and shopping, rely on large language models (LLMs) such as OpenAI’s ChatGPT and Google’s AI chatbot Bard. These LLMs generate human-like responses based on extensive training datasets.

The NCSC points out the growing danger of malicious prompt injections, especially since chatbots often interact with third-party applications, potentially sharing data. The centre advises treating LLMs with caution, similar to beta products or code libraries.

Prompt injections can lead to unintended model actions, potentially resulting in the generation of inappropriate content, unauthorized data access, or breaches. Oseloka Obiora, CTO at RiverSafe, warned of the potential for increased fraud, illegal transactions, and data breaches due to chatbot vulnerabilities.

Recent incidents, like the exposure of Bing Chat’s initial prompt by a Stanford University student and vulnerabilities in ChatGPT discovered by security researcher Johann Rehberger, underscore these risks.

To counteract these threats, the NCSC recommends a comprehensive system design that considers machine learning risks. Implementing a rules-based system alongside the machine learning model can help prevent malicious prompt injections. The centre stresses the importance of understanding attacker techniques and prioritising security during the design phase.

Jake Moore, Global Cybersecurity Advisor at ESET, highlighted the need for secure application development, understanding potential machine learning weaknesses, and prioritising user data protection.

With chatbots becoming increasingly central to online interactions, the NCSC’s alert underscores the urgency to bolster defenses against emerging cybersecurity challenges.

(Image courtesy of Google DeepMind on Unsplash)

Related: OpenAI introduces ChatGPT Enterprise for enhanced business operations.

https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/

(Image courtesy of Google DeepMind on Unsplash)

Related: OpenAI introduces ChatGPT Enterprise for enhanced business operation

Write a Comment

Your email address will not be published. Required fields are marked *