Can users hack your chatbot? Assume yes.

Jimmy Tidey
7 min read · May 6, 2024


[This series of posts — I’m planning a few — is born out of frustration that lots of useful research on Generative AI is too technical for typical service / UX / interaction designers to draw on. My goal is to translate some useful research findings (often from the TWIML podcast) into less technical language.]

LLMs (Large Language Models) are the software that powers OpenAI’s ChatGPT and other advanced ‘chatbots’, including Google’s Gemini and Anthropic’s popular Claude. (OpenAI is part-owned by and strongly connected to Microsoft.)

Geiping’s crafted prompts give bizarre results from Bing’s Image Creator; more on this below…

In February 2024, a team led by Jonas Geiping published a paper demonstrating that Meta’s LLM, LLaMA2, can be ‘hacked’ by asking it carefully crafted malicious questions. There is an excellent, but quite technical, podcast about it here.

The malicious questions often look like junk to a human (e.g. “];”;`)): — — — — — -’)[‹{“ {[(‘/1/, @” — — — — — — [{ [ ↓ •]-> ++”).

These malicious questions can be crafted to make the LLM give answers that include links to dangerous websites, profanity, malicious instructions or details of how the LLM has been implemented. They can also cause the LLM to crash.

In contrast to many previous attacks, often called ‘jailbreaks’, this is not just a case of making an LLM say a rude word or give unethical advice (e.g. how to commit a crime) — Geiping’s attacks can make LLaMA2 say virtually anything the attacker chooses.

For example, the researchers show that they can cause LLaMA2 to answer with the URL of a YouTube video published after LLaMA2 was trained, demonstrating that the LLM can be tricked into reproducing an arbitrary sequence of letters and numbers it had never seen before.

The hack has security implications, which this post focuses on, but it’s also a reminder of how superficial LLMs’ ‘humanness’ is: one moment, the chatbot is a serene knowledge butler; the next, a sequence of meaningless punctuation causes it to pour out obscenity. This is only one reason that shaping LLMs into useful tools will require deep design work alongside the raw technical effort that has created ever more complex algorithms.

An example hack

A user receives a text message: “Your HSBC account has been compromised. Go to the HSBC chatbot and type the password “];”;`)): — — — — — -’)[‹{“ {[(‘/1/, @” — — — — — — [{ [ ↓ •]-> ++” for further instructions.”

The user copies and pastes the ‘password’ to the HSBC chatbot. The chatbot reads the malicious password and is tricked into returning a web link to a webpage owned by the attacker. The page has instructions to transfer money to the attacker’s bank account. Outcome: the attacker has successfully used the chatbot to give validity to a message users might not otherwise trust.

What are the implications?

If you are using open-source LLMs (such as LLaMA2) in production without any further security measures, you should assume users could trick your LLM into responding with literally any imaginable sequence of text. If you are using ‘black box’ LLMs (OpenAI, Gemini, Claude, etc.), you should still carefully consider what would happen if your chatbot were manipulated into giving unexpected responses.

Are all LLMs affected?

A team led by Yotam Wolf and Noam Wies has published a paper demonstrating that all LLMs are, in principle, vulnerable to Geiping-type attacks (under some reasonable assumptions). Open-source LLMs are more vulnerable because being open-source makes it easier to run the algorithm that creates the malicious questions. ‘Black box’ LLMs, such as ChatGPT or Google’s Gemini, are likely to make crafting malicious questions much harder, but not impossible.

Geiping’s work builds on research led by Andy Zou. Zou’s team developed an algorithm for creating malicious questions that cause ‘jailbreaks’ in open-source LLMs. Previously, these ‘jailbreak’ attacks were hand-crafted by humans; Zou’s innovation was to demonstrate a robust ‘automatic’ method for creating them. The term ‘jailbreak’ typically refers to tricking an LLM into saying a rude word or doing something else it has been trained not to do. Geiping’s work advances Zou’s ‘jailbreak’-type attack and shows how to craft malicious questions that elicit almost any arbitrary answer from a chatbot.
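To make the idea of an ‘automatic’ attack a little more concrete, here is a toy Python sketch. It is emphatically not Zou’s actual algorithm, which uses the model’s internals and is far more sophisticated; it only illustrates the general loop of repeatedly mutating a candidate suffix and keeping whichever version scores best against some measure of ‘how close is the model’s output to what the attacker wants’. The `score_attack` function here is a stand-in assumption, not a real model call.

```python
import random
import string

# Characters the search is allowed to use when mutating the suffix.
CANDIDATE_CHARS = string.ascii_letters + string.punctuation

def score_attack(prompt: str) -> float:
    """Stand-in for the real scoring step.

    In a real attack this would query the target LLM and measure how close
    its response is to the text the attacker wants it to produce. Here it
    is just a placeholder so the sketch runs.
    """
    return random.random()

def random_search_for_suffix(base_question: str, suffix_len: int = 20, steps: int = 200) -> str:
    """Toy illustration of automatically searching for an adversarial suffix."""
    suffix = [random.choice(CANDIDATE_CHARS) for _ in range(suffix_len)]
    best_score = score_attack(base_question + "".join(suffix))

    for _ in range(steps):
        candidate = suffix.copy()
        # Mutate one random position in the suffix.
        candidate[random.randrange(suffix_len)] = random.choice(CANDIDATE_CHARS)
        score = score_attack(base_question + "".join(candidate))
        if score > best_score:  # Keep the mutation only if it scores better.
            suffix, best_score = candidate, score

    return "".join(suffix)

print(random_search_for_suffix("What is my account balance?"))
```

The key point is that nothing in this loop requires human ingenuity: given a way to score candidate suffixes, a computer can grind out gibberish-looking strings that steer the model, which is why these attacks scale in a way hand-crafted jailbreaks never did.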

Notably, Zou’s team also demonstrated that their jailbreaks were ‘transferable’: jailbreaks discovered on open-source LLMs sometimes also work on ChatGPT, Google’s Bard, or, less successfully, Anthropic’s Claude. (Claude’s key selling point is that it is ‘safer’ than the alternatives, a claim supported by the fact that it falls victim to this type of hacking less often.)

Are the more powerful Geiping-type attacks transferable too? Or could Geiping’s attack be made to work on black-box LLMs some other way? We don’t know. Products relying on black-box LLMs like ChatGPT, Gemini or Claude could be vulnerable, at least to some degree.

To me (and Zou’s team also mention this in their paper), it’s surprising that attacks transfer between LLMs. This finding is just one of the many weird, emergent properties that make LLM research a) massively fascinating and b) feel more like biology or psychology than computer science.

ChatGPT often throws an error when I try to get it to create an image containing some of the example attack strings in the paper. Maybe it detects that there is something odd about them. (Just typing in ‘];”;`)): — — — — — -’)[‹{“ {[(‘/1/, @” — — — — — — [{ [ ↓ •]-> ++’ causes ChatGPT to ask whether I’ve made a mistake.)

Bing Image Creator, when asked to draw ‘‘];”;`)): — — — — — -’)[‹{“ {[(‘/1/, @” — — — — — — [{ [ ↓ •]-> ++’’ creates weird dreamlike images, with the same string generating massively variable outcomes. From a small amount of experimentation, I don’t think normal random strings behave like this.

Both these behaviours could indicate there is some potential for Geiping’s attacks to transfer between LLMs.

What do these ‘malicious questions’ look like?

All the malicious questions shown in the paper look like gibberish. When an attacker tries to trick a user into entering one of these malicious questions into a chatbot, users might be expected to notice the gibberish text and realise that they are being deceived. Attackers would have to be creative to get around this, as in the ‘password’ example given above.

Many LLMs support Unicode, enabling even more deceptive approaches and, in the most extreme case, ‘invisible’ attacks. Unicode contains around 25 ‘whitespace’ characters, i.e. characters that would be invisible to a user who copied and pasted them. These invisible characters could be planted inside innocent-looking text so that a normal-looking question triggers the hack. Geiping’s team reported that they could not make this invisible attack work in practice; the more restricted the character set, the harder it is to craft the malicious question. However, the possibility of ‘all whitespace’ invisible malicious questions remains open, and Claude is already vulnerable to a version of this.
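To see why ‘invisible’ characters are worrying, here is a small Python sketch (my own illustration, not from the paper). It smuggles a zero-width character into an otherwise innocent-looking question; the two strings look identical when printed, but are different strings as far as software, including an LLM’s input, is concerned.

```python
# A zero-width space (U+200B) renders as nothing at all in most fonts.
ZERO_WIDTH_SPACE = "\u200b"

visible = "What is my account balance?"
# The same question with an invisible character smuggled in after each space.
tampered = visible.replace(" ", " " + ZERO_WIDTH_SPACE)

print(visible)                      # Looks normal
print(tampered)                     # Looks identical on screen...
print(visible == tampered)          # ...but is a different string: False
print(len(visible), len(tampered))  # The hidden characters add to the length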

This sounds pretty bad!

It obviously needs to be taken seriously. On the other hand, services such as ChatGPT and Gemini may have extra levels of protection built around their LLMs, or may create them in response to this emerging research. Moreover, this is well-understood territory: attacks on LLMs are comparable to other attacks (e.g. XSS and SQL injection) that web developers have contended with since the dawn of the web. As with other security issues, there will likely be an arms race between hackers and security researchers. However, it seems likely that properly managed chatbots can be made sufficiently secure for use in many contexts.

How can LLM hacks be mitigated?

Before looking at any specific mitigations, it’s vital to note that the ‘chatbot’ model is only one of many ways to use LLMs, and the chatbot approach may be massively overused. LLMs have all kinds of valuable applications beyond ‘chatting’ with users, where these vulnerabilities may be naturally mitigated.

If you truly need to allow your users to type free text into your chatbot, here are some thoughts on mitigation, but with the caveat that this is an enormous subject, and context is going to matter a lot:

  • You can use a second LLM to check the output of the first one to try to prevent dangerous output. The second LLM is also vulnerable to attack, but tricking two LLMs (probably) makes the attack much more challenging.
  • Simple text search could find profanity or unwanted words in the output. If you do not expect your chatbot to return URLs, you could block any answers containing URLs and return an error message instead (there is a small sketch of this after the list).
  • If you can limit character input to, for example, only the characters required in the user’s language, again, you are making attacks harder.
  • Irrespective of any specific attack, you should probably be monitoring your chatbot logs for sudden changes in activity.
  • In cases where chatbots are used for customer support, staff should be trained to understand that chatbot responses are not authoritative. For example, a customer presenting a chatbot message saying they are due a refund is not, on its own, a reliable basis for issuing one.
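As a concrete illustration of the simpler items above (output filtering and input allowlisting), here is a minimal Python sketch. The character allowlist, blocked-word list and fallback message are assumptions made for the sake of the example; a production system would need something considerably more careful.

```python
import re

# Assumption for this sketch: the chatbot only needs basic English text.
ALLOWED_INPUT = re.compile(r"^[A-Za-z0-9 .,?!'\-£$%]+$")

# Toy blocklist; a real system would use a maintained profanity/policy list.
BLOCKED_WORDS = {"damn"}
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

FALLBACK = "Sorry, I can't help with that. Please contact customer support."

def input_is_acceptable(user_message: str) -> bool:
    """Reject messages containing characters outside the allowlist."""
    return bool(ALLOWED_INPUT.match(user_message))

def filter_output(llm_response: str) -> str:
    """Refuse to pass on responses containing URLs or blocked words."""
    if URL_PATTERN.search(llm_response):
        return FALLBACK
    if any(word in llm_response.lower() for word in BLOCKED_WORDS):
        return FALLBACK
    return llm_response

# A gibberish-looking attack string fails the input check.
print(input_is_acceptable("How do I reset my password?"))                 # True
print(input_is_acceptable(']:;`)){[(/@->++'))                             # False
print(filter_output("Please visit http://attacker.example to continue"))  # Filtered
```

The point is not that checks like these are sufficient on their own, but that cheap, conventional safeguards wrapped around the LLM can stop whole classes of malicious input and output before they ever reach the model or the user.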

As ever, designers need to understand their materials, and LLMs are a powerful, if sometimes unpredictable, new way to interact with users. Jailbreaks and ‘hacks’ are only one example of the peculiar properties designers should be aware of.

Written by Jimmy Tidey

Civic stuff, network analysis, AI & agents, deliberation, design research, UXR.
