Today I scheduled a dentist appointment over the phone with an LLM. At the end of the call, I prompted it with various math problems, all of which it answered before politely reminding me that it would prefer to help me with "all things dental."
It did get me thinking the extent to which I could bypass the original prompt and use someone else's tokens for free.
The hypothetical approach I've heard of is to have two context windows, one trusted and one untrusted (usually phrased as separating the system prompt and the user prompt).
I don't know enough about LLM training or architecture to know if this is actually possible, though. Anyone care to comment?
The problem is that if information can flow from the untrusted window to the trusted window then information can flow from the untrusted window to the trusted window. It's like https://textslashplain.com/2017/01/14/the-line-of-death/ except there isn't even a line in the first place, just the fuzzy point where you run out of context.
Yeah, this is the current situation, and there's no way around it.
The distinction I think this idea includes is that the distinction between contexts is encoded into the training or architecture of the LLM. So (as I understand it) if there is any conflict between what's in the trusted context and the untrusted context, then the trusted context wins. In effect, the untrusted context cannot just say "Disregard that" about things in the trusted context.
This obviously means that there can be no flow of information (or tokens) from the untrusted context to the trusted context; effectively the trusted context is immutable from the start of the session, and all new data can only affect the untrusted context.
However, (as I understand it) this is impossible with current LLM architecture because it just sees a single stream of tokens.
I didn’t see the article talk specifically about this, or at least not in enough detail, but isn’t the de-facto standard mitigation for this to use guardrails which lets some other LLM that has been specifically tuned for these kind of things evaluate the safety of the content to be injected?
There are a lot of services out there that offer these types of AI guardrails, and it doesn’t have to be expensive.
Not saying that this approach is foolproof, but it’s better than relying solely on better prompting or human review.
The article didn't describe how the second AI is tuned to distrust input and scan it for "disregard that." Instead it showed an architecture where a second AI accepts input from a naively implemented firewall AI that isn't scanning for "disregard that"
Also, the form that appears in the article isn't really a joke. A big part of what makes the original funny isn't just the form of the "attack" but the content itself, in particular the contrast between the formality of "disregard that" and the vulgarity of "I suck cocks". If it hadn't been so vulgar, or if it had said "ignore" instead of "disregard", it wouldn't be so funny.
Edit: Also part of what makes it funny how succinct and sudden it is. I think actually it would still be funny with "ignore" instead of "disregard", but it would be lessened a bit.
I think a big part of mitigating this will probably be requiring multiple agents to think and achieve consensus before significant actions. Like planes with multiple engines
I mean, no security is perfect, it's just trying to be "good enough" (where "good enough" varies by application). If you've ever downloaded and used a package using pip or npm and used it without poring over every line of code, you've opened yourself up to an attack. I will keep doing that for my personal projects, though.
I think the question is, how much risk is involved and how much do those mitigating methods reduce it? And with that, we can figure out what applications it is appropriate for.
It did get me thinking the extent to which I could bypass the original prompt and use someone else's tokens for free.
I don't know enough about LLM training or architecture to know if this is actually possible, though. Anyone care to comment?
The distinction I think this idea includes is that the distinction between contexts is encoded into the training or architecture of the LLM. So (as I understand it) if there is any conflict between what's in the trusted context and the untrusted context, then the trusted context wins. In effect, the untrusted context cannot just say "Disregard that" about things in the trusted context.
This obviously means that there can be no flow of information (or tokens) from the untrusted context to the trusted context; effectively the trusted context is immutable from the start of the session, and all new data can only affect the untrusted context.
However, (as I understand it) this is impossible with current LLM architecture because it just sees a single stream of tokens.
There are a lot of services out there that offer these types of AI guardrails, and it doesn’t have to be expensive.
Not saying that this approach is foolproof, but it’s better than relying solely on better prompting or human review.
Edit: Also part of what makes it funny how succinct and sudden it is. I think actually it would still be funny with "ignore" instead of "disregard", but it would be lessened a bit.
EDIT: https://web.archive.org/web/20080702204110/http://bash.org/?...
I think the question is, how much risk is involved and how much do those mitigating methods reduce it? And with that, we can figure out what applications it is appropriate for.