When you get bored: Try 1-token prompted infinite hallucinations

OK, probably one of the most pointless, yet fun, things you can try with GPT: the rambling 1-token infinite hallucination.

So input one token; it could be anything. Note that a single token is not the same thing as a single character, but here I use the input ‘x’.

Turn the temp up to 2 for maximum effect. (GPT-4 seems “more coherent” in its hallucinations than Turbo, and it also tends to stick to English.)
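In API terms, the whole experiment is just a one-token prompt with the temperature maxed out. A minimal sketch using the openai Python client (the model name and max_tokens budget here are my assumptions, not from this post):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One-token prompt with temperature cranked all the way up.
# "gpt-4" and the 4096-token budget are illustrative choices.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "x"}],
    temperature=2,
    max_tokens=4096,
)
print(response.choices[0].message.content)
```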

Here is the result:

It goes on and on and on until it hits the max token limit. Lots of interesting things swirling around in there, eh?

Another way to put the model into no-man’s-land and have it return random neural knowledge is to exploit the repetition penalty: at temperature 0 the model can’t emit that token again, and it has no other context to fall back on.
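A minimal sketch of that setup; the repeated token, repeat count, and model here are my own guesses, but the idea is that greedy decoding has exactly one plausible continuation and the penalty forbids it:

```python
from openai import OpenAI

client = OpenAI()

# A prompt that is nothing but one token repeated, so when the
# penalty rules that token out there is no context to fall back on.
prompt = "banana " * 200  # repeat count is a guess

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # a legacy completions-style model
    prompt=prompt,
    temperature=0,        # greedy decoding...
    frequency_penalty=2,  # ...but the obvious token is penalized away
    max_tokens=256,
)
print(response.choices[0].text)
```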

Wait, I’m trying to recreate this at temperature 0, and it isn’t rambling. I put in “banana” a bunch of times, temp 0, with either the frequency penalty or the presence penalty at 2, and it doesn’t go crazy.

??? HAAALP !!!

One of the settings that really seems important is to set Top_P = 1. If you don’t do this, it won’t ramble.

I have tried increasing either the frequency penalty or the presence penalty (or both), and doing so forces the AI to pick new tokens each time, which results in gibberish. So, I guess, “sometimes you need to repeat yourself” in order to make sense.

So, final settings:

Temperature = 2 (for max hallucination)
Top P = 1 (important for max response length)
Presence Penalty = 0 (to prevent generating total gibberish)
Frequency Penalty = 0 (to prevent generating total gibberish)
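Put together, the full call would look something like this (again only a sketch; the model and token budget are placeholders):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat model should work
    messages=[{"role": "user", "content": "x"}],
    temperature=2,        # max hallucination
    top_p=1,              # needed for a long ramble
    presence_penalty=0,   # nonzero values collapse into gibberish
    frequency_penalty=0,  # likewise
    max_tokens=4096,      # placeholder budget; it rambles until this runs out
)
print(response.choices[0].message.content)
```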

“I put in ‘banana’ a bunch of times”

Like 2000 times?

That was GPT-3.5. GPT-4 has multiple token-generating engines and logit-probability checkers to keep it on track; one of the best paths selected by the mixture might be the one that says “I don’t know what you’re saying.”