Randomized replies with ChatGPT 4 using Code Interpreter

Hi, I’d like to share an interesting prompting strategy I’ve come up with to get more randomized replies.

You could ask, why would someone want a randomized reply? An AI should be accurate, shouldn’t it? Yes, that’s right, but sometimes you want more random behavior, e.g., in storytelling. Let me demonstrate this with jokes.

When you tell ChatGPT 4 to tell you a joke, every time you start a new conversation and ask it to write a joke, it will respond with nearly the same joke. Try regenerating this “tell me a joke” instruction. It gave me the same joke 10 out of 10 times I regenerated it. So, when you make a Joker GPT and give it some conversation starters (like “Tell me a joke…”), it’ll reply (nearly) every time with the same joke.

Note that this is not bad behavior or an error; it’s about the inner workings of a Large Language Model (LLM). LLMs use previous context to guess (and write) the next word. So, for the same prompt, you should get the same answers, or for our example, the same joke (assuming the temperature parameter is low enough or zero).

How can we use this knowledge to get a different joke every time? LLMs (like humans) are bad at randomizing, but Python is not, and ChatGPT can use Python. So, when we start the Code Interpreter and generate some random characters, the context before the joke changes, and the generated text should be different. I tried this approach, but the result was the same: the same joke each time I regenerated it. I think this is because the generated random string has nothing to do with the joke, so when the model writes the joke, the random string isn’t treated as relevant context.

I corrected this behavior with a little trick. Instead of just generating a random string, I generate a key-value structure where only the values are random; the keys are constant and picked to match the task you want to generate. Then I tell the GPT that the values were scrambled and ask it to reconstruct them. This way, it converts the random strings into words, and we can instruct it to use the reconstructed words to write the joke. Now you get a different joke every time you regenerate. (Note: You can also tweak it so it doesn’t show or mention the reconstructed card.)
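A minimal sketch of what generating such a scrambled card could look like in the Code Interpreter (the key names here are just illustrative assumptions; pick keys that fit your own task):

```python
import random
import string

# Keys are constant and chosen to match the task (illustrative names)
keys = ["topic", "setting", "punchline_style"]

def scrambled_card(keys, length=8):
    """Build a card whose values are random letter strings ("scrambled" words)."""
    return {k: "".join(random.choices(string.ascii_lowercase, k=length)) for k in keys}

card = scrambled_card(keys)
print(card)
# The GPT is then told the values were scrambled and asked to reconstruct
# them into real words, which it must use when writing the joke.
```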

I hope this helps you, GPT creators, a little.


That is a very cool idea, thanks for sharing!

Thanks for the inspiration.
I came up with the following ‘solution’:

Prompt:
Tell me a joke, please. (Random word)

The model will then proceed to tell a joke about the random word.

Include a long list of words and have Code Interpreter pick a random word from the list.


This is a good idea; I thought about it too. The only thing I didn’t like was the long list of words, which is why I went down the route of generating random letters and reconstructing words. But yes, this works too.


The principle is the same when you code. Random numbers, for example, will always start with the same sequence, but there are functions to set a seed so the result is random; this seed is usually set from the server’s internal clock, counted in milliseconds since midnight.
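In Python terms (which is what the Code Interpreter runs), that seeding looks roughly like this; `time.time_ns` is used here instead of milliseconds, but the idea is the same:

```python
import random
import time

# Seed the generator from the clock, so each run starts
# at a different point in the pseudo-random sequence.
random.seed(time.time_ns())

n = random.randint(1, 5000)
print(n)
```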

Now, in the case you describe, the most reasonable thing is to create a knowledge base of jokes, for example a PDF with the jokes numbered, and to generate a random value so that it chooses among, say, five thousand jokes in that base.

This would involve using a GET action against an external server, which returns the random number.
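As a sketch, the payload such a GET action might return could be as simple as this (the endpoint and field names are made up for illustration):

```python
import json
import random

def random_joke_number(total_jokes=5000):
    """Pick which numbered joke the GPT should look up in its knowledge file."""
    return {"number": random.randint(1, total_jokes)}

# A hypothetical /random-joke-number endpoint would return JSON like:
payload = json.dumps(random_joke_number())
print(payload)
```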

The other way is to be very specific in natural language about what it should do and how it should do it; when we don’t tell it what to do, the bot will freely go with what it thinks is best.

Thank you, this is brilliant. I’m not entirely sure how it works, but it is magic. I had the same problem with a joke GPT I built in December: no matter what I tried, the regenerated content was not funny, just punny. After reading your post I went and edited the old build, toggled Code Interpreter on, and started prompting with your approach. I initially got some technical pushback, language like the system could not incorporate code, I could do it manually, etc., but after tweaking the prompt and persisting, and finally instructing the build to generate random strings related to the joke and not gibberish, it started working. I still need to tweak this some, but clearly the build is using Code Interpreter when prompted for a new joke. Thank you
chat-openai-com/g/g-G0w9laHIp-bar-joke-buddy
chat-openai-com/share/4e66f4a7-7ca4-41a9-88d1-611a4b19cea3

import random

# Define the possible funny outcomes for the joke
funny_outcomes = [
    "the priest tried to baptize the bear, the minister tried to convert it, and the rabbi tried to circumcise it.",
    "the priest says, 'I read him the last rites,' the minister says, 'I prayed for deliverance,' and the rabbi says, 'I just ran faster than you two!'",
    "the priest confesses he tried to exorcise the bear, the minister admits he attempted to bless it, and the rabbi reveals he just wanted a selfie with it.",
    "the priest began a sermon, the minister started singing hymns, and the rabbi set up a picnic, hoping to befriend the bear.",
    "the priest prayed, the minister read scripture, and the rabbi simply said, 'Bear, meet your new congregation!'",
]

# Randomly select a funny outcome for the joke
random_outcome = random.choice(funny_outcomes)
random_outcome
