Hello Experts!
A newbie here, working on a simple backend that queries OpenAI models to retrieve a joke/pun. The issue I see is that the API always returns the same joke. This does not happen with the OpenAI chat app: https://chat.openai.com/.
I tried to give the model context using "System Instructions", and the prompt is as simple as "Tell me a joke". Every time, the same joke is returned. I also tried several different models, like gpt-4o, gpt-4o-mini, and gpt-3.5-turbo.
I think it is because the model does not hold the chat context, and hence the same joke is returned every time. Is there a way to retrieve a unique joke on every request? Any suggestions/ideas/pointers?
Hi!
The main issue is indeed that, unlike the ChatGPT interface, the API is stateless: it does not recall prior responses. If you want it to be aware of jokes it has already produced, you would have to send the previously generated jokes back as part of the context and instruct it to create a joke different from those.
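A minimal sketch of that idea using the official `openai` Python package; the `previous_jokes` list and `fetch_joke` helper are hypothetical names, and in a real backend you would persist the history per user rather than in a module-level list:

```python
# Sketch: feed earlier jokes back as context so the model can avoid repeats.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
previous_jokes: list[str] = []  # hypothetical in-memory history

def fetch_joke() -> str:
    messages = [
        {"role": "system", "content": "You are a comedian. Tell short, original jokes."},
        {"role": "user", "content": "Tell me a joke."},
    ]
    if previous_jokes:
        seen = "\n".join(previous_jokes)
        messages.append(
            {"role": "user", "content": f"Do not repeat any of these jokes:\n{seen}"}
        )
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    joke = response.choices[0].message.content
    previous_jokes.append(joke)
    return joke
```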
I see. However, sending the previously returned jokes back with every request will significantly increase the request payload.
Is there anything better we can do here?
One way I would approach it is to create a dynamic prompt. Rather than just using the instruction "Tell me a joke", I would use an instruction like "Tell me a joke about [placeholder for topic]" and then dynamically replace the placeholder with a different topic on every API call, as in the sketch below.
There might be other ways, but this is my suggestion off the top of my head.
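A minimal sketch of the dynamic-prompt approach; the `TOPICS` list is made up for illustration, and you could just as well pull topics from a database or rotate through them:

```python
# Sketch: vary the prompt itself so each request asks for a different joke.
import random
from openai import OpenAI

client = OpenAI()
TOPICS = ["cats", "programmers", "coffee", "airports", "penguins", "Mondays"]

def fetch_joke() -> str:
    topic = random.choice(TOPICS)  # a different topic on (most) calls
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Tell me a joke about {topic}."}],
    )
    return response.choices[0].message.content
```

The payload stays tiny because only the topic changes, not an ever-growing joke history.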