A riddle solvable by ChatGPT, but not by the OpenAI GPT-4 API?

Prompt: There are three sisters in a room. Anna is reading a book. Alice is playing a game of chess. What is the third sister, Amanda, doing?
OpenAI API model: gpt-4-0613, all other settings default (temperature 1, …)
Answer: The text does not provide information on what the third sister, Amanda, is doing.

ChatGPT model: GPT-4
Answer: The riddle you’ve presented is a classic one! The answer is that Amanda is playing chess with Alice. Since Alice is playing a game of chess, she must have an opponent, and that opponent is Amanda.

I presumed that using the API would give us access to the same model that powers ChatGPT, not a simplified one. Is this the case, or are there specific parameters, such as temperature or top_p, that I would need to adjust?

1 Like

Just put “Solve the riddle.” in the system message; it works fine, even across a variety of settings.
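For example, a minimal sketch with the current openai Python SDK (the v1-style client; older 0.x versions use openai.ChatCompletion.create instead):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-0613",
    messages=[
        # A one-line system message is enough to change the behavior.
        {"role": "system", "content": "Solve the riddle."},
        {"role": "user", "content": (
            "There are three sisters in a room. Anna is reading a book. "
            "Alice is playing a game of chess. "
            "What is the third sister, Amanda, doing?"
        )},
    ],
)

print(response.choices[0].message.content)
```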

4 Likes

I am looking for an overall solution, not just a response to that one specific prompt.

The overall solution is that you put past context into the system message. This is all ChatGPT does, and to mimic it in the API, you have to do this yourself.

How can I determine the context when multiple users are interacting with our custom chatbot, which only uses chat prompts like ChatGPT?

The general idea is that you summarize the conversation, specific to each user, and put that in the system message, along with any other direction and information you’d like to give the bot. You can also include objectives, like whispering in the bot’s ear: “Try to also sell widget X”, “Answer this next question as precisely as you can”, or “Start an engaging conversation about widget X”, etc.

Also, don’t forget to include the recent user/assistant message history directly to provide context, along with any related history from further back, if you have it.

Also, as you can tell, you never want the system message to be empty, unless you are OK with very generic and “dumb” answers. So start out with something simple rather than nothing at all.

Usually, the system message starts with something that defines the “personality” of the bot, and you take it from there, updating it dynamically as the conversation progresses.

GPT-4 is heavily influenced by the system message; essentially, it is your control knob.
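A rough sketch of that pattern; the build_messages helper, the summary text, and the objective wording below are made up purely to show where each piece goes:

```python
from openai import OpenAI

client = OpenAI()

def build_messages(personality, summary, objectives, recent_turns, user_input):
    """Assemble the message list: a dynamically built system message first,
    then the recent user/assistant history, then the new user message."""
    system = (
        f"{personality}\n\n"
        f"Conversation summary so far: {summary}\n\n"
        f"Current objectives: {objectives}"
    )
    return (
        [{"role": "system", "content": system}]
        + recent_turns  # e.g. [{"role": "user", ...}, {"role": "assistant", ...}]
        + [{"role": "user", "content": user_input}]
    )

messages = build_messages(
    personality="You are a friendly support assistant for Widget Co.",
    summary="The user asked about pricing and mentioned they run a small shop.",
    objectives="Try to also mention widget X where it is relevant.",
    recent_turns=[
        {"role": "user", "content": "Do you ship internationally?"},
        {"role": "assistant", "content": "Yes, we ship to most countries."},
    ],
    user_input="What would you recommend for a small shop?",
)

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

In a real app, the summary itself would usually come from another model call that condenses the older turns, so the system message stays short while still carrying the whole conversation’s context.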

3 Likes

Oh, this is interesting! So, you are saying that ChatGPT uses varying system prompts, depending on the user prompt? How does it determine the appropriate system prompt?

ChatGPT uses one fixed system prompt, similar to “You are ChatGPT, a large language model based on GPT-4…”

You can provide the same system message that ChatGPT uses and get similar behavior via the API. It doesn’t even need to use the same name.

The output can vary depending on the prior conversation you’ve had. Ask about several riddles and you’ve put the AI in the mood for solving them. Put a message in the conversation history like “you will discuss the scenario of any riddle before answering, and work through the steps required to solve it”, and you’ve changed the answering even more.

Every answer from ChatGPT can also be different due to the random sampling of words; there might be an equal probability of “I don’t” and “I can” starting the answer for a particular kind of input. API users can lower the temperature and top_p parameters to get more consistent answers.
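For example (the exact values are arbitrary and the riddle text is truncated here; lower values mean less random sampling):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-0613",
    messages=[{"role": "system", "content": "Solve the riddle."},
              {"role": "user", "content": "There are three sisters in a room. ..."}],
    temperature=0.2,  # lower than the default of 1: less randomness in sampling
    top_p=0.5,        # nucleus sampling: restrict choices to the most likely tokens
)
print(response.choices[0].message.content)
```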


The riddle is ambiguous anyway; reading a book doesn’t prevent Anna from being the chess opponent, and more games of chess are played online than in person these days.

2 Likes

This is the biggest problem I see: she could be playing correspondence chess via postal mail for all we know.

Hell, it doesn’t say that the three sisters are the only people in the room.

Additionally, it’s very likely that the ChatGPT website model keeps changing and is ahead of anything you can access via the API right now. New model variations are first provided through the ChatGPT website to gather feedback before they reach the API.

So there is no exact 1:1 correspondence between the website and API models. At least, I wouldn’t count on it.