I’m working on a personal CLI application in Python and playing with the API. This is my first time working with these models. The following journey was eye-opening in that gpt-4-1106-preview completely made up a feature for the OpenAI API and confidently talked about its benefits. Turns out the feature doesn’t exist.
I’m aware of hallucinations, but it really is surprising how the model can create a complete fabrication and then present it with code examples and completely plausible explanations of why you should use it.
The explanation from gpt-4-1106-preview:

When you use the state feature with the ChatGPT API, the API itself maintains the context of the conversation, so you don’t have to pass the previous responses back to it. Instead, you only need to pass the state object received from the last API response and the new question for which you require an answer.

The state object encapsulates the conversation history, enabling the model to generate contextually appropriate responses. The dialogue management is handled by the API using this state, so you won’t have to send back the whole conversation with each request.
Code example from gpt-4-1106-preview:
# Function to interact with OpenAI's ChatGPT API
def ask_chatgpt(prompt, state=None):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=prompt if isinstance(prompt, list) else [{"role": "user", "content": prompt}],
        state=state
    )
    return response
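For contrast, here's how it actually works: the Chat Completions API is stateless, so the client keeps the conversation history itself and resends the full messages list with every request. A minimal sketch of that pattern, with the network call stubbed out so it runs without an API key (the helper names and the stub reply are my own, not part of any API):

```python
# The real API is stateless: the client owns the history and resends it
# on every request. The actual network call is commented out below.

def build_messages(history, user_input):
    """Append the new user turn to the running conversation history."""
    return history + [{"role": "user", "content": user_input}]

def ask_chatgpt(history, user_input):
    messages = build_messages(history, user_input)
    # Real call (openai<1.0 style, matching the snippet above):
    # response = openai.ChatCompletion.create(
    #     model="gpt-3.5-turbo", messages=messages
    # )
    # reply = response["choices"][0]["message"]["content"]
    reply = "stub reply"  # placeholder so the sketch runs offline
    return messages + [{"role": "assistant", "content": reply}]

history = []
history = ask_chatgpt(history, "What is the capital of France?")
history = ask_chatgpt(history, "And its population?")
# history now holds all four turns, ready to resend with the next request
```

The only "state" is that growing list; there is no server-side object to hand back.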
It’d be pretty cool if the API and model did have this feature… It’s surprising how detailed the explanations can be.