Hi team,
I am using the Chat Completions API and I want to maintain the user session, but it's not working.
I am passing the user property in the request body, but the user session is still not maintained as expected. It works in ChatGPT but not in the API.
Can anyone please help me resolve this issue?
API body:
{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "Hi"}],
  "user": "1"
}
You need to pass the entire conversation history on each request. There is no built-in conversation history storage/persistence. You can search these forums for other threads offering advice on how to handle this.
Yes it does, but it is the only option. The API has no persistence; it is stateless. It needs the entire history (or as much of it as fits in the token limit, or a summary of it) on each request. The “user” parameter does not provide any such functionality; it is used only for abuse tracking.
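A minimal sketch of what that looks like in practice (a raw REST call via Python requests, with the API key assumed to be in the OPENAI_API_KEY environment variable; error handling omitted). The client keeps the messages list and resends it in full on every call:

import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

# The client keeps the whole conversation; the API never does.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text: str) -> str:
    # Append the new user turn, send the FULL history, then store the reply.
    messages.append({"role": "user", "content": user_text})
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={"model": "gpt-3.5-turbo", "messages": messages},
    )
    reply = resp.json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Hi, my name is Alice."))
print(ask("What is my name?"))  # only works because the first turn is resent

The second question only works because the first exchange is included again; drop it from messages and the model has no memory of it.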
Building on the previous response: if you are worried about token length, you can pick and choose which messages from the conversation to pass along in the next request. You can also limit the length of the generated output by restricting max_tokens.
Other than that, try using gpt-3.5-turbo-16k, which allows more information to be passed in the context.
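For example, a rough trimming helper (character counts as a crude stand-in for real token counting, which you would do with a tokenizer such as tiktoken; the budget value is purely illustrative):

# Rough sketch: keep the system prompt plus the most recent messages that
# fit in an approximate character budget (not a real tokenizer).
MAX_CHARS = 12000  # illustrative budget, not an official limit

def trim_history(messages):
    system, rest = messages[0], messages[1:]
    kept, used = [], 0
    for msg in reversed(rest):  # walk newest to oldest
        used += len(msg["content"])
        if used > MAX_CHARS:
            break
        kept.append(msg)
    return [system] + list(reversed(kept))

Call trim_history(messages) just before each request, and set max_tokens in the request body to cap the reply length.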
Welcome to the OpenAI community @giddpoojab
@novaphil is correct. You will have to pass the messages you want the model to “remember” when responding to the latest user message.
In your case, you can use embeddings. Here’s how: Use embeddings to retrieve relevant context for AI assistant
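The linked post covers this in detail; the rough idea (illustrative sketch only, assuming the /v1/embeddings endpoint with the text-embedding-ada-002 model) is to embed stored messages, embed the new question, and prepend only the most similar ones to the prompt:

import os
import requests

EMBED_URL = "https://api.openai.com/v1/embeddings"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def embed(text: str) -> list[float]:
    # Get an embedding vector for a piece of text.
    resp = requests.post(
        EMBED_URL,
        headers=HEADERS,
        json={"model": "text-embedding-ada-002", "input": text},
    )
    return resp.json()["data"][0]["embedding"]

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def relevant_context(history: list[str], question: str, k: int = 3) -> list[str]:
    # Rank stored messages by similarity to the new question and keep the top k;
    # these go into the prompt instead of the entire conversation.
    q = embed(question)
    scored = sorted(history, key=lambda m: cosine(embed(m), q), reverse=True)
    return scored[:k]

In practice you would embed each message once and cache the vectors rather than re-embedding the whole history on every question.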
Thank you for the solution @sps
I have another query, mentioned below.
If I use the subscribed (paid) version of OpenAI, will I still have this issue with user state management in the Chat Completions API?
There’s no subscription for the API. It’s a pay-as-you-go service where you pay based on the tokens you consume.
This doesn’t exist.
ChatGPT: website with a talking robot giving advice at chat.openai.com
- payment model: free, with more features at $20/month
- conversation management: handled by user interface and backend database
OpenAI API: access backend AI models for application development, management at platform.openai.com
- pay-per-use, measured by data-in, data-out (in AI “tokens”)
- stateless, all information to generate a response must be passed in each API call
Connection between the two? Almost zero. You even need to put the payment method in again in a different place for API use.
Thank you for clearing up my doubts @sps.
Is it mentioned in any OpenAI document that the Chat Completions API does not maintain user state?
If so, please share the document (not a community-raised query) so that I can share it with my teammates.
Including conversation history is important when user instructions refer to prior messages. In the example above, the user’s final question of “Where was it played?” only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied as part of the conversation history in each request. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way.
– Docs