How to keep session with gpt-3.5-turbo api?

If you are using Node.js, get the latest chatgpt package from npm.

It now supports a parentMessageId param, so you should be able to track the conversation.

The parentMessageId parameter does not exist, so the API call is not working.


Does this actually work?
It’s weird that it has this parameter, yet it can’t be found in the official OpenAI API documentation.


It’s not an official package, but I’ve tested it with my friend and the conversation works, here referring to the issue in the repo Persist conversation state · Issue #296 · transitive-bullshit/chatgpt-api · GitHub. For example, asking for tourist attractions will return an ordered list of items, and the following prompt referring to any specific item with only a serial number works for us. It’s not a thorough test I admit.


So I have found numerous threads and websites explaining that no sessions are remembered and that you have to pass the entire conversation each time. OK, sure, except the knowledge I want GPT to remember is a lot of information that, even when optimised/reduced/compressed, is still several K tokens.

When using the chat interface, I can just teach the knowledge over several messages and it works great; the AI can reason about said knowledge. This is currently impossible to do using the OpenAI API.

Is there any information on whether a conversation_id or similar will be added in the near future? Are there any devs on here who give answers? It just seems to be the same ‘send history on each prompt’ solution, which doesn’t work for our case. Not to mention it seems very inefficient and a waste of tokens.

This is in my opinion an extreme limitation, and it is making me actively look for an alternative to OpenAI.
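For reference, the "resend history on each prompt" pattern the thread keeps circling back to looks roughly like this in Python. This is a minimal sketch, not an official recipe: the Conversation class is my own invention, and the actual API call (the pre-1.0 openai package's ChatCompletion.create) is commented out and replaced with a stub so the sketch stands alone.

```python
# Minimal sketch of the stateless "resend history each call" pattern.
# The Conversation class is hypothetical; the commented-out openai call
# shows where the full message list would be sent on every request.

class Conversation:
    """Client-side chat history; the API itself remembers nothing."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        # reply = openai.ChatCompletion.create(
        #     model="gpt-3.5-turbo",
        #     messages=self.messages,   # the ENTIRE history, every call
        # )["choices"][0]["message"]["content"]
        reply = "(stub reply)"  # placeholder so the sketch runs offline
        self.messages.append({"role": "assistant", "content": reply})
        return reply

conv = Conversation("You are a travel guide.")
conv.ask("Recommend three attractions.")
conv.ask("Tell me more about the second one.")
print(len(conv.messages))  # 5: system + 2 user + 2 assistant
```

The token cost grows with every turn, which is exactly the inefficiency complained about above.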


If we implement it in this suggested way, the tokens used also accumulate: on the next create call, it processes all the previous prompts again.

Do we have the flexibility to be charged tokens only for the latest message?

GitHub - dustinandrews/gptFlaskByGpt-3.5-turbo: A Flask app for chatting with gpt-3-turbo, written primarily by gpt-3-turbo. It has a simple example; see the summary branch for a simple summarizer to keep up context.

API means application programming interface. In other words you are expected to write your own app.


I wanted to use ChatGPT for a game. I thought I could tie requests to a certain sessionId and feed it a text algorithm with a description of game objects, location data, and a bunch of other parameters, but it turns out I have to fit all of this into 4096 tokens, which is unreal. It’s a pity. :pensive:


I do think this is exactly the right approach: use another model to summarize prompts & completions, then send a pack of short summaries in the right format.
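A minimal sketch of that summarize-older-turns idea, assuming nothing beyond plain Python: the summarize() function here is a stand-in (in practice it would itself be a call to a cheaper model), and both function names are invented for illustration.

```python
# Sketch of the "summarize older turns" approach suggested above.
# summarize() is a hypothetical stand-in for a second model call that
# compresses old conversation turns into a short text.

def summarize(turns):
    # In real use: send `turns` to a cheap model and return its summary.
    return "Summary of %d earlier turns." % len(turns)

def compact(messages, keep_recent=4):
    """Replace everything but the last few turns with one summary message."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

history = [{"role": "user", "content": "turn %d" % i} for i in range(10)]
compacted = compact(history)
print(len(compacted))  # 5: one summary message + the 4 most recent turns
```

The summaries themselves still consume tokens on every request, but far fewer than the full transcript.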

Do all the people asking for the API to behave like ChatGPT assume the ChatGPT developers just built a web page on top of the model?

“Yes, making session management more accessible to users can enhance the usability of the OpenAI API. Providing an intuitive and user-friendly interface for managing sessions can simplify the process for developers and allow them to focus on the conversation logic rather than low-level session management details.” - ChatGPT.
And they say it would not replace “the human factor”…


Has there been a resolution to this question? I think I have the same query, but to re-state in case my query is different.

I’d like to be able to continue a conversation with gpt-3.5 from a ->chat() call. Instead of resending the past conversation back to the API, which gobbles up tokens, I expected that the most recent response ID could be used instead.

Otherwise to be honest, I’m not sure why it’s necessary to return such a detailed unique response Id. I could just as easily create a unique Id at my end or use a DB row ID.

The mere fact that an OpenAI unique ID is generated for each ->chat() response would indicate that responses can be cross-referenced back at OpenAI headquarters, perhaps for checking responses that go awry?

Bing Chat search also suggested that completions can be re-accessed using the ID, like this: /v1/completions/[chatcmpl-000000000ID]. I’m not sure how to apply this in an API context, or whether it applies to /v1/chat/completions.


There is no conversation/history support in the OpenAI APIs. You must send the history (or a summary of it) with each request, even though it uses tokens.

The IDs are likely for internal OpenAI logging/traceability/debugging. You can see in the API reference that there are no endpoints that accept that conversation ID. Bing/GPT is not trustworthy regarding the OpenAI API.



Is the method openai.ChatCompletion.Session still available?
I can’t seem to get it to work with sessions.

That’s just not how GPT works.
If you want to generate a response based on previous text, then you have to send that text every time, and GPT has to process it every time.
How do you expect it to generate text based on something you don’t send to be processed?
It must be processed, which has costs.
Even if there were a feature to store it in the API, GPT would still have to process it in order to generate text based on it. It would not be free.

Think of it this way: at least we are not limited by whatever memory implementation OpenAI chooses, and we are fully in control of how we handle the memory/context.
You can do whatever you want with it before sending it.
Need to remove a line somewhere in the past? You can. Edit it? You can. Replace it with a summarization? You can. You are fully in control of how it’s managed.

After the call is done, GPT is just the same as it was before the call. There is no memory/storage. You do that with your dev skills.

It’s all about formatting the strings that you send.
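That "you are fully in control" point can be sketched like this: prune the message list yourself before each request. The len/4 token estimate is only a rough approximation (a tokenizer such as tiktoken would give exact counts), and both function names are made up for illustration.

```python
# Sketch of client-side context pruning: drop the oldest turns until
# the history fits a token budget, keeping any system message pinned.
# approx_tokens() is a crude ~4-chars-per-token heuristic, not exact.

def approx_tokens(msg):
    return max(1, len(msg["content"]) // 4)

def trim_to_budget(messages, budget):
    """Drop the oldest non-system messages until the history fits."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(approx_tokens, system + rest)) > budget:
        rest.pop(0)  # remove the oldest turn first
    return system + rest

msgs = [{"role": "system", "content": "Be terse."}] + [
    {"role": "user", "content": "x" * 400} for _ in range(5)
]
print(len(trim_to_budget(msgs, 250)))  # 3: system + 2 newest turns fit
```

Editing or replacing individual entries works the same way, since the history is just a list you own.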

+1 for adding API functionality that mimics the conversation behaviour of the interface version. I wrote a Python script to pass stored procedures into the API and return a full description. This works fine when the SPs are below a certain token count, but most of my SPs are above the limit, so the API is of no use. I could send them in chunks, but then I would lose context between chunks, because no session history is maintained for my particular session.

It would be nice to call a Start_Session with the first chunk, get an ID back, then resend that ID with every new chunk until that session is done.
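No such Start_Session endpoint exists in the API, but the flow can be approximated client-side by handing out your own session IDs; every name below is invented for illustration.

```python
# Client-side emulation of the proposed Start_Session flow: a dict maps
# our own session IDs to the accumulated chunks for that session.
import uuid

sessions = {}

def start_session(first_chunk):
    sid = str(uuid.uuid4())
    sessions[sid] = [first_chunk]
    return sid

def add_chunk(sid, chunk):
    sessions[sid].append(chunk)
    # A real request would send "\n".join(sessions[sid]) (or a summary
    # of it) as context alongside the new chunk.
    return "\n".join(sessions[sid])

sid = start_session("CREATE PROCEDURE part 1 ...")
context = add_chunk(sid, "CREATE PROCEDURE part 2 ...")
print(len(sessions[sid]))  # 2 chunks stored under this session ID
```

The catch, as others note above, is that the accumulated context still has to fit in (and be paid for within) the model's token limit on every request.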


Believe me when I say that you don’t want ChatGPT’s conversation-history management technique in your own product.