hi, is it possible to have the same multi-turn conversation experience I have on the web version of ChatGPT, but using the ChatGPT API?
From what I have seen, every request I send using the API is treated as a new conversation/dialog.
There is no API for ChatGPT itself, but you can do the same thing with the GPT models and suitable prompts.
The API has no memory of previous calls, so you need to include the relevant part of the conversation with each subsequent request. You have to make sure you don’t go over the token limit, though.
Also, because the prompt will be quite long, you will start eating close to 4,000 tokens on each request, so watch the cost of this, especially on Davinci.
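To illustrate what that looks like in practice, here is a minimal sketch of the “resend the history” pattern, assuming the openai Python package (v1+) and the chat completions endpoint; the model name and the `ask()` helper are just placeholders for illustration, not anything from this thread.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The API is stateless: it only sees what you send, so keep the whole
# dialog yourself and resend it with every request.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=history,
    )
    answer = response.choices[0].message.content
    # Store the reply too, so the next request carries the full context.
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Who wrote The Old Man and the Sea?"))
print(ask("What else did he write?"))  # "he" only resolves because the history is resent
```

Every call pays for the whole history again, which is the growing token cost mentioned above.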
Hi @raymonddavey ,
What should I do if my multi-turn request reaches 4,000 tokens? Can I just increase the usage limit (paid amount) to solve this problem?
No, you have to drop part of the conversation. Normally you would drop the oldest interactions.
Also, you probably want to do it at around 2,000 or 3,000 tokens, depending on how much you expect the AI to write in response to your next prompt.
Going to GPT-4 will give you more tokens, but you will just hit the limit again; you can’t avoid it forever.
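To make “drop the oldest interactions” concrete, here is one possible sketch of a trimming helper, assuming tiktoken for token counting; the 3,000-token budget and the function name are just illustrative choices.

```python
import tiktoken

def trim_history(history, max_tokens=3000, model="gpt-3.5-turbo"):
    """Drop the oldest non-system messages until the history fits the budget."""
    enc = tiktoken.encoding_for_model(model)

    def count(messages):
        # Rough count (content only); the real per-message overhead is a bit
        # higher, so keep some slack below the model's context window.
        return sum(len(enc.encode(m["content"])) for m in messages)

    trimmed = list(history)
    # Keep the system message at index 0 and drop the oldest turns after it.
    while count(trimmed) > max_tokens and len(trimmed) > 2:
        del trimmed[1]
    return trimmed
```

Call it on the message list right before each request; keeping the budget well under the context window leaves the model room to write its reply.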
One approach you might consider to partially get around the 4,000-token limit is to use a second summarising “agent” to produce a condensed summary of the conversation and its most salient points. The result is that rather than losing the early parts of the dialog altogether, the information resolution drops as the conversation exceeds the limit. It’s a graceful degradation as opposed to complete amnesia about the start of the conversation, which may still contain very salient information. BTW I just heard about MemoryGPT, which solves this problem, but I don’t know anything about how that tool works (yet)…
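In code, that summarising agent could be as simple as an extra call that condenses the older turns and splices the summary back into the history. This is only a sketch of the idea, reusing the client and message format from the earlier example; the prompt wording, the keep_recent value, and the function name are arbitrary assumptions.

```python
def condense_history(client, history, keep_recent=6, model="gpt-3.5-turbo"):
    """Replace older turns with a model-written summary, keeping recent turns verbatim."""
    if len(history) <= keep_recent + 1:  # +1 for the system message
        return history

    old_turns = history[1:-keep_recent]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old_turns)

    summary = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "Summarise the key facts and decisions from this "
                       "conversation in a few sentences:\n\n" + transcript,
        }],
    ).choices[0].message.content

    # Early turns are now kept at lower resolution instead of being lost entirely.
    return (
        [history[0]]
        + [{"role": "system", "content": "Summary of earlier conversation: " + summary}]
        + history[-keep_recent:]
    )
```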