Hi. I would like to connect from within Make (Integromat) to a specific chat in ChatGPT, where I have trained the bot through a series of prompts to guide it to answer in the form and tone I want. It's a long process with a lot of sequential prompts, so I wouldn't want to paste it all into a new prompt each time. Instead, I'd like to connect via the API to THAT particular chat instance. Make support told me that a "memory key" is in the cards, but not available yet. They suggested I use a Make a Call GPT module, but I have no idea how to use it. How do I fill it in? Can it achieve what I need? Any guidance is appreciated. Thanks
Hi and welcome to the forum!
Sorry to counter a very common belief, but there is no such thing as a chat instance. Every time a prompt is sent to the API it is treated as a 100% new request: any context or historical information has to be sent along every time. It's a clever trick used to recreate the feel of a back-and-forth conversation, but that is all it is.
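To make that concrete, here is a minimal Python sketch using the openai package (the 2023-era v0.x client; the model name and key are placeholders). Nothing from the first call carries over to the second unless you resend it yourself:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# Call 1: tell the model a fact.
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "My name is Alex. Remember that."}],
)

# Call 2: a brand-new request. Nothing is stored server-side between calls,
# so the model has no idea what was said in call 1.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is my name?"}],
)
print(reply.choices[0].message.content)  # it won't know
```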
Thanks for your reply… I get what you're saying about the API, but inside ChatGPT the bot does have 'memory' within each chat instance… So really what I'm wondering is whether one can 'remotely' hook up to that chat instance, as though it were shared, and continue the chat from elsewhere. Is that doable via Make or some other mechanism?
ChatGPT is a web portal wrapper around the API. The only thing it does is keep track of the text you have typed so far, check whether it exceeds the maximum context size (4k tokens for 3.5 / 8k for 4), and if it does, throw old information away to make it fit. It then passes all of that text to the model each time, discarding the oldest text as new questions and answers come in. It's surprising how well that works.
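That trimming step amounts to something like the sketch below (a hypothetical illustration, not ChatGPT's actual internal code; the token count here is just the rough words-per-token rule of thumb that comes up later in this thread):

```python
def trim_to_fit(messages, max_tokens=4096):
    """Drop the oldest turns until the history fits the context window."""
    def estimate_tokens(msgs):
        # rough rule of thumb: ~0.75 words per token, so tokens ≈ words / 0.75
        return sum(len(m["content"].split()) / 0.75 for m in msgs)

    trimmed = list(messages)
    while estimate_tokens(trimmed) > max_tokens and len(trimmed) > 1:
        trimmed.pop(1)  # keep the system message at index 0, drop the oldest turn
    return trimmed
```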
Oh, I see. Thanks for that explanation. Is the size of the history it keeps and passes on fixed, or can the user set it? How/where?
If you get into using the API you will see a message structure that takes the form of:
"system": "You are a helpful assistant"
"user": "How do magnets work?"
"assistant": "Nobody knows!"
"user": "But you must know!"
"assistant": … blah blah
This is the structure used by ChatGPT and the API alike. When required, messages at the top of the list are removed to make room for new ones at the bottom, but each time you press enter the entire thing, from top to bottom, is sent to the AI.
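In Python that is just a list of role/content pairs that you append to and resend in full on every call (again a sketch with the v0.x openai client; the questions are from the example above):

```python
import openai

messages = [{"role": "system", "content": "You are a helpful assistant"}]

def ask(question):
    """Append the user's turn, send the WHOLE history, append the reply."""
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,  # the entire conversation goes up every time
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("How do magnets work?"))
print(ask("But you must know!"))  # works only because the first Q&A is resent
```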
Yeah, I got it. With regard to this: "the maximum context size (4k for 3.5 / 8k for 4)"… how much past history adds up to 4k, the limit where it starts throwing old stuff away? (Approximately how many words is that?)
The rule of thumb is (very roughly) 0.75 words per token, so in the case of 4k that's about 3,000 words, and for 8k about 6,000.
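If you want an exact count rather than the rule of thumb, OpenAI's tiktoken library tokenizes text the same way the models do, e.g.:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "How much past history adds up to 4k tokens?"
tokens = enc.encode(text)
print(len(tokens), "tokens for", len(text.split()), "words")
```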