We use an assistant to develop a new service for multiple users, working through the API with an assistant, thread, run, etc. One of our developers has the impression that the assistant learns from run to run. Namely, that after creating (and stopping) a few threads and runs, the assistant's performance (accuracy) improves. Is this developer just lucky, or does the assistant actually learn from run to run?
AI models don’t learn at inference time. By default, API usage is not even used to train future models.
The AI can see the entire chat history passed to it (up to the point where the thread no longer fits into the model's maximum context length). So if you continue a thread, just as you would continue a chat in ChatGPT, the AI has more topical background to draw on. (That previous text can also impact originality and quality negatively.)
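The mechanism can be sketched in a few lines of Python. This is a toy illustration, not the actual Assistants API or its tokenizer: the point is simply that each run re-sends the thread's accumulated history, dropping the oldest messages once the window overflows, so a continued thread "knows" more without any model learning having occurred.

```python
# Toy model of thread memory (illustration only, not SDK code).
# Token counts are a crude stand-in: one token per whitespace word.

def count_tokens(message):
    """Rough token estimate: one token per word."""
    return len(message["content"].split())

def build_context(history, max_tokens):
    """Keep the newest messages that fit in the context window,
    dropping the oldest first (as a thread does when it overflows)."""
    kept, total = [], 0
    for message in reversed(history):
        cost = count_tokens(message)
        if total + cost > max_tokens:
            break
        kept.append(message)
        total += cost
    return list(reversed(kept))

thread = [
    {"role": "user", "content": "Explain rotifer anatomy in detail"},
    {"role": "assistant", "content": "Rotifers have a ciliated corona and a single foot"},
    {"role": "user", "content": "Now design a robot based on that"},
]

# With a roomy window, the next run sees all the prior background.
print(len(build_context(thread, max_tokens=100)))  # 3

# With a tiny window, older background silently falls off.
print(len(build_context(thread, max_tokens=10)))   # 1
```

So the "improvement" within a thread is just more context in the prompt, and it disappears as soon as the history is truncated or a fresh thread is started.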
Working with an AI model can train a user, though. You learn what the AI can’t figure out on its own from ordinary writing, and you start breaking down tasks and explanations in a way that makes yourself understood.
Eventually the illusion breaks. Your input generates superficial responses, window dressing without deep understanding, in a style of text that other thumbs-up button pushers found soothing because it repeated their input back to them.
“Design a rotifer robot for me. It shall have all the working anatomy of a rotifer animal, corona with cilia, and locomotion, along with neural net providing motivation of behaviors.”
Does the AI understand? No. It made a picture. Rotifers have one foot…
Thank you! Yes, I suppose this developer is learning how to prompt better and is crediting the system for the progress rather than himself.
The above answer is quite general and indeed convincing. However, I also wonder whether it still holds when the assistant is invoked from the Playground. OpenAI tells us that data from consumer ChatGPT may be used to improve the models. So is the Playground (rather than the API) used for training?