On other threads we discussed inserting a "breakpoint" user message into the conversation, like:
“Now you have a new User. He does not know any of the previous conversation.”
When I have GPT process data into in-context learning messages, it always makes them all user messages.
This is the format I see in some examples, but not in the docs. What is the right way to provide a conversation as context?
The right way is to send a list of messages:
1 - system message - always first
2 - prior user and assistant turns, in order
3 - the current user question
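As a concrete illustration, here is a minimal sketch of such a list in Python. The model name, system prompt, and prior turns are placeholders I made up, not values from this thread:

```python
# Message list following the pattern above:
# system first, then prior user/assistant turns, then the current question.
messages = [
    # 1 - system message, always first
    {"role": "system", "content": "You are a helpful assistant."},
    # 2 - a prior user/assistant exchange, provided as context
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    # 3 - the current user question
    {"role": "user", "content": "What is its population?"},
]

# This list goes in the `messages` parameter of a chat completion call,
# e.g. client.chat.completions.create(model=..., messages=messages)
roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'user']
```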
The optional `name` field on a message is not necessary for normal use.
When the conversation gets too long, the model can usually still follow along if you omit old assistant replies and keep only the sequence of user inputs. That's roughly what ChatGPT's own conversation management does.
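One way that pruning could be sketched is below. This is my own illustration of the idea, not ChatGPT's actual algorithm; the function name and the `keep_last` parameter are made up for the example:

```python
def prune_history(messages, keep_last=3):
    """Shorten a long conversation: keep the system message and all user
    inputs, but drop assistant replies older than the last `keep_last`
    turns. A sketch of one pruning strategy, not ChatGPT's real one."""
    head = [m for m in messages if m["role"] == "system"]
    body = [m for m in messages if m["role"] != "system"]
    old, recent = body[:-keep_last], body[-keep_last:]
    old_users = [m for m in old if m["role"] == "user"]  # keep only user turns
    return head + old_users + recent

# Hypothetical six-message history: system + two exchanges + a new question.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi."},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Name a prime."},
    {"role": "assistant", "content": "7."},
    {"role": "user", "content": "And a larger one?"},
]
pruned = prune_history(history)
print([m["role"] for m in pruned])
# ['system', 'user', 'user', 'assistant', 'user']
```

The oldest assistant reply is dropped while every user input survives, so the model still sees the full thread of what the user asked.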