GPT Agents losing information over time?

We are using GPT Teams and training GPT Agents for team usage. Some agents that passed their validation questions right after training are now failing those same questions when asked today.
One agent was trained on three documents; today it acknowledges that the third document exists but cannot provide any information from it.
Another was trained on a book I wrote, and it passed validation questions after training in March. Today it completely fabricated the six stages of a process it answered correctly right after it was originally trained.
A third was trained on 36 Labor Categories and passed validation back in March. Today, the exact same validation question returns only 35 labor categories: it misses #23.
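
Here's a rough sketch of how I'd automate these spot checks going forward. Note the assumptions: custom GPTs in the ChatGPT UI have no direct API endpoint, so this uses the API model gpt-4-turbo as a stand-in, stuffing the source document into the prompt; the questions and keyword checks below are placeholders, not my real validation set.

```python
# Hypothetical regression harness: re-run saved validation questions
# against an API model and flag drift. gpt-4-turbo stands in for the
# custom GPT, which has no direct API endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each check pairs a validation question with keywords the answer must contain.
CHECKS = [
    {"question": "List all 36 labor categories.", "must_contain": ["#23"]},
    {"question": "Name the six stages of the process.", "must_contain": ["stage"]},
]

def run_checks(document_text: str) -> None:
    for check in CHECKS:
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[
                {"role": "system",
                 "content": f"Answer only from this document:\n{document_text}"},
                {"role": "user", "content": check["question"]},
            ],
        )
        answer = response.choices[0].message.content or ""
        missing = [kw for kw in check["must_contain"] if kw not in answer]
        status = "PASS" if not missing else f"FAIL (missing: {missing})"
        print(f"{check['question'][:40]:<40} {status}")
```

Running this on a schedule would have caught the missing #23 the day it regressed instead of weeks later.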

My question: Anyone else having their trained GPT Agents lose information over time?


Developers building on OpenAI have repeatedly seen applications break when the quality of the underlying AI models shifts. And OpenAI released a new model this month: the release version of gpt-4-turbo.

“Losing information” is inaccurate. What is not constant is the part of the pipeline that passes good information to a good AI.
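
To make that concrete, here is a toy sketch (my own illustration, not OpenAI's actual retrieval pipeline) of how top-k document retrieval can silently drop an item: if the chunk containing labor category #23 scores just below the cutoff after a model or embedding change, the AI never sees it and answers from the other 35.

```python
# Toy illustration only: top-k retrieval drops any chunk whose relevance
# score falls below the cutoff, so the model is never shown it.
# Scores are made up; real pipelines use embedding similarity.
chunks = {
    "labor category #22": 0.81,
    "labor category #23": 0.64,  # relevant, but scores below the cutoff
    "labor category #24": 0.79,
    "document overview":  0.70,
    "unrelated preamble": 0.20,
}

TOP_K = 3
retrieved = sorted(chunks, key=chunks.get, reverse=True)[:TOP_K]
print(retrieved)
# ['labor category #22', 'labor category #24', 'document overview']
# -- #23 never reaches the model, even though the file still contains it.
```

Nothing was "forgotten"; the model simply wasn't handed that chunk this time.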

I suppose the simple “fix” is to retrain all of my agents that were trained before the April 9th GPT-4 Turbo release.

You don’t train a GPT as if it were a monkey you whip.

You can just edit it: review the language placed in the instructions under the “Configure” tab and see whether there are areas of improvement that would make the task, and what you want the AI to produce, clearer.
