How can I correct my assistant on the OpenAI platform?

My assistant is giving wrong answers when it should be able to provide the correct ones. I think I haven’t trained it properly. Do you know what the best practices are for training an assistant?

First, you can upgrade the AI model being used. In rough order of quality (and therefore expense):

gpt-3.5-turbo → gpt-3.5-turbo-0613 → gpt-4-turbo-preview → gpt-4-0613 → (gpt-4-0314 only for past users)
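
If you created your assistant through the API, you can switch it to a stronger model without recreating it. Here’s a minimal sketch with the official openai Python library; the assistant ID is a placeholder you’d replace with your own:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Point an existing assistant at a more capable model.
# "asst_abc123" is a placeholder for your own assistant's ID.
assistant = client.beta.assistants.update(
    "asst_abc123",
    model="gpt-4-turbo-preview",
)
print(assistant.model)
```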

Then, if there is a specialization you want the AI to answer about, you’d program it to be a specialist in the chat messages’ “system” message (or “instructions” in assistants), using language that tells the AI where to focus its attention. Telling the AI it is an expert on sewing, for example, can actually get it to answer better.
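
As a rough sketch of what that looks like for an assistant (the wording of the instructions is only illustrative, and the assistant ID is again a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Give the assistant a specialist persona in its instructions.
assistant = client.beta.assistants.update(
    "asst_abc123",  # placeholder for your assistant's ID
    instructions=(
        "You are an expert tailor and sewing instructor. "
        "Answer questions about sewing techniques, fabrics, and machine "
        "settings with precise, practical guidance."
    ),
)
```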

Further instructions can tell the AI to think step-by-step, writing out how it will approach a solution whenever a question seems difficult or is a logic puzzle. When the AI writes more about the topic before actually deciding on an answer, that extra reasoning can lead it to the correct final conclusion.
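
If you’re on the plain chat completions endpoint instead, the same idea goes in the “system” message. A minimal sketch, where the model choice and exact wording are just examples:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {
            "role": "system",
            "content": (
                "You are an expert sewing assistant. For any difficult question "
                "or logic puzzle, first write out your reasoning step by step, "
                "then state your final answer on the last line."
            ),
        },
        {"role": "user", "content": "My bobbin thread keeps bunching up. Why?"},
    ],
)
print(response.choices[0].message.content)
```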

That’s “training” the AI only in the loose sense: providing it with input context beyond just the user’s message, so it is better prepared to complete the output.

Real training would be needed if you have to substantially alter the behavior of the AI. In that case you can use the fine-tuning endpoint to create your own model, which requires significant dedication to preparing data. You are also limited to refining gpt-3.5 models, and you’d then have to employ the fine-tuned model outside of “assistants”, on the more typical chat completions endpoint (where you maintain your customers’ chat history yourself).
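
If you do go down that road, the basic flow looks roughly like this, assuming you already have a chat-formatted JSONL training file; the file name and the fine-tuned model name shown are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# 1. Upload your prepared training data (chat-formatted JSONL).
training_file = client.files.create(
    file=open("sewing_examples.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a gpt-3.5-turbo base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)

# 3. Once the job finishes, call your custom model on chat completions,
#    passing the full conversation history yourself on every request.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:my-org::abc123",  # placeholder fine-tuned model name
    messages=[
        {"role": "system", "content": "You are an expert sewing assistant."},
        {"role": "user", "content": "Which needle size suits lightweight cotton?"},
    ],
)
print(response.choices[0].message.content)
```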