I’ve created a few mirrored versions of custom GPTs and Assistants. I’ve added all the same files (“knowledge” in the custom GPT configuration) with identical instructions. Yet, in every case, their behavior is almost unrecognizably different. The GPTs reference the knowledge materials seemingly 10x more effectively and elegantly, while the Assistant repeatedly struggles to answer questions, or explicitly says it has no answer to questions that are directly answered in the attached FAQs. The Assistant also hallucinates consistently, while nothing remotely close to this has happened with the GPTs thus far.
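For context, here is roughly how I set up each Assistant mirror (a minimal sketch using the Python SDK; the file name, assistant name, and model are placeholders for my actual setup):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload the same document that serves as "knowledge" in the custom GPT
# ("faq.md" is a placeholder name)
faq_file = client.files.create(
    file=open("faq.md", "rb"),
    purpose="assistants",
)

# Create the Assistant with identical instructions and retrieval enabled
assistant = client.beta.assistants.create(
    name="Product Assistant",
    instructions="<same instructions pasted from the custom GPT>",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[faq_file.id],
)
```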
I cannot seem to find an answer to why this is happening, though it seems many people have reported similar challenges in other threads without any answers.
Can someone comprehensively help me understand:
Why is this happening?
Will this be fixed? (Or is the API simply meant to be less effective at retrieval, or limited in capability for AI safety reasons, or some other reason not yet made explicit?)
Is there something I’m not understanding about the difference between “files” for Assistants and “knowledge” for GPTs?
Is there guidance material on how to alter instructions and files so that an Assistant can perform as well as a custom GPT?
Are there particular word counts or file formats that an Assistant can treat more effectively as a knowledge base?
Please help, as I intended to launch a collection of products in 2 days, having built their functionality as GPTs on the assumption that I could copy and paste them into Assistants to make them accessible outside the ChatGPT Plus plan and platform.
Can you publish them both so we can check them out properly? It would be a valuable data point.
For a comparative test you would need to consider the GPT’s system prompt and temperature. Also, the GPT’s prompts get rewritten, I believe, but your Assistant’s prompts won’t be. You can see the final GPT prompt in the Configure tab.
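If it helps, here is a minimal sketch of pinning both down explicitly with the Python SDK. The assistant name, model, and probe question are placeholders, and whether runs accept a temperature argument depends on your API/SDK version:

```python
from openai import OpenAI

client = OpenAI()

# Paste the final, rewritten system prompt from the GPT's Configure tab
# so both sides start from identical instructions
assistant = client.beta.assistants.create(
    name="Mirror test",
    instructions="<final system prompt copied from the Configure tab>",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What does the FAQ say about refunds?",  # sample probe question
)

# Note: temperature on runs is not supported in every API version;
# drop the argument if your SDK rejects it
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    temperature=1.0,  # match ChatGPT's default as closely as possible
)
```

Running the same probe questions through both with matched instructions and temperature would at least isolate the retrieval behavior as the variable.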