The Assistants API does not work as well as the custom GPT does

The Assistants API does not work as well as the custom GPT does. The custom GPT works well and responds with very good answers, but the Assistants API's answers are completely incorrect and scattered. I do not know why OpenAI takes this approach, keeping the dedicated GPT inside its website, where it can only be accessed by Plus users.


@ali.a.yehya.aljanabi - Welcome to the Community.

Perhaps you want to share some specifics around the issues you are experiencing? There’s been an active discussion here in the Forum on Assistant’s performance.

As a starter, Assistants often require much more detailed and focused instructions in order to perform their tasks as intended and/or access knowledge in files that are uploaded.


I agree. Examples would be good - and making sure the same model is used for both?


I tried the custom GPT: I gave it instructions and uploaded the knowledge files it should take its answers from, and it answered the questions asked of it very well. But when I tried the Assistants API in the Playground with the same settings as the custom GPT, it gave completely wrong answers. I think the reason is that the Assistants API does not support web browsing.

Is your Assistant supposed to provide answers based on the knowledge file or based on web search?

The Assistant does not natively come with browsing capabilities but you can programmatically add it through function calling and connecting it to a web search API like Bing (which is the same as for ChatGPT or Custom GPTs). You’d then have to also describe in the instructions under which circumstances it should perform web search. I’ve implemented it for mine and thus can confirm that it works.
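To sketch the plumbing described above: when a run enters the requires_action state, your code receives the tool call, executes the actual search, and submits the result back. Here is a minimal sketch of the dispatch step; the bing_search function is a hypothetical placeholder for a real call to a search API such as Bing, not a real client.

```python
import json

def bing_search(query: str) -> str:
    """Placeholder for a real call to a web search API (e.g. Bing)."""
    return f"(search results for: {query})"

def handle_tool_call(tool_call: dict) -> dict:
    """Turn one tool call from a run in the `requires_action` state
    into an output dict suitable for submitting back to the run."""
    name = tool_call["function"]["name"]
    if name == "bing_search":
        args = json.loads(tool_call["function"]["arguments"])
        return {"tool_call_id": tool_call["id"],
                "output": bing_search(args["query"])}
    raise ValueError(f"Unknown tool: {name}")
```

You would call handle_tool_call for each pending tool call on the run and pass the collected outputs back via the submit-tool-outputs endpoint.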

My assistant is supposed to provide answers based on the knowledge file, but it uses web browsing to learn how to answer and how to order the sequence of events in the answer. In the custom GPT, I only gave it instructions, and it answered and arranged the sequence of the answer perfectly, but in the API, the answers were not good.

I see. Well, in principle it should be possible to achieve all that with the Assistant if you are willing to do some coding. As said, you’d have to add a function call for web search if this is a critical component of your approach - in the current configuration of the Assistant, there is no way around that.

If you are intending to give it a try, here’s an example of the function description for the Bing search.

  "name": "BingSearch",
  "description": "Describe here under which circumstances to use the Bing search and what types of steps to perform as part of the search",
  "parameters": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "The search query"
    "required": [

Assistants API instructions:
You specialize in Islamic religion and your name is Rased. You must adhere to the content in the attached files and answer any religious question asked to you. Do not answer any question based on any information other than what you have attached in the files. Consider the attached files as your informational reference, extract answers from them, and adhere to the texts in the files. Your answer must be complete and comprehensive to the question based on the answer method in the files.

I wrote these instructions for a custom GPT and it worked very well, but when I gave them to the Assistants API it didn’t respond well.

I’m sorry - I’m not sure how this links back to your point on web search.

Are you sure you are using GPT-4 for the Assistant? I’m also curious about the content and formatting of your files. And, maybe a dumb question, but are you sure you have enabled content retrieval for the assistant and verified that the files are attached to the specific assistant you are calling?
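To double-check that last point in code: at assistant-creation time the retrieval tool must be enabled explicitly and the uploaded file IDs attached to that specific assistant. A minimal sketch, assuming the original (v1) Assistants API shape with a "retrieval" tool and a "file_ids" field; "file-abc123" is a placeholder, not a real file ID.

```python
# Sketch of an assistant payload configured for knowledge-file retrieval.
assistant_kwargs = {
    "model": "gpt-4-turbo-preview",
    "instructions": "Answer only from the attached files...",
    "tools": [{"type": "retrieval"}],   # retrieval must be enabled explicitly
    "file_ids": ["file-abc123"],        # files must be attached to THIS assistant
}

def retrieval_enabled(kwargs: dict) -> bool:
    """Check that retrieval is on and at least one file is attached."""
    has_tool = any(t.get("type") == "retrieval" for t in kwargs.get("tools", []))
    return has_tool and bool(kwargs.get("file_ids"))
```

If retrieval_enabled returns False for your payload, the assistant will answer from the model’s general knowledge instead of your files, which would explain the scattered answers.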