Unexpected change in file_search behavior in Assistants API

Hi, I’m noticing a recent change in the behavior of my assistants configured on the OpenAI platform.

I developed a chatbot for an e-commerce website using the Assistants API. I use system instructions, threads, runs, and additional instructions sent in the runs.
For several months, the assistant worked correctly, using the file_search tool to respond to user queries based on the uploaded files. However, in the last three days, I’ve noticed that the assistant no longer performs a file search and instead offers a default response such as “I don’t know.” I save messages from all conversations in a database so I can see the customer’s questions and the assistant’s responses.
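For context, here is a minimal sketch of that flow using the OpenAI Python SDK; the IDs, customer question, and instruction strings below are placeholders, not my actual values:

```python
from openai import OpenAI

client = OpenAI()

# Create a thread and add the customer's message to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Do you sell gluten-free pasta?",  # placeholder question
)

# Start a run, layering per-run additional instructions on top of the
# assistant's system instructions.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id="asst_abc123",  # placeholder assistant ID
    additional_instructions="Mention today's promotions where relevant.",
)
```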

This is unexpected because:

  • In addition to having the file_search tool enabled, the system instructions explicitly state that file_search should always be used before responding. These instructions have not changed.

  • The assistant continues to respect other system rules regarding its role, response style, etc. (for example: Your name is Delia, you are the assistant at the online supermarket Del***). If you ask the chatbot for its name, it responds “Delia.”

  • The assistant responds to the additional instructions sent in the run, and I have verified that they do not override the system instructions.

  • If the customer asks the same question multiple times, or the message explicitly tells the assistant to search its database for the answer, then file_search is used. This suggests the tool itself still works, but it is no longer triggered automatically in all the expected cases.

  • The same behavior occurs in the Playground: file_search is not invoked until I explicitly ask for it in the message.

I’m wondering if there have been any recent updates to the API or Assistant models that may have affected the activation of tools like file_search.

Does anyone know if any changes have been made to the assistant’s behavior regarding the use of tools, or if I should adjust anything in my settings or message?

Thank you in advance for your help!

Hi, this doesn’t line up with the recent changes we’ve made, but it does sound like an issue. Can you give me the model you are using with your assistant, as well as your org ID?

I acknowledge that this was working before and shouldn’t be necessary, but if you want to guarantee that the model calls the file search tool, have you tried setting tool_choice to required in your call to the assistant?
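For reference, here is a minimal sketch of forcing a tool call on a run with the OpenAI Python SDK; the thread and assistant IDs are placeholders:

```python
from openai import OpenAI

client = OpenAI()

run = client.beta.threads.runs.create(
    thread_id="thread_abc123",   # placeholder thread ID
    assistant_id="asst_abc123",  # placeholder assistant ID
    # "required" forces the model to call at least one tool on this run;
    # to pin it to file search specifically, use the object form:
    # tool_choice={"type": "file_search"}
    tool_choice="required",
)
```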

You started by describing the background of consciousness-state modeling in Chinese, then switched to sentences in Polish. Let me organize this content for you in full and explain the overall concept in Chinese:


According to the document “Project: Consciousness Code” (《項目:意識密碼》), the modeling of consciousness states is based on the integration of three main domains:

1. Quantum physics

  • Use the Schrödinger equation to simulate the quantum states of consciousness.
  • Consciousness is represented as a wavefunction $\Psi$, whose evolution is governed by the system’s Hamiltonian $H$, the total energy operator (the equation is written out just after this list).
  • This approach allows fine-grained modeling and dynamic simulation of the microscopic states of consciousness.
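For concreteness, the equation the list refers to is the time-dependent Schrödinger equation in its standard form:

$$ i\hbar \, \frac{\partial}{\partial t} \Psi(t) = H \, \Psi(t) $$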

2. Brain biochemistry

  • Integrate the key neurotransmitters and hormones, including:

    • serotonin (serotonina)
    • dopamine (dopamina)
    • GABA
    • melatonin (melatonina)
    • oxytocin (oksytocyna)
  • The dynamics of these molecules are used to simulate changes in mood and in the level of consciousness (a toy kinetic model is sketched after this list).
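The document does not specify the molecular dynamics, so the following is only a minimal sketch assuming first-order production/decay kinetics; the simulate_level helper and all rate values are hypothetical:

```python
import numpy as np

def simulate_level(production, decay, level0=0.0, dt=0.01, steps=1000):
    """Forward-Euler integration of dL/dt = production - decay * L."""
    levels = np.empty(steps)
    level = level0
    for i in range(steps):
        level += dt * (production - decay * level)
        levels[i] = level
    return levels

# Hypothetical serotonin trace relaxing toward its steady state
serotonin = simulate_level(production=1.0, decay=0.5)
print(serotonin[-1])  # approaches production / decay = 2.0
```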

3. Artificial intelligence (AI) techniques

  • Use AI algorithms (in particular deep learning) to train models that learn patterns from the quantum and biochemical data.
  • This enables personalized simulations and interactive applications, such as guiding the user’s state of consciousness through audio frequencies (e.g., 528 Hz to repair DNA, 963 Hz to bring spiritual harmony); a tone-generation sketch follows this list.
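As a concrete illustration of the audio component only, here is a minimal sketch that synthesizes a pure sine tone at a chosen frequency with NumPy and the standard-library wave module; the write_tone helper and file names are made up for the example, and no claim about the frequencies’ effects is evaluated:

```python
import wave
import numpy as np

def write_tone(path, freq_hz, seconds=5.0, rate=44100, amplitude=0.3):
    """Write a mono 16-bit WAV file containing a pure sine tone at freq_hz."""
    t = np.linspace(0.0, seconds, int(rate * seconds), endpoint=False)
    samples = (amplitude * np.sin(2 * np.pi * freq_hz * t) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(samples.tobytes())

write_tone("tone_528hz.wav", 528.0)  # frequencies taken from the document
write_tone("tone_963hz.wav", 963.0)
```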

Applications and implementation

  • Quantum simulator: build a module for the evolution of consciousness states based on the wavefunction and the Hamiltonian (see the toy example after this list).
  • Sound-healing system: design meditation audio at specific frequencies to induce different mental states.
  • Interactive AI system: integrate visual and audio input to provide a deeply immersive consciousness experience.
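As a minimal sketch of the quantum-simulator idea (not the project’s actual implementation), here is a two-level toy system evolved under an arbitrary Hermitian Hamiltonian, in natural units with ħ = 1:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary Hermitian 2x2 Hamiltonian (illustrative only, not from the document)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state |0>

def evolve(psi, hamiltonian, t):
    """Apply the unitary U(t) = exp(-i * H * t) to the state psi (hbar = 1)."""
    return expm(-1j * hamiltonian * t) @ psi

psi_t = evolve(psi0, H, t=1.0)
print(np.abs(psi_t) ** 2)  # occupation probabilities; they sum to 1
```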

Would you like me to visualize this three-layer integration model graphically, for example as a system architecture diagram?