We are using ChatKit with the OpenAI-hosted backend (Agent Builder workflows), with the agent model set to GPT-5.2.
In some assistant responses, we observe unexpected tokens (e.g. filecite, turn4fileX) that resemble internal reference or citation markers, as shown in the image below:
These tokens are rendered verbatim in the final assistant output and are visible to end users. The "Show search sources" option is unchecked in the agent's settings in Agent Builder.
Based on our observations, the issue occurs when the agent retrieves data from files via the File Search tool. It does not happen every time, only occasionally, but when it does, the tokens appear directly in the normal assistant response text.
Our clients are also noticing this in the production chatbot. When these tokens appear, end users perceive them as a clear system error, which negatively affects the overall user experience.
Additionally, I tried switching the agent model to the newly released GPT-5.4 today. The issue is present there as well and, based on our initial observations, it seems to occur even more frequently.

