Is there a way to understand what the assistant does and why, i.e. on what reasoning it bases its answer? I have uploaded a manual to the store used by file search, and I have defined a function. If I send { "input": "31/05/2025 is a valid date?" }, I imagine it will read the manual in the store, where it will find "the date must be greater than the last accounting closing date", and since the system instructions say "to respond to the user you must consult the manual in the store in combination with the functions", it will then use the function to obtain the last closing date. But these are only my hypotheses. I would like to see a debug trace that shows me how it "thinks/acts", so that if something doesn't work I can fix it.
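For context, this is roughly what I have set up (a simplified sketch assuming the Responses API, since I send { "input": ... }; the model name, the vector store ID vs_123 and the get_last_closing_date function are stand-ins for my real ones):

```python
from openai import OpenAI

client = OpenAI()

# Simplified sketch of my setup: a vector store with the manual for
# file_search, plus one function the model can call. IDs and names
# are placeholders.
response = client.responses.create(
    model="gpt-4o",
    instructions=(
        "To respond to the user you must consult the manual in the store "
        "in combination with the functions."
    ),
    input="31/05/2025 is a valid date?",
    tools=[
        {"type": "file_search", "vector_store_ids": ["vs_123"]},
        {
            "type": "function",
            "name": "get_last_closing_date",
            "description": "Returns the last accounting closing date (DD/MM/YYYY).",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
    ],
)
print(response.output_text)
```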
A prompt (instructions) is different from a ‘manual’ added as a file. Even if the manual is full of ‘instructions’, it will not reliably be treated as a prompt to act on. How long is the manual? If it really is all instructions, you might consider just adding the full text to the instructions (if you don’t mind the token usage).
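If you go that route, it is as simple as something like this (a sketch, assuming the Responses API; "manual.txt" and the model name are placeholders, and it only makes sense if the manual fits comfortably in the context window):

```python
from openai import OpenAI

client = OpenAI()

# Read the full manual and pass it directly in the instructions instead of
# relying on file_search to surface the relevant rules at query time.
manual_text = open("manual.txt", encoding="utf-8").read()

response = client.responses.create(
    model="gpt-4o",
    instructions="Follow these accounting rules when answering:\n\n" + manual_text,
    input="31/05/2025 is a valid date?",
)
print(response.output_text)
```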
No, that is not an available feature: OpenAI does not expose the model's internal reasoning. If you are using a reasoning model in the web app you should see some kind of reasoning output, but I'm not sure any of those reasoning steps are available via the API.
Presumably you are saying that you have uploaded a document through some interface (a vector store?), that you are wiring up file_search and your function correctly, and that when that tool is called, the document from the store is provided to the LLM.
If this is the case, and you can confirm that the document is correctly retrieved during the tool call and that the LLM is receiving it, then try adding something like "you must reference the [manual-doc-name/store ID]" to your input or to your developer/system instructions.
So if you can confirm that:
- The doc is present in the store
- The tool call pulls it and returns the doc to the LLM
Then most likely you just have to adjust your instructions. The sketch below shows one way to check the second point, and to see which function calls the model actually made, straight from the API response.
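Something like this might help, assuming you are on the Responses API: the response contains the tool calls as output items, so you can at least see whether file_search ran (and with which queries) and whether your function was requested, even though the model's internal reasoning is not exposed. The store ID vs_123, the get_last_closing_date function, the instruction wording and the include flag are assumptions; adapt them to your setup and verify the flag against the current docs.

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    instructions=(
        "You must reference the accounting manual in the vector store via "
        "file_search, and you must call get_last_closing_date before deciding "
        "whether a date is valid."
    ),
    input="31/05/2025 is a valid date?",
    tools=[
        {"type": "file_search", "vector_store_ids": ["vs_123"]},
        {
            "type": "function",
            "name": "get_last_closing_date",
            "description": "Returns the last accounting closing date.",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
    ],
    # I believe this also returns the chunks that file_search retrieved;
    # check the docs for the exact include value.
    include=["file_search_call.results"],
)

# Walk the output items: this is the closest thing to a trace of what the
# model actually did (searches made, functions requested, final message).
for item in response.output:
    if item.type == "file_search_call":
        print("file_search queries:", item.queries)
    elif item.type == "function_call":
        print("function requested:", item.name, "args:", item.arguments)
    elif item.type == "message":
        print("assistant said:", response.output_text)
```

If no file_search_call item ever shows up, that is your signal the model is not consulting the manual at all, and the instruction wording is the first thing to change.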