I am attempting to run a function tool once a piece of information is gathered in the conversation.
The code interpreter is enabled because calculations on data from a CSV are sometimes needed.
I see that gpt-4-0125-preview at times chooses to run this function call in the Python console (with the right variables), and it of course fails. How can this situation be avoided?
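To make the failure mode concrete, here is a minimal sketch of the kind of setup being described: one custom function tool plus `code_interpreter` on the same assistant. The function name and schema are hypothetical stand-ins, not from the original post.

```python
# Hypothetical function tool standing in for "run once info is gathered".
save_info_tool = {
    "type": "function",
    "function": {
        "name": "save_customer_info",  # hypothetical name
        "description": "Store a piece of information gathered in the conversation.",
        "parameters": {
            "type": "object",
            "properties": {
                "field": {"type": "string", "description": "Which field was gathered."},
                "value": {"type": "string", "description": "The gathered value."},
            },
            "required": ["field", "value"],
        },
    },
}

# Both tools enabled side by side -- the configuration where the model
# sometimes writes the function call into the Python sandbox instead.
tools = [{"type": "code_interpreter"}, save_info_tool]

# Passed to the Assistants API, e.g.:
# client.beta.assistants.create(model="gpt-4-0125-preview", tools=tools, ...)
```

With this configuration the model has two places it can "act", and nothing in the tool schema itself stops it from picking the sandbox.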
“Direct the GPT” is poor language. OpenAI has made the abbreviation mean almost nothing.
Since you are specifying the model, you are using the API.
Since you are turning on “code interpreter”, you must be using the assistants framework.
My best guess is that the AI is easily confused by the large amount of injected tool text, combined with pretraining that biases it toward emitting Python. The same behavior shows up in ChatGPT: ask for an image, it writes Python. Ask the AI to do a language task, it writes Python.
Language is injected for code interpreter on top of that pretraining. You can even get Python-style function calls emitted when using plain functions on the Chat Completions API.
The real solution would be to write your own tool definition for python, but you cannot do that with Assistants. I would suggest making your own “calculator” function tool and turning code interpreter off. You can even declare a python tool that is explicitly disabled. Finally, consider going back to the original gpt-4 if you are not using retrieval.
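A hedged sketch of that suggestion, assuming a simple arithmetic use case: a narrow `calculator` function tool replaces `code_interpreter`, and an optional stub tool named `python` is declared purely so that stray Python calls hit a tool you control rather than failing opaquely. All names and schemas here are illustrative, not from the original post.

```python
# Narrow replacement for code interpreter: a calculator function tool.
calculator_tool = {
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression and return the result.",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Arithmetic expression, e.g. '2 * (3 + 4)'.",
                },
            },
            "required": ["expression"],
        },
    },
}

# Optional decoy: a 'python' tool whose handler you make always refuse,
# so any stray python calls land somewhere you control.
python_stub_tool = {
    "type": "function",
    "function": {
        "name": "python",
        "description": "Disabled. Do not use; call 'calculator' instead.",
        "parameters": {"type": "object", "properties": {}},
    },
}

# Note: no {"type": "code_interpreter"} entry at all.
tools = [calculator_tool, python_stub_tool]

# e.g. client.beta.assistants.create(model="gpt-4", tools=tools, ...)
```

On the receiving side, your run loop would evaluate `calculator` calls yourself and return a fixed refusal string for any `python` call, which tends to steer the model back to the declared tools.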