Here’s what’s happening:
- The python internal tool instructions do not tell the AI how to produce links that generate annotations.
- The reasoning models lack quality post-training on generating annotation-style links for sandbox mount point files.
Solution

If you are using just the python tool, with no other tools or functions, you can use your first system message to “extend” the tool definition, placed before your own instructions so the model can plausibly read the addition as part of the same tool (a sketch follows the list):
### python tool usage notes
- python has hundreds of useful preinstalled modules;
- stdio, print, logs, .show(), etc. are all for AI consumption only;
- the user can only receive *presented* generated file output as a deliverable, with a markdown file link or markdown image link (URL sandbox:/mnt/data/...);
- use `python` freely for math, calculations, and tests, for reliable answers;
- state persistence: 20 minutes of user inactivity
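
As a concrete example, here is a minimal sketch of sending such a block ahead of your own instructions, assuming the Responses API with the code_interpreter tool; the model name, prompt text, and MY_INSTRUCTIONS are placeholders, not a definitive implementation:

```python
from openai import OpenAI

client = OpenAI()

# The tool usage notes from above, sent at the start of the system-level
# instructions so they read as a continuation of the python tool definition.
PYTHON_TOOL_NOTES = """### python tool usage notes
- python has hundreds of useful preinstalled modules;
- stdio, print, logs, .show(), etc. are all for AI consumption only;
- the user can only receive *presented* generated file output as a deliverable,
  with a markdown file link or markdown image link (URL sandbox:/mnt/data/...);
- use `python` freely for math, calculations, and tests, for reliable answers;
- state persistence: 20 minutes of user inactivity
"""

# Your own instructions come after the tool notes (placeholder text).
MY_INSTRUCTIONS = "You are a data assistant. Deliver every generated file as a link."

response = client.responses.create(
    model="gpt-4.1",  # assumption: any model with code interpreter access
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    instructions=PYTHON_TOOL_NOTES + "\n" + MY_INSTRUCTIONS,
    input="Make a CSV of the first 20 squares and give me a download link.",
)
print(response.output_text)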
---
If you are passing a collection of tools and functions so that “python” is not the last tool defined, especially your own functions invoked via parallel multi_tool_use, you’ll need to express the fix within your own instructions rather than as a tool extension, for example:
# Responses
## python
- when producing a response for the user after generating python Jupyter notebook sandbox files, you must provide a markdown file link, using the URL style `sandbox:/mnt/data/...`
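
A minimal sketch of the multi-tool case, again assuming the Responses API; the get_weather function tool is hypothetical and stands in for your own functions:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical function tool standing in for "your own functions".
get_weather = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# The response rules from above, placed in your own instructions rather than
# appended to a tool definition.
RESPONSE_RULES = """# Responses
## python
- when producing a response for the user after generating python Jupyter
  notebook sandbox files, you must provide a markdown file link,
  using the URL style `sandbox:/mnt/data/...`
"""

response = client.responses.create(
    model="gpt-4.1",  # assumption: any model with code interpreter access
    tools=[
        get_weather,
        {"type": "code_interpreter", "container": {"type": "auto"}},
    ],
    instructions=RESPONSE_RULES,
    input="Chart y = x**2 and give me the image file.",
)
print(response.output_text)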
You can adapt these ideas to what actually performs for you.
What OpenAI must do is allow developers to change the internal text of all tools in Assistants and Responses, since those built-in instructions are non-performative and general-purpose. (The injection of system messages with counter-intuitive instructions right after internal tool outputs also must stop.)