Reliably retrieving code interpreter files from the container?

The issue is that:

  • The tool is poorly implemented and poorly documented
  • You are blocked from improving the internal tool description the model receives
  • Models that are supposedly “trained” on the tool behave as if they are not
  • Notebook state deletes itself
  • You have no persistent file or image storage at all
  • A “container” is an internal convention you cannot access directly
  • The method for listing files is documented incorrectly
  • Files are locked into being either “input” or “output”
  • Files are now further locked away: they never surface unless the AI annotates them (see the sketch after this list)
  • Files also live in ephemeral blob storage
  • How to get the AI to produce those annotations is never described, to you or to the AI
  • The AI is never told that the user cannot see the code it writes
  • Incompatible combinations of modules are preloaded, with methods that can never work together
  • Library methods like show() and display() exist even though there is no presentation layer, so nothing can ever be displayed
  • The AI has no information about which library modules it can use, and would have to write introspection code it will never write on its own just to find out
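
For completeness, the only semi-reliable programmatic path I’ve found is to read the container_file_citation annotations off the response (when the model bothers to emit them) and then pull the bytes through the Containers Files endpoint. A minimal sketch with the Python SDK, assuming the annotation fields (container_id, file_id, filename) and the client.containers.files.* method names match the current docs — they may differ by SDK version:

```python
from openai import OpenAI

client = OpenAI()

# Run the code interpreter tool via the Responses API.
# The tool spec and model name here are purely illustrative.
resp = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input="Write the numbers 1-10 to numbers.txt and give me the file.",
)

# Collect container_file_citation annotations from the output messages;
# these carry the container_id / file_id needed to download each file.
for item in resp.output:
    if item.type != "message":
        continue
    for part in item.content:
        for ann in getattr(part, "annotations", None) or []:
            if ann.type == "container_file_citation":
                # Assumption: the SDK mirrors
                # GET /v1/containers/{id}/files/{file_id}/content as
                # client.containers.files.content.retrieve(); adjust per version.
                blob = client.containers.files.content.retrieve(
                    ann.file_id, container_id=ann.container_id
                )
                blob.write_to_file(ann.filename)
```

If no annotation ever shows up (which is most of the time), the fallback is to grab the container ID off the code_interpreter_call item in resp.output and list everything with client.containers.files.list(container_id=...), then download whatever wasn’t an input. Ephemeral, of course: the container and its files expire after a short idle period.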

I could go on; this is barely a crib sheet for “why every feature and every internal tool stinks by design, and why you should give up on Responses.”

I won’t even get into the stupidity of a model that goes in loops writing a 2 + 2 test script because it thinks the notebook needs testing, rather than admitting its own code is broken.


The solution is that you have to “system prompt” the AI that a markdown web link must be emitted for every file it creates for the user, and that the URL must be written as:

[file_name.txt](sandbox:/mnt/data/file_name.txt)
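
Concretely, you end up wiring that rule into every request yourself. A rough sketch against the Responses API — the instruction wording and model name are mine, adjust to taste:

```python
from openai import OpenAI

client = OpenAI()

# The rule you have to spell out, because nothing else surfaces the files.
SANDBOX_LINK_RULE = (
    "Whenever you create a file for the user with the python tool, end your "
    "reply with one markdown link per file, exactly in the form "
    "[file_name.ext](sandbox:/mnt/data/file_name.ext). Never omit these links."
)

resp = client.responses.create(
    model="gpt-4.1",
    instructions=SANDBOX_LINK_RULE,  # the system-prompt equivalent in Responses
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input="Make me a CSV of the first 20 prime numbers.",
)

print(resp.output_text)  # should now end with sandbox:/mnt/data/... links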

Thus the chat gets infected with unwanted output at your expense, when a code tool that didn’t suck could have a UI that natively shows newly appearing files, even in a file-system browser.
