I’m trying to write a custom GPT to help people with the SymPy Python library. The code interpreter is awesome because it means that ChatGPT can actually execute the code that it writes.
However, as of a recent update, the code interpreter hides this code, which makes it very difficult to use it for a code-learning GPT. There is a small [^_] icon at the end, but it's hard to see, and it also doesn't seem to show all of the code (the old UI for this was WAY better).
There also seems to be a bug where the [^_] only shows the first thing the code interpreter executed. See https:// chat.openai .com/share/b7947b5f-8cf4-434a-a3fe-b07552092d1b (sorry for the obfuscation, it won't let me post links). It's impossible to see what the actual SymPy code it wrote was. Incidentally, if I open that same URL in a private tab, it shows me the old UI, where I can click the buttons in the middle of the conversation to see the code (but even then, I would prefer for this to be expanded by default for this GPT).
I tried to fix this in my prompt, but I couldn't figure out how to make the GPT output the same code separately as a Markdown code block and then run it again in the interpreter. If someone knows how to make it do this, let me know.
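For concreteness, here is roughly the kind of instruction I have in mind for the GPT's configuration (just a sketch of the behavior I'm after, not something I've gotten to work reliably):

```
Whenever you use the code interpreter, first show the user the exact code you
are about to run as a fenced Python Markdown code block, with a brief
explanation. Then run that same code in the code interpreter and report the
output. Never run code without also showing it as a Markdown block first.
```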
It would really help if there were an option for GPTs to always show the code interpreter's code, since that is kind of the whole point of this GPT.
Separately from this, I would also appreciate a user option to always do this for every ChatGPT code interpreter conversation, not just certain custom GPTs. I am a Python programmer, and I generally want to see what the code that ChatGPT writes looks like, even if I'm not necessarily going to do anything with it. I would also like to put something like “whenever you write Python code, test it out in the code interpreter” in my custom instructions, but then I run into the same problems I described above. Although I suppose if it were at least an option for custom GPTs, I could just make my own GPT that does this for day-to-day usage.