I wanted to make this post and get everyone’s perspective. A few days ago I was pretty adamant that CI was nothing special, and that the community had already built plugins that enhanced GPT-4 past that level…
But I think I’ve got it wrong, and maybe others do too…
We aren’t supposed to be using Code Interpreter just to make graphs, write code, analyze data, etc… Sure, it can do all those things, but that’s not the underlying utility.
We are being given the opportunity to watch how GPT interprets our requests into code prompts.
I think the future of “prompt engineering” is actually going to be manipulating the details within a standard schema of prompt code blocks.
Basically, always look at the “Show work” feature. It’s telling US how to make code-pipeline prompts. Play with it by changing a few words in your prompts and see how the code changes, and tell me if I’m crazy lol
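To make the idea concrete, here’s a toy sketch of the kind of thing you see in “Show work”: two nearly identical prompts, and the different code a model might emit for each. These snippets are my own illustration, not actual Code Interpreter output.

```python
# Hypothetical illustration of prompt -> code translation.
data = [3, 7, 7, 12, 41]

# Prompt A: "what's the typical value?"
# -> the model might reach for the mean
mean = sum(data) / len(data)

# Prompt B: "what's the typical value, ignoring outliers?"
# -> changing a few words might flip it to the median
sorted_data = sorted(data)
median = sorted_data[len(sorted_data) // 2]

print(mean)    # 14.0
print(median)  # 7
```

Same data, one changed phrase, structurally different code block. That delta is the thing worth studying.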
I agree, and I also love the usage of “blocks”. One doesn’t need to form concrete, or even know how to, to build their castle. Although there will be some strange buildings…
It’s very similar to a Jupyter notebook, which is great. My only gripe is that it uses Python. Understandable, but still.
The next step is creating the logic to connect and reuse these building blocks. If only…
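A toy sketch of what “connecting and reusing blocks” could look like, if each generated code block were wrapped as a small function. All names here are made up for illustration:

```python
# Treat each "code block" as a function, then chain them into a pipeline.
def load_numbers(text: str) -> list[int]:
    return [int(tok) for tok in text.split(",")]

def drop_negatives(nums: list[int]) -> list[int]:
    return [n for n in nums if n >= 0]

def total(nums: list[int]) -> int:
    return sum(nums)

def pipeline(*steps):
    """Compose steps left-to-right into one reusable callable."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

summarize = pipeline(load_numbers, drop_negatives, total)
print(summarize("3,-1,4,-1,5"))  # 12
```

Once blocks have stable inputs and outputs like this, reuse stops being copy-paste and becomes composition.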
Check out @jxnlco on Twitter. He’s doing some CRAZY work with Pydantic. I also had some ideas a while back about plugins and how they affect GPT behavior: GitHub - GlassAcres/Pressure: "Pressure" (psai): Methodolgy and Metrics for External Influence on Base Model Performance. It’s super rough, but it follows the idea that if we can create external programs that can be used as functions to change GPT behavior, then there HAS to be an inherent pattern (even if it’s impossibly complicated) that we can try to glimpse and measure.
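The core idea, as I understand it: make the model fill a typed schema instead of emitting free text, then validate what comes back. Here’s a stdlib-only sketch using dataclasses in place of Pydantic (I can’t speak for his exact API, and the field names and JSON payload are hypothetical):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class ChartRequest:
    chart_type: str  # e.g. "bar"
    x_column: str
    y_column: str

def validate(payload: str) -> ChartRequest:
    """Parse model output and fail loudly if a required field is missing."""
    data = json.loads(payload)
    missing = [f.name for f in fields(ChartRequest) if f.name not in data]
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    return ChartRequest(**data)

# Pretend this JSON came back from a GPT function call
raw = '{"chart_type": "bar", "x_column": "month", "y_column": "sales"}'
req = validate(raw)
print(req.chart_type)  # bar
```

The schema is the “pressure”: it constrains the model’s output into a shape your code can actually consume and measure.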
Either way, I think Code Interpreter is actually a subtle nudge from OpenAI in that direction. “Check out what your natural language prompt looks like translated programmatically!” Tons of utility.