Hi OpenAI team,
This week’s release of GPT-Vision (the full multimodal GPT-4) is phenomenal, and I’ve had some fun tooling around with it for app ideas and general productivity. However, there’s one massive elephant in the room: an app that still needs to be built. I would be very willing to pay for a front-end code interpreter with GPT-Vision.
One thing Code Interpreter changed about coding with GPT-4 was its ability to check its work against a real machine and automatically see the output. With this week’s release of GPT-Vision, I can’t help but think a code interpreter for HTML would be a massive help for front-end developers: GPT could inspect its own work visually once it’s done running the code, which I imagine would increase its accuracy.
I know you can prompt in a screenshot, but on a lengthy project, taking 50 screenshots over the course of a chat gets tiresome, and I find Code Interpreter is better at the actual coding tasks across the board.
I would love it if you would consider this! Thanks as always.
P.S. That text-to-speech API can’t come soon enough!