Crazy idea for reducing token consumption on coding examples

Hey!

diegoUSDZ here! I’m a developer specializing in AR, and I’ve been thinking about what the interface of ChatGPT could look like in the future.

I was thinking of a way to reduce the token count in ChatGPT’s answers. I envision a future where we have mixed-reality glasses and ChatGPT connected to a spatial interface.

Right now the chat is text-based, but later it will be text- and voice-based, meaning the interface can be extended to two channels of immersion: visual and audible.

In that context, a spatial interface for a future ChatGPT could be the same interface as now, but when you ask it to find errors in your code, instead of printing the full code it would highlight, in an AR layer on top of the phone or computer screen, the area to be changed.

In other words, instead of wasting resources on printing the code surrounding the suggestion, the software could show the code suggestion on a digital layer through the AR glasses.

Ideally we would augment the chat beyond the current interface and anchor buttons that let you interact with ChatGPT verbally and ask about a line of code.

Maybe the ChatGPT interface could be extended to spatial computers like HoloLens / Vision Pro.

What y’all think?

Useful?

2 Likes

Hi and welcome to the forum!
Nice idea!
Today it sometimes makes sense to specifically request only the parts of the code that have changed instead of waiting for a complete rewrite. This is a lot faster.
You could likely build an intermediate layer and then perform the highlighting as you imagined.
But I haven’t done any AR yet.
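A minimal sketch of what that intermediate layer could do, assuming the model returns only the corrected code rather than a full rewrite: `find_highlight_ranges` is a hypothetical helper (not an existing API) that diffs the original file against the revised version to find which line spans a highlighting layer — AR or otherwise — should mark.

```python
import difflib

def find_highlight_ranges(original: str, revised: str):
    """Return 0-based (start, end) line ranges in the original text
    that differ from the revised text, i.e. the spans a highlighting
    layer should mark as changed."""
    matcher = difflib.SequenceMatcher(
        a=original.splitlines(), b=revised.splitlines()
    )
    ranges = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":           # 'replace', 'delete', or 'insert'
            ranges.append((i1, i2))  # span of changed lines in the original
    return ranges

# Toy example: the model's suggestion fixes one line of a buggy function.
original = "def add(a, b):\n    return a - b\n"
revised  = "def add(a, b):\n    return a + b\n"
print(find_highlight_ranges(original, revised))  # -> [(1, 2)]
```

The same line ranges could then be fed to whatever renderer you have, whether that is an AR overlay or an editor decoration.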

2 Likes

Welcome to the OpenAI community @diegoUSDZ

Why wait to put on AR glasses when you could do that now in VS Code by simply writing an extension?

1 Like

I don’t get the point here. The GPT series is a transformer-based model, which basically just predicts the next token from the previous ones. Your idea is more like a UI design at a layer waaaay higher than the OpenAI API. You can definitely implement it, but the transformer model still has to output whatever it outputs now.