My list:
- Search bar on ChatGPT history (ok fine, not an API feature, maybe with Assistants?)
- o1 release?
- I don't think AVM or tts-2 will be out; it would certainly be a surprise
- sora-preview would be nice
- An option to view how much a request would cost (I know there's a way to do this, but an improvement on it would be nice), especially on the o-models (o1-mini, o1, o1-preview); there's a rough cost-estimate sketch after this list
- Web search would also be nice, but I'm not sure that's something they'll want to add to the API due to competitors
- Improvements to the Assistants pipeline (I don't use it much, but I suspect there will be many updates on this)
- Lower the price of models again?
- SearchGPT for everyone would make my day; I know some users were selected to test it around the same time as AVM, so maybe (ok fine, probably not an API thing, but it would certainly create a little bit of excitement)
- More image generation functionality via the API?
- Memory? I mean, most of us are already using some database for this (a minimal sketch of what I mean is after this list)
- An "auto-gpt" mode? It would certainly make us all spend more tokens, but I feel like there would need to be some IDE plugin to work with it (Replit and Devin are banking on this, so I guess it would make a lot of devs make happy noises)
- GPT generation via the API? If hosted on OpenAI servers to deliver zero friction, that would be amazing, but I don't think the metrics on GPTs are strong enough for this to be entertained
- Improvements to abstract away more of the fine-tuning process?
- Simple commands against the API to view models and organization/personal metrics? For example, running "openai models" in the terminal (yes, I'm aware we can do this ourselves with a little bit of code and some aliases, like the sketch after this list, but still, it would be cool to see something like this as a natural evolution of the API's maturity)
- Parsing of prompts longer than token limitations? Similar to what ChatGPT does (the "continue" button) but native to the API (there's a chunking sketch after this list too)
- Lower the costs and increase the rate limits; everyone is always happy when this happens
- No breaking changes (as in, no changing from Completions to Chat Completions like we had with gpt-4)
- Streaming where there is currently no output streaming (an example of what streaming already looks like on Chat Completions is after this list)
- A toggleable progress bar for when you send a request; right now, when you send a prompt it just hangs for a few seconds before giving you back the output (my spinner workaround is also sketched after this list)
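
Since a few of the items above are things we already hack together client-side, here are the rough sketches I had in mind. They're all Python and all illustrative rather than official; any names, prices, or paths in them are assumptions. First, the cost-estimate one: count tokens locally with tiktoken and multiply by a hand-maintained price table. The prices below are placeholders, and this ignores the hidden reasoning tokens that o-models bill as output, which is exactly why I'd like a built-in option.

```python
# Rough request-cost estimate: count input tokens locally and multiply by a
# hand-maintained price table. Prices are PLACEHOLDERS per 1M tokens, not
# official numbers, and reasoning tokens on o-models are not accounted for.
import tiktoken

PRICES_PER_1M = {  # model: (input USD, output USD) -- placeholder values
    "gpt-4o-mini": (0.15, 0.60),
    "o1-mini": (3.00, 12.00),
}

def estimate_cost(model: str, prompt: str, expected_output_tokens: int = 500) -> float:
    enc = tiktoken.get_encoding("o200k_base")  # encoding used by the 4o/o1 families
    input_tokens = len(enc.encode(prompt))
    input_price, output_price = PRICES_PER_1M[model]
    return (input_tokens * input_price + expected_output_tokens * output_price) / 1_000_000

print(f"~${estimate_cost('o1-mini', 'Summarize the attached report.'):.6f}")
```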
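
For the memory item, this is the kind of homegrown thing I mean: persist turns in SQLite and prepend the most recent ones to the next request. The table name, schema, and limit are made up for illustration.

```python
# Bare-bones "memory": save conversation turns in SQLite and prepend the most
# recent ones to the next request's message list. Schema and limit are arbitrary.
import sqlite3

db = sqlite3.connect("memory.db")
db.execute("CREATE TABLE IF NOT EXISTS turns (role TEXT, content TEXT)")

def remember(role: str, content: str) -> None:
    db.execute("INSERT INTO turns VALUES (?, ?)", (role, content))
    db.commit()

def recall(limit: int = 10) -> list[dict]:
    rows = db.execute(
        "SELECT role, content FROM turns ORDER BY rowid DESC LIMIT ?", (limit,)
    ).fetchall()
    return [{"role": role, "content": content} for role, content in reversed(rows)]

remember("user", "My favorite language is Python.")
messages = recall() + [{"role": "user", "content": "What is my favorite language?"}]
# messages can now be passed straight to a chat completion request
```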
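
For the "openai models" item, this is the little-bit-of-code-and-an-alias version, assuming the official openai Python package and an OPENAI_API_KEY in the environment:

```python
# Tiny stand-in for a hypothetical `openai models` command: print the model IDs
# your key can see. Assumes `pip install openai` and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
for model in client.models.list():
    print(model.id)
```

Drop that in a file and add a shell alias and the models half is basically there; the organization/personal metrics half is the part I'd really like to see exposed.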
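
For the long-prompt item, the client-side chunking we all end up writing, splitting on token counts with tiktoken; the 4,000-token chunk size is an arbitrary example.

```python
# Split an over-long input into token-bounded chunks so each one can be sent
# as its own request. The 4000-token limit is an arbitrary example.
import tiktoken

def chunk_by_tokens(text: str, max_tokens: int = 4000) -> list[str]:
    enc = tiktoken.get_encoding("o200k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

long_text = "lorem ipsum dolor sit amet " * 5000  # stand-in for a huge prompt
for i, chunk in enumerate(chunk_by_tokens(long_text)):
    print(f"chunk {i}: {len(chunk)} characters")
```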
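
For the streaming item, this is what streaming already looks like on Chat Completions today; the wish is simply to have the same thing everywhere output is produced.

```python
# Streaming with Chat Completions as it exists today: print tokens as they
# arrive instead of waiting for the whole response.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about DevDay."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```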
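
And for the progress-bar item, my current workaround: a spinner on a background thread while the blocking call runs. Nothing OpenAI-specific about it; the function you pass in is whatever request you already make.

```python
# Poor man's progress indicator: animate a spinner on a background thread
# while a blocking call (such as an API request) is in flight.
import itertools
import sys
import threading
import time

def with_spinner(fn, *args, **kwargs):
    done = threading.Event()

    def spin():
        for ch in itertools.cycle("|/-\\"):
            if done.is_set():
                break
            sys.stdout.write(f"\rwaiting {ch}")
            sys.stdout.flush()
            time.sleep(0.1)
        sys.stdout.write("\r" + " " * 12 + "\r")  # clear the spinner line

    spinner = threading.Thread(target=spin)
    spinner.start()
    try:
        return fn(*args, **kwargs)  # e.g. client.chat.completions.create(...)
    finally:
        done.set()
        spinner.join()

with_spinner(time.sleep, 2)  # stand-in for a slow API call
```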
Honestly, the API has progressed so much since gpt-4 (similar to going from python2 to python3) that I've sort of gotten used to it, and I feel like my list is "weak".
Can’t wait! I’ll see you all there!