With the Assistants API an option, I’m just wondering what the long-term plan is going to be for pricing. You can imagine the input/output costs alone are going to be hefty in the long run. Any forecasts or thoughts on whether we can expect a longer-term price drop, or is the plan ultimately to keep things gated within the GPT Store to manage lower prices?
The Assistants API and Custom GPTs in the GPT Store, while similar in many ways, are altogether different products, with different use cases and different target customers.
I have zero inside information about any planned price drops, but if you look at OpenAI’s pricing history across their models, they’ve dropped prices fairly dramatically several times in the last year or so. I think it’s fair to assume that will continue, eventually.
Another thing to consider: the price of a unit of compute (however you decide to measure it) has historically halved every 2–4 years. So you’re looking at roughly an order-of-magnitude drop over a decade. Prices will come down… eventually.
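As a back-of-the-envelope sketch of that halving claim (the `halving_years` values are just the 2–4 year range from above, not anything official):

```python
def cost_factor(years: float, halving_years: float) -> float:
    """Fraction of today's unit-compute price remaining after `years`,
    assuming the price halves every `halving_years` years."""
    return 0.5 ** (years / halving_years)

# Aggressive end of the range: halving every 2 years over a decade.
print(cost_factor(10, 2))  # 0.03125 -> about 32x cheaper
# Conservative end: halving every 4 years over a decade.
print(cost_factor(10, 4))  # ~0.177  -> about 5-6x cheaper
```

So "an order of magnitude over a decade" sits roughly in the middle of that 5×–32× band.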
The truth is, someday there will be children’s toys running a local language model more powerful than GPT-4, with better TTS than exists today, because four things will happen:
- The cost of computing will continue to drop.
- People will continue to have amazing new ideas.

  All the computing power in the world couldn’t make an LLM as powerful as even GPT-3.5 without the transformer, an idea that is less than seven years old. The transformer is a great idea, but it’s not the best idea; that one (probably) hasn’t been had yet. Someone alive today is going to write a paper, probably within the next five years, that will make things possible in NLP that simply would not be possible without it.
- The amount of data in the world is increasing exponentially.

  As we create more and more data, there are people dedicated to building tools and processes to refine and optimise this data for use in training models. So, models of the future will have more and better data to train on than those that came before. We’re still in the early days of learning just how important high-quality data is; we will be able to train smaller models than GPT-4, on less data than its training corpus, and still get better results in the end.
- We’re continually developing and improving ancillary tools to augment language models.

  By not demanding LLMs be magical creatures that do everything themselves, and instead building systems around them to augment and extend their capabilities, they become superpowered. As we continue to learn more about the strengths and weaknesses of LLMs, the ecosystem of tools that scaffold them to new heights will explode.
You must remember, these are early days. GPT-4 was ten months old yesterday!
I very much doubt GPT-4 will still be king of the hill in 2025; I wouldn’t even be surprised if some ~70-billion-parameter open-source model released sometime in 2024 outperforms it in almost every way.
I guess my point is that it seems somewhat premature to talk about long-term pricing plans for a product that will probably be obsolete in a year, because everyone, including OpenAI, will have moved on to something better.
tl;dr: ¯\_(ツ)_/¯