OpenAI’s lack of respect towards its developer community.

First of all, I’m just a small web developer, nothing special compared to the OpenAI scientists who spent years creating all these wonderful AIs in the best way possible, so I thank them for that.

Today, a year after the release of DALLE and CHATGPT, we developers find ourselves caught in a net that mixes us with ordinary users. Let me explain: OPENAI blocks the potential of these AIs in every way possible, and we’re stuck hitting a wall, going round in circles with recurring problems. Is DALLE still limited to one image in its API as well? With CHATGPT, every four hours, there’s a block, or we have to wait two hours to continue our work.

Then there’s the custom GPT that uses our tokens to create them, which in the meantime doesn’t understand the instructions, so we have to start over and over again until we finish all our tokens and wait again to start customizing them.

Now let’s move on to the API. So, OPENAI gives us an API that allows us to create a website to share with our users. We can put ChatGPT, Assistants, DALLE… But who would come to a site that uses technology that already exists elsewhere and is known to everyone? No one. Especially since the assistants are not as good as CHATGPT because there is no real-time response.

There is no assistant that is merged with DALLE, so everything has to be created anew, even though it already exists on OPENAI. Why offer APIs that are worse than what already exists? No one will use the APIs integrated by sites because these APIs are worse than on OPENAI! So I see the point of offering APIs that will always be less performant.

If it were up to me, I would have given developers access to the same technology, which would have opened many more doors for innovation than making us waste our time with less performant APIs.

Last topic, the cost of inputs and outputs. We pay for every input and output of the API per user. If we have 1 million users, can you imagine the price we would have to pay for each user? Again, it doesn’t make sense, and you’re not aware of the mess you’re creating with your business plan, which will cost an arm and a leg for a developer who is starting out. Of course, the big firms are winning, but us small ones are losing!

That’s my rant! In the end, you don’t understand technological innovation
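To make the cost point concrete, here is a quick back-of-envelope sketch; every figure in it (prices, user counts, request sizes) is an assumption for illustration, not an actual OpenAI rate.

```python
# Back-of-envelope cost sketch for the "per input/output" pricing point above.
# All numbers here are assumptions for illustration, not OpenAI's actual rates.

PRICE_PER_1K_INPUT_TOKENS = 0.01    # assumed $/1K prompt tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.03   # assumed $/1K completion tokens

def monthly_cost(users, requests_per_user, input_tokens, output_tokens):
    """Estimate the monthly API bill for a site that proxies every user request."""
    cost_per_request = (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return users * requests_per_user * cost_per_request

# Example: 1,000,000 users, 10 requests each, ~500 tokens in / ~500 tokens out per request.
print(f"${monthly_cost(1_000_000, 10, 500, 500):,.2f}")  # -> $200,000.00
```

Even modest per-request costs multiply quickly once every user interaction is proxied through the API.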
Why not charge for use of your tool if it calls the API, a paid service?
If OpenAI is building a wheel, build a car, not a different colored wheel.
When it comes to the API, it’s best to think of OpenAI as a token vendor. How you resell those tokens is up to you, your product, and the ingenuity of the team building the tool that makes the API calls.
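A minimal sketch of that "token vendor" framing, assuming the current OpenAI Python SDK; the model name, markup, and in-memory ledger are placeholders for illustration, not a recommended design.

```python
# Minimal sketch of the "token vendor" idea: wrap the API call, record the tokens
# each of your users consumes, and bill them at your own markup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MARKUP = 1.5       # assumed resale multiplier over your own token cost
usage_ledger = {}  # user_id -> total tokens consumed (use a real DB in practice)

def answer_for_user(user_id: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    tokens = response.usage.prompt_tokens + response.usage.completion_tokens
    usage_ledger[user_id] = usage_ledger.get(user_id, 0) + tokens
    return response.choices[0].message.content

def bill(user_id: str, cost_per_1k_tokens: float = 0.02) -> float:
    """What you'd charge this user, at an assumed markup over an assumed token price."""
    return usage_ledger.get(user_id, 0) / 1000 * cost_per_1k_tokens * MARKUP
```

The point is only that the API reports token usage per call, so metering and resale are up to the application built around it.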
You are free to have your opinions on quality and treatment otherwise; I’m not the type to argue against those kinds of things.
There’s quite a lot that can be done with the APIs. Are they perfect? No. But I wouldn’t consider it a hindrance on any kind of innovation. If you think about the service more like a wheel than a vehicle, you begin to realize that developing the vehicle is up to us developers, and I personally find that inspiring as a dev, not limiting.
One can agree. ChatGPT Plus is now helpless to follow a single instruction; is this the model we are expected to use going forward?
You’d think the AI can follow this instruction to help us read more easily:
This needs clear and logical paragraph layout. Improve the presentation with linefeeds, but do not alter the text:
OpenAI’s lack of respect towards its developer community… (the rest)
Of course, ChatGPT’s GPT-4 can’t do the job, reinterpreting every sentence starting at sentence #1.
OpenAI’s lack of respect towards its developer community is concerning. First of all, I want to express my gratitude to the OpenAI scientists. As a small web developer, I’m nothing special compared to those who spent years creating these wonderful AIs. However, a year after the release of DALLE and CHATGPT, we developers find ourselves in a frustrating situation.
Let me explain: OPENAI seems to restrict the potential of these AIs significantly, and we’re stuck encountering the same issues repeatedly. For instance, is DALLE still limited to generating only one image per API call? With CHATGPT, we face blocks every four hours or have to endure two-hour waits to continue our work.
Furthermore, the custom GPT feature, which consumes our tokens, often fails to understand the instructions, forcing us to restart multiple times. This cycle depletes our tokens, leading to more waiting time for customizations.
Moving on to the API concerns, OPENAI provides APIs that enable us to create websites with integrated features like ChatGPT, Assistants, and DALLE. However, the appeal of such a site is questionable when the technology is already widely available and recognized elsewhere. The provided assistants lack the real-time responsiveness of CHATGPT, and there’s no seamless integration with DALLE, necessitating the redevelopment of existing features.
This approach to APIs, which are inferior to what’s available directly through OPENAI, seems counterproductive. Why not grant developers access to the same robust technology, thereby fostering innovation rather than bogging us down with subpar tools?
Lastly, the cost structure for API usage is another point of contention. Charging for every input and output per user can quickly become prohibitively expensive, especially for platforms with large user bases. This pricing model is particularly challenging for small developers and seems to disregard the implications of such a business strategy on innovation and accessibility.
In summary, the current approach seems to hinder rather than help small developers, favoring larger corporations and stifling technological advancement.
So ignore that - that’s not what was written.
A simple task: repeat back a series of tokens; linefeeds are predicted where appropriate. Impossible for the gpt-4-turbo AI to produce 15 tokens unaltered.
(edit: this is also from the wall of text injected into GPT-4 for its tools)
Of course, the API’s gpt-4-0314 is up for the task, but OpenAI has cut off access to that model if you haven’t used it before.
Summary
OpenAI’s lack of respect towards its developer community.
First of all, I’m just a small web developer, nothing special compared to the OpenAI scientists who spent years creating all these wonderful AIs in the best way possible, so I thank them for that.
Today, a year after the release of DALLE and CHATGPT, we developers find ourselves caught in a net that mixes us with ordinary users. Let me explain: OPENAI blocks the potential of these AIs in every way possible, and we’re stuck hitting a wall, going round in circles with recurring problems. Is DALLE still limited to one image in its API as well? With CHATGPT, every four hours, there’s a block, or we have to wait two hours to continue our work.
Then there’s the custom GPT that uses our tokens to create them, which in the meantime doesn’t understand the instructions, so we have to start over and over again until we finish all our tokens and wait again to start customizing them.
Now let’s move on to the API. So, OPENAI gives us an API that allows us to create a website to share with our users. We can put ChatGPT, Assistants, DALLE… But who would come to a site that uses technology that already exists elsewhere and is known to everyone? No one. Especially since the assistants are not as good as CHATGPT because there is no real-time response.
There is no assistant that is merged with DALLE, so everything has to be created anew, even though it already exists on OPENAI. Why offer APIs that are worse than what already exists? No one will use the APIs integrated by sites because these APIs are worse than on OPENAI! So I see the point of offering APIs that will always be less performant.
If it were up to me, I would have given developers access to the same technology, which would have opened many more doors for innovation than making us waste our time with less performant APIs.
Last topic, the cost of inputs and outputs. We pay for every input and output of the API per user. If we have 1 million users, can you imagine the price we would have to pay for each user? Again, it doesn’t make sense, and you’re not aware of the mess you’re creating with your business plan, which will cost an arm and a leg for a developer who is starting out. Of course, the big firms are winning, but us small ones are losing!
That’s my rant! In the end, you don’t understand technological innovation.
Thank you for your response, Macha. I understand your perspective, and it is entirely correct from a development standpoint. Of course, you see the APIs as a car wheel, and we must build the rest, not just a wheel of a different color. But in this case it’s incorrect: the APIs offered to developers are outdated compared to the AI already available to the public. For example, the assistants are behind compared to what OPENAI already offers to the public, and the DALLE 3 API is also behind what is already publicly available. So we are not building the car but rather reinventing the wheel. Logically, this wheel was supposed to be at least minimally functional, so how can we build the car if the wheel offered is already defective? It makes no sense if developers also have to fix a faulty wheel. Thank you for your understanding.
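For reference on the "no real-time response" point: the Chat Completions endpoint does stream tokens as they are generated, which is the behavior the Assistants API is being compared against here. A minimal sketch, assuming the current OpenAI Python SDK (the model name is a placeholder):

```python
# Stream a response token-by-token from the Chat Completions endpoint.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Explain streaming in one paragraph."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```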
Thank you for your comment. Here is one of the many examples that can occur.
Hi!
What do you suggest that OPENAI could do to improve this situation?
I agree with the grumbling, if the faulty wheels refuse to spin.
They should offer APIs “similar” to those already available to the public, not older, inferior APIs!
Here. One sentence.
OP is making some valid points about what can be improved. We should focus on these instead.
If ChatGPT (and the API) were actually useful, and didn’t require hours and hours of prompting to get it to do things right, then their paid plan might make sense.
I just spent hours trying to get a GPT running via the interactive method; it’s like trying to train an idiot savant with dementia to do something.
Then you run into the rate limit: okay, stop work, come back later… the come-back-later part is the most annoying part of it.
Basically, they are getting us to pay to train their systems.
I am pretty much done and just canceled my paid subscription; it just isn’t worth the money yet.
These are unfortunately the hazards of using the tools of a given platform.
The way I see it, similar to the @Macha “wheel” analogy, the APIs are just basic building-block tools; at least in my case, exactly what I need to get the job done. GPTs and Assistants are designed to be no-code solutions. The APIs do not need that level of sophistication. In fact, the main reason the new Assistants API is inferior to the legacy Chat Completions API (at least for a lot of RAG developers I’ve seen grumbling here) is that the latter gives you far more flexibility and control over your processing, while the former ties you into this new infrastructure OpenAI is building, which may, or may not, succeed. Flexibility and control are what I, as a developer, need far more than bells and whistles.
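As a concrete illustration of that flexibility argument, here is a minimal RAG-style sketch over Chat Completions; the retriever, model name, and prompt format are assumptions, and the point is only that every step stays in your own code.

```python
# Sketch of the "flexibility and control" point: with Chat Completions you own
# the retrieval step and the prompt assembly yourself.
from openai import OpenAI

client = OpenAI()

def retrieve(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever; swap in your own vector store or search."""
    return ["...chunk 1...", "...chunk 2...", "...chunk 3..."][:k]

def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```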
But remember, the list of LLM APIs from other providers is growing almost daily now, so you do have alternatives.
I am just an ordinary dev, watching the competition between these players… eventually, I go with the cheapest API that does the little thing I need an AI for. If, in this AI century, others still think we need to make our innovations rely completely on these APIs, then I think we’re just wrong. I only need a portion of these things, and the rest of the work is something OpenAI or any other player can’t trash with constant new features. If only OpenAI could push that Assistants API out of beta…
Building blocks where the strength of the blocks can be altered on a whim, and then your skyscraper comes tumbling down.
Stop backporting stupidity to named version models.
Yeah, it seems like they are going backwards and struggling with some conflicting aims - including a focus on safety.
But whilst there’s strong competition out there, apparently they are still ahead, though Claude 3 might unseat them as king.
Which is why I took the time to re-tool my application so that it is now 100% model agnostic.
No idea what this means.
I think he means that OpenAI has a history of ninja editing old fixed versions without telling anyone. And every alteration in the past has made the models worse.
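For the “100% model agnostic” approach mentioned a few posts up, here is a minimal sketch of one way to do it; the adapter interface and the stub provider are assumptions for illustration.

```python
# Hide every provider behind one tiny interface so a model can be swapped
# without touching application code. The second adapter is left as a stub;
# class and model names are assumptions.
from typing import Protocol
from openai import OpenAI

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIChat:
    def __init__(self, model: str = "gpt-4-turbo"):  # placeholder model name
        self.client = OpenAI()
        self.model = model

    def complete(self, prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

class OtherProviderChat:
    """Stub for any other vendor's SDK; implement complete() the same way."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

def run(model: ChatModel, prompt: str) -> str:
    return model.complete(prompt)
```

Swapping vendors then means adding one adapter class rather than rewriting application code.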
I have noticed a tendency for ChatGPT to avoid answering my questions, as if to save resources or something, and I’m tired of it. It’s so frustrating. So I just tried Claude, and the free version did for me what ChatGPT took me around in circles to do. Claude is also faster at the moment, I guess because it may have fewer users. But it looks like I’ll be switching too… after I download my history from ChatGPT.
Sonnet is surprisingly good
I am a developer as well, and have used the OpenAI APIs extensively, along with all the features they have been adding for small developers. It is my opinion that although all these commercial features are useful for us, and profitable for OpenAI, they should be the last priority internally for what OpenAI is. They should only be used to fund the main goal, which is research and progress towards AGI. They are working on paradigm-shifting technology that will change the worldwide political and socioeconomic landscape. Sacrificing that time to work on monetization is sacrilege, but they understand internally that they cannot rely only on external funding; they also need to self-fund a lot of the work they are doing.