Is ChatGPT API actually getting worse?

How they collect data has no connection to the Playground outputting different results.

Whatever man, have a great day :rofl:.
@Gaouzief may I know where you are located? Or at least the region? I’m trying to explore the region hypothesis here. I’m in Western Europe.

Thank you, you too.

I’m not trying to argue for the sake of arguing. I’m honestly just spending my time trying to resolve your issue. Have you done the two steps I mentioned? You can check if it’s a latency issue by calling another endpoint, or setting stream=True.

You can check if there’s a discrepancy between your call and the playground’s as well, but you haven’t posted any updates.

I’m just trying to stay on topic here. Step 2 does not help me at all: obviously streaming responses will return (incomplete) responses faster, but it won’t help me figure out the reason behind the clear latency degradation I have observed without streaming, over a matter of days. I already told you: other endpoints have not suffered any degradation. It’s just the Chat endpoint.

Step 1 would help me though, and I will investigate it as soon as I have time. Give me some time to post my own updates, hahaha. Respectfully: I appreciate your willingness to help, but I won’t answer again unless I feel that we are still on topic, and not arguing about the true nature of OpenAI’s Playground. Again, thanks a lot :slight_smile:

The idea behind setting stream=True is that the endpoint would respond immediately and demonstrate what the latency difference is (if that’s the issue). It would also help us understand why it’s so slow: is it only returning one token every second? Is this a token generation issue? Doing this helps us determine whether it’s a latency issue or a processing issue. I have noticed that very complex/spammy tasks such as ASCII generation can result in a slower response.
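If it helps, here’s a minimal sketch of such a probe (my own code, assuming the pre-1.0 `openai` Python package, an `OPENAI_API_KEY` in the environment, and `gpt-3.5-turbo`; adapt to whatever you’re testing):

```python
# Rough latency probe (a sketch, not authoritative): separates
# time-to-first-chunk (queueing/network) from token generation rate.
import time
import openai

start = time.time()
first_chunk_at = None
n_chunks = 0

for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; use whichever you are testing
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
):
    if first_chunk_at is None:
        first_chunk_at = time.time()
    n_chunks += 1

total = time.time() - start
print(f"time to first chunk: {first_chunk_at - start:.2f}s")
print(f"{n_chunks} chunks in {total:.2f}s (~{n_chunks / total:.1f} chunks/s)")
```

A long time to first chunk points at queueing or network latency; a low chunks-per-second rate points at generation speed.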

All of this addresses your first point.

The fact that Playground and API calls should be the same addresses your second point.

I have been using Chat Completions, fine-tuned completions, and Completions this whole time with no latency issues. Actually, the only time I ran into an issue today was with a call to the embeddings endpoint. That was maybe a 5-second delay, but it was caught with a server message: “The server is currently overloaded with other requests[…]”

I figured trying to eliminate possibilities would be more helpful than saying “it works for me”.

I know what you mean @AgusPG

There is a lot of misinformation in the replies in this topic: strong opinions disguised as “facts”.

Let’s first set the record straight.

The Playground is not the same as the API.

The Playground is an application written by OpenAI which has session management and other code which is not visible to the end user of the Playground.

The API has none of this session management code, so of course the applications are different. It is trivial to see this, BTW.

This is easy to see in a simple example, where the Playground is clearly maintaining some dialog state.

So, anyone arguing that the Playground and the API use the same technology is, sorry to inform, simply mistaken. They both might use similar APIs, but the Playground has application code on top of the API code; so it is impossible for the Playground application and the API to be “the same”, which you can easily see in the screen capture above.

I actually read through this topic twice, and I was surprised to see such strong, declarative statements saying that the Playground is the same as the API.

The Playground is an application written by OpenAI which demonstrates how the API params can be used and maintains dialog state which roughly (not exactly of course) mimics ChatGPT.

The API does not maintain any such state or feedback code unless that code is explicitly written by the developer, and there are myriad ways a developer might manage this internal “dialog state”: pruning, summarization, etc.

So, kindly let me show you again that the Playground maintains dialog state (somehow) by “feeding back” prior messages to the completion endpoint: :slight_smile:
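For illustration only (this is my own sketch of the pattern, not OpenAI’s Playground code; every name here is hypothetical), the feed-back pattern looks something like this:

```python
# Illustrative dialog-state sketch: feed prior turns back into each call.
# This is NOT OpenAI's Playground code; it only demonstrates the pattern.
import openai

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text, max_turns=20):
    history.append({"role": "user", "content": user_text})
    # Naive pruning: keep the system message plus the most recent turns.
    messages = history[:1] + history[1:][-max_turns:]
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Alice."))
print(chat("What is my name?"))  # works only because we fed the history back
```

Drop the feed-back and the second question fails; that is the dialog state the raw API never keeps for you.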

To keep this reply short, I’ll refrain from boring everyone with a discussion of the stochastic nature of generative AI models, because in this case the difference between the “Playground” application and the “API” developer endpoints (with no dialog or state management) should be clear to see.

HTH

:slight_smile:

My point is that the Playground uses the same API as we do, not that it’s the same service. I’m fully aware that it’s an interface so we don’t have to build our own.
There are no special privileges or differences (AFAIK) between calls made via the Playground and calls made via your own wrapper.

The session management is just a form…

No, it is not “just a form” @RonaldGRuckus

Session management requires sophisticated feedback pruning and other session management features.

I know this because I have written two chatbots on top of the API, one using the completion API and one using the chat completion API.

You are mistaken @RonaldGRuckus, and this is not the first time you offer a very strong opinion without facts.

:slight_smile:

I don’t understand. If you go over the token limit using the Playground, it doesn’t do anything. It doesn’t prune the conversation.

I feel like I’m missing something here? It’s clearly evident that the Playground has no extra bells or whistles (actually, it’s more restrictive, as the actual API allows for a higher range of values).

You can actually copy the exact request made in the Playground using the “View Code” button. It doesn’t work for cGPT yet, but looking at the network logs shows nothing special.
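For what it’s worth, the snippet that “View Code” emits looks essentially like a hand-written call; something along these lines (parameter values here are placeholders of my own, not authoritative):

```python
# Roughly the shape of the request the Playground makes for a completion.
# Values are placeholders; copy your own from the "View Code" dialog.
import openai

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Say this is a test",
    temperature=0.7,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print(response["choices"][0]["text"])
```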

That is because the text exists in the dialog window. You don’t need to prune what the user sees in the UI to have session management.

You do not know how OpenAI prunes, summarizes, or otherwise manages the session, @RonaldGRuckus. You are guessing and offering your guesses as hard facts.

You “don’t know”, but you assume, and you post these assumptions and guesses as “hard facts”.

There is no documentation anywhere in the OpenAI platform which states “the Playground results are the same as the API”.

We do not even know if the API endpoints used by the Playground run on the same hardware as the APIs we call in our own code. We know, for a fact, that we are charged per token for both; but we don’t know for a fact that OpenAI has not added additional filtering or moderation to the Playground to protect “their brand”, etc.

It is just a “guess” to say “they are the same”.

You are “guessing” and posting your guesses as facts; as if your guesses and assumptions were factual. However, they are not factual (they are just guesses); of this I am sure, as someone who has written two chatbots using the API (one using the chat completion endpoint and one using the completion endpoint, both requiring a lot of coding on top of the API calls).

Furthermore, as mentioned, OpenAI must protect their reputation against media attacks, blah blah, etc., so it is very likely OpenAI has added additional moderation and filtering to the Playground; but this is just a guess, as I do not know for sure.

:slight_smile:

Well. I clearly don’t understand.

To me, it makes no reasonable sense that the models in the Playground output differently than the models that we make our calls to. The endpoint, the payload, everything is exactly the same; the only difference is the API key. I’ve never noticed a difference, and never had any reason to assume there was one.

If it is that way, I’m sorry for the misinformation.

Yes, I understand your assumptions @RonaldGRuckus and you are not the only one who makes them here and elsewhere.

I also apologize for pointing this out so directly. I’m coding a different “non-OpenAI” project as we speak, and I am sure my reply comes across as “too blunt”, as they often do.

The OpenAI models are stochastic, so it is also not prudent to believe they will produce deterministic outputs for a given input, especially at non-zero temperatures. The very nature of setting the temperature introduces randomness.

So, if someone is using the Playground with a temperature of 0.7, for example, it is not correct to assume that an API call to the same endpoint by someone else will generate the same result (unless the model is overfitted, of course).

It’s easier to view the “non-overfitted” models as a kind of “cup of dice”: these stochastic models generate a completion based on the temperature (the randomness), so if you shake a cup of dice identical to mine, of course we will get different throws most of the time. This is true even between consecutive API calls using the same code, depending on the temperature (the randomness specified) and the model fitting, of course (not even considering the Playground).
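To make the dice analogy concrete, a minimal sketch (same assumptions as before: pre-1.0 `openai` Python package, `gpt-3.5-turbo`):

```python
# Two identical requests at non-zero temperature: the completions will
# usually differ, whether they come from the Playground or your own code.
import openai

for _ in range(2):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Name one color."}],
        temperature=0.7,
    )
    print(resp["choices"][0]["message"]["content"])
```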

However, with the Playground, we do not know how the input is finally filtered or moderated on the OpenAI side before the messages or prompt is sent to the API.

But honestly, since I have written my own “Playground” which has a lot more features, I almost never use the Playground and have not attempted to reverse engineer it.

Anyway, I’m not trying to be a PITA. I’m just confident, based on writing two chatbots recently using both the chat completion and the completion API methods, that there are myriad ways to filter, prune, summarize, etc. the message history sent to a chatbot to maintain state; and slight variations in the implementation will change the output of the model. One such variation, summarizing instead of pruning, is sketched below.
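Here is that variation, sketched with hypothetical names of my own (again, not anyone’s actual implementation): summarizing older turns instead of dropping them:

```python
# Hypothetical summarization-based state management: compress older turns
# into a short summary so long conversations still fit the context window.
import openai

def compress_history(history, keep_recent=6):
    old, recent = history[:-keep_recent], history[-keep_recent:]
    if not old:
        return history
    summary = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=old + [{
            "role": "user",
            "content": "Summarize our conversation so far in a few sentences.",
        }],
    )["choices"][0]["message"]["content"]
    return [{"role": "system",
             "content": f"Summary of earlier turns: {summary}"}] + recent
```

A pruning bot and a summarizing bot will answer the same user-visible conversation differently, which is exactly why two implementations “on top of the API” diverge.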

Personally, I do not have the code in front of me showing how the Playground does this, or how OpenAI may or may not add additional filtering to protect the OpenAI brand integrity; hence it’s hard to comment further without guessing.

However, my “best guess” is that the Playground has some filtering and content moderation we are not directly aware of, because OpenAI must guard against people hacking around with prompts to generate headline-grabbing completions which would damage their brand integrity and mission.

Hope this helps.

:slight_smile:


No, I completely agree that the results aren’t the same, by their very nature.

However, to say that there are inconsistencies between the API and the Playground that cause the Playground to output better results is just wrong. All I’m trying to say is that the calls are practically the same.

No it is not wrong, for the many reasons I have already posted.

You are just assuming the API (which is not an application) and the Playground (which is an application) are the same.

That is what is wrong @RonaldGRuckus

There is a lot of code written to maintain dialog state, which you would know if you sat down and wrote your own chatbot using the API.

Whatever you, as a developer, would come up with to manage state is not going to be the same as what the Playground does.

You are just making a huge assumption without any basis in fact; and that is why I keep calling you out on this.

It is easy for me, as a developer who has written two OpenAI chatbots which maintain dialog state, to see that the code on top of the API is the “secret sauce” (not the basic API calls).

The Playground is NOT the API. The Playground is an application developed by OpenAI’s engineers which uses the API; but there has to be code on top of the API (filtering, pruning, summarization, moderation, etc.; exactly what, I do not know, because I do not have the full application code in front of me). What happens between the UI and the rest of the process in the Playground happens in code written by OpenAI.

The Playground is NOT the API.

The Playground is an OpenAI application built on top of the API, just like any developer could build a similar Playground, which I have done BTW; but my application is much more detailed than the Playground and does a lot more (and of course does it differently, since I wrote the code and did not copy OpenAI’s code, as the Playground code is not public or open source to my knowledge).

If the Playground source code is public, open source, please post a link to it. According to the official OpenAI GitHub account, there is no open-source Playground:

:slight_smile:

Reference:


Yes, the playground is an interface so one doesn’t need to write their own code to try out the API. It uses the same endpoints.

To clarify, I know “API” and “Playground” aren’t comparable terms. All I’m trying to say is that it makes the same calls using the same parameters and endpoints that we have.


There is a lot more to a chatbot application than the API endpoints.

In fact, the API calls are the most trivial part of building a chatbot application which maintains dialog and user session state.

My feeling, based on your replies @RonaldGRuckus, is that you have not developed a full-blown OpenAI chatbot application using the API; because if you had actually written one, which had to manage session, state, pruning, summarization, etc., then you would know that the API calls are the trivial part.

:slight_smile:

The Playground by no means has anything to do with the actual processing or management. It’s a front-end service.

Yes, I have written them, and I’m fully aware of the complexities. The Playground does no pruning or context management. I don’t really understand what you mean by that. ChatGPT does; the Playground does not.

By definition, the Playground is a GUI.

Sorry, @RonaldGRuckus

It’s not really a good use of our time to debate back and forth with you when you continue to post your opinions and assumptions as fact.

By whose definition?

Yours, of course @RonaldGRuckus

That is the core issue here. Whatever “you assume” and “you think”, you offer as “hard facts”.

I have written a Playground, and it is “much more than a UI”, contrary to what you have stated @RonaldGRuckus; showing, again, that you do not understand coding as much as you think. The API does not manage state. The Playground manages both user session and dialog state. It may well also add additional moderation and filtering to protect the OpenAI brand, as I keep saying but you keep rejecting.

I have not updated the topic above (busy coding), but it is more feature-rich than before, including both chatbots, one completion-API-based and one chat-API-based:

I can assure you, @RonaldGRuckus, that writing a full-blown chatbot app, playground or not, is not just writing UI code. The UI code is trivial (for a Rails developer like me).

Sorry, it’s pointless to continue this topic @RonaldGRuckus, because you are wrong but think you are right, and you make so many assumptions based on “what you think” versus what is “a fact” that it’s not a good use of our time to continue; I am almost sure that next you will tell me you have written a “full-blown Playground”, blah blah, and know all about that as well.

Take care.

:slight_smile:

I told myself not to answer in this topic anymore when I started seeing that we were getting off-topic and there was no real point to it. But I just needed to do it once again, just to tell you that I admire your patience, @ruby_coder. I truly do :rofl:.


I work out at the gym 5 days a week, which helps, believe me, haha.

Thanks for the kind words, @AgusPG.

Like you, I am done with this “full of misinformation” off-topic discussion. My wife wants to go for a drive in the countryside to a new coffee shop, and that is “top priority” for the rest of today.

:slight_smile:
