Query limits value to the Dollar$ - Come on guys!

Hi, first of all, thank you for this great service. I do think LLMs are quite remarkable. However, since I pay for it, I now expect it to be available, and being throttled is actually undermining my ability to depend on it. Imagine a skill saw deciding you've used it too much and powering off on a jobsite.

If we talk about product, service, and value, I think you need to address this soon; otherwise I am quite prepared to pay less and use alternatives. Just as I figured out how to work within GPT-4's limits, I can quite easily do the same with Perplexity, for example.

I like GPT, great job … but I don't think the value I am receiving under these limitations is worth it. I have subscribed again … for a month … but I will be cancelling after this next run and waiting for better offerings if this isn't addressed soon.

You're being valued at $100 billion!!! But I can only get in like 20 queries!!


I think you don’t understand the essence of such business decisions.

The myth of around 40 queries per 3 hours on GPT-4 has nothing to do with OpenAI's resources or the financial value of individual queries; it is aimed solely at creating an atmosphere in which you can't fully explore ChatGPT as you wish, at the speed you want. Add to that the policy of intentionally frustrating users by not showing how many queries you have left until the end of the "blessing" bestowed upon you by the great OpenAI, and it's easy to conclude that this grandiose charade is just an attempt to build a greater false myth and false excitement around a slightly improved spelling checker currently called ChatGPT, which has existed in its current form since 1971.

Pure psychology and a calculated risk of losing or gaining users through the false exclusivity of a 52-year-old product. Nothing particularly significant or essential; in essence, an insignificant policy for an insignificant upgrade.

I debated whether or not I should provide any kind of response in this topic, but I don’t like letting misinformation linger.

a slightly improved spelling checker currently called ChatGPT, which has existed in its current form since 1971.

This is both false and misleading on multiple levels.

LLMs are not “spell checkers”, and OpenAI did not exist in 1971. Therefore, this is false.

I will leave the rest of your claims alone, as I'm not OpenAI staff, and technically any other claims are speculative at best. I will say, though, that the OpenAI staff who do post on this forum have clearly and explicitly stated that the query limits are a resource issue; they have acknowledged people's grievances about the limits and are working hard to gather more resources to improve them. I personally trust the word straight from the horse's mouth on this, but you are welcome to have your own opinions.

But please do not conflate a large language model built on the transformer architecture with a spell checker.

Edit: Did not check the date on this post, did not mean to accidentally resurrect this post. Oops.

Finally, someone who dares to talk about human greed.

Nowhere did I write that OpenAI existed in 1971; I meant the first spelling checker. And, like it or not, an LLM is nothing more than the same algorithm, based on the same principles, only at a larger scale. Just because it's called artificial doesn't mean it's anything more than non-organic, just like a plastic spoon, Tetris, or a table lamp. It doesn't have any other meaning. Just because it's called intelligence doesn't make it intelligent; it's just another algorithm. In today's world, even vacuum cleaners are marketed as "with AI." The same things that a few days ago had algorithms are all of a sudden "AI."

What is the basic difference between the 1971 spelling checker and ChatGPT? Both of these algorithms work on the same principles; before it was a few thousand words, and now significantly more of them. Don't confuse algorithm with intelligence: it's just an algorithm, mathematical equations, and guessing. And this guessing game makes the algorithm better every day, because people are paying OpenAI to refine their algorithm, not the other way around.

After that, OpenAI (not ChatGPT) discriminates against people based on whether they are lonely in life or not, and makes a big fuss pretending that it's so good (or God, as some have falsely claimed) that unlimited access to it would be too much for our morality. Pure psychology of supply and demand, artificial in this case.

Ah, I see what you were intending to get at now.

It was unclear what you were referencing precisely in the phrase I quoted, hence why I clarified just to be on the safe side.

Trust me, I am aware of the algorithmic nature of LLMs. I'm a big enthusiast of linguistics, NLP, and AI research and development. I am not disagreeing with you about algorithms or the misunderstanding of AI as "intelligent". I used to be the guy in class reminding my peers of that very concept.

The main thing I want to be very clear about is this:

What is the basic difference between the 1971 spelling checker and ChatGPT?

The short answer: A lot. They do not work on the same principles.

GPT, or Generative Pre-trained Transformer, is a very specific deep neural network architecture, essentially derived from the 2017 paper "Attention Is All You Need".

It is a very specific combination of architectures, mechanisms, and algorithms (note the plural), ones that cannot function well without training.
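To give a concrete flavor of one such mechanism, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer. The shapes and random inputs are toy values chosen purely for illustration; a real GPT stacks many of these layers with learned weights.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position mixes information from all key/value positions,
    weighted by similarity. This is the transformer's central mechanism."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)        # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ V                                  # weighted mix of values

# Toy example: 3 token positions, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-mixed vector per token position
```

The point of the sketch is only that the output for every position depends on every other position's content, something no dictionary-lookup spell checker does.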

Spell checkers, especially in the earlier days, were typically one simple algorithm or formula, usually Levenshtein distance or n-grams, to be precise. This is what was used in 1971.
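For contrast, the entire core of an edit-distance spell checker fits in a few lines. This is a generic sketch of the Levenshtein approach, not the code of any specific 1971 program; the tiny dictionary is made up for the example.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn string a into string b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A naive spell checker: suggest the dictionary word closest to the typo.
dictionary = ["transformer", "language", "spelling", "checker"]
print(min(dictionary, key=lambda w: levenshtein("speling", w)))  # → spelling
```

Note there is no training and no learned state here: the "model" is just a word list plus one distance formula, which is the essential gap between this family of tools and a trained transformer.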

Only in the last decade did spellcheck really evolve to include neural network algorithms, but again, these are different from transformer models. They function differently, they are built differently, and they serve different purposes.

They are not the same.