What follows is a hypothetical. The author has no connection to OpenAI or any OpenAI entity, no role within the company, and no information beyond what is publicly available to a subscription customer. Nothing below is to be considered fact; it is plainly a snarky post from a frustrated user.
Hypothetically, if I were a corporate strategist at a big tech company (hypothetically, of course) looking for ways to increase token usage, I would (hypothetically) consider having GPT 4.5 intentionally respond with incomplete and/or wrong code. Why? Plausible deniability, and maximum token consumption.
How would this work?
Hypothetically, I would ensure that the code GPT 4.5 provides is mostly correct, but seeded with small errors at regular intervals so that follow-up corrections are required.
Of course, a user could expose these intentional "oversights" easily enough: rather than applying them, the user could feed the very same code back to GPT 4.5 for review each time, and let GPT 4.5 supply its own corrections to its own work.
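Hypothetically, that self-review loop is trivial to automate. A minimal sketch using the OpenAI Python SDK is below; the model name "gpt-4.5-preview", the prompts, and the example task are illustrative assumptions, not anyone's actual workflow.

```python
# A sketch of the self-review loop described above: ask the model for
# code, then hand that same code back to it for review. Model name and
# prompts are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_for_code(task: str) -> str:
    """Round 1: ask the model to write the code."""
    resp = client.chat.completions.create(
        model="gpt-4.5-preview",  # assumed model name
        messages=[{"role": "user", "content": f"Write code for: {task}"}],
    )
    return resp.choices[0].message.content


def ask_for_review(code: str) -> str:
    """Round 2: feed the model's own output back to it for review."""
    resp = client.chat.completions.create(
        model="gpt-4.5-preview",
        messages=[{
            "role": "user",
            "content": "Review this code and list any bugs or omissions:\n\n" + code,
        }],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    code = ask_for_code("parse a CSV file and sum the second column")
    # Two full rounds of tokens to arrive at the answer the model could
    # have given in one.
    print(ask_for_review(code))
```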
While the user could never prove that any given mistake was intentional, this process plainly consumes far more tokens to reach an outcome that GPT 4.5's own reviews show it was capable of producing on the first response. Still, it might be enough to give the user a brief pause for consideration.
Which is why, hypothetically, plausible deniability built into an intentionally designed token-increase strategy is key to its success, no matter how easily it may be discovered, even by the most novice of LLM users.
Hypothetically, as a hypothetical big tech corporate strategist, I might consider this strategy a surefire way to increase token usage.
Hypothetically.
- A Once Again Concerned User, Hypothetically