We’re developing proprietary hardware devices that let consumers interact with OpenAI language models; they are not traditional clients in the usual sense. What the devices are and what they do isn’t relevant here. The issue is how to query the models, and we have hit a frustrating roadblock.
We want to allow our users to leverage their existing OpenAI Plus subscriptions to access language models when using our devices.
However, there seems to be no feasible way for us to facilitate this without resorting to using our own API credentials, which leads to duplicated costs for our users.
Here’s the crux of the issue:
- Many of our users already have OpenAI Plus subscriptions, which they pay for monthly to access ChatGPT. However, if they want to use our hardware devices to interact with the same models, we need to use our API keys to query OpenAI.
- This effectively means that users are paying twice: once for their OpenAI Plus subscription, and again through our service, which must cover the token costs associated with the API usage. Either we absorb these costs (which is not sustainable) or we pass them on to the users, creating a redundant financial burden.
From our perspective, this structure makes our hardware devices less attractive and less affordable to consumers, and places an undue burden on users who are already paying for premium access. We feel it is unfairly prejudicial to both consumers and to developers trying to create innovative products around OpenAI’s LLMs, and that it ultimately hurts consumers at large.
Why can’t there be a mechanism—such as OAuth-style delegated access—that would allow users to authenticate their OpenAI Plus account with our platform, thereby allowing us to make queries on their behalf using their existing subscription? This would work similarly to how users authenticate third-party apps with other services, like Google or Spotify, and allow for seamless integration while avoiding the costs associated with duplicative API usage.
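To make the request concrete, here is a minimal sketch of what such a delegated flow could look like. Everything in it is an assumption for illustration only: the authorization, token, and API endpoints, the client ID, and the `models.query` scope are all hypothetical, since OpenAI does not currently expose an OAuth flow tied to Plus subscriptions. It simply shows the standard authorization-code pattern we have in mind.

```python
import secrets
import urllib.parse
import requests

# --- All endpoints below are hypothetical placeholders, not real OpenAI URLs. ---
AUTHORIZE_URL = "https://auth.openai.example/oauth/authorize"      # hypothetical
TOKEN_URL = "https://auth.openai.example/oauth/token"              # hypothetical
API_URL = "https://api.openai.example/v1/chat/completions"         # hypothetical

CLIENT_ID = "our-hardware-device"                  # hypothetical client registration
REDIRECT_URI = "https://device.example/callback"   # hypothetical redirect target


def build_authorization_url() -> tuple[str, str]:
    """Step 1: send the user to OpenAI to approve delegated access."""
    state = secrets.token_urlsafe(16)  # CSRF protection, verified on the callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "models.query",  # hypothetical scope limited to model queries
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urllib.parse.urlencode(params)}", state


def exchange_code_for_token(code: str) -> str:
    """Step 2: trade the one-time authorization code for an access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]


def query_model(access_token: str, prompt: str) -> str:
    """Step 3: query the model on the user's behalf, billed to their subscription."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"model": "gpt-4o", "messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The key point of the sketch is the token in step 3: requests would be authorized by the user’s own delegated credential rather than by our organization’s API key, so usage could be attributed to (and rate-limited against) their existing subscription instead of generating a second bill through us.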
We believe that a solution like this would benefit everyone in the ecosystem:
- Consumers: They would have more freedom to use their existing subscriptions with third-party devices without additional charges, thus getting more value out of their Plus accounts.
- Developers and Innovators: We would be empowered to build more affordable and compelling use cases around OpenAI, without having to price products prohibitively due to API costs.
- OpenAI: Providing this flexibility would likely encourage broader adoption of and innovation around the models, further expanding OpenAI’s consumer base.
Right now, we are left with three options, none of which seems ideal:
- Absorb the token costs of the API, which is neither financially feasible at the scale we’re aiming for nor sustainable over the long term.
- Pass the token costs to users, who are then essentially paying double—once to OpenAI for their subscription and again for using our API-linked hardware devices.
- Develop or use inferior language models running on alternative servers, which fall short of what OpenAI offers.
We simply do not think it is reasonable for consumers to pay additional costs when they are already OpenAI subscribers.
We would greatly appreciate a discussion or response from the OpenAI team on whether this kind of user subscription access is possible, or, if not, why it is restricted. Is this simply a technical limitation, or is it a policy choice? Either way, we’d like to understand OpenAI’s perspective. If there is a workaround or some other way to make this a reality, we would love to know, as we believe everyone would ultimately benefit.
Thanks in advance for any insights. We believe in OpenAI’s technology and its potential, but this barrier feels like a significant limitation to broader innovation.