It can seem aggressive at face value, but it comes with genuine intention. It’s a really good idea, and I think every member here has seen way too many people suffer OpenAI’s zero-tolerance policy on BYOK services.

Let’s all calm down here,

@seaofarrows, you’ve rewritten your app several times over the past few days, so I can understand why they’re confused. @elmstedt is not trying to make fun of you; there have, unfortunately, been a few instances where a BYOK application has led to a lot of people getting banned.

Now let’s keep the decorum on the forum.

2 Likes

For everyone who may be looking for answers:

There are several ways to access OpenAI’s models. One method is through the API, tailored for developers to integrate into their own software. Note that sharing API keys is not allowed, and that includes entering your key on sites like website.com.

If you’re a developer and want users to shoulder their own usage fees, then I’d suggest creating a ChatGPT plugin.

3 Likes

Alright guys, why don’t we all just build an open-source billing system for single API key applications! Who’s with me!?

2 Likes

Nah mate, but somehow you interpreted the following as being directed at you.

It was not:

I’ll suggest that you calm down a bit and realize that I’m not against you, or anyone else here.

You have voiced your displeasure, could you stop the name-calling now?

1 Like

Why do you insist on attacking people who give you genuine advice? You learned something and adapted. That’s pretty sweet. Roll with the punches and be humble next time.

Gosh, it feels like going to a boxing gym and working out, getting into the ring with your colleagues, getting bopped, and then crying out the door.

1 Like

Again, Ruckus, you didn’t give advice, you cast aspersions.

If you feel that way, sure, maybe you’re right.

What about everybody else?

As someone who has been on both sides, seeking assistance while feeling a bit overwhelmed, and trying to assist someone with something I’m very familiar with, this sort of thing just happens sometimes. You’ve got to step back, take a deep breath, and look at it as logically as you can. Looking at this conversation as a third party, and having interfaced with, and received helpful advice from, many of the names here, I’m pretty sure none of this is personal.

I disagree with this decision from OpenAI, and @seaofarrows has articulated my reasons even better than I have. The last thing I want to do is have to build a token billing system on top of a regular subscriber system. But, if I’m being honest, I can also see OpenAI’s position.

I like this idea, but don’t we end up with the same exact problem – people STILL aren’t allowed to use their keys in someone else’s system?

1 Like

This is, genuinely, a great idea.

At the risk of going off-topic[1], I have a very broad-strokes idea as to how this might work[2]:

  1. Set usage limits for each user[3].
  2. Set a standard billing cycle for users (daily, weekly, monthly, etc.) and have a minimum billing amount per cycle where the user has usage (say $0.50[4]).
  3. Bill based on tokens used at 120%–200%[5] of token cost to cover costs and income for the developer.

Or, alternately, just have them pre-pay whatever amount they want and deduct as they go. That’s obviously the safest and easiest for the developer to implement and just involves tracking tokens used and some simple multiplication and subtraction.
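The prepay approach can be sketched roughly as below. This is a hypothetical illustration, not a production billing system: the class names, the per-1K-token cost, and the markup figure are all placeholders I’ve chosen, and a real service would persist balances and reconcile against actual API usage reports.

```python
from dataclasses import dataclass

# Hypothetical sketch: track a prepaid credit balance per user and
# deduct the marked-up token cost after each API call.
# Prices and markup below are illustrative, not OpenAI's actual rates.

@dataclass
class UserAccount:
    balance_usd: float   # prepaid credit remaining
    tokens_used: int = 0

class BillingLedger:
    def __init__(self, cost_per_1k_tokens: float, markup: float):
        self.cost_per_1k = cost_per_1k_tokens  # what the provider charges you
        self.markup = markup                   # e.g. 1.5 for 150% of cost
        self.accounts: dict[str, UserAccount] = {}

    def top_up(self, user_id: str, amount_usd: float) -> None:
        """User pre-pays whatever amount they want."""
        acct = self.accounts.setdefault(user_id, UserAccount(0.0))
        acct.balance_usd += amount_usd

    def can_afford(self, user_id: str, estimated_tokens: int) -> bool:
        """Check the balance before making the API call (limits exposure)."""
        acct = self.accounts.get(user_id)
        return acct is not None and acct.balance_usd >= self._charge(estimated_tokens)

    def record_usage(self, user_id: str, tokens: int) -> float:
        """Deduct the marked-up cost for `tokens` and return the charge."""
        charge = self._charge(tokens)
        acct = self.accounts[user_id]
        acct.balance_usd -= charge
        acct.tokens_used += tokens
        return charge

    def _charge(self, tokens: int) -> float:
        # Simple multiplication and subtraction, as described above.
        return (tokens / 1000) * self.cost_per_1k * self.markup
```

With a $0.002/1K cost and a 150% markup, 10,000 tokens would deduct $0.03 from the user’s balance. Swapping this for the billing-cycle variant mostly means accumulating `record_usage` charges and invoicing per cycle instead of deducting immediately.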

Honestly, the biggest impediment to moving away from the BYOK service is likely going to be hitting the usage cap for OpenAI’s API[6].

Regardless, I think an open-source billing system would be great.


  1. But it’s your topic and you branched it so I feel a bit okay here ↩︎

  2. Basically copy then slightly modify what OpenAI has done. ↩︎

  3. To limit exposure for the developer of the service ↩︎

  4. To cover transaction fees and the like. ↩︎

  5. The exact amount would depend on the value added by the systems around the LLM ↩︎

  6. Which, at the standard $120/month cap, if you were charging 200% of token cost, means a limit of only $120/month in revenue after paying OpenAI. ↩︎

4 Likes

I love this idea (hence the heart tick), and it works perfectly for gpt-3.5-turbo. But when you get to gpt-4, 120%–200% of token cost gets fairly expensive fairly quickly. And, for me, that’s why I liked the BYOK idea – the customer can only grumble at OpenAI, who set the price in the first place.

1 Like

Then just open-source it and let the user run it locally.

If your point is to, at some point, make money off of it, you will inevitably need to charge a premium over your own usage costs.

If your point is to, at some point, make money off of it, you will inevitably need to charge a premium over your own usage costs.

That is only the case if your business model is the obvious “mark up tokens by X%” approach.

Another way to think of it is: “OpenAI fees are a cost of doing business. Elsewhere in my business, I make income from the things I do with OpenAI.”

I feel that’s more or less exactly what I wrote, so I’m not sure what your argument is.

My “argument” is clear: you need not “charge a premium over your own usage” if your model is not the obvious markup-by-X% scheme. It is entirely possible that API usage is never balanced in the ledger with equal or greater “premiums”, but rather that the usage is just part of some other business strategy.

For instance, with PlotRocket, I’m considering a co-authoring scheme where I work with an author, giving them access to the tool, and helping them to find the beats of their story, which they then write in the traditional way, and share the income with my company.

In that scenario, the usage and the business income are decoupled to the extent that they are only tangentially related. Certainly not a premium markup of usage. One author could use twice the amount of API resources of another author and it wouldn’t really matter if the entire enterprise is profitable.

My point was premium markup of API calls is the most basic strategy but not the only strategy.

I think you completely failed to understand the language used in my post.

I’m done with you and your toxicity.

You are blocked and I hope you and your project fail as spectacularly as your comprehension.

We have a SaaS platform and are integrating with OpenAI to assist our users in writing for social media, email, landing pages, etc. Each user has their own “voice” and style. Do we need to create an API key for each of our users? I assume that the AI will learn how to write like each user, based on their communication history with the AI. How do I keep each user separate?

1 Like

Hello ls2023 and welcome to the forum,

First, if I were you I’d maybe think about starting a new thread. This one seems to be getting kind of rough. :slight_smile:

To answer your question, though: you will need to keep a history for each user. That is separate from API keys, and unless you want all of your users to see what the others are doing, they will each need their own history.

So User A will have a list or some sort of array with their own system role, user questions, and assistant answers. User B will have their own. Every now and then you send all of the questions and answers to GPT, have it summarize the conversation, and continue on from there with the summary.
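A rough sketch of that approach is below. This is only an illustration under my own assumptions: the class and function names are hypothetical, and `summarize` stands in for the extra API call that condenses older turns (it is not a function from any OpenAI SDK).

```python
from typing import Callable

# Hypothetical sketch of keeping per-user chat histories separate.
# Each user gets their own object with its own system role and turns.

class UserChatHistory:
    def __init__(self, system_prompt: str, max_turns: int = 20):
        self.system_prompt = system_prompt        # the user's own "voice"
        self.turns: list[dict] = []               # {"role": ..., "content": ...}
        self.max_turns = max_turns

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def messages(self) -> list[dict]:
        """The message list to send with this user's next request."""
        return [{"role": "system", "content": self.system_prompt}] + self.turns

    def compact(self, summarize: Callable[[list[dict]], str]) -> None:
        """When the history grows too long, replace the oldest turns with
        a single summary message -- the periodic summarization step the
        post describes. `summarize` would wrap an extra model call."""
        if len(self.turns) <= self.max_turns:
            return
        old = self.turns[:-self.max_turns]
        recent = self.turns[-self.max_turns:]
        summary = summarize(old)
        self.turns = [{"role": "assistant",
                       "content": f"Summary of earlier conversation: {summary}"}] + recent

# One history per user, keyed by your own SaaS user IDs (one API key overall):
histories: dict[str, UserChatHistory] = {}
histories["user_a"] = UserChatHistory("You write in User A's voice.")
histories["user_b"] = UserChatHistory("You write in User B's voice.")
```

The point is that separation lives in your application’s data, not in API keys: a single key can serve every user as long as each request only ever includes that user’s own history.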

I hope that helps a little bit.

1 Like

Just FYI, there’s a good chance you are in violation of the OpenAI Sharing & publication policy.

Social media, livestreaming, and demonstrations

To mitigate the possible risks of AI-generated content, we have set the following policy on permitted sharing.

Posting your own prompts or completions to social media is generally permissible, as is livestreaming your usage or demonstrating our products to groups of people. Please adhere to the following:

  • Indicate that the content is AI-generated in a way no user could reasonably miss or misunderstand.

Content co-authored with the OpenAI API

Creators who wish to publish their first-party written content (e.g., a book, compendium of short stories) created in part with the OpenAI API are permitted to do so under the following conditions:

  • The role of AI in formulating the content is clearly disclosed in a way that no reader could possibly miss, and that a typical reader would find sufficiently easy to understand.

So, unless you are explicitly identifying the content as AI-generated (which seems contrary to the point of the service), the service is in violation of the terms, and you could face sanctions up to and including the termination of your account and a permanent ban from accessing all OpenAI services.

Thank you Paul for the quick reply. This definitely helps.

1 Like