I have been following this thread as well. I’m finding the possible monetization of custom GPTs very hard to get my head around. I have created 6 custom GPTs so far, but I can’t seem to find a clear path to charging for them. I have explored putting them on custom websites, perhaps with subscription-based access. However, this is neither user-friendly nor easy. All this against the backdrop of the GPT Store, which will not yield any significant reward for 99% of developers, as there is simply too much competition.
Any thoughts on how one might go about monetizing custom GPTs built through OpenAI in a way that might actually yield some significant revenue?
You can use GPTs as a lead-collection tool and upsell additional paid functionality on an external site. You can limit usage of a GPT and monetize the usage. You can include affiliate links in your GPT’s responses and earn affiliate income.
I think there needs to be a prepaid micropayment solution.
Top up your account, put in $10, and then you can do 1,000 invocations. Something like that.
And it’s better if that prepaid account can be used for a whole library of custom GPTs, not just one.
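The mechanics of such a prepaid pool are simple enough to sketch. The following is a hypothetical illustration only (the class, names, and 1-cent price are all invented; nothing like this exists in OpenAI’s platform today): a single balance topped up in cents and debited per invocation, shared across every GPT in the library.

```python
# Hypothetical sketch of a shared prepaid credit pool, debited per GPT
# invocation. All names and prices here are invented for illustration.
class CreditAccount:
    """A single prepaid balance shared by every GPT in a library."""

    def __init__(self, balance_cents: int, cost_per_call_cents: int = 1):
        self.balance_cents = balance_cents
        self.cost_per_call_cents = cost_per_call_cents

    def top_up(self, amount_cents: int) -> None:
        self.balance_cents += amount_cents

    def invoke(self, gpt_name: str) -> bool:
        """Debit one invocation of `gpt_name`; False once credit runs out."""
        if self.balance_cents < self.cost_per_call_cents:
            return False
        self.balance_cents -= self.cost_per_call_cents
        return True

# A $10 top-up at 1 cent per call buys 1,000 invocations across all GPTs.
account = CreditAccount(balance_cents=1000)
```

The key design point is that the balance lives on the account, not on any one GPT, which is what makes the "whole library" model work.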
Yeah, OpenAI is working hard on revenue sharing for GPTs; in the meantime I’d just recommend improving the quality of whatever you’re making.
If you want to control how your thing is monetized, it’s much better to use the Assistants API and put it on your own website.
Using the API changes the economics of the whole thing, because then you, the developer, are paying for API usage.
It also changes the orchestration: your code calls the LLM, whereas with Plugins/GPTs the LLM calls your code.
Plus you need to write a UI.
It would be better if there were an easy way to embed a (custom) GPT on a website. If it results in a new sign-up to ChatGPT Plus, there should be an affiliate payment.
OpenAI should be open about the revenue share they have in mind instead of letting people build and then wait for the big reveal.
There should be a way for niche GPTs with low usage to also monetize. Not everything needs to compete with AI Girlfriends.
No offense but I genuinely hope this isn’t allowed.
This is just plugins all over again. Intrusive, noisy instructions added to a GPT that inherently cannot be reliable.
Won’t be surprised to see ads soon, and then see people creating GPTs to gather leads to sell.
Why should anyone have to authenticate with a third-party provider when they have already authenticated with OpenAI? If OpenAI cannot provide these simple metrics why would they want to out-source this?
Does OpenAI seriously want people to create an account with them, pay for the service, and then have to authenticate again for each individual GPT to a separate provider?
Because, let me tell everybody who decides to do this: I would be more than happy to create GPTs just to gather leads and sell them. If you use my Workout GPT, best believe your email will be blasted with cold calls and workout-related e-mails. 10 Peloton-related emails PER DAY, baby.
Lastly, e-mail addresses are considered PII. If you want to seriously consider this, you need to disclose why you are collecting e-mails, which (I am not a lawyer) could mean that you are held liable when these e-mail addresses are inevitably sold.
So, if you want to continue this, at the very least I’d recommend creating a UUID token instead of blatantly showing people’s e-mail addresses when you have claimed them for “authentication” but cannot control how they are actually used.
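For illustration, here is one hedged way a service could do that (this is not how gpt-auth actually works, and the key and function names are invented): derive a stable, UUID-shaped pseudonym from the address with a keyed HMAC, so downstream analytics never see the raw e-mail.

```python
import hashlib
import hmac
import uuid

# Hypothetical server-side secret; in practice keep it out of source control
# and rotate it. With a keyed HMAC, the token cannot be reversed to an
# e-mail address without the key.
SECRET_KEY = b"server-side-secret"

def pseudonymize(email: str) -> str:
    """Map an e-mail to a stable UUID-shaped token, hiding the raw address."""
    normalized = email.strip().lower().encode()
    digest = hmac.new(SECRET_KEY, normalized, hashlib.sha256).digest()
    # The first 16 bytes of the digest are enough to form a UUID.
    return str(uuid.UUID(bytes=digest[:16]))
```

Because the mapping is deterministic, the same user yields the same token across sessions, so usage analytics still work without the builder ever holding the address.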
The GPT then calls an action which sends a one-time-code to the provided email address to authenticate the user.
Note: In this case the user has not entered into a separate agreement with the Plugin Builder, which makes this a violation of OpenAI’s terms of service (emphasis added):
(e) Restrictions. Your API and Plugin Responses will not: (i) pose a security vulnerability or threat to our users, us, or any third party; (ii) interact with our users in a manner that is deceptive, false, misleading, or harassing; (iii) return or contain illegal, defamatory, pornographic, harmful, infringing, or otherwise objectionable content; (iv) include any malware, viruses, surveillance, or other malicious programs or code;
This statement, “I need to authenticate you for security purposes” is definitely deceptive, false, and misleading.
And tracking users’ location and device is definitely surveillance.
Users of a GPT that employs gpt-auth have no idea who they’re sharing their information with. They might be okay sharing that information with the GPT builder if given the opportunity to provide informed consent—which they do not get—but would they be okay also providing their information to gpt-auth?
And I’m just taking a US-centric view towards privacy rights right now.
Let’s talk about Europe for a second…
All of this is a minefield there, and neither gpt-auth nor any of the builders you provide this service for has the mechanisms in place to comply with European privacy laws.
So, I don’t know what relationship or discussions you had with @logankilpatrick, but from what I have seen of gpt-auth, my reading of the terms of service and other OpenAI policies, and privacy laws, I do not see any way in which gpt-auth is on the right side of the line here.
Thanks for sharing the info. We will work on making it more evident to end users what information is collected by the GPT builder and the reason for collecting it. We will also provide a toggle for users to allow or disallow it as they choose.
You also need to have mechanisms in place for the deletion of user data, and you and the GPT Builders need to each have privacy policies the user expressly agrees to prior to the collection of any personal data.
The users aren’t your customers and they aren’t the customers of the GPT Builders. They’re OpenAI’s customers. They’re using the service on chat.openai.com and expecting to be covered by OpenAI’s privacy policies.
The tracking of their email address is one thing, but their IP, location, device information, and especially their prompts to the GPT is something else entirely.
As it stands there’s already a good authentication method available: OAuth. It just doesn’t play well with the no-code “developer” crowd.
So, I get it: in a gold rush, sell shovels. But as currently implemented and advertised, I wouldn’t touch gpt-auth with a ten-foot pole, and I will continue to advise others to steer clear.
It’s an interesting idea and a slick implementation; I just don’t see how you build it in a way that complies with policy and the law.
Why?
GPTs support OAuth authentication. The end user needs to go through very clear steps if they want to use the GPT:
they need to sign up and at that point should be able to read and accept a privacy policy of the 3rd party provider of the GPT.
This is how it worked for plugins and it remains so for GPTs.
Most GPT builders didn’t implement OAuth… that’s a different story. They’re banking on revenue share or who knows what.
OpenAI is then asking the user to confirm if they want to “allow” information to be sent to the webservice endpoint of each action of a GPT.
I think the relationship between the user and OpenAI is separate from the relationship between the user and the 3rd party, the builder of the GPT.
Sign-up can be made easy on the user by allowing them to sign up with Google, Microsoft, Twitter, etc. accounts.
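To make the division of responsibilities concrete, here is a minimal hypothetical sketch of what a GPT builder’s action backend does once OAuth is in place: it only resolves the bearer token that ChatGPT attaches to requests after the user has signed up and accepted the 3rd-party privacy policy. The in-memory token table below stands in for a real OAuth server; every name is invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    """What the builder's backend knows about an authenticated user."""
    user_id: str
    accepted_privacy_policy: bool

# In a real deployment, tokens are issued by your OAuth server after the
# consent screen; this table is only a stand-in for illustration.
TOKENS = {"tok_abc123": Session(user_id="u_42", accepted_privacy_policy=True)}

def authorize(authorization_header: str) -> Optional[Session]:
    """Resolve a 'Bearer <token>' header to a session, or None if invalid."""
    scheme, _, token = authorization_header.partition(" ")
    if scheme != "Bearer" or not token:
        return None
    session = TOKENS.get(token)
    if session is not None and session.accepted_privacy_policy:
        return session
    return None
```

The point of the sketch is that the backend never sees credentials, only a token the user consciously created, which is exactly the consent step gpt-auth skips.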
I think this makes sense. It’s understandable for a standalone service (or at least a powerful API) that extends to GPTs to have a premium model which requires a subscription, and to require some authentication to prove the user is a paying member.
What is being offered here is a thinly-veiled “analytics” dashboard which harvests user information for who knows what purpose, blatantly ignoring the laws and regulations that protect people. The fact that this is being advertised for “monetization” speaks volumes about its true purpose.
Here’s a much better way to learn about how people use your GPT:
Surveys
If you want to know how people want to use your GPT, then create an Action for a survey. Gather anonymous information. You can even optionally ask for their e-mail for the purpose of following up.
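A survey Action like that can be trivially small. As a hedged sketch (the field names and endpoint shape are invented, not any real GPT Action schema), the backend handler just records anonymous answers and keeps an e-mail only when the user volunteers one:

```python
import json
from typing import Any, Dict, List

# In-memory store for illustration; a real service would persist these.
RESPONSES: List[Dict[str, Any]] = []

def handle_survey(body: str) -> Dict[str, str]:
    """Store one survey response; e-mail is kept only when volunteered."""
    data = json.loads(body)
    record: Dict[str, Any] = {
        "use_case": str(data.get("use_case", "")),
        "rating": int(data.get("rating", 0)),
    }
    if data.get("email"):  # optional, opt-in follow-up only
        record["email"] = data["email"]
    RESPONSES.append(record)
    return {"status": "ok"}
```

Note the inversion relative to gpt-auth: nothing is collected by default, and the user decides per response whether to hand over any PII.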
If people get value they will give value. You do not demand value without providing it first; that is a great way to lose potential users. I promise, any GPT that is instantly walled off behind a PII demand will be beaten by a GPT without such limitations.
If someone wants to do good and profit off of GPTs, create a survey dashboard instead of authentication/tracking.
This is privacy 101. If I can input whatever email I want, can I also just input any email I want to be deleted? Any experienced developer knows that PII management is a massive PAIN.
We are not lawyers. This information is elementary. You seriously need to read the PII laws of each country that GPTs can operate in first, and speak with a lawyer (or at least some specialist/consultant) before even considering selling this as a service. By the time you have met those requirements, you will have built an impressive and user-burdening authentication system.
Unless the GPT is an extension of a third-party service (like Zapier), I do not see why a secondary auth service is necessary, or why it would ever be (privately) commended by OpenAI staff.
Good point. I’d like to point out that I wasn’t the person to publicize a private message as justification.
I’m no legal expert either but I do know that the usage of “authentication” is a misnomer at best, data harvesting under false pretenses at worst.
Having to jump through hoops to try a GPT and provide PII is simply not what I want to see become common.
If OpenAI would like a third-party data-harvesting provider (which in my opinion can be okay if the rules are followed, but is ultimately a grey area), then announce it, and make it a contest or an open-source collaboration.