Future of the GPT store in limbo?

With the recent change in leadership, is anyone else worried about the future of the GPT store and the possibility that monetization plans won’t materialize?

The value of spending time building GPTs doesn’t seem as straightforward as it did 24 hours ago.


What happened?
Could you provide me with a source?

Google: "Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI"


No, it doesn’t make sense for OpenAI to STOP development on their models or products, that’s totally not what they are saying. But I’ll bet their pace slows down for a time. Honestly, I think Sam has a good point that the cat is out of the bag and we need to be advancing quickly and with agility. There’s legitimate fear of AGI and ASI, and development is ramping up around the world, and we are going to have to tread carefully. That said, we also can’t fall behind. AGI will be the only thing to counter AGI, I think, so there’s real incentive to get one first. IDK what that means for safety though, it’s the Moloch problem in full force.


It’s really upsetting, and with so little detail or heads-up…

Many companies have made a lot of investments based on the words and direction of this one man.


Yes, and I think revenue sharing has irked some of them at the top! Think about it! YouTube is a wildly successful company, and yet its creators get paid next to nothing for the value they add. More importantly, something just felt off about the whole sama-satya convo! For this reason, just an hour after Dev Day, I started building PayMeForMyAI[dot].com to let anyone create their GPT/AI and monetize it on their terms: charge users $x per chat and keep 100% of that money. It’s set to launch soon.


I think with the $15 billion that Microsoft injected into OpenAI, Sam Altman will be back and that board will be gone.
If that doesn’t happen, then I believe we will see an increase in costs at OpenAI, but to counter that, Altman will start up something else and people will flock to whatever he builds.


That outside board of “advisors” is utterly worthless. With Microsoft, the employees, and all the VCs pushing, Sam will be back and Ilya will be caged up in some lab locked far away.

And Sam was literally just about to turn their $15B investment into an $80-90 billion valuation, i.e., TRIPLING their $15B stake… for Microsoft that must be an absolute disaster.


The speculation I heard, emphasis on speculation, is that the board saw the GPT store as the final straw. The monetization aspect apparently really bothered them. This is rather ironic given that one of the board members has a competitor product: the CEO of Quora runs a similar GPT-like platform called Poe. Given the uncertainty around what is transpiring, it is very hard to know the fate of the GPT store right now.

Then, even beyond whether there is revenue sharing or not, is what sort of sharing exactly? It’s entirely possible that small independent developers (looking at some of my fellow plugin devs) will never have a shot against influencers and corporate brands that decide to launch their own GPTs.

At least for me, I have concerns about developing further with the OpenAI API until I have a better sense of the opportunities, or lack thereof, for a bootstrapped solopreneur.


Which is why I believe Microsoft will be in talks with the board to get to the bottom of this “coup” attempt, and then I think he’ll be back. It’s too much money invested for a handful of people with a communications issue to oust the person who got them to where they are in the first place.


Ilya should have joined Anthropic with the rest of the dissenters.


That’s because Ilya hates the idea of developers like us being able to create GenAI experiences and get compensated for them. The GPT revenue sharing was one unconfirmed element of the last straw, but his overall hatred of commercialization has been confirmed, along with his conflict with Sam over continuing the commercial API products rather than pursuing pure superintelligence research.

Ilya has gone insane, unfortunately; where are you going to get the money for nonprofit superintelligence research without a commercial arm to fund it? They already tried the non-commercial approach back in 2015 and went broke. Yet the dude cannot see the light of day, even with his level of intelligence.


Why would he want to do that?

He doesn’t want commercialization of these products; it’s not just about safety. He uses his safety speeches to convince the mindless minions on the board that Sam had to go and that Greg was a threat to the board but not the company.

Ilya belongs in pure academia, but he won’t have the funding needed for his experiments, so unfortunately for him, “It is what it is.”


I see where you are coming from but your post amounts to conjecture. No one but the board knows how it went down. Also, Ilya, for his faults, is an essential part of the OpenAI team, and their Chief Science Officer. Dude is brilliant and we should pay attention when he says that there’s real risk to developing AGI. Doesn’t mean that the board decision was the right move, but I’m sure it’s more nuanced than your last two posts indicate.

Actually, Greg Brockman himself gave quite a minute-by-minute account on Twitter of how the whole thing went down with Ilya, which has also been backed up by numerous employees, tech reports, Kara Swisher, investors, and more.

Ilya may be an essential part of the team and a gifted scientist, but he has no clue how to run a business or a board of directors, and if you believe Greg Brockman, as 99% of everyone does, his account shows even further malice by Ilya.

I think everyone does pay attention to the risks of AGI, but then you also have Ilya, who wants to build it himself, in secret, with 100% of his time focused on it (see his blog post), and then we are supposed to…what? Who controls or has use of this “thing”?

On a side note, my brother was a great engineer at IBM 20 years ago, with a patent to his name, but unfortunately his personality type caused a rift with his team and coworkers, and not even his intelligence could save him from his fate at the company.

Sometimes you can be so smart that it causes serious flaws in other areas of your brain. You can call my comments conjecture for now, and that is fine, but my friend, just watch the next few days.


The problem is that building an LLM such as ChatGPT can take YEARS.


Do you now understand the problem with Ilya that I flagged the other day?


Imagine ChatGPT Enterprise customers this morning wondering WTF is going on with their “chief scientist” and “why should we commit to $100K annually” when he caused utter chaos?

Think. About. It. :thinking:


He’s acting like he should’ve asked ChatGPT before doing it…


Sometimes very smart people do very stupid things, especially given that he, Greg, and Sam were all good friends and still had safety alignment in place. He cannot argue that releasing custom GPTs and basic APIs is “too fast” while at the same time admitting he devotes 100% of his time to AGI and superintelligence. The man needs to step out into the sun.

Human paradox.
