IMO you hit the nail on the head. That is what sells: expertise in a specific domain of knowledge, not generalized GPTs that anyone can gin up with a modicum of effort. If you build a GPT that is unique and useful, it will find a market. If not, then it probably deserves to die anyway.
It works, and it prompts you to register. Yes.
BUT this doesn't look trustworthy to me, because I don't really know what I am signing up for. I actually refused to do so.
I may or may not be the average user, BUT I'd definitely refuse to subscribe to that.
If there were more info, I might well do it.
I hope that makes sense.
That's usually the case with companies growing rapidly, especially in a brand-new field like this one: they have to constantly adapt to fast-changing developments. No matter how carefully they plan, detours are intense and often unpredictable. So they keep redistributing their focus and time accordingly; that's also what makes things risky for a startup, which is what we're talking about here.
This was flagged, idk why though.
Thank you, I really value your openness and spirit. I’d be glad to share more and explore where our thoughts align or even diverge — that’s where the good stuff usually happens. Let’s keep the dialogue going.
While I see the reasoning behind your comparison to other platforms like xxxxx in terms of pricing models, the underlying services are quite different. The similarity is limited to the micro-transaction approach across a broad user base.
That said, I find it somewhat unfair, and not entirely accurate, to compare it to a monetization model rooted in a completely different industry. Mimicking such a structure may not reflect the nature or value of what we do. Instead, we should be aiming to define a new standard for evaluating and pricing our contributions, one that I propose to outline next.
On Revenue Sharing and Transparency
The split itself isn’t really the issue—as long as there’s transparency and a clear commitment to reinvest in the shared needs of the community. No one will object to a 30% cut if it’s genuine, fair, and demonstrably supports the long-term value and sustainability of the ecosystem.
On Fair Valuation and Foundational Principles
Valuation—and by extension, pricing—is only fair when it’s proportional to and directly linked with the actual outcome (after deducting operating costs, pre-agreed in a transparent budget, including fair compensation for all parties involved).
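To put rough numbers on it, here is a toy illustration (every figure and name below is made up, purely to show the principle):

```python
# Toy illustration of the proposed "fair valuation": the distributable pool is
# the actual outcome minus pre-agreed operating costs, split in proportion to
# each party's contribution. All numbers and names are made up.
outcome = 10_000            # actual result generated
operating_costs = 3_000     # pre-agreed in a transparent budget
pool = outcome - operating_costs

contributions = {"alice": 0.5, "bob": 0.3, "platform": 0.2}
shares = {who: pool * frac for who, frac in contributions.items()}
print(shares)  # {'alice': 3500.0, 'bob': 2100.0, 'platform': 1400.0}
```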
In the same way, an idea can only become perceptible, transmissible, and eventually an ideal reality when the motive belongs to everyone, not just to those who lead or hold interests.
This feels like common sense—a clear truth that often goes unacknowledged. Perhaps it’s because we live in systems that prioritize economic logic over moral considerations, a pattern that shapes much of our modern world.
That’s why, when we start something new, it’s wise not to automatically copy prevailing habits or embed inherited behaviors without thought. Instead, we should study the full context, examine deeply whether the starting point is truly aligned with the principle of nature—not with the surrounding status quo.
On Collective Incentives and Idea Development
What if we all operated with a pre-agreed financial structure, like a result-based or pay-as-you-go model? Might that create more willingness and momentum to truly develop the ideas we're now only commenting on? Could that shift us from brief exchanges into co-investing in turning thought into action?
On Competition vs. Co-Elevation
Competition can sometimes encourage a mindset of withholding insights or maximizing advantage at the expense of openness. But deeper value often comes when we shift toward co-elevation—where discovery is shared and progress is mutual.
True progress comes from co-elevation, not competition. What really matters isn’t the discovery itself, but the way of thinking and acting that led to it. That’s what you want to learn—so you can make your own discovery.
Now imagine: if you benefited, even slightly, from your peer’s progress—if you earned a fraction of value from everyone’s breakthroughs—wouldn’t you share more? Wouldn’t you move faster? The system would push you forward and celebrate you, not trip you up.
On Our Direction and the Possibility of Token Types
Let’s work toward a “platform” of human-aligned “language models” designed to generate and share high-quality training data for domain-specific use cases.
Maybe we need to start thinking in terms of two different kinds of outputs—two token types: Quantity vs. Quality. Quantity tokens reward sheer production, overflow of info and interaction, regardless of depth. But in most real-world business contexts, effectiveness doesn’t come from more data—it comes from the right data, at the right moment, shaped by experience and intention. That’s what quality tokens would measure: precision, usefulness, insight, and clarity. If we want to build meaningful systems, maybe we need metrics that reward what truly drives progress.
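To make the distinction concrete, here is a rough sketch of how the two token types might be represented and scored (the field names and the equal weights are purely illustrative assumptions, not a spec):

```python
# Sketch of the two proposed token types; all names and weights are
# assumptions for discussion, not a finished design.
from dataclasses import dataclass

@dataclass
class QuantityToken:
    """Rewards sheer volume of output and interaction."""
    units_produced: int

@dataclass
class QualityToken:
    """Rewards the dimensions that actually drive progress."""
    precision: float   # 0..1, factual accuracy
    usefulness: float  # 0..1, fit to the task at hand
    insight: float     # 0..1, novelty of the contribution
    clarity: float     # 0..1, how simply it is expressed

    def score(self) -> float:
        # Equal weights as a placeholder; real weights would be pre-agreed.
        return (self.precision + self.usefulness + self.insight + self.clarity) / 4

print(QualityToken(precision=0.9, usefulness=0.8, insight=0.6, clarity=0.95).score())  # 0.8125
```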
(I’ll share more about that in a dedicated thread soon.)
They say that true understanding shows in simplicity—and I believe that. Often, when things get overly complex, it may signal a lack of clarity.
“Less is more.”
Yes! What truly sells is what truly works—and that is absolute!
Success in building effective custom GPTs for domain-specific fields hinges on true collaboration between domain expertise and AI development; precision emerges when both sides invest in understanding each other's logic and context, not just function.
This demands communication rooted in hierarchical reasoning—starting from clear observation, structured description, and refined definitions before modeling assumptions.
Crucially, deployment is never the end:
custom models evolve through continuous refinement using focused, specialized data, which should be valued not for immediate performance but for how it shapes alignment over time.
This is so much easier said than done! People ask me when my project will be finished, and sometimes I jokingly say "never," and then I wonder if that might not be true. In the old days of software development this would be known as feature creep, something to be avoided if you want to hit a release date. The challenge of GPT development is that as the model keeps evolving, one's assumptions become outdated. I'm attempting to root my reasoning in Codd; the relational model makes sense to me because it is economical with regard to storage. Extending that to a GPT is challenging when you only have control over a portion of the KB the GPT has access to, so there are tradeoffs between speed and accuracy. It's not easy to validate responses when you don't have control over all of the inputs (prompts).
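To make the Codd point concrete, here is a minimal sketch of the kind of normalization I have in mind (the sqlite backend, table names, and helper functions are illustrative assumptions, not my actual setup):

```python
# Minimal sketch of a normalized (Codd-style) knowledge base for a custom GPT.
# Each fact is stored exactly once and joined on demand, which is what makes
# the relational model economical with regard to storage.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE fact (
    entity_id INTEGER NOT NULL REFERENCES entity(id),
    attribute TEXT NOT NULL,
    value     TEXT NOT NULL,
    PRIMARY KEY (entity_id, attribute)  -- one row per fact, no duplication
);
""")

def upsert_fact(name: str, attribute: str, value: str) -> None:
    """Insert or update a single fact without duplicating the entity row."""
    conn.execute("INSERT OR IGNORE INTO entity(name) VALUES (?)", (name,))
    conn.execute(
        "INSERT OR REPLACE INTO fact(entity_id, attribute, value) "
        "SELECT id, ?, ? FROM entity WHERE name = ?",
        (attribute, value, name),
    )

def facts_for(name: str) -> list[tuple[str, str]]:
    """Fetch the facts you would splice into the GPT's context."""
    return conn.execute(
        "SELECT attribute, value FROM fact "
        "JOIN entity ON entity.id = fact.entity_id WHERE entity.name = ?",
        (name,),
    ).fetchall()

upsert_fact("widget-9", "status", "deprecated")
print(facts_for("widget-9"))  # [('status', 'deprecated')]
```

The tradeoff I mentioned shows up right there: the more joins you run to assemble accurate context, the slower each response gets, and you only control this slice of the KB anyway.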
See lots of hidden replies to this post? Hmm.
A lot of forum posters are using AIs to fill this topic with meaningless bot replies. I see a few more candidates for flagging.
This is a forum for talking to others who have meaningful experience and personal anecdote and advice on AI product implementations and uses.
Not for pasted ChatGPT AI drivel.
A few days ago I sent feedback for your awesome AcePilot GPT. Did you get it?
Figuring out how to monetize GPTs outside the official ecosystem is definitely a tricky subject, so some posts in this thread were hidden by OpenAI for violations of the Community Guidelines. I use a GPT to refine some of my writing and reposted (as far as I know there are no bot replies here, but you never know …). No automated bot is posting on my behalf, but with these systems, sometimes even that can be misunderstood.
Sorry for the late reply. I sent you an email.
Have you tried mine since I replied?
Of course, I am not monetizing it. But I am also not creating a GPT in the official store.
That's in case you were wondering.