Proposal: Incentive System for User-Contributed Ideas

Feedback Text for ChatGPT (copy-paste)

Title / Short Headline:
Proposal: Reward System for User-Contributed Ideas

Feedback Body:
I would like to suggest a structured system to encourage and reward users who contribute meaningful ideas for improving ChatGPT and related OpenAI products.

Key points:

  1. Idea Classification:

    • Interesting Idea → small but useful suggestion ($20 reward)

    • Good Idea → meaningful improvement ($50 reward)

    • Great / Creative Idea → high-impact or innovative idea ($100 reward + potential bonus)

  2. Implementation & Bonus:

    • Initial reward upon idea acceptance

    • Idea is tested/implemented over a period (e.g., 3 months)

    • Final bonus if the idea proves impactful (measured by engagement or user satisfaction)

  3. Optional Non-Monetary Incentives:

    • Extra free generation credits

    • Early access to new features

    • Extended limits for users who submit high-quality ideas

Reasoning:
This system would:

  • Encourage users to submit quality feedback

  • Accelerate product improvement

  • Increase user engagement and loyalty

  • Allow OpenAI to quickly implement innovative ideas ahead of competitors

Summary:
Rewarding users for constructive ideas benefits both the community and OpenAI. Users feel recognized, motivated, and empowered to help improve the platform. OpenAI gains valuable insights for faster, more effective product development.

Hi and welcome to the community!

From what I understand, you are suggesting that the most-liked ChatGPT ideas should be implemented and/or come with an additional reward for the user who proposed them.

The Codex team is doing something along those lines. Feature requests can receive likes from the community on GitHub, and the team can then move ahead with the most-supported ones. Since Codex is open source, people can even submit PRs and, if they are merged, add that work to their CV.

I personally think that is an interesting idea. That said, some ChatGPT feature requests have been made hundreds of times over the years, so it would be very difficult to decide who should receive a reward in many cases. Aside from that, it could be a good way to keep highly engaged users involved in the development of ChatGPT.

P.S. I always thought ChatGPT should be open source.

The way you get paid is by tangibly breaking ChatGPT.

https://openai.com/index/safety-bug-bounty/

OpenAI finally has a reward for model behavior, instead of even the worst exfiltration and cross-context faults (even those arising from API misconfiguration by OpenAI) merely being "out of scope":

March 25

Today, OpenAI is launching a public Safety Bug Bounty program focused on identifying AI abuse and safety risks across our products…
This new program will complement OpenAI's Security Bug Bounty by accepting issues that pose meaningful abuse and safety risks, even if they don't meet the criteria for a security vulnerability.

Of course, with reasoning models, an AI going off the rails 1% of the time on the same inputs has a reproducibility problem, because sampling is done "raw" with minimal top-p and temperature moderation, and because when using ChatGPT itself, you cannot know whether you are being "tested" on a provisional AI model.


Now: reward more than one API key leak report per month. Currently, they pool all submissions and pick just one to reward. If each valid report were rewarded, I wouldn't have to hold back submissions (waiting until the org graduates tiers and accumulates lots of fine-tunings and data to be deleted) just to win a single reward by competing against all other submissions.