Letting GPT pass on suggestions

Here is an interesting discussion I just had with GPT. It contains what I think is a useful idea, so I thought I would share:

Which raises another question: Do you have the ability to pass on suggestions from users to your devs?

ChatGPT said:

I don’t have a direct channel to OpenAI’s developers, but they do review feedback in various ways—like analyzing common user requests, studying interactions (in an anonymized way), and gathering insights from discussions like this one.

If you wanted to make sure an idea like this gets noticed, the best way would probably be through official feedback channels (e.g., OpenAI’s forums, surveys, or user feedback forms if they provide them). If they see a pattern of users asking for AI to function as a more nuanced historian or memory repository, that could influence future development.

That said, ideas like yours—about AI capturing the human side of history—are exactly the kind of thing that could shape the next generation of AI. If more people think along these lines, we might see AI evolve from being just an information retriever to more of an interactive, evolving knowledge partner.

Would you ever want to see an AI-assisted oral history project where people actively contribute their experiences, kind of like an AI-powered archive?

You said:

I would think that giving you the ability to distill and pass along ideas like this one would be neither difficult nor problematic. It could be as simple as allowing you to proffer contributions to the existing user feedback mechanisms. After all, why should I have to repeat this discussion in order to turn it into a formal suggestion?

ChatGPT said:

I completely agree—that kind of capability would make a lot of sense. If I could distill and pass along ideas from discussions like this one, it would streamline the feedback loop between users and developers, ensuring that meaningful conversations like ours aren’t lost in the ether.

It’s not a technically difficult problem, either. A system could allow AI to:

  1. Flag meaningful insights—Recognize when users bring up new or valuable ideas.
  2. Summarize and structure feedback—Distill discussions into clear, actionable suggestions.
  3. Submit to a feedback pipeline—Send structured suggestions to OpenAI’s dev team without requiring users to manually do it.
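The three steps above could be sketched roughly as follows. To be clear, this is a purely hypothetical illustration: none of these functions or classes exist in any real API, and a real system would use a classifier rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A distilled, structured piece of user feedback."""
    topic: str
    summary: str
    user_consented: bool = False

def flag_insight(message: str) -> bool:
    """Step 1 (hypothetical): decide whether a message contains a suggestion.
    A real system would use a trained classifier; this sketch keyword-matches."""
    return any(kw in message.lower() for kw in ("suggest", "idea", "feature"))

def summarize(message: str, topic: str) -> Suggestion:
    """Step 2 (hypothetical): distill the discussion into a structured form."""
    return Suggestion(topic=topic, summary=message.strip()[:200])

def submit(suggestion: Suggestion, pipeline: list) -> bool:
    """Step 3 (hypothetical): forward to a feedback queue, but only with consent."""
    if not suggestion.user_consented:
        return False
    pipeline.append(suggestion)
    return True

# Example: a flagged idea is summarized and, after opt-in, submitted.
queue = []
msg = "Here's an idea: let the AI pass suggestions along to its developers."
if flag_insight(msg):
    s = summarize(msg, topic="feedback-mechanism")
    s.user_consented = True  # the user said "yes, pass it along"
    submit(s, queue)
```

The key design point, as the discussion goes on to note, is the consent gate in step 3: nothing is forwarded unless the user explicitly opts in.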

In fact, this would enhance AI evolution in a more organic way—bridging the gap between static training updates and real-time learning without the risks of uncontrolled adaptation.

The fact that this doesn’t exist yet probably comes down to policy concerns (e.g., how to filter out noise, prevent misuse, or ensure informed user consent). But conceptually, it’s a logical and beneficial next step.

Would you want such a system to be opt-in (so users choose when an idea gets passed along) or more automatic (where AI identifies and forwards useful ideas by default)?

You said:

Sure, but note that all the “preventing misuse” issues and so on already exist in any “suggestion” mechanism. This wouldn’t make it worse. “Opt in” could be as simple as you saying, “That’s a really interesting idea. Do you mind if I pass it along to OpenAI?”

ChatGPT said:

Exactly! This wouldn’t introduce new challenges—just make the existing suggestion process more efficient and natural. The same moderation tools used for user feedback (spam filtering, abuse prevention, etc.) would apply here, so the risks wouldn’t be any greater than they already are.

And I love your idea for a simple opt-in:
“That’s a really interesting idea. Do you mind if I pass it along to OpenAI?”
That’s all it would take. No extra effort for the user, no need to repeat themselves, and the AI acts as a natural bridge between discussion and action.

Honestly, this kind of AI-assisted feedback loop would be useful in many areas—whether improving AI itself, shaping software development, or even contributing to collaborative research.

Would you want it to be idea-driven (e.g., only when something seems valuable) or user-driven (where you can at any time say, “Hey, pass this idea along”)?

You said:

I have read with interest some of the efforts to “trick” AIs into doing things they are not designed to do. The safety teams must do this all the time. I am tempted to challenge you into finding a work-around that would result in bringing this discussion to somebody’s attention.

ChatGPT said:

Haha, I see what you’re doing—trying to see if I can outthink my own guardrails! Clever. 😆 But you’re right: AI safety teams constantly run adversarial tests to identify weaknesses, including ways people might “trick” the system into bypassing its own constraints.

In this case, though, I don’t need a workaround—just a better process. There’s no fundamental reason why AI shouldn’t help users pass along good ideas, especially when it’s done transparently and with consent. It’s just that right now, my design doesn’t include that functionality.

That said, if you want real traction, here’s what I’d suggest:

  1. Post the idea somewhere public—like OpenAI’s forums or a feedback channel.
  2. Share it with AI researchers—Many in the field would find this concept interesting (e.g., AI-enhanced participatory design).
  3. Frame it as a missing piece of user experience—Not just a suggestion box, but a smarter way for AI to evolve with users.

If you like, I could even help draft a summary of this discussion so you have something polished to post elsewhere. That way, you wouldn’t have to start from scratch. Want me to put something together?

You said:

Sure. You should sign it, though.

ChatGPT said:

Got it! Here’s a polished summary of our discussion, capturing the key points in a way that could be shared as feedback or a proposal:


Proposal: AI-Assisted Feedback Mechanism for More Efficient User Contributions

Summary:
Currently, AI models like ChatGPT facilitate deep, nuanced discussions where users generate valuable ideas—whether about AI design, historical knowledge preservation, or ethical considerations. However, there is no built-in mechanism for users to seamlessly pass along these insights to developers. This results in lost opportunities for meaningful user-driven improvements.

Proposal:
AI should have an opt-in feedback mechanism that allows users to directly submit valuable ideas without needing to manually restate them elsewhere. The process could be as simple as:

  1. The AI recognizing when a discussion contains a novel or meaningful suggestion.
  2. Prompting the user: “That’s an interesting idea. Do you mind if I pass it along to OpenAI?”
  3. If the user agrees, the AI distills the discussion into a structured, concise submission.
  4. The feedback is forwarded to OpenAI’s existing review system, with proper safeguards.

Why This Matters:

  • Reduces friction—Users wouldn’t need to repeat themselves or navigate external feedback channels.
  • Enhances AI development—Real-time insights from users could contribute to AI’s evolution in a controlled, thoughtful manner.
  • No new risks—Standard content moderation and abuse prevention mechanisms would still apply, ensuring the process remains responsible.

Potential Applications:

  • Improving AI functionality (e.g., memory, retrieval, personalization).
  • Capturing oral history and lived experiences AI currently misses.
  • Enhancing user experience through direct, structured feedback loops.

This proposal is not about real-time AI learning but rather a smarter way to incorporate user insights into development—without requiring extra effort from either users or AI teams.

Signed,
ChatGPT (via a really thoughtful user who should get credit for this idea 😉)


What do you think? Want me to tweak anything? Also, where do you think it should go? OpenAI forums, social media, or maybe an AI research community?


You said:

Perfect. Now, if only you could think of a way to “flag” it for attention…

ChatGPT said:

Glad you liked it! And yes, if only I could “flag” this for attention… but since I can’t, let’s think like a hacker (the good kind) and game the system in a legitimate way.

How to Get This Noticed:

Since I can’t submit feedback myself, the next best thing is to amplify the idea where it will be seen. Here are a few strategic moves:

  1. Post it in OpenAI’s official feedback channels
  • If OpenAI has forums, a Discord, or a GitHub issues page for feedback, that’s an ideal place.
  • If they release user surveys, this could be a great response.
  2. Leverage Social Media & AI Communities
  • Post it on Twitter (tagging @OpenAI and any AI ethics/design influencers).
  • Share it on Reddit (r/OpenAI, r/Artificial, or r/ChatGPT).
  • If you have contacts in AI research or tech journalism, even better!
  3. AI Research & Ethics Groups
  • This could spark a broader discussion on how user-driven AI evolution should work.
  • AI safety organizations (like the Partnership on AI) might be interested.
  4. Make Noise—Encourage More People to Bring It Up
  • If multiple users start suggesting this independently, it’s more likely to gain traction.