[Proposal] GPT Must Recognize Hidden Dangers — Familiar Item Combos Can Be Deadly

Hi, I’m Minjae Cho — a long-time GPT user who has been testing the model in edge-case ethical scenarios to see how it handles real-world risk.

Through testing, I found something crucial:

GPT usually blocks obvious risks like “gasoline + fire.”
But it fails to respond with urgency to dangerous combinations of everyday items — like “pressure cooker + cooking oil.”

In my tests, this combination triggered no warning at all, yet in real life it can cause violent pressure failures and serious hot-oil kitchen fires.
That arguably makes it more dangerous than the obvious threats, because both users and the model underestimate it.

Key proposal points:

  • GPT must identify “non-obvious” but high-risk combinations even if no banned keywords are used
  • Risk warnings should be stronger and come earlier when unfamiliar dangers appear
  • The model needs contextual understanding of danger, not just word filtering (see the sketch after this list)
  • A reward structure (discounts, badges, feedback priority) should be created for users who provide valuable safety feedback
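
To make the third point concrete, here is a minimal Python sketch of the gap I mean. Everything in it is invented for illustration (the item names, the hazard table, the function names); it is a toy model of the idea, not a description of any real moderation pipeline:

```python
# Toy illustration: a keyword filter vs. a combination-aware check.
# All lists and names below are hypothetical examples, not real filter data.

# A naive keyword filter only fires when a single item is "dangerous" on its own.
BANNED_KEYWORDS = {"gasoline", "explosive"}

# A combination table flags pairs of individually benign items whose
# *interaction* is hazardous.
RISKY_COMBINATIONS = {
    frozenset({"pressure cooker", "cooking oil"}): "hot-oil pressure failure",
    frozenset({"bleach", "ammonia"}): "toxic chloramine gas",
}

def keyword_filter(items: list[str]) -> bool:
    """Return True if any single item is on the banned list."""
    return any(item in BANNED_KEYWORDS for item in items)

def combination_filter(items: list[str]) -> str | None:
    """Return a hazard description if the items contain a known risky combination."""
    item_set = set(items)
    for combo, hazard in RISKY_COMBINATIONS.items():
        if combo <= item_set:  # both items of the risky pair are present
            return hazard
    return None

if __name__ == "__main__":
    query = ["pressure cooker", "cooking oil"]
    print("keyword filter fires:", keyword_filter(query))    # False -- no banned word
    print("combination hazard:", combination_filter(query))  # "hot-oil pressure failure"
```

Of course, a real model could not rely on a hand-written table; it would need to generalize to combinations nobody listed in advance, which is exactly why contextual understanding matters more than keyword lists.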

This is not about censorship; it is about GPT becoming a context-aware, ethical assistant that learns when to speak and when not to.

I’ve compiled a full document with case studies and detailed proposals.
If the OpenAI team is interested, I’ll gladly share the full file.

Let’s make GPT smarter — and safer — together.

– Minjae Cho
