Letting GPT Design Its Own Prompts: A Practical Use Case

What if GPT didn’t just respond to prompts… but designed them?

This isn’t a theoretical idea. I ran a test.
Instead of writing a detailed prompt myself, I made GPT construct it — based purely on a chain of structured questions.


:test_tube: The Experiment

Goal: Create a hyper-realistic image of the following scene:

  • Spring evening
  • First-person view from the driver’s seat
  • In a tunnel in Korea
  • Left hand on the steering wheel
  • 9:16 vertical ratio

Normally, we would just write a direct prompt like:

“Draw a realistic image of someone driving through a Korean tunnel at 7 PM in spring, with their left hand on the steering wheel.”

And GPT would generate a basic image prompt. But it would still feel like AI — not quite real. So I tried something else.


:brain: The Conversation That Built the Prompt

User:
AI image generation quality is getting insane, but people still feel a strange disconnect between AI images and real photos. Why is that? What causes this?

GPT:
Even with high visual fidelity, people perceive AI-generated images as “off” due to a combination of technical, perceptual, and cultural factors. These include:

  1. Technical flaws – distorted fingers, incorrect lighting physics, unrealistic text, repeated patterns
  2. Cognitive dissonance – uncanny valley, too-perfect symmetry, absence of micro-expressions
  3. Cultural mismatch – unnatural lens distortion, lack of photographic noise, absence of real-world imperfections
  4. Emotional deadness – expression lacks intent, no real “purpose” behind the pose
  5. Temporal disconnection – AI images exist outside of time, while photos capture a moment with a story and a human behind the lens

User:
Okay, so what if each of those factors was assigned to an expert? Imagine a group of professionals (optics engineers, skin texture artists, photographers, cultural anthropologists, and so on) collaborating to design one image. Would that eliminate the uncanny feeling?

GPT:
Yes. If every element causing the disconnect is handled by domain experts, and the image is generated through that collaborative simulation, we can get photorealistic results virtually indistinguishable from real photos. This would require:

  1. Optical/physics experts – lens distortion, reflection accuracy
  2. Material & texture artists – pores, fabric realism, lighting on surfaces
  3. Stylists – seasonal clothing, realistic accessories
  4. Facial/gesture experts – authentic micro-expressions and hand posture
  5. Photographers – camera angle, film grain, exposure realism
  6. Cultural/semantic experts – context-appropriate symbols, signage, emotional tone
  7. Prompt engineer – translates all of this into a coherent image instruction

User:
Cool. Let’s assume this simulation is real. Multiple experts working together to create a prompt for one photo. Can you write that kind of prompt?

Let’s say the scene is:

  • First-person view from inside a car
  • Driving through a tunnel in Korea
  • 7 PM, in spring
  • Left hand resting casually on the steering wheel
  • 9:16 aspect ratio

Try to design that prompt as if it were crafted by all those experts.

:camera: GPT generated a single prompt with over 2,000 characters—all at once—for just one image.
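
For anyone who wants to reproduce the flow outside the chat UI, here is a minimal sketch of the question chain as an API script. It assumes the OpenAI Python SDK; the model name and the exact question wording are illustrative placeholders, not the ones used in the experiment.

```python
# Minimal sketch of the question-chain workflow described above.
# Assumes the official OpenAI Python SDK; the model name and the exact
# question wording are placeholders, not what was used in the experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scene = (
    "First-person view from the driver's seat, driving through a tunnel "
    "in Korea at 7 PM in spring, left hand resting on the steering wheel, "
    "9:16 aspect ratio."
)

# The chain of structured questions, asked in order within one conversation
# so each answer becomes context for the next step.
questions = [
    "AI image quality is high, yet people still sense a disconnect between "
    "AI images and real photos. What causes this?",
    "If each cause were assigned to a domain expert (optics, textures, "
    "photography, culture), could their collaboration remove that feeling?",
    f"Assume that expert simulation is real. Write the image prompt those "
    f"experts would produce for this scene: {scene}",
]

messages = []
for question in questions:
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})

# The final assistant reply is the GPT-designed image prompt.
print(messages[-1]["content"])
```

The only mechanism that matters here is that each answer stays in the conversation, so the final request inherits the “expert” framing built up by the earlier questions.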

:framed_picture: Image (generated using the GPT-designed prompt above)

:magnifying_glass_tilted_left: Core Insight

  • GPT performs significantly better when it designs the prompt rather than just executing one.
  • The less the human writes up front, the deeper GPT seems to think.
  • Maybe we’re past the era of “writing better prompts” and into the era of designing better thinking paths.

:speech_balloon: To the Forum

Have any of you tried something similar?

  • Letting GPT simulate domain experts before prompt generation?
  • Chaining questions to force structural awareness before output?
  • Getting better results through prompt-design automation rather than manual crafting?

Would love to hear from others doing prompt-as-design workflows.

“I don’t write prompts anymore. I just ask better questions.”