Add a Parameter to Reasoning Models to Keep Users Informed While Waiting

I have a suggestion: can reasoning models provide some intermediate feedback while processing requests? Sometimes, especially during longer tasks, users wonder whether the request was sent successfully or whether something has gone wrong. In ChatGPT, I've noticed that some interfaces show summaries or hints about the model's "thought process." While I understand that providing the full chain of reasoning isn't practical, could the model at least return a few words or phrases through something like a reasoning_content field? For example, it could indicate what the model is currently focusing on, like "Analyzing input…"

This would help users stay informed and avoid confusion during longer waits. Even minimal feedback would make the experience smoother.
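As a rough illustration of the idea: some OpenAI-compatible APIs already expose intermediate reasoning text in exactly this shape. The sketch below assumes DeepSeek's deepseek-reasoner model, which returns a reasoning_content field on streamed deltas; the field name and endpoint are that provider's convention, not an existing OpenAI parameter.

    from openai import OpenAI

    # DeepSeek exposes an OpenAI-compatible endpoint; this is their API, not OpenAI's.
    client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

    stream = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
        stream=True,
    )

    for chunk in stream:
        delta = chunk.choices[0].delta
        # Intermediate "thinking" text: useful as keep-alive feedback for the user.
        if getattr(delta, "reasoning_content", None):
            print(f"[thinking] {delta.reasoning_content}", end="", flush=True)
        elif delta.content:
            # The final answer, streamed token by token.
            print(delta.content, end="", flush=True)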


My English is not very good; the text above was translated by translation software, so please forgive any mistakes.

You mean, if you could:

  1. use the Responses endpoint,
  2. use a newer reasoning model such as o4-mini or o3,
  3. use streaming to receive events,
  4. use a parameter such as this (summary must be set, e.g. to "auto"; the default null produces no summaries):

     "reasoning": {
       "effort": "high",
       "summary": "auto"
     },

  5. and have written an event parser that can correctly surface events from the output list to a user interface through a metadata side channel (see the sketch after this list)?
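A minimal sketch of such a parser, assuming the openai Python SDK and the documented response.reasoning_summary_text.delta streaming event; treat it as an outline under those assumptions rather than a drop-in implementation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    stream = client.responses.create(
        model="o4-mini",
        input="Plan a week-long photography trip across Iceland.",
        reasoning={"effort": "high", "summary": "auto"},
        stream=True,
    )

    for event in stream:
        if event.type == "response.reasoning_summary_text.delta":
            # Reasoning-summary text arrives as its own delta events; surface it
            # as "what the model is doing right now" while the user waits.
            print(f"[thinking] {event.delta}", end="", flush=True)
        elif event.type == "response.output_text.delta":
            # The actual answer, streamed token by token.
            print(event.delta, end="", flush=True)
        elif event.type == "response.completed":
            print("\n[done]")

The "metadata side channel" here just means routing the [thinking] deltas to a status area of the UI instead of mixing them into the answer text.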

Well, you can't, not without verifying your OpenAI organization with a recognized personal photo identification card, and then potentially completing even more steps, such as recording videos of yourself. And even then it may not work: as you can see, the forum is littered with reports that the feature is broken. I also wouldn't trust information this personal, and this attractive as a hack target, to a third-party verification organization this obfuscated.