I have a suggestion:
Can reasoning models provide some intermediate feedback while processing requests? During longer tasks, users may wonder whether the request was sent successfully or whether something went wrong. In ChatGPT, I've noticed that some interfaces show summaries or hints about the model's "thought process." While I understand that streaming the full chain of reasoning isn't practical, could the API at least return a few words or phrases through something like a reasoning_content field? For example, it could indicate what the model is currently focusing on, such as "Analyzing input…"
This would help users stay informed and avoid confusion during longer waits. Even minimal feedback would make the experience smoother.
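To make the idea concrete, here is a minimal sketch of what a client might do with such a field. The `reasoning_content` key here is the field proposed above, not something the API is documented to return today, and the chunk shape is only modeled loosely on a chat-completion delta:

```python
# Hypothetical streamed chunk: "reasoning_content" is the proposed field,
# shown alongside the usual "content" delta. This is an illustration of the
# suggestion, not an existing API response.
chunk = {
    "choices": [
        {
            "delta": {
                "content": None,
                "reasoning_content": "Analyzing input...",
            }
        }
    ]
}

# A client could surface the hint in the UI while the answer is still pending.
hint = chunk["choices"][0]["delta"].get("reasoning_content")
if hint:
    print(f"[model status] {hint}")
```

Even a coarse hint like this, updated a few times per request, would tell the user the request is alive without exposing the full reasoning trace.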
Has anyone written an event parser that can correctly surface events from an output list to a user interface via a metadata side channel?
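A parser like that can be sketched fairly simply: dispatch each streamed event by its type, routing status-style events to a side channel and answer text to the main view. The event type strings below (`response.reasoning_summary_text.delta`, `response.output_text.delta`) are assumptions modeled on streaming event names; adapt them to whatever your SDK actually emits:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UiSink:
    """Collects answer text and short status hints on separate channels."""
    answer: List[str] = field(default_factory=list)
    status: List[str] = field(default_factory=list)

    def show_status(self, hint: str) -> None:
        self.status.append(hint)   # e.g. render in a "thinking…" banner

    def show_answer(self, text: str) -> None:
        self.answer.append(text)   # e.g. append to the chat bubble

def route_event(event: dict, ui: UiSink) -> None:
    """Surface one streamed event through the appropriate UI channel."""
    kind = event.get("type", "")
    if kind == "response.reasoning_summary_text.delta":
        ui.show_status(event.get("delta", ""))
    elif kind == "response.output_text.delta":
        ui.show_answer(event.get("delta", ""))
    # Unknown event types are ignored so new server events don't break the UI.

# Example: a fake stream of events standing in for the real output list.
stream = [
    {"type": "response.reasoning_summary_text.delta", "delta": "Analyzing input..."},
    {"type": "response.output_text.delta", "delta": "Here is"},
    {"type": "response.output_text.delta", "delta": " the answer."},
]

ui = UiSink()
for ev in stream:
    route_event(ev, ui)

print("".join(ui.status))   # status hints shown during the wait
print("".join(ui.answer))   # final answer text
```

The key design point is that the status channel is additive and ignorable: a UI that doesn't understand the side channel still gets the full answer from the main text deltas.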
Well, you can’t, not without verifying your OpenAI organization with a recognized personal photo ID, and potentially further steps such as recording videos of yourself. And even then you can’t rely on it, because the forum is littered with reports of it being broken. I wouldn’t trust something that broken with information that personal, a prime hacking target, in the hands of a third-party organization that opaque.