Users sometimes claim that reasoning models output complete garbage … until those users learn how to use them correctly by providing sufficient context and clearly defined goals. Why is it left up to the user to discover how to align their inputs with the needs of the reasoning model? Wouldn’t it be possible to add an initial alignment/context-discovery phase, in which a chatbot interviews the user to determine what goals they are trying to achieve and requests from them whatever content is needed to achieve those goals? Once the chatbot is satisfied, the output of the alignment/context-discovery phase could then be passed to the reasoning model.
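Roughly what I have in mind, as a minimal Python sketch: a clarifier loop keeps interviewing the user until it signals it is done, then hands a distilled brief to the reasoning model. The ask_model helper, the "READY" convention, the prompts, and the turn limit are all placeholders of my own invention, not any particular vendor's API.

    # Sketch of the two-phase flow described above. ask_model() stands in for
    # whatever chat-completion call is actually used; everything here is hypothetical.

    def ask_model(system_prompt, messages):
        """Placeholder for a call to a chat model; returns the model's reply text."""
        raise NotImplementedError

    CLARIFIER_PROMPT = (
        "Interview the user about their goal. Ask one question at a time. "
        "When you have enough context, reply with exactly READY followed by "
        "a summary of the goal and the material the user provided."
    )

    def context_discovery_phase(max_turns=10):
        """Phase 1: a lightweight chatbot interviews the user until it is satisfied."""
        transcript = []
        for _ in range(max_turns):
            reply = ask_model(CLARIFIER_PROMPT, transcript)
            if reply.startswith("READY"):
                # The clarifier is satisfied; hand its summary to phase 2.
                return reply.removeprefix("READY").strip()
            transcript.append({"role": "assistant", "content": reply})
            transcript.append({"role": "user", "content": input(reply + "\n> ")})
        # Turn limit reached: fall back to passing along the raw transcript.
        return "\n".join(m["content"] for m in transcript)

    def reasoning_phase(brief):
        """Phase 2: the reasoning model receives the distilled goals and context."""
        return ask_model("Solve the task described in the brief.",
                         [{"role": "user", "content": brief}])

    if __name__ == "__main__":
        print(reasoning_phase(context_discovery_phase()))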
Never mind. Deep Research already includes the initial alignment/context-discovery phase I was imagining.