Hey all,
I’m exploring ways to improve GPT outputs in multi-step reasoning tasks. Specifically:
How to structure pipelines that combine retrieval-augmented generation (RAG), intermediate evaluation, and style adjustments.
Techniques to evaluate and feed back outputs for continuous improvement.
Ways to prevent drift or loss of context across multiple steps.
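For concreteness, here's a rough sketch of the pipeline shape I have in mind: retrieve, draft, evaluate, revise, with a compressed summary carried between rounds to limit drift. The model calls are stubbed out with toy functions (all names here are placeholders I made up, not any real API), so treat it as a skeleton rather than a working implementation:

```python
def retrieve(query, corpus):
    """Toy retrieval: return corpus entries sharing a word with the query."""
    terms = set(query.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def generate(prompt):
    """Stub for the LLM call; a real pipeline would hit the model here."""
    return f"DRAFT based on: {prompt}"

def evaluate(output, query):
    """Stub evaluator: in practice this would be a grader model or rubric.
    Here it just checks whether the query's first term survived."""
    return 1.0 if query.lower().split()[0] in output.lower() else 0.0

def summarize(context, output):
    """Compress the state carried between rounds to limit context drift.
    Crude truncation stands in for a real summarization call."""
    return (context + " | " + output)[-200:]

def run_pipeline(query, corpus, max_rounds=3, threshold=0.9):
    context = ""
    docs = retrieve(query, corpus)
    output = ""
    for _ in range(max_rounds):
        prompt = f"{context}\nDocs: {docs}\nTask: {query}"
        output = generate(prompt)
        # Intermediate evaluation gates each round; feedback is folded
        # back into the context for the next attempt.
        if evaluate(output, query) >= threshold:
            break
        context = summarize(context, output)
    return output

corpus = ["pipelines for reasoning", "style guides", "retrieval notes"]
print(run_pipeline("reasoning pipelines", corpus))
```

The main open question for me is the evaluate/summarize pair: what people actually use as the grader, and how aggressively they compress the carried context without losing what the next step needs.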
Curious to hear what workflows, architectures, or clever tricks others have used in practice. Any advice, examples, or pipelines would be greatly appreciated!