Could AI systems help verify human reasoning? Could they process documents and, for example, issue informational messages, warnings, or errors for the reasoning steps occurring in those documents? Might such processing encompass mathematical reasoning as well as other forms of reasoning, e.g., natural-language argumentation?
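To make that first idea more concrete, here is a minimal, hypothetical sketch, in Python, of what such a “reasoning linter” might look like. All of the names here (lint_reasoning, Diagnostic, the judge callable) are illustrative assumptions rather than an existing tool or API, and the step-level judge is deliberately left abstract; in practice it might be an LLM-based verifier or a formal proof checker.

from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Severity(Enum):
    INFO = "info"
    WARNING = "warning"
    ERROR = "error"

@dataclass
class Diagnostic:
    step_index: int     # which reasoning step was flagged (0-based)
    severity: Severity  # informational message, warning, or error
    message: str        # human-readable explanation for the author

def lint_reasoning(steps: List[str],
                   judge: Callable[[str, List[str]], Severity]) -> List[Diagnostic]:
    """Apply a step-level judge to each reasoning step, given the preceding
    steps as context, and collect linter-style diagnostics."""
    diagnostics: List[Diagnostic] = []
    for i, step in enumerate(steps):
        severity = judge(step, steps[:i])
        if severity is not Severity.INFO:
            diagnostics.append(Diagnostic(i, severity, f"Step {i + 1} flagged: {step!r}"))
    return diagnostics

if __name__ == "__main__":
    # Toy judge for illustration only: it merely flags conclusion steps for
    # review; a real judge would assess whether each step actually follows.
    def toy_judge(step: str, context: List[str]) -> Severity:
        return Severity.WARNING if step.lower().startswith("therefore") else Severity.INFO

    steps = [
        "All squares are rectangles.",
        "This shape is a rectangle.",
        "Therefore, this shape is a square.",
    ]
    for d in lint_reasoning(steps, toy_judge):
        print(f"[{d.severity.value}] {d.message}")

In a word-processing integration, diagnostics of this kind could be surfaced the way spelling and grammar checkers already surface theirs, e.g., as underlined spans with hover-over explanations.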
One can also envision the benefits of such tools when authoring or co-authoring documents. AI systems could act simultaneously as co-authors in word-processing software and as chatbots in auxiliary chat channels and apps, making them useful “bots” for multi-user word-processing scenarios. A reasoning verifier would be just one kind of such a bot.
Beyond processing and co-authoring documents, verifying human reasoning could also enable and enhance man-machine Socratic dialogue.
Here are some publications about verifying the reasoning of AI systems, e.g., chain-of-thought reasoning.
Thank you. I look forward to discussing any of these ideas with you.
 Lightman, Hunter, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. “Let’s Verify Step by Step.” arXiv preprint arXiv:2305.20050 (2023).
 Poesia, Gabriel, Kanishk Gandhi, Eric Zelikman, and Noah D. Goodman. “Certified Reasoning with Language Models.” arXiv preprint arXiv:2306.04031 (2023).
 Ling, Zhan, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. “Deductive Verification of Chain-of-Thought Reasoning.” arXiv preprint arXiv:2306.03872 (2023).
P.S.: Please also check out the following opportunity pertaining to ChatGPT for Mathematics: Ph.D./Postdoc position: ChatGPT for Mathematics. Please feel free to share this opportunity with others who may be interested.