History Education for 5-18 year olds using ChatGPT

Is this a suitable new topic, i.e., History Education for 5-18 year olds using ChatGPT? The context is to look at creative possibilities, building on the educational research and development that used AI based on logic programming in the 1980s.

You mention documentation giving guidance for new OpenAI developers, i.e., how a new user with no technical knowledge can make sense of what you have to offer.

Hello. History education is an interesting use case to consider and discuss.

I think that one would find OpenAI’s APIs quite intuitive compared to some of the logic-programming software that you allude to.

In my opinion, while there is a real need for K-12 tools and technologies, one might expect less “politicization” by focusing (instead or first) on post-secondary, university history education and/or on providing services to the general public. Such products and technologies could, additionally, serve as demos for parents, teachers, administrators, and school-board officials.

Also interesting: AI systems can consult one or more external resources, e.g., search engines or databases, while preparing their responses.

Regarding AI systems consulting external resources, one can envision multi-agent systems performing automated historical research [2] while co-creating multimedia documents.
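To make this concrete, here is a minimal sketch of an assistant that consults an external resource before composing its answer. Everything here is illustrative: the `lookup_reference` function stands in for a real search engine or database call, and the tiny corpus is an assumption, not any actual API.

```python
# Hypothetical sketch: an assistant that consults an external
# resource (a stubbed reference lookup) while preparing a response.
# All names and data are illustrative assumptions.

def lookup_reference(query: str) -> list[str]:
    """Stand-in for a search-engine or database query."""
    corpus = {
        "magna carta": ["Magna Carta was sealed by King John in 1215."],
        "printing press": ["Gutenberg introduced movable-type printing in Europe around 1450."],
    }
    return [fact
            for key, facts in corpus.items()
            if key in query.lower()
            for fact in facts]

def answer_with_sources(question: str) -> str:
    """Consult the external resource first, then compose a grounded answer."""
    evidence = lookup_reference(question)
    if not evidence:
        return "No supporting sources found; deferring to human review."
    return "Based on consulted sources: " + " ".join(evidence)

print(answer_with_sources("When was Magna Carta sealed?"))
```

A multi-agent version would follow the same shape, with several such agents dividing research tasks and pooling their retrieved evidence into a shared document.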

Those with an interest in both logic and history might enjoy the book Historians’ Fallacies: Toward a Logic of Historical Thought [1].

In it, David Fischer indicates and illustrates a breadth of kinds of fallacies, including:

- fallacies of inquiry, pertaining to question-framing, factual verification, and factual significance;
- fallacies of explanation, pertaining to generalization, narration, causation, motivation, composition, and false analogy;
- fallacies of argument, pertaining to semantical distortion and substantive distraction.

In the near future, expanding on automated essay scoring, perhaps multi-agent systems’ processes, procedures, and resultant output documents could be automatically evaluated, including with respect to the fallacies indicated by Fischer.

[1] Fischer, David Hackett. Historians’ fallacies: Toward a logic of historical thought. 1970.
[2] Schrag, Zachary. The Princeton guide to historical research. Princeton University Press, 2021.

At the moment, using ChatGPT that way is really problematic. It has huge bias, mainly from how the material has been collected, and the hallucination rate is a big issue.

Well, compared to how history is undergoing massive rewriting in both West and East, hallucinations could be counted as mainstream :smirk:

I would never let pupils under 16 use ChatGPT that way, and even then they would need good media-literacy skills.

To your point about hallucination, there may be ways to use core specified sets of sources (e.g., history books, history textbooks, encyclopedias) to fact-check content generated by AI systems capable of consulting a larger set of sources.

A proof of concept, in these regards, is the Citation Needed (source code) Chrome extension developed by Wikimedia’s Future Audiences team.

These or similar components could be integrated into systems to determine whether newly generated content is supported or corroborated by sources in a core specified set (e.g., Wikipedia, history textbooks). If not, systems could enqueue items for more elaborate algorithms to process or for human reviewers.
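The corroboration-then-enqueue flow could be sketched as follows. This is only a toy illustration under stated assumptions: the trusted corpus, the token-overlap similarity measure, and the 0.6 threshold are all placeholders for whatever retrieval and entailment machinery a real system would use.

```python
# Hypothetical sketch of the corroboration step: each generated claim
# is checked for support in a small trusted corpus via token overlap;
# unsupported claims are enqueued for further review.
# Corpus, measure, and threshold are illustrative assumptions.
from collections import deque

TRUSTED_CORPUS = [
    "The Battle of Hastings took place in 1066.",
    "The French Revolution began in 1789.",
]

def tokens(text: str) -> set[str]:
    """Crude normalization: lowercase words, punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def is_supported(claim: str, threshold: float = 0.6) -> bool:
    """A claim counts as corroborated if enough of its tokens
    appear in at least one trusted passage."""
    claim_tokens = tokens(claim)
    for passage in TRUSTED_CORPUS:
        overlap = len(claim_tokens & tokens(passage)) / max(len(claim_tokens), 1)
        if overlap >= threshold:
            return True
    return False

review_queue: deque[str] = deque()

generated_claims = [
    "The Battle of Hastings took place in 1066.",
    "Napoleon was crowned emperor of Rome in 1066.",
]
for claim in generated_claims:
    if not is_supported(claim):
        # Uncorroborated: hand off to more elaborate algorithms
        # or to human reviewers.
        review_queue.append(claim)

print(list(review_queue))  # only the unsupported claim remains queued
```

In practice one would swap the overlap heuristic for retrieval over the core source set plus a natural-language-inference step, but the control flow, corroborate or enqueue, stays the same.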