Artificial Intelligence and History Education
Automated Historical Research
Today, research is underway into aiding and automating scientific and scholarly research processes. Zhang, Pearson, and Wang (2024) discuss automated scientific research in the form of literature reviews. Kang and Xiong (2024) have developed a benchmark for measuring artificial-intelligence systems’ capabilities with respect to conducting academic surveys.
In the not-too-distant future, artificial-intelligence systems will be capable of performing some historical research tasks. This kind of research is expounded upon by Schrag (2021) and has its own particular caveats and fallacies, including those listed by Fischer (1970).
Question-answering
Historical research should begin with questions. These include factual “who”, “what”, “where”, and “when” questions as well as interpretive “why”, “how”, and “with-what-consequences” questions. Historians tend to explore factual questions while addressing overarching interpretive questions.
Historical research questions should be carefully framed and, in these regards, Fischer (1970) enumerates the following pertinent fallacies: the Baconian fallacy; many questions; false dichotomous questions; metaphysical questions; fictional questions; semantical questions; declarative questions; counterquestions; tautological questions; contradictory questions; and “potentially verifiable” questions.
In addition to these fallacies, one should also be wary of: deceptively simple questions; impossible-to-answer questions; opinion questions; ethical questions; anachronistic questions; and non-historical questions.
When faced with problematic questions, history-education question-answering systems could decline to conduct automated historical research and answer them directly, instead providing relevant search-engine results.
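As an illustrative sketch of this triage step, a system might flag questions matching problematic patterns and route them to search results rather than automated research. The categories and keyword heuristics below are hypothetical placeholders, not drawn from any deployed system:

```python
# Illustrative triage: route problematic historical questions toward
# search-engine results instead of automated research. The categories
# and marker phrases are hypothetical stand-ins for a real classifier.

PROBLEMATIC_MARKERS = {
    "opinion": ("do you think", "in your opinion"),
    "counterfactual": ("what if", "would have happened"),
    "false_dichotomy": ("either", "or was it"),
}

def triage(question: str) -> str:
    """Return 'answer' for questions the system may research directly,
    or 'search_results' for questions flagged as problematic."""
    q = question.lower()
    for markers in PROBLEMATIC_MARKERS.values():
        if any(marker in q for marker in markers):
            return "search_results"
    return "answer"
```

In practice, a classifier trained on Fischer’s (1970) question fallacies would replace the keyword lists; the point here is only the routing decision.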
Fact-checking
Technologies exist today for both manually and automatically fact-checking content using core sets of sources (e.g., history books, history textbooks, or encyclopedias), including when the content references a wider set of sources.
One example of such a technology is Citation Needed, a Web-browser extension developed by the Wikimedia Foundation’s Future Audiences team. It allows end-users to fact-check selections of content using Wikipedia articles as a core set of sources.
When content, or important assertions and claims therein, cannot be automatically corroborated by a core set of sources, systems could enqueue that content for more elaborate algorithms to process or for human personnel to review.
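A minimal sketch of such an escalation pipeline follows. The corroboration lookup is a stub standing in for a retrieval step over a core set of sources, and the corpus, claim strings, and queue are hypothetical:

```python
from collections import deque

# Hypothetical core corpus: maps claims to corroborating core sources.
CORE_SOURCES = {
    "The Bastille was stormed in 1789.": ["Encyclopedia entry: French Revolution"],
}

# Claims awaiting more elaborate algorithms or human review.
review_queue: deque = deque()

def corroborate(claim: str) -> list:
    """Stub retrieval step: look a claim up against the core set of sources."""
    return CORE_SOURCES.get(claim, [])

def fact_check(claim: str) -> str:
    """Mark a claim corroborated, or enqueue it for further processing."""
    if corroborate(claim):
        return "corroborated"
    review_queue.append(claim)
    return "enqueued"
```

A production system would replace the dictionary lookup with semantic retrieval over the core sources; the escalation logic is what this sketch illustrates.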
Multi-agent Systems
Multi-agent systems could contribute to and perform the same group processes through which encyclopedic articles are co-created (Kopf, 2022). Agents could co-write and revise answers to historical questions, historical essays, and long-form historical documents.
Interestingly, man-machine interactions, such as debate and consensus-building, could result in automatic modifications or revisions to systems’ output documents.
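One such group process can be sketched as a revision round in which a proposing agent offers an edit and reviewing agents vote on it, with a simple majority rule standing in for consensus-building. The agent behaviors and function names below are hypothetical:

```python
from typing import Callable, List

# An agent is modeled as a function over text: a proposer maps the current
# draft to a revised draft; a reviewer maps a proposal to "approve"/"reject".
Agent = Callable[[str], str]

def revision_round(draft: str, proposer: Agent, reviewers: List[Agent]) -> str:
    """Apply the proposer's revision only if a majority of reviewers endorse it."""
    proposal = proposer(draft)
    votes = sum(1 for review in reviewers if review(proposal) == "approve")
    return proposal if votes > len(reviewers) / 2 else draft
```

Richer protocols (structured debate, weighted consensus in the manner of Lehrer and Wagner, 2012) would elaborate the voting step, but the loop of propose, review, and conditionally revise is the core pattern.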
Narratives are critical to communicating historical knowledge (Munslow, 2018). Artificial-intelligence systems will, increasingly, be able to aid and automate the co-creation of research-based, multimodal historical stories and works of historical fiction.
Kindenberg (2024) recently compared artificial-intelligence-generated and student-written historical narratives and found that the artificial-intelligence-generated stories tended to convey less emotion.
Past and present approaches to automatic story generation were surveyed by Alhussain and Azmi (2021). Research is unfolding with respect to multi-agent multimodal story generation (Arif, Arif, Khan, Haroon, Raza, & Athar, 2024; Huot, Amplayo, Palomaki, Jakobovits, Clark, & Lapata, 2024).
Structured Forums
Recently, Laney and Dewan (2024) explored instructor-mediated man-machine interactions in educational structured forums, specifically class discussion boards. Artificial-intelligence agents supported teaching assistants in answering students’ questions.
In the near future, artificial-intelligence agents participating in structured forums will be able to follow end-users’ instructions to perform tasks and subtasks including answering historical questions and conducting historical research and writing.
Beyond idly awaiting questions and instructions from end-users, artificial-intelligence agents could proactively examine unfolding discussions to produce and provide suggestions with respect to how they might be of assistance.
Assessment and Evaluation
Processes mediated and explicated by structured forums can be assessed and evaluated, including historical thinking and reasoning (Van Drie & Van Boxtel, 2008; Bertram, Weiss, Zachrich, & Ziai, 2021), the co-creation of documents (Kopf, 2022), debate (Ulrich, 1986), and consensus-building (Lehrer & Wagner, 2012).
In the not-too-distant future, artificial-intelligence technologies will be able to aid and to automate the assessment and evaluation of historical research, reasoning, discussion, and writing processes mediated and explicated by structured forums.
Bibliography
Alhussain, Arwa I., and Aqil M. Azmi. "Automatic story generation: A survey of approaches." ACM Computing Surveys (CSUR) 54, no. 5 (2021): 1-38.
Arif, Samee, Taimoor Arif, Aamina Jamal Khan, Muhammad Saad Haroon, Agha Ali Raza, and Awais Athar. "The art of storytelling: Multi-agent generative AI for dynamic multimodal narratives." arXiv preprint arXiv:2409.11261 (2024).
Bertram, Christiane, Zarah Weiss, Lisa Zachrich, and Ramon Ziai. "Artificial intelligence in history education. Linguistic content and complexity analyses of student writings in the CAHisT project (Computational assessment of historical thinking)." Computers and Education: Artificial Intelligence (2021): 100038.
Fischer, David Hackett. Historians' fallacies: Toward a logic of historical thought. Harper & Row, 1970.
Huot, Fantine, Reinald Kim Amplayo, Jennimaria Palomaki, Alice Shoshana Jakobovits, Elizabeth Clark, and Mirella Lapata. "Agents' room: Narrative generation through multi-step collaboration." arXiv preprint arXiv:2410.02603 (2024).
Kang, Hao, and Chenyan Xiong. "ResearchArena: Benchmarking LLMs' ability to collect and organize Information as research agents." arXiv preprint arXiv:2406.10291 (2024).
Kindenberg, Björn. "ChatGPT-generated and student-written historical narratives: A comparative analysis." Education Sciences 14, no. 5 (2024): 530.
Kopf, Susanne. A discursive perspective on Wikipedia: More than an encyclopaedia?. Springer Nature, 2022.
Laney, Mason, and Prasun Dewan. "Human-AI collaboration in a student discussion forum." In Companion Proceedings of the 29th International Conference on Intelligent User Interfaces, pp. 74-77. 2024.
Lehrer, Keith, and Carl Wagner. Rational consensus in science and society: A philosophical and mathematical study. Vol. 24. Springer Science & Business Media, 2012.
Munslow, Alun. Narrative and history. Bloomsbury Publishing, 2018.
Schrag, Zachary. The Princeton guide to historical research. Princeton University Press, 2021.
Ulrich, Walter. Judging academic debate. National Textbook Company, 1986.
Van Drie, Jannet, and Carla Van Boxtel. "Historical reasoning: Towards a framework for analyzing students’ reasoning about the past." Educational Psychology Review 20 (2008): 87-110.
Zhang, Starkson, Alfredo Pearson, and Zhenting Wang. "Autonomous generalist scientist: Towards and beyond human-level automatic research using foundation model-based AI agents and robots (a position)." (2024).