Interactive stories can be utilized as classroom and homework exercises for character education courses. Adaptive instructional systems can select and sequence these interactive stories to provide individualized assessment and instruction for learners. While presenting learners with interactive stories, at scale, adaptive instructional systems can simultaneously measure and evaluate the interactive stories. These evaluations and related data about learners’ bulk interactions can be of use for continually producing new and better interactive stories.
In (Arthur, Kristjánsson, Harrison, Sanderse, & Wright, 2016), the authors state that “rather than remaining satisfied with eliciting self-evaluations of virtue, an Aristotelian approach would ideally explore how people do in fact react – attitudinally, emotionally, behaviorally – to morally-charged situations. Could this perhaps be done by exposing them to scenarios involving moral dilemmas and recording their responses?” The authors continue, noting that it seems plausible “to conceive of dilemma tests that would attempt to home in on the virtues.”
Types of interactive stories include: guided play, role-playing games, the case method, decision games, simulations, literature and literary discussions, story-based assessment items, digital gamebooks, interactive films, and serious games. With the latter four types (story-based assessment items, digital gamebooks, interactive films, and serious games), it is straightforward to record and analyze learners’ responses, and these four types are the focus herein.
In addition to a priori choices such as “what should character X do next?”, learners could be presented with after-the-fact questions resembling “did character X do the right thing?” and, potentially, with follow-up questions resembling “why or why not?”. Interactive stories could unfold as a result of all of these varieties of choices and questions.
Not every choice or question presented to learners need have a single, clearly correct answer. Some choices or questions may have more than one defensible answer and others may have none.
Horace Mann, a pioneer of public schooling and modern education, felt that “one of the most important concepts for teachers to understand and implement pertaining to character education is the correct use of instructional timing, as well as the proper implementation strategy, when considering moral development in students” (Watz, 2011).
Interactive stories can be of use for both assessment and instruction, and adaptive instructional systems can optimize the instructional timing of each interactive story’s instructional strategies for each individual learner.
Adaptive instructional systems can select interactive stories from arbitrarily large storybanks and sequence these stories for individual learners to play as classroom and homework exercises. Adaptive instructional systems can do so in accordance with individual learners’ pedagogical enrichment goals while remaining mindful of individual learners’ affective states, motivation, engagement, and flow.
Learner modeling is the means by which adaptive instructional systems can best select and sequence interactive stories for individualized learning. As learners make decisions and provide responses to the choices and questions in presented interactive stories, adaptive instructional systems can update their models of learners.
In (Paradeda, Ferreira, Martinho, & Paiva, 2017), the authors describe the use of an interactive storytelling scenario to identify players’ personality traits according to the Myers-Briggs Type Indicator theory. In (de Lima, Feijó, & Furtado, 2018), the authors describe a system which models players’ traits according to the Big Five model. By extracting the decisions made and the responses provided by players in interactive storytelling scenarios, these authors were able to predict players’ personality traits.
Beyond modeling and predicting personality traits, systems can model and predict character traits and virtues – intrapersonal values, interpersonal values, and civic virtues – via the decisions made and the responses provided as learners play interactive stories.
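To make this concrete, below is a minimal sketch, in Python, of how a learner model might accumulate virtue-related evidence from decisions made in interactive stories. The VirtueModel class, the evidence annotations, and the weights are illustrative assumptions rather than details of the cited systems.

```python
from collections import defaultdict

class VirtueModel:
    """Minimal learner model: running estimates of virtue-related tendencies.

    Each decision option in a storybank item is assumed to be annotated with
    evidence weights in [-1, 1] for one or more virtues (e.g. honesty, courage).
    """

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def record_decision(self, evidence):
        """evidence: dict mapping virtue name -> weight for the chosen option."""
        for virtue, weight in evidence.items():
            self.totals[virtue] += weight
            self.counts[virtue] += 1

    def estimate(self, virtue):
        """Mean evidence observed so far; 0.0 when no evidence yet."""
        n = self.counts[virtue]
        return self.totals[virtue] / n if n else 0.0

# Example: a learner chooses to return a lost wallet in one story scene.
model = VirtueModel()
model.record_decision({"honesty": 1.0, "compassion": 0.5})
model.record_decision({"honesty": -0.5})          # a later, less honest choice
print(round(model.estimate("honesty"), 2))        # 0.25
```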
Open learner modeling, presenting learners with systems’ assessments of their performance, often has a positive effect on learners’ progress. Open learner modeling can promote reflection with respect to learners’ knowledge and skills, can encourage self-assessment, can support planning and monitoring, and can allow learners to take greater control and responsibility over their learning.
Psychometric tools for the assessment of moral development, moral judgment, and moral reasoning include: the moral judgment interview (MJI; Colby & Kohlberg, 1987), the sociomoral reflection measure (SRM; Gibbs & Widaman, 1982), the defining issues test (DIT, DIT-2; Rest, 1979), and the intermediate concepts measures (ICM; Bebeau & Thoma, 1999).
Most of these psychometric tools (e.g. MJI, DIT, ICM) utilize forms of interactive stories where complex situations or moral dilemmas are described and questions are subsequently presented about what story characters should do next.
Adaptive psychometric testing techniques could vary the sequencing of items selected from arbitrarily large storybanks based on learners’ decisions and responses. These techniques could also vary the contents of the items themselves, in particular when interactive story items provide instructional narrative material after learners’ decisions and responses.
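As an illustration of such sequencing, the following sketch selects the next story item from a storybank by targeting the virtue with the fewest observations so far, a crude proxy for the most uncertain estimate. The storybank schema and the selection heuristic are assumptions for illustration only.

```python
def select_next_item(storybank, observation_counts, administered):
    """Pick the unadministered story item whose targeted virtue has been
    observed least often so far.

    storybank: list of dicts like {"id": "wallet_dilemma", "virtue": "honesty"}
    observation_counts: dict mapping virtue -> number of observations so far
    administered: set of item ids already presented
    """
    candidates = [item for item in storybank if item["id"] not in administered]
    if not candidates:
        return None
    return min(candidates, key=lambda item: observation_counts.get(item["virtue"], 0))

storybank = [
    {"id": "wallet_dilemma", "virtue": "honesty"},
    {"id": "bully_bystander", "virtue": "courage"},
    {"id": "team_credit", "virtue": "fairness"},
]
print(select_next_item(storybank, {"honesty": 3, "fairness": 1}, {"wallet_dilemma"})["id"])
# -> "bully_bystander" (courage has the fewest observations so far)
```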
Learner models, and their dynamics across character education courses, could be of use to educators for performing assessment and for integrating learners’ performance and progress on exercises and activities into character education course grades.
Other potential components for the grading of character education courses include: the study of the history, theory, and philosophy of character and virtue, classroom discussions and participation, essays, individual and group projects, effort, and overall progress.
With new and emerging tools, technologies, and techniques for the assessment of character and virtue, character education programs can be more precisely evaluated.
While presenting learners with interactive stories, at scale, adaptive instructional systems can simultaneously measure and evaluate the stories. These evaluations and related data about learners’ bulk interactions can be of use for continually producing new and better interactive stories.
In (Stefnisson & Thue, 2018), the authors indicate that manually creating interactive stories is inherently difficult and that there is a need for advanced authoring tools. This inherent difficulty is further pronounced when the matter is not only one of creative writing, but is also one of data-driven and efficacious interactive story design and engineering.
Artificial intelligence tools can be of use for both developing and evaluating interactive stories, screenplays, storyboards, and production schedules.
The first automatic story generation system was Automatic Novel Writer (Klein, Aeschlimann, Balsiger, Converse, Court, Foster, Lao, Oakley, & Smith, 1973), followed by TALE-SPIN (Meehan, 1977), Author (Dehn, 1981), Universe (Lebowitz, 1983), Minstrel (Turner, 1993), Mexica (Pérez y Pérez, 1999), Brutus (Bringsjord & Ferrucci, 1999), and Fabulist (Riedl & Young, 2010).
In (Riedl & Young, 2006), the authors describe techniques for generating, beyond linear stories, branching or interactive stories such as those found in digital gamebooks, interactive films, and some serious games.
Interactive drama systems include: Oz (Bates, 1992), DEFACTO (Sgouros, 1997), the Virtual Theater Project (Hayes-Roth, van Gent, & Huber, 1997), I-Storytelling (Cavazza, Charles, & Mead, 2002), Façade (Mateas & Stern, 2003), IDtension (Szilas, 2003), Mimesis (Young, Riedl, Branly, Jhala, Martin, & Saretto, 2004), NOLIST (Bangsø, Jensen, Jensen, Andersen, & Kocka, 2004), OPIATE (Fairclough, 2004), the Interactive Drama Architecture (IDA; Magerko, 2005), FAtiMA (Aylett, Dias, & Paiva, 2006), IN-TALE (Riedl & Stern, 2006), U-Director (Mott & Lester, 2006), SASCE (Nelson, Roberts, Isbell, & Mateas, 2006), Bards (Pizzi, Charles, Lugrin, & Cavazza, 2007), PaSSAGE (Thue, Bulitko, Spetch, & Wasylishen, 2007), DED (Arinbjarnar & Kudenko, 2008), GADIN (Barber & Kudenko, 2009), and Erasmatron (Crawford, 2012).
Utilizing reader models while generating stories is discussed in (Mawhorter, 2013) where the author states that, while non-interactive story generation systems have explored reader modeling for discourse generation or presentational purposes, several interactive drama systems, such as IDtension and U-Director, have utilized formal models of their users to evaluate narrative possibilities. The author notes that IDtension, in particular, has a formal model which “addresses the users’ perceptions of ethical consistency, motivation, relevance, complexity, progress, and conflict.”
In (Barber & Kudenko, 2007), the authors describe a system which adaptively models users to generate interesting dilemma-based stories, noting that such stories require “fundamentally difficult decisions within the course of the story.” In a 2009 publication, the authors present the GADIN system (Barber & Kudenko, 2009).
Contemporary approaches to generating stories also include neural story generation (Alabdulkarim, Li, & Peng, 2021) as well as hybrid, or neurosymbolic, techniques.
Artificial intelligence systems for ethics education include: PETE (Goldin, Ashley, & Pinkus, 2001), AEINS (Hodhod, Kudenko, & Cairns, 2009), Conundrum (McKenzie & McCalla, 2009), and Umka (Sharipova, 2015).
In summary, interactive stories can be utilized as classroom and homework exercises for character education courses, and adaptive instructional systems can select and sequence these stories to provide individualized assessment and instruction for learners. While presenting learners with interactive stories, at scale, these systems can simultaneously measure and evaluate the stories, and these evaluations and related data about learners’ bulk interactions can be of use for continually producing new and better interactive stories.
Teaching Character and Virtue in Schools by James Arthur, Kristján Kristjánsson, Tom Harrison, Wouter Sanderse and Daniel Wright
An Historical Analysis of Character Education by Michael Watz
The Measurement of Moral Judgment: Theoretical Foundations and Research Validation by Anne Colby and Lawrence Kohlberg
Social Intelligence: Measuring the Development of Sociomoral Reflection by John C. Gibbs and Keith F. Widaman
Development in Judging Moral Issues by James R. Rest
“Intermediate” Concepts and the Connection to Moral Education by Muriel J. Bebeau and Stephen J. Thoma
Introducing PETE: Computer Support for Teaching Ethics by Ilya M. Goldin, Kevin D. Ashley and Rosa L. Pinkus
AEINS: Adaptive Educational Interactive Narrative System to Teach Ethics by Rania Hodhod, Daniel Kudenko and Paul Cairns
Serious Games for Professional Ethics: An Architecture to Support Personalization by Adam McKenzie and Gord McCalla
Supporting Students in the Analysis of Case Studies for Professional Ethics Education by Mayya Sharipova
Automatic Novel Writing: A Status Report by Sheldon Klein, John F. Aeschlimann, David F. Balsiger, Steven L. Converse, Claudine Court, Mark Foster, Robin Lao, John D. Oakley and Joel Smith
Using Planning Structures to Generate Stories by James R. Meehan
The Metanovel: Writing Stories by Computer by James R. Meehan
TALE-SPIN: An Interactive Program That Writes Stories by James R. Meehan
Story Generation after TALE-SPIN by Natalie Dehn
Creating a Story-telling Universe by Michael Lebowitz
Minstrel: A Computer Model of Creativity and Storytelling by Scott R. Turner
MEXICA: A Computer Model of Creativity in Writing by Rafael Pérez y Pérez
Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, a Storytelling Machine by Selmer Bringsjord and David Ferrucci
Narrative Generation: Balancing Plot and Character by Mark O. Riedl and R. Michael Young
From Linear Story Generation to Branching Story Graphs by Mark O. Riedl and R. Michael Young
Virtual Reality, Art, and Entertainment by Joseph Bates
Computers as Theatre by Brenda Laurel
Hamlet on the Holodeck by Janet H. Murray
Dynamic, User-centered Resolution in Interactive Stories by Nikitas M. Sgouros
Acting in Character by Barbara Hayes-Roth, Robert van Gent and Daniel Huber
Character-based Interactive Storytelling by Marc Cavazza, Fred Charles and Steven J. Mead
Façade: An Experiment in Building a Fully-realized Interactive Drama by Michael Mateas and Andrew Stern
IDtension: A Narrative Engine for Interactive Drama by Nicolas Szilas
A Computational Model of an Intelligent Narrator for Interactive Narratives by Nicolas Szilas
An Architecture for Integrating Plan-based Behavior Generation with Interactive Game Environments by R. Michael Young, Mark O. Riedl, Mark Branly, Arnav Jhala, R. J. Martin and C. J. Saretto
Non-linear Interactive Storytelling Using Object-oriented Bayesian Networks by Olav Bangsø, Ole G. Jensen, Finn V. Jensen, Peter B. Andersen and Tomas Kocka
Story Games and the OPIATE System by Chris Fairclough
Story Representation and Interactive Drama by Brian Magerko
Narrative and the Split Condition of Digital Textuality by Marie-Laure Ryan
An Affectively Driven Planner for Synthetic Characters by Ruth Aylett, Joao Dias and Ana Paiva
Believable Agents and Intelligent Story Adaptation for Interactive Storytelling by Mark O. Riedl and Andrew Stern
U-Director: A Decision-theoretic Narrative Planning Architecture for Storytelling Environments by Bradford W. Mott and James C. Lester
Reinforcement Learning for Declarative Optimization-based Drama Management by Mark J. Nelson, David L. Roberts, Charles L. Isbell and Michael Mateas
Interactive Storytelling with Literary Feelings by David Pizzi, Fred Charles, Jean-Luc Lugrin and Marc Cavazza
Interactive Storytelling: A Player Modelling Approach by David Thue, Vadim Bulitko, Marcia Spetch and Eric Wasylishen
Schemas in Directed Emergent Drama by Maria Arinbjarnar and Daniel Kudenko
Generation of Adaptive Dilemma-based Interactive Narratives by Heather Barber and Daniel Kudenko
Chris Crawford on Interactive Storytelling by Chris Crawford
Reader-model-based Story Generation by Peter Mawhorter
A User Model for the Generation of Dilemma-based Interactive Narratives by Heather Barber and Daniel Kudenko
Mimisbrunnur: AI-assisted Authoring for Interactive Storytelling by Ingibergur Stefnisson and David Thue
Using Interactive Storytelling to Identify Personality Traits by Raul Paradeda, Maria J. Ferreira, Carlos Martinho and Ana Paiva
Player Behavior and Personality Modeling for Interactive Storytelling in Games by Edirlei S. de Lima, Bruno Feijó and Antonio L. Furtado
Automatic Story Generation: Challenges and Attempts by Amal Alabdulkarim, Siyan Li and Xiangyu Peng
A breadth of topics is indicated and discussed with respect to the training and testing of artificial intelligence systems in simulations and game environments.
The study of animal behavior is multidisciplinary and includes the scientific fields of: ethology, behavioral ecology, evolutionary psychology, comparative psychology and, more recently, comparative cognition. Comparative cognition is the study of cognitive processes across all species of animals, including humans.
Psychometrics is a scientific field of study concerned with the objective measurement of skills, knowledge, abilities, mental capacities, mental processes, attitudes, personality traits and educational achievement. Psychometrics topics include the assessment of cognitive development and intelligence. Psychometric measurements can be made of non-human animals, humans and AI systems.
Collective intelligence is a shared or group intelligence that emerges from collaboration, collective efforts and competition. Collective intelligence can emerge from the interactions of multiple AI systems in simulations and game environments.
The field of artificial intelligence makes use of simulations and game environments to train and to test AI systems. Examples of such simulations and game environments include: the Arcade Learning Environment, the OpenAI Gym, the Behavior Suite for Reinforcement Learning, the Obstacle Tower, and the Animal-AI Environment.
Machine teaching is the control of machine learning. Machine learning algorithms define dynamical systems where states, or models, are driven by training data. Machine teaching designs optimal training data with which to drive learning algorithms to target models.
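A minimal sketch of the machine teaching idea follows, assuming the learner is ordinary least squares regression: the teacher constructs a small, noiseless training set whose labels are generated by the target model, so that fitting the data drives the learner to that target. The function names are illustrative.

```python
import numpy as np

# Machine teaching sketch (illustrative, not from any cited system):
# the "learner" is ordinary least squares; the "teacher" designs a small,
# noiseless training set whose labels are generated by the target model.

target_w = np.array([2.0, -3.0, 0.5])            # target model the teacher wants taught

def teacher_design_training_set(target_w):
    X = np.eye(len(target_w))                    # minimal spanning set of inputs
    y = X @ target_w                             # labels consistent with the target
    return X, y

def least_squares_learner(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

X, y = teacher_design_training_set(target_w)
learned_w = least_squares_learner(X, y)
print(np.allclose(learned_w, target_w))          # True: the learner reaches the target
```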
AI systems can be envisioned which accelerate and optimize the training and testing of other AI systems.
Intelligent tutoring systems are AI systems which provide personalized instruction to learners. Traditionally, the learners are human students. The techniques of intelligent tutoring systems, however, generalize to the training and testing of AI systems.
Computerized adaptive testing is a form of examination that adapts to the exhibited capabilities of examinees. Items to be administered to examinees depend upon the nature of or the correctness of examinees’ previous responses.
Interactive storytelling is a form of digital entertainment in which storylines are not predetermined. While authors may create any settings, characters and situations which a narrative must address, readers or players experience unique stories based upon their interactions with storyworlds.
Intelligent narrative technologies can be envisioned which generate dynamic narratives in simulations and game environments for the training and testing of AI systems. Such narratives would unfold based upon the behaviors exhibited by and the decisions made by AI systems.
Automatic item generation uses computer algorithms to produce items, the basic building blocks of exams, tests, questionnaires and other instruments of psychometric measurement.
Procedural content generation uses computer algorithms to produce elements of simulations and game environments. Procedurally-generated content could be puzzles, tasks, tests or other varied content useful for the training or testing of AI systems.
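In the spirit of automatic item generation and procedural content generation, the following toy sketch produces a parameterized arithmetic item from a template and a random seed. The template, fields, and distractor scheme are illustrative assumptions.

```python
import random

def generate_addition_item(seed, low=10, high=99):
    """Toy template-based item generator: a two-addend addition problem.

    The seed makes generation reproducible, which is useful when the same
    procedurally-generated item must be re-rendered or audited later.
    """
    rng = random.Random(seed)
    a, b = rng.randint(low, high), rng.randint(low, high)
    stem = f"What is {a} + {b}?"
    key = a + b
    distractors = sorted({key + d for d in (-10, -1, 1, 10)})
    return {"stem": stem, "key": key, "distractors": distractors}

print(generate_addition_item(seed=42))
```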
Item response theory is a paradigm for the design, scoring, analysis and evaluation of items, exams, tests, questionnaires and other instruments of psychometric measurement.
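One common way that item response theory and computerized adaptive testing fit together is maximum-information item selection under the two-parameter logistic (2PL) model, sketched below with illustrative item parameters.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_next_item(theta, items, administered):
    """Maximum-information item selection, a common CAT heuristic."""
    remaining = [i for i in items if i["id"] not in administered]
    return max(remaining, key=lambda i: item_information(theta, i["a"], i["b"]))

items = [
    {"id": "easy", "a": 1.2, "b": -1.0},
    {"id": "medium", "a": 1.0, "b": 0.0},
    {"id": "hard", "a": 1.5, "b": 2.0},
]
print(pick_next_item(theta=0.2, items=items, administered={"easy"})["id"])  # "medium"
```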
Content evaluation is the analysis and evaluation of procedurally-generated content and narrative elements, for instance in terms of their utility with respect to the training and testing of AI systems.
Can the internals of AI systems be inspected and monitored during training and testing or are such systems effectively black boxes?
If one can inspect and monitor the internals of AI systems, then metrics based upon their algorithms, e.g. deep reinforcement learning, could be obtained.
If, instead, AI systems are effectively black boxes, then such systems might be modeled as one models players or students. In this regard, models of AI systems undergoing training or testing would update based upon observations of their behaviors or decisions in simulations or game environments.
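A minimal sketch of such black-box modeling follows: a Beta-Bernoulli posterior over a system's probability of succeeding at a family of tasks, updated only from observed outcomes. The class and its prior are illustrative assumptions.

```python
class BetaSkillModel:
    """Model a black-box system's probability of succeeding at a task family,
    updated only from observed outcomes (a Beta-Bernoulli posterior).
    """

    def __init__(self, alpha=1.0, beta=1.0):   # uniform prior
        self.alpha, self.beta = alpha, beta

    def observe(self, success):
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

model = BetaSkillModel()
for outcome in [True, True, False, True]:
    model.observe(outcome)
print(round(model.mean(), 2))   # 0.67: posterior mean after 3 successes, 1 failure
```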
Event processing is the analysis of streams of events and the deriving of conclusions from them. This includes the processing of events which occur in simulations and game environments.
Psychometric measurements and other important metrics can be obtained from processing those event streams which originate in simulations and game environments during the training and testing of AI systems.
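A minimal sketch of deriving metrics from such an event stream follows; the event types and fields shown are illustrative assumptions rather than a standard schema.

```python
def summarize_episodes(events):
    """Derive simple metrics from a stream of simulation events.

    events: iterable of dicts such as {"type": "episode_end", "success": True, "steps": 132}
    """
    episodes = successes = total_steps = 0
    for event in events:
        if event.get("type") == "episode_end":
            episodes += 1
            successes += 1 if event.get("success") else 0
            total_steps += event.get("steps", 0)
    return {
        "episodes": episodes,
        "success_rate": successes / episodes if episodes else 0.0,
        "mean_steps": total_steps / episodes if episodes else 0.0,
    }

stream = [
    {"type": "episode_end", "success": True, "steps": 120},
    {"type": "collision"},
    {"type": "episode_end", "success": False, "steps": 300},
]
print(summarize_episodes(stream))   # {'episodes': 2, 'success_rate': 0.5, 'mean_steps': 210.0}
```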
A breadth of topics was indicated and discussed with respect to the training and testing of artificial intelligence systems in simulations and game environments.
Some technical topics are broached toward the crowdsourcing of dialogue system content and behavior.
The content and behavior of a dialogue system can be represented in a number of ways.
Firstly, the content and behavior of a dialogue system can be represented in programming language source code files. Collaborative authoring, in this case, is a matter of integrated development environments, source code repositories, and version control systems.
Secondly, the content and behavior of a dialogue system can be separated from source code files as data stored in some data format or in a database. Collaborative authoring, in this case, could require custom software tools.
Thirdly, a number of services, cognitive services, can encapsulate the content and behavior of a dialogue system. Collaborative authoring, in this case, could require utilization of such services or related user interfaces.
Fourthly, the content and behavior of a dialogue system can be represented as a set of interrelated, URL-addressable, editable pages. Servers can provide content for a number of different content types, for example hypertext for content authors and other formats for dialogue system user agents. Server-side scripting can be utilized to generate pages and generated pages can contain client-side scripting.
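A minimal sketch of this fourth representation follows: the same URL-addressable page served as hypertext for content authors and as JSON for dialogue system user agents, keyed on the Accept header. The page structure and URL scheme are illustrative assumptions.

```python
import json

PAGE = {
    "id": "greeting",
    "utterance": "Hello! How can I help you today?",
    "links": ["smalltalk", "faq"],
}

def render_page(page, accept_header):
    """Serve the same dialogue page in different representations:
    hypertext for human content authors, JSON for dialogue system user agents.
    """
    if "application/json" in accept_header:
        return "application/json", json.dumps(page)
    links = " ".join(f'<a href="/pages/{p}">{p}</a>' for p in page["links"])
    html = f"<h1>{page['id']}</h1><p>{page['utterance']}</p><nav>{links}</nav>"
    return "text/html", html

print(render_page(PAGE, "application/json")[0])   # application/json
print(render_page(PAGE, "text/html")[1])          # editable hypertext view
```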
Fifthly, the content and behavior of a dialogue system can be represented as a set of interrelated, URL-addressable, editable diagrams.
Sixthly, the content and behavior of a dialogue system can be represented in transcript form. Transcript-based user interfaces may resemble instant messaging applications, scrollable sequences of speech bubbles, with speech bubbles coming from the left and right sides, such that users can edit the content in dialogue systems’ speech bubbles. Users could opt to view more than plain text in speech bubbles. There could also be vertical, colored bands in one or both margins, visually indicating discourse behaviors, moves, objectives or plans which span one or multiple utterances.
Debugging dialogue systems is an important topic. Debugging scenarios include switching from interactions with dialogue systems to authoring processes such that dialogue context data is preserved.
Natural language generation can produce editable structured documents from the data stored in databases and knowledgebases. Generated content can contain, beyond natural language, data and program logic to facilitate the processing of constrained or unconstrained edits. Edits to generated content can result in changes to stored data.
Computer-aided writing can convenience content authors and assure quality. Software can, generally speaking, provide users with information, warnings and errors with regard to tentative edits. Software can support users including with regard to their spelling, grammar, word selection, readability, text coherence and cohesion. Software can measure the neutral point of view of natural language. Software can also process tentative edits with regard to their logical consistency with respect to data stored in databases and knowledgebases.
Exploration into the collaborative authoring and debugging of dialogue systems could result in new wiki technologies. Wiki dialogue systems could resemble spoken language dialogue systems with transcript-based user interfaces, users able to easily switch between dialogue-based interactions and the editing of dialogue system content and behavior.
Instructional design is the theory and practice of designing instructional experiences and encompasses the design of educational resources such as hypertext, images, animations, charts, graphs, infographics, audio, video, 3D graphics, computer simulations, games, learning objects, courses, textbooks and the contents of intelligent tutoring systems.
Diagrams are symbolic representations of information according to visualization techniques. Diagrams, conveying information visually, are a powerful and expressive medium. Diagram authoring software applications allow individuals without computer programming expertise to participate in the design of resources, the entry of data and the development of software.
Educational resources such as learning objects, courses, textbooks and the contents of intelligent tutoring systems can be represented as diagrams.
Crowdsourcing leverages the intelligence and wisdom of crowds toward the collaborative design of resources and the solving of problems.
The crowdsourcing of instructional design is facilitated by software applications for the collaborative authoring of extensible diagrams.
Relevant software systems include wiki software, collaborative document authoring software, integrated development environments, extensible workflow and diagram authoring software and version control systems.
Quality control is essential to crowdsourced endeavors. Learning management systems may interoperate with crowdsourced diagrams. Intelligent tutoring systems may draw dialogue, behaviors and other contents from crowdsourced diagrams.
With a model of quality – a definition of quality – quality control can be described as matters of quality assessment and quality assurance.
The quality of crowdsourced instructional design relates to the quality of other crowdsourced endeavors, such as encyclopedias, insofar as the veracity and factual accuracy of the resultant information are paramount. Also important to the quality of instructional design is its pedagogical efficacy. As education science is considered to be the study of improving the teaching process, quality with respect to crowdsourced instructional design can be phrased as a moving target.
The assessment of quality can utilize educational data mining, learning analytics, learner feedback and other techniques for obtaining educational metrics.
The assurance of quality is approached as a matter of computer-aided design. Software can apprise users of pertinent design constraints, educational standards and recommendations. Software can provide users with insights into what is working and why with respect to design elements, structures and patterns in the contexts of learning objectives, plans and strategies. Software can provide users with information, warnings and errors with respect to courses being designed and with respect to existing courses. Software bots can process collections of educational resources, for example performing verification and validation services.
We need to standardize models for the diagrams of educational resources such as learning objects, courses, textbooks and tutorial contents. Diagrams of such educational resources should interoperate with both learning management systems and intelligent tutoring systems.
We can envision educators and instructional designers utilizing diagram authoring software. Both real-time collaboration and larger-scale crowdsourcing scenarios should be considered with respect to such software.
One promising area of research and development is computer-aided crowdsourced instructional design which leverages the intelligence and wisdom of crowds toward the design of diagrams of educational resources while the recommendations of software tools and the insights of education science are brought to bear for quality assurance.
Another promising area of research and development is computer-automated instructional design which encompasses the synthesis, selection and sequencing of multimedia educational resources per learning objectives, plans and strategies. Approaches may utilize machine learning and discovery upon collections of educational resources, such algorithms informed by educational metrics.
Improving the modeling of exercises and activities is an important task, one which advances the theory and practice of intelligent tutoring systems. The variety of modeling discussed herein is domain-independent, spanning multiple courses and the development of cognition. The proper, rigorous modeling of exercises and activities is non-trivial and inherently complex; it involves modeling cognition, that is, modeling cognitive processes for the numerous relevant paths of progression through the exercises or activities.
With properly modeled exercises and activities, possibly a standardized such modeling, intelligent tutoring systems could be described as querying repositories of modeled exercises and activities, modifying existing exercises and activities or generating new ones as needed, and loading and interactively presenting sequences of such exercises and activities to students. This is, essentially, case-based reasoning.
Repositories of exercises and activities require a querying language, possibly a standardized such language. Such querying languages depend strongly upon the modeling of exercises and activities.
Repositories of exercises and activities are more than case-bases, in a number of ways, and require an update language with expressiveness for educational data mining, for providing educational-scientific measurements pertaining to the utility and quality of exercises and activities.
Repositories of exercises and activities are envisioned as interoperable with numerous intelligent tutoring systems and future versions thereof. The modeling of exercises and activities, then, must encompass the specific features and approaches of each contemporary intelligent tutoring system.
A number of approaches to the automatic generation of exercises and activities can be interoperable with repository architectures. While repositories may interface as collections of existing exercises and activities, some may, in response to queries, generate new exercises and activities on the fly.
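A minimal sketch of such a repository interface follows: queries are answered from a stored collection when possible and otherwise by generating a new exercise on the fly. The query parameters, the exercise schema, and the generator are illustrative assumptions.

```python
class ExerciseRepository:
    """Repository that answers queries from a stored collection and, when no
    stored exercise matches, generates a new one on the fly.
    """

    def __init__(self, stored, generator):
        self.stored = stored          # list of exercise dicts
        self.generator = generator    # callable(skill, difficulty) -> exercise dict

    def query(self, skill, difficulty):
        matches = [e for e in self.stored
                   if e["skill"] == skill and e["difficulty"] == difficulty]
        if matches:
            return matches[0]
        return self.generator(skill, difficulty)

def generate_exercise(skill, difficulty):
    return {"skill": skill, "difficulty": difficulty,
            "prompt": f"(generated) Practice {skill} at level {difficulty}."}

repo = ExerciseRepository(
    stored=[{"skill": "fractions", "difficulty": 2, "prompt": "Simplify 6/8."}],
    generator=generate_exercise,
)
print(repo.query("fractions", 2)["prompt"])   # stored exercise
print(repo.query("fractions", 5)["prompt"])   # generated on the fly
```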
Learning objects and digital textbooks also contain exercises and activities. Software components can encapsulate learning objects and digital textbooks, or collections thereof, implementing the interfaces of repositories. That is, intelligent tutoring systems can utilize specific exercises and activities, e.g. from syllabi or course materials, and intelligent tutoring systems can query collections of learning objects and digital textbooks for exercises and activities.
In the upcoming years, we can envision artificial intelligence systems utilizing educational exercises and activities as training data.
Developmental cognitive neuroscience, educational neuroscience and cognitive modeling will continue to advance, and curricula should expand to include computer-mediated exercises and activities designed to activate, strengthen, coordinate and integrate specific regions and processes of the developing human brain.
Exercises and activities for cognitive development and enrichment can be generated by computer technology. Both human- and machine-generated exercises can be sequenced and scheduled by computer technology.
A focus on cognitive development results in a variety of exercises and activities. Variety also arises from mixing exercises and activities from multiple courses during coursework. Variety and the strategic sequencing and scheduling of exercises and activities promote student engagement.
Student affect should be monitored during the performance of exercises and activities. So doing facilitates the strategic sequencing and scheduling of exercises and activities. Sequencing and scheduling can be responsive to student affect; for example, scheduling categories of exercises known to be enjoyable to specific students to elevate mood.
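A minimal sketch of affect-responsive sequencing follows, assuming an affect-detection component that yields a score in [-1, 1]; the threshold, categories, and function names are illustrative.

```python
def next_exercise_category(planned_category, affect, enjoyable_categories):
    """Affect-responsive sequencing sketch: when negative affect is detected,
    interleave a category the student is known to enjoy; otherwise keep the plan.

    affect: a score in [-1, 1] from an affect-detection component (assumed).
    """
    NEGATIVE_AFFECT_THRESHOLD = -0.3
    if affect < NEGATIVE_AFFECT_THRESHOLD and enjoyable_categories:
        return enjoyable_categories[0]
    return planned_category

print(next_exercise_category("long_division", affect=-0.6,
                             enjoyable_categories=["visual_puzzles"]))  # visual_puzzles
print(next_exercise_category("long_division", affect=0.4,
                             enjoyable_categories=["visual_puzzles"]))  # long_division
```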
Cognitive load refers to the total amount of effort being used in working memory. Advancements in student modeling and cognitive modeling will facilitate better estimation of cognitive load during the performance of exercises and activities.
Discussion of variety in the sequencing and scheduling of exercises and activities entails a consideration of multitasking and task switching during problem solving. Modeling multitasking and task switching facilitates smoother experiences during, and transitions between, groups of exercises and activities.
As developmental cognitive neuroscience, educational neuroscience and cognitive modeling continue to advance, so too will our knowledge of transfer of learning and the cognitive neuroscience thereof. Transfer of learning pertains to how learning resulting from one category of exercise or activity affects performance in another.
The objectives of longer-scale exercise and activity sequencing and scheduling include preparing students for coursework days, weeks, months or years in advance. Students can, for example, enjoy: exercises and activities with boxes for arithmetical values, preparing them for eventual curricular topics from algebra; exercises and activities involving 3D visuospatial reasoning, preparing them for eventual curricular topics from trigonometry and geometry; and various exercises and activities designed to introduce concepts from calculus.
Intelligent tutoring systems facilitate computer-aided coursework and computer-mediated exercises and activities, providing mixed-initiative tutorial dialogue, explanation, hints and encouragement.
Intelligent tutoring systems can be advanced in a number of ways.
Intelligent tutoring at scale involves the tutoring of large populations of students. Interesting areas of research and development include: learning from interactions with students to improve the quality of tutoring, educational experimentation, A/B testing and multivariate testing.
Intelligent tutoring systems can be more interoperable with learning objects, digital textbooks and courseware. Intelligent tutoring systems could, for example, utilize exercises and activities from learning objects, digital textbooks and courseware.
The integration of multiple domains facilitates the mixing and scheduling of exercises and activities from multiple courses.
Tutoring across schoolyears involves longer-term student modeling and longer-term planning and scheduling of exercises and activities.
Intelligent tutoring systems can generate and sequence exercises and activities for purposes of activating, strengthening, coordinating and integrating specific regions and processes of the developing human brain.
Intelligent tutoring systems can tutor groups of students simultaneously.