Agents with Internal States
Introduction
Rule-based approaches to creating believable artificial-intelligence agents have been explored for four decades; modern approaches include the use of large language models (Park, O’Brien, Cai, Morris, Liang, & Bernstein, 2023).
Generative artificial-intelligence models hold the potential to advance believable agent architectures through the integration of emotional models (Hutson & Ratican, 2023). Emotional models can be utilized in the training, fine-tuning, prompting, and evaluation of generative models.
Considered here are hybrid artificial-intelligence architectures that leverage both large language models and rule-based systems or ancillary models to manage agents’ internal states.
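Such a hybrid architecture can be sketched briefly. In the following illustrative example, a rule-based component tracks the agent’s internal state, and that state is composed into the prompt supplied to a large language model; all class names, fields, and the persona text are assumptions for illustration, not an implementation from the cited works.

```python
# Minimal sketch of a hybrid agent: rule-based internal state plus an
# LLM prompt composed from that state. All names are illustrative.

from dataclasses import dataclass, field


@dataclass
class InternalState:
    """Rule-based representation of an agent's internal state."""
    mood: str = "neutral"
    energy: float = 1.0  # 0.0 (exhausted) .. 1.0 (rested)


@dataclass
class HybridAgent:
    persona: str
    state: InternalState = field(default_factory=InternalState)

    def build_prompt(self, user_message: str) -> str:
        """Compose an LLM prompt from the persona and current internal state."""
        return (
            f"You are {self.persona}. "
            f"Your current mood is {self.state.mood}; "
            f"your energy level is {self.state.energy:.1f}.\n"
            f"User: {user_message}"
        )


agent = HybridAgent(persona="a helpful shopkeeper")
agent.state.mood = "cheerful"
prompt = agent.build_prompt("Do you have any potions?")
```

Here the rule-based component owns the state, while the language model only ever sees a textual rendering of it, keeping the two subsystems loosely coupled.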
For agents, applications of internal states include the simulation of ethological drives, desires, motivations, emotions, and moods. Internal states can also enhance task-solving (Wu, Yue, Zhang, Wang, & Wu, 2024).
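One simple way to model ethological drives is as numeric values that accumulate over time until satisfied, with the strongest drive motivating the agent’s next action. The drive names, rates, and update rule below are illustrative assumptions.

```python
# Illustrative sketch: ethological drives as accumulating numeric values.
# Drive names, initial values, and growth rates are assumptions.

class DriveSystem:
    def __init__(self):
        # Each drive grows until satisfied; the strongest motivates action.
        self.drives = {"hunger": 0.2, "rest": 0.1, "social": 0.4}

    def tick(self, dt: float) -> None:
        """Accumulate each drive over elapsed time dt, capped at 1.0."""
        for name in self.drives:
            self.drives[name] = min(1.0, self.drives[name] + 0.1 * dt)

    def dominant_drive(self) -> str:
        """Return the drive with the highest value."""
        return max(self.drives, key=self.drives.get)

    def satisfy(self, name: str) -> None:
        """Reset a drive once the corresponding behavior has occurred."""
        self.drives[name] = 0.0


ds = DriveSystem()
first = ds.dominant_drive()   # "social" dominates initially
ds.satisfy("social")
ds.tick(1.0)
second = ds.dominant_drive()  # "hunger" now dominates
```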
Possibilities for transitioning agents’ internal states include consulting large language models informed by agents’ prompts, as well as responding to timers, cues, and other game or program logic.
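The cue- and timer-driven possibilities can be sketched as a small rule-based state machine. The moods, cues, and cooldown interval below are hypothetical examples of such transition logic.

```python
# Sketch of transitioning an internal state via cues and timers.
# States, cues, and the cooldown interval are hypothetical.

import time


class MoodMachine:
    TRANSITIONS = {
        ("calm", "insulted"): "angry",
        ("angry", "apology"): "calm",
        ("calm", "gift"): "happy",
    }
    COOLDOWN_SECONDS = 60.0  # anger decays back to calm after a timer

    def __init__(self):
        self.mood = "calm"
        self._last_change = time.monotonic()

    def on_cue(self, cue: str) -> str:
        """Transition the mood when a (mood, cue) pair matches a rule."""
        new_mood = self.TRANSITIONS.get((self.mood, cue))
        if new_mood is not None:
            self.mood = new_mood
            self._last_change = time.monotonic()
        return self.mood

    def on_timer(self, now: float) -> str:
        """Timer-driven transition: anger cools off after a fixed interval."""
        if self.mood == "angry" and now - self._last_change >= self.COOLDOWN_SECONDS:
            self.mood = "calm"
        return self.mood


m = MoodMachine()
m.on_cue("insulted")          # calm -> angry
unchanged = m.on_cue("gift")  # no rule for (angry, gift); stays angry
m.on_cue("apology")           # angry -> calm
final = m.on_cue("gift")      # calm -> happy
```

A large language model could equally be consulted at each transition point, for example by asking it to choose the next mood given the current state and cue; the rule table above simply makes the transition logic explicit and deterministic.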
When agents’ rule-based components or ancillary models representing aspects of their internal states change or are updated, agents’ prompts for large language models could be modified accordingly, e.g., their task or instruction descriptions or their character or persona descriptions. Such modifications to agents’ prompts would enable stateful aspects of their dialogue and behavior.
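This prompt-modification step can be sketched as re-rendering a persona/instruction template from the current state before each call to the language model. The template text, field names, and character details below are illustrative assumptions.

```python
# Sketch of regenerating an agent's LLM prompt from its internal state.
# The template, state fields, and persona text are illustrative.

PERSONA_TEMPLATE = (
    "Character: {name}, {role}.\n"
    "Current mood: {mood}.\n"
    "Instruction: respond in a manner consistent with the mood above."
)


def render_system_prompt(name: str, role: str, mood: str) -> str:
    """Re-render the persona/instruction portion of the prompt from state."""
    return PERSONA_TEMPLATE.format(name=name, role=role, mood=mood)


# When the rule-based state changes, the prompt is regenerated before the
# next language-model call, making the agent's dialogue stateful.
before = render_system_prompt("Ada", "an innkeeper", "weary")
after = render_system_prompt("Ada", "an innkeeper", "cheerful")
```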
Bibliography
Hutson, James, and Jay Ratican. “Leveraging generative agents: Autonomous AI with simulated personas for interactive simulacra and collaborative research.” Journal of Innovation and Technology 2023, no. 15 (2023).
Park, Joon Sung, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. “Generative agents: Interactive simulacra of human behavior.” In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1-22. 2023.
Wu, Yiran, Tianwei Yue, Shaokun Zhang, Chi Wang, and Qingyun Wu. “StateFlow: Enhancing LLM task-solving through state-driven workflows.” arXiv preprint arXiv:2403.11322 (2024).