Developing intelligent agents for training systems that learn their strategies from expert players
Computer-based training systems have become a mainstay in military and private institutions for training people to perform complex tasks. As these tasks grow in difficulty, intelligent agents will appear as virtual teammates or tutors that assist a trainee in performing and learning the task. To develop these agents, we must elicit strategies from expert players and emulate their behavior within the agent. Prior research has shown the challenges of acquiring this knowledge from expert human players and translating it into an agent. One solution to this problem is to use computer systems that assist in the knowledge elicitation process. In this thesis, we present an approach for developing an agent for Revised Space Fortress, a game representative of the complex tasks found in training systems. Using machine learning techniques, the agent learns its strategy for the game by observing how a human expert plays. We highlight the challenges encountered while designing and training the agent in this real-time game environment, and our solutions for handling these problems. Afterward, we discuss our experiment, which examines whether trainees perform differently when training with a human or a virtual partner, and how expert agents that express distinctive behaviors affect a human trainee's learning. Our results show that a partner agent that learns its strategy from an expert player provides the same training benefit as a hand-programmed expert-level agent and as a human partner of intelligence equal to the trainee's.
Whetzel, Jonathan Hunt (2005). Developing intelligent agents for training systems that learn their strategies from expert players. Master's thesis, Texas A&M University. Available electronically from