Driven by a vision of human-centered computing built around media-rich intelligent systems that are highly effective and deeply engaging, we pursue a tightly integrated research and development agenda. Our research in artificial intelligence and human-computer interaction investigates intelligent user interfaces, natural language processing, and affective computing. Our development efforts focus on advancing K-12 learning technologies, with an emphasis on intelligent tutoring systems, game-based learning environments, and virtual humans for learning and teaching.
Motivated by the belief that the complexities of human-computer interaction require innovations spanning multiple disciplines, we employ a broad range of models, techniques, and methodologies from artificial intelligence, linguistics, statistics, computer graphics, and cognitive science. These perspectives are complemented by long-term collaborations with learning sciences colleagues in the College of Education and the William and Ida Friday Institute for Educational Innovation, and with new media colleagues in the College of Design.
Our research and development work is conducted in six interrelated thrust areas:
Intelligent Game-based Learning Environments
Recent years have seen a growing recognition of the transformative potential of game-based learning technologies. We are exploring intelligent game-based learning environments that are built on commercial game engines and provide the adaptive pedagogical functionality of intelligent tutoring systems, creating highly effective customized learning experiences that maximize learning gains. We are also designing a creativity-enhancement environment that builds on these technologies together with natural language processing.
Affective Computing for Interactive Learning
Students’ emotions play a central role in their learning processes, and effective interaction between teachers and students is guided by teachers’ and students’ ability to accurately recognize one another’s affective states and to appropriately express affect. To improve computer learning environments’ ability to recognize and generate affect, we devise computational models of affect recognition (automatically recognizing students’ affective states) and affect expression (automatically generating appropriate affective responses).
Virtual Humans for Learning and Teaching
Intelligent virtual tutors are “embodied” artificial intelligence-driven characters that interact with students to provide engaging, personalized tutorial support. Also known as pedagogical agents, intelligent virtual tutors employ language, facial expressions, and gesture to create effective learning experiences for students. Interacting with students in virtual learning environments, intelligent virtual tutors provide multimodal explanations, interactive demonstrations, and problem-solving advice to improve learning.
Computational Models of Interactive Narrative
Narrative plays a central role in communication and cognition, and there is growing interest in devising interactive storytelling environments that create engaging narrative experiences. With an emphasis on narrative-centered learning environments, we are designing decision-theoretic models of interactive narrative, devising pedagogical planners for narrative-centered learning environments, and creating goal recognition systems that monitor students’ problem-solving actions and predict their goals.
Natural Language Tutorial Dialogue
Human-human tutorial dialogue offers an excellent model for effective learning. By understanding the pedagogical mechanisms of human-human tutorial dialogue, we can design natural language tutorial dialogue systems that offer similar benefits. We conduct corpus studies of human-human tutorial dialogue to explore how learner characteristics influence the structure of tutorial dialogue, how human tutors balance cognitive and motivational scaffolding, and how these factors affect gains in learning and self-efficacy.
Intelligent Multimedia Interfaces
Two complementary technologies leveraging artificial intelligence have emerged that afford significant opportunities for learning: intelligent user interfaces and intelligent tutoring systems. To promote effective learning through rich interactions, we are designing intelligent multimedia interfaces that enable students to create graphical representations of physical phenomena; these representations come to life as interactive media artifacts combining animation, sound, and narration.
Our research and development work spans a broad range of age groups and subject matters. We create learning environments for elementary school science (Crystal Island – Uncharted Discovery, Leonardo), with an emphasis on life sciences, physical sciences, and earth sciences. For middle school students, we create learning environments for language arts (Narrative Theater) and science education (Crystal Island – Outbreak); the language arts work focuses on reading and writing, while the science education work focuses on microbiology. We also create computer science education environments for middle school students (Engage), which focuses on introductory computer science, and for post-secondary students (JavaTutor), which focuses on first-year computer science education.