JavaTutor


Principal Investigators: James Lester (PI, Computer Science), Kristy Elizabeth Boyer (co-PI), Eric Wiebe (co-PI, Mathematics, Science, & Technology Education)

Primary Participants: Alok Baikadi (Computer Science), EunYoung Ha (Computer Science), Joe Grafsgaard (Computer Science), Megan Hardy (Psychology), Chris Mitchell (Computer Science), Bradford Mott (Computer Science), Rob Phillips (Computer Science), Mladen Vouk (Computer Science)

Sponsor: National Science Foundation – Research and Evaluation on Education in Science and Engineering Program (2010-2014)

Objectives: Providing dialogue systems with the ability to engage users in rich natural language dialogue has been a long-term goal of the computational linguistics community. Tutorial dialogue systems, which are designed to support students completing a learning task, are an increasingly active area of research. Though effective, these systems have not yet attained learning gains on par with those observed in expert human tutoring. They are also costly to build: even with the emergence of authoring tools for rapid development, the current generation of tutorial dialogue systems requires significant manual authoring.

In response to these challenges, the JavaTutor project investigates human-human natural language tutorial dialogue as a model for human-computer tutorial dialogue. With a curricular focus on first-year post-secondary computer science education and a task focus on problem-solving dialogues, we collect corpora of human-human tutorial dialogues, annotate them with rich dialogue act tags, and then use machine learning techniques to automatically acquire the structure of effective tutorial dialogue.
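To make the annotate-then-learn pipeline concrete, the sketch below shows one way an annotated tutorial exchange might be represented and reduced to the dialogue act tag sequence that sequence models consume. The tag names and the exchange itself are invented for illustration; they are not the project's actual coding scheme or corpus.

```python
"""Illustrative representation of a dialogue-act-annotated tutorial exchange.

The tags (STATEMENT, QUESTION, ANSWER, POSITIVE-FEEDBACK) are hypothetical
placeholders, not the JavaTutor project's actual tag set.
"""

from dataclasses import dataclass


@dataclass
class Utterance:
    speaker: str       # "tutor" or "student"
    text: str
    dialogue_act: str  # dialogue act tag assigned by an annotator


# A hypothetical annotated exchange from a problem-solving session.
dialogue = [
    Utterance("student", "My loop never terminates.", "STATEMENT"),
    Utterance("tutor", "What does the loop condition test?", "QUESTION"),
    Utterance("student", "Oh, i is never incremented.", "ANSWER"),
    Utterance("tutor", "Right, good catch!", "POSITIVE-FEEDBACK"),
]


def tag_sequence(utterances):
    """Project an annotated dialogue onto its dialogue act tags,
    the input a sequence model (e.g., an HMM) would be trained on."""
    return [u.dialogue_act for u in utterances]


print(tag_sequence(dialogue))
# ['STATEMENT', 'QUESTION', 'ANSWER', 'POSITIVE-FEEDBACK']
```

In practice each session in the corpus yields one such tag sequence, and structure is learned over the collection of sequences rather than over raw text.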

The JavaTutor project is concerned with 1) designing rich dialogue act coding schemes for coding cognitive and affective dimensions of task-oriented tutorial dialogue interactions, and 2) learning hidden Markov models to discover the structure of task-oriented tutorial dialogue. Our studies to date have found that learner characteristics influence tutorial dialogue structure in significant ways, even when these characteristics are not revealed to the tutors, and that human tutors naturally attempt to strike a balance between cognitive and motivational scaffolding: in response to a student mistake, positive cognitive feedback may facilitate learning better than overt encouragement while providing equally beneficial affective outcomes. The long-term objective of this line of investigation is to create computational models of tutorial strategies, design intelligent tutoring systems that utilize these models, and study the differential impact of alternative strategies on a broad scale.
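The hidden Markov modeling described above treats observed dialogue act tags as emissions of latent dialogue states (e.g., tutorial modes). The sketch below decodes such a state sequence with the standard Viterbi algorithm; the two hidden modes, the tag set, and all probabilities are hand-set assumptions for illustration, whereas the project learns such parameters from annotated corpora.

```python
"""Toy Viterbi decoding of a dialogue act sequence under a hand-set HMM.

The hidden modes ("scaffolding", "feedback"), the tags, and every probability
below are invented for illustration; in the research setting these parameters
are estimated from annotated human-human tutorial dialogues.
"""

import math

STATES = ["scaffolding", "feedback"]  # hypothetical hidden tutorial modes

# Hand-set (illustrative) initial, transition, and emission probabilities.
START = {"scaffolding": 0.7, "feedback": 0.3}
TRANS = {
    "scaffolding": {"scaffolding": 0.6, "feedback": 0.4},
    "feedback": {"scaffolding": 0.5, "feedback": 0.5},
}
EMIT = {
    "scaffolding": {"QUESTION": 0.5, "HINT": 0.4, "POSITIVE-FEEDBACK": 0.1},
    "feedback": {"QUESTION": 0.1, "HINT": 0.1, "POSITIVE-FEEDBACK": 0.8},
}


def viterbi(observations):
    """Return the most likely hidden mode sequence for observed tags."""
    # Work in log space to avoid underflow on long dialogues.
    scores = [{s: math.log(START[s]) + math.log(EMIT[s][observations[0]])
               for s in STATES}]
    backptr = []
    for t in range(1, len(observations)):
        scores.append({})
        backptr.append({})
        for s in STATES:
            # Best predecessor state for reaching s at time t.
            prev, best = max(
                ((p, scores[t - 1][p] + math.log(TRANS[p][s])) for p in STATES),
                key=lambda pair: pair[1],
            )
            scores[t][s] = best + math.log(EMIT[s][observations[t]])
            backptr[t - 1][s] = prev
    # Trace back from the highest-scoring final state.
    path = [max(STATES, key=lambda s: scores[-1][s])]
    for pointers in reversed(backptr):
        path.append(pointers[path[-1]])
    return list(reversed(path))


print(viterbi(["QUESTION", "HINT", "POSITIVE-FEEDBACK"]))
# ['scaffolding', 'scaffolding', 'feedback']
```

Decoding recovers a plausible latent structure (two scaffolding moves followed by a feedback move); fitting the parameters themselves would use an algorithm such as Baum-Welch over the full corpus of tag sequences.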