Graduation Year

2009

Document Type

Dissertation

Degree

Ph.D.

Degree Granting Department

Computer Science and Engineering

Major Professor

Robin R. Murphy, Ph.D.

Co-Major Professor

Lawrence O. Hall, Ph.D.

Committee Member

Jennifer L. Burke, Ph.D.

Committee Member

John Ferron, Ph.D.

Committee Member

Jodi Forlizzi, Ph.D.

Committee Member

Dewey Rundus, Ph.D.

Committee Member

Kristen Salomon, Ph.D.

Keywords

robotics, appearance-constrained robots, affective robotics, psychophysiology, experimental design

Abstract

Non-facial and non-verbal methods of affective expression are essential for naturalistic social interaction in robots that are designed to be functional and lack expressive faces (appearance-constrained), such as those used in search and rescue, law enforcement, and military applications. This research identifies five main methods of non-facial and non-verbal affective expression (body movement, posture, orientation, color, and sound). From the psychology, computer science, and robotics literature, a set of prescriptive recommendations was distilled for the appropriate non-facial and non-verbal affective expression methods for each of three proximity zones of interest (intimate: contact - 0.46 m, personal: 0.46 - 1.22 m, and social: 1.22 - 3.66 m). These recommendations serve as design guidelines for retroactively adding affective expression through software, with minimal or no physical modifications to an existing robot, or for designing a new robot. This benefits both the human-robot interaction (HRI) and robotics communities.

A large-scale, complex human-robot study was conducted to verify these design guidelines using 128 participants and four methods of evaluation (self-assessments, psychophysiological measures, behavioral observations, and structured interviews) for convergent validity. The study was conducted in a high-fidelity, confined-space simulated disaster site, with all robot interactions performed in the dark. This research investigated whether the use of non-facial and non-verbal affective expression provided a mechanism for naturalistic social interaction between a functional, appearance-constrained robot and the human with whom it interacted.

As part of this research study, the valence and arousal dimensions of the Self-Assessment Manikin (SAM) were validated for use as an assessment tool in future HRI studies. Also presented is a set of practical recommendations for designing, planning, and executing a successful large-scale, complex human-robot study, using appropriate sample sizes and multiple methods of evaluation to ensure validity and reliability in HRI studies.

As evidenced by the results, humans were calmer with robots that exhibited non-facial and non-verbal affective expressions during social human-robot interactions in urban search and rescue applications. The results also indicated that humans calibrated their responses to robots based on their first robot encounter.
