University of Notre Dame

Using Facially Expressive Robots to Increase Realism in Patient Simulation

Thesis posted on 2017-07-09, authored by Maryam Moosaei

Social robots are robots that engage in social interaction with humans. Their presence is growing in fields such as healthcare, entertainment, assisted living, rehabilitation, and education. Within these domains, human-robot interaction (HRI) researchers have worked on enabling social robots to interact more naturally with people through different verbal and nonverbal channels. Considering that much of human-human interaction is nonverbal, it is worth considering how to enable robots to both recognize and synthesize nonverbal behavior.

My research focuses on synthesizing facial expressions, a type of nonverbal behavior, on social robots. During my research, I developed several new algorithms for synthesizing facial expressions on robots and avatars, and experimentally explored how these expressions were perceived. I also developed a new control system for operators that automates expression synthesis on a robotic head. Additionally, I worked on building a new robotic head, capable of expressing a wide range of expressions, which will serve as a next-generation patient simulator.

My work explores the application of facially expressive robots in patient simulation. Robotic Patient Simulator (RPS) systems are the most frequent application of humanoid robots. They are human-sized robots that can breathe, bleed, react to medication, and convey hundreds of different biosignals. RPSs enable clinical learners to safely practice clinical skills without harming real patients; this practice can include patient communication, patient condition assessment, and procedural skills.

Although commonly used in clinical education, one capability of RPSs is in need of attention: enabling face-to-face interactions between RPSs and learners. Despite the importance of patients' facial expressions in making diagnostic decisions, commercially available RPS systems are not currently equipped with expressive faces. They have static faces that cannot express any visual pathological signs or distress. As a result, clinicians can fall into the poor habit of not paying attention to a patient's face, which can lead them down an incorrect diagnostic path.

One motivation behind my research is to make RPSs more realistic by enabling them to convey realistic, clinically relevant, patient-driven facial expressions to clinical trainees. We have designed a new type of RPS with a wider range of expressivity, including the ability to express pain, neurological impairment, and other pathologies in its face. By developing expressive RPSs, our work provides a next-generation educational tool for clinical learners to practice face-to-face communication, diagnosis, and treatment with patients in simulation. As the application of robots in healthcare continues to grow, expressive robots are another tool that can aid the clinical workforce by considerably improving the realism and fidelity of patient simulation.

During the course of my research, I completed four main projects and designed several new algorithms to synthesize different expressions and pathologies on RPSs.

First, I designed a framework for generalizing synthesis of facial expressions on robots and avatars with different degrees of freedom.

Then, I implemented this framework as a ROS-based module and used it to perform facial expression synthesis on different robots and virtual avatars. Currently, researchers cannot transfer facial expression synthesis software developed for one robot to another robot because the hardware differs. Using ROS, I developed a general solution that is one of the first attempts to address this problem.
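
To make this concrete, below is a minimal sketch (not the dissertation's actual code) of how such a ROS module could work: an expression is described in a robot-agnostic way as facial action unit (AU) intensities, a per-robot configuration maps those AUs to whatever actuators a given face has, and the resulting joint targets are published on a ROS topic. The topic name, joint names, and gains here are hypothetical.

    # Illustrative sketch: map robot-agnostic AU intensities to robot-specific
    # joint commands via a per-robot configuration, then publish over ROS.
    # All names (topic, joints, gains) are hypothetical.
    import rospy
    from std_msgs.msg import Float64MultiArray

    # Hypothetical per-robot mapping: which AUs this face can realize, and a
    # linear gain from AU intensity to actuator position.
    ROBOT_CONFIG = {
        "joint_names": ["brow_left", "brow_right", "lip_corner_left", "lip_corner_right"],
        "au_to_joints": {               # AU id -> list of (joint index, gain)
            1:  [(0, 0.8), (1, 0.8)],   # AU1: inner brow raiser
            12: [(2, 1.0), (3, 1.0)],   # AU12: lip corner puller
        },
    }

    def au_frame_to_joint_targets(au_intensities, config):
        """Map {AU id: intensity in [0, 1]} to a joint-target vector."""
        targets = [0.0] * len(config["joint_names"])
        for au, intensity in au_intensities.items():
            for joint_idx, gain in config["au_to_joints"].get(au, []):
                targets[joint_idx] = max(targets[joint_idx], gain * intensity)
        return targets

    if __name__ == "__main__":
        rospy.init_node("expression_synthesis_node")
        pub = rospy.Publisher("/face/joint_targets", Float64MultiArray, queue_size=1)
        rate = rospy.Rate(30)              # stream joint targets at 30 Hz
        smile = {1: 0.2, 12: 0.9}          # a mild smile described as AU intensities
        while not rospy.is_shutdown():
            pub.publish(Float64MultiArray(data=au_frame_to_joint_targets(smile, ROBOT_CONFIG)))
            rate.sleep()

Because only the per-robot configuration is hardware-specific, the same AU-level description of an expression could, in principle, drive either a robot or a virtual avatar by swapping that configuration.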

Second, I used this framework to synthesize patient-driven facial expressions of pain on virtual avatars, and studied the effect of an avatar's gender on pain perception. We found that participants were able to distinguish pain from commonly conflated expressions (anger and disgust), and showed that patient-driven pain synthesis can be a valuable alternative to laborious key-frame animation techniques.

This was one of the first attempts to perform automatic synthesis of patient-driven expressions on avatars without the need for an animation expert. Automatic patient-driven facial expression synthesis will reduce the time and cost required of operators of RPS systems.
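
As a purely illustrative example of what "patient-driven" can look like in practice, the sketch below reads per-frame facial action unit (AU) intensities, such as those obtained from FACS coding of a recorded pain episode, and retargets them onto avatar blendshape weights with no hand-made key frames. The file layout, the 0-5 intensity scale, and the blendshape names are assumptions made for the example, not details taken from the dissertation.

    # Illustrative only: retarget per-frame patient AU intensities onto avatar
    # blendshape weights, avoiding manual key-frame animation.
    import csv

    # Hypothetical mapping from pain-related AUs to avatar blendshapes.
    AU_TO_BLENDSHAPE = {
        "AU4":  "browLowerer",    # brow lowerer
        "AU6":  "cheekRaiser",    # cheek raiser
        "AU7":  "lidTightener",   # lid tightener
        "AU9":  "noseWrinkler",   # nose wrinkler
        "AU43": "eyesClosed",     # eyes closed
    }

    def load_patient_frames(csv_path):
        """Yield per-frame AU intensities, rescaled from a 0-5 coding to [0, 1]."""
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                yield {au: float(row[au]) / 5.0
                       for au in AU_TO_BLENDSHAPE if au in row}

    def frame_to_blendshapes(au_frame):
        """Convert one frame of AU intensities to clamped blendshape weights."""
        return {AU_TO_BLENDSHAPE[au]: min(1.0, max(0.0, v))
                for au, v in au_frame.items()}

    if __name__ == "__main__":
        for au_frame in load_patient_frames("patient_pain_episode.csv"):
            weights = frame_to_blendshapes(au_frame)
            print(weights)  # in practice, streamed to the avatar's renderer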

Third, I synthesized patient-driven facial expressions of pain on a humanoid robot and its virtual model, with the objective of exploring how having a clinical education affects pain detection accuracy. We found that clinicians had lower overall accuracy in detecting synthesized pain than lay participants. This supported other findings in the literature showing that there is a need to improve clinical learners' skills in decoding expressions in patients. Furthermore, we explored the effect of embodiment (robot, avatar) on pain perception by both clinicians and lay participants. We found that all participants were overall less accurate in detecting pain from a humanoid robot than from a comparable virtual avatar. Considering these effects is important when using expressive robots and avatars as educational tools for clinical learners.

Fourth, I developed a computational (mask) model of atypical facial expressions, such as those caused by stroke and Bell's palsy. We used this computational model to perform masked synthesis on a virtual avatar and ran an experiment to evaluate the similarity of the synthesized expressions to those of real patients. Additionally, we used the mask model to design a shared control system for controlling an expressive robotic face.
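
One plausible reading of such a mask model, shown below only as a sketch with invented coefficients, is an elementwise attenuation applied to one side of the face: a typical, symmetric expression is represented as left/right AU intensities, and the mask scales down the affected side to approximate unilateral paralysis such as Bell's palsy.

    # Minimal, illustrative mask model (coefficients invented for the example):
    # a symmetric expression is attenuated on one side to approximate Bell's palsy.

    def apply_mask(expression, mask):
        """Scale each side's AU intensities by the corresponding mask values."""
        return {
            side: {au: intensity * mask.get(side, {}).get(au, 1.0)
                   for au, intensity in aus.items()}
            for side, aus in expression.items()
        }

    # Symmetric smile: cheek raiser (AU6) and lip corner puller (AU12) on both sides.
    smile = {
        "left":  {"AU6": 0.7, "AU12": 0.9},
        "right": {"AU6": 0.7, "AU12": 0.9},
    }

    # Hypothetical left-side Bell's palsy mask: the affected side barely moves,
    # while the unaffected side passes through unchanged.
    bells_palsy_left = {
        "left": {"AU6": 0.15, "AU12": 0.1},
    }

    print(apply_mask(smile, bells_palsy_left))

In a shared control setting, one could imagine an operator selecting the high-level expression while a pathology mask of this kind is applied automatically; the sketch is only meant to illustrate the masking idea, not the dissertation's actual model.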

My work makes multiple contributions to both the HRI and healthcare communities. First, I explored a novel application of facially expressive robots, patient simulation, which is a relatively unexplored area in the HRI literature. Second, I designed a generalized framework for synthesizing facial expressions on robots and avatars. My third contribution was to design new methods for synthesizing naturalistic patient-driven expressions and pathologies on RPSs. Experiments validating my approach showed that the embodiment or gender of the RPS can affect perception of its expressions. Fourth, I developed a computational model of atypical expressions and used this model to synthesize Bell's palsy on a virtual avatar. This model can be used as part of an educational tool to help clinicians improve their diagnosis of conditions like Bell's palsy, stroke, and Parkinson's disease. Finally, I used this mask model to develop a shared control system for controlling a robotic face. This control model automates the job of controlling an RPS's expressions for simulation operators.

My work enables the HRI community to explore new applications of social robots and expand their presence in our daily lives. Moreover, my work will inform the design and development of the next generation of RPSs that are able to express visual signs of diseases, distress, and pathologies. By providing better technology, we can improve how clinicians are trained, which will ultimately improve patient outcomes.

History

Date Created

2017-07-09

Date Modified

2018-10-05

Defense Date

2017-06-02

Research Director(s)

Laurel D. Riek

Degree

  • Doctor of Philosophy

Degree Level

  • Doctoral Dissertation

Program Name

  • Computer Science and Engineering
