PhD Thesis Proposal Defence
Title: "Towards Modeling and Mutual Understand of Poses in Human-Robot
Interaction"
by
Mr. Mingfei SUN
Abstract:
The goal of this proposal is to enable interactive robots to understand
human poses and to generate human-like poses that improve mutual
understanding. In this work, we consider two directions, inference from
human poses (from human poses to robots' perception) and generation of
human-like robot poses (from robot poses to humans' perception), and
study mutual pose understanding in Human-Robot Interaction (HRI) from the
following three aspects:
First, we consider computational models that help robots infer humans'
cognitive and affective dynamics from their body poses. In particular, we
propose two models: a corpus-based state transition model for sensing
engagement dynamics, and learning-based long short-term memory (LSTM)
models for estimating emotion intensity. We evaluate the models'
effectiveness in capturing humans' cognitive and affective dynamics
through user studies, and find that robots equipped with the proposed
models perform significantly better at handling complex interactions with
peripheral interference and at perceiving human partners from incomplete
observations.
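The abstract does not specify the LSTM architecture; as a rough
illustration of the kind of model involved, the PyTorch sketch below maps
a sequence of body-keypoint frames to a scalar emotion-intensity score.
All dimensions and names here are assumptions, not the thesis's design.

```python
import torch
import torch.nn as nn

class EmotionIntensityLSTM(nn.Module):
    """Sketch of an LSTM regressor from a pose sequence to an
    emotion-intensity score. Joint count and hidden size are
    illustrative assumptions."""

    def __init__(self, n_joints=17, hidden_size=128):
        super().__init__()
        # Each frame: (x, y) coordinates for n_joints body keypoints.
        self.lstm = nn.LSTM(input_size=n_joints * 2,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, poses):
        # poses: (batch, time, n_joints * 2)
        _, (h_n, _) = self.lstm(poses)
        # Final hidden state -> scalar intensity squashed into (0, 1).
        return torch.sigmoid(self.head(h_n[-1]))
```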
Second, we study the generation of understandable poses for robots and
propose to adopt Behavior Cloning to produce human-like feedback
behaviors (hesitation and confirmation poses) and learning-engagement
poses. We evaluate this method on two different robot forms (a robot arm
and a humanoid robot) in a simulated interaction environment, and
demonstrate that the generated robot poses significantly improve
interaction transparency and influence participants' perception of the
robot's capability and of the interaction outcomes.
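Behavior Cloning itself is standard supervised imitation: fit a policy to
demonstrated state-action pairs. A minimal sketch, assuming pose states
of dimension 34 (17 keypoints x 2) and 7-DoF joint-angle actions; all
names and dimensions are illustrative, not the thesis's setup.

```python
import torch
import torch.nn as nn

# Behavior Cloning: fit a policy to demonstrated (state, action) pairs
# by plain supervised regression.
policy = nn.Sequential(nn.Linear(34, 64), nn.ReLU(), nn.Linear(64, 7))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def bc_update(states, actions):
    """One gradient step: push policy(state) toward the demonstrated action."""
    optimizer.zero_grad()
    loss = loss_fn(policy(states), actions)
    loss.backward()
    optimizer.step()
    return loss.item()
```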
Third, we employ the idea of Learning from Demonstration (LfD) to scale
up the generation of human-like robot poses, reframing pose generation as
an inverse reinforcement learning (IRL) problem in which the robot tries
to interpret the underlying motivation of human poses rather than blindly
cloning them. We propose a novel algorithm that enables robots to
robustly learn poses from demonstrations, and we study a new learning
setting, which we call demonstration retargeting, that maximizes the
utility of a single demonstration. We present preliminary quantitative
and qualitative results demonstrating the potential of demonstration
retargeting for generating human-like robot poses.
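In the IRL framing, demonstrated poses are treated as evidence of an
underlying reward, which the robot learns instead of copying
trajectories. As a rough sketch of one common alternating scheme
(GAIL-style adversarial reward learning, which may differ from the
thesis's actual algorithm; all names are hypothetical):

```python
import torch
import torch.nn as nn

# IRL loop sketch: alternate between (1) fitting a reward that scores
# demonstrated poses above the policy's own rollouts, and (2) improving
# the policy against that reward.
reward = nn.Sequential(nn.Linear(34, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward.parameters(), lr=1e-3)

def reward_update(demo_states, policy_states):
    """Raise reward on demonstrated poses, lower it on policy rollouts."""
    opt.zero_grad()
    loss = -(reward(demo_states).mean() - reward(policy_states).mean())
    loss.backward()
    opt.step()
    return loss.item()

# The policy-improvement step (not shown) would run an RL algorithm,
# e.g. PPO, against the learned `reward`.
```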
In sum, this proposal presents computational models for inferring
cognitive and affective states from human body poses, and explores the
generation of human-like poses to improve mutual understanding in HRI. We
report findings and design guidelines from evaluating the proposed models
and methods through high-fidelity simulations and practical user studies.
To the best of our knowledge, this proposal takes the first step toward
systematically addressing mutual pose understanding in HRI.
Date: Monday, 4 November 2019
Time: 3:00pm - 5:00pm
Venue: Room 5560 (lifts 27/28)
Committee Members: Dr. Xiaojuan Ma (Supervisor)
Prof. Chiew-Lan Tai (Chairperson)
Dr. Pedro Sander
Dr. Sai-Kit Yeung (ISD)
**** ALL are Welcome ****