A Workshop of the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2020.
Afternoon Session, Monday March 23, 2020
This workshop focuses on issues surrounding human-robot interaction for robot self-assessment of system proficiency. For example, how should a robot convey predicted ability on a new task? How should it report performance on a task that was just completed?

Communities in both computer science and robotics have addressed questions of introspection to monitor system performance and adjust behavior to guarantee or improve performance. Self-assessment can range from simple detection of proficiency up through evaluation, explanation, and prediction. Robots need the ability to make assessments and communicate them a priori, in situ, and a posteriori in order to support effective autonomy and utilization by human partners and supervisors.

This is a pressing challenge for human-robot interaction for a variety of reasons. Prior work has shown that robot expression of performance can alter human perception of the robot and decisions on control allocation. There is also significant evidence in robotics that accurately setting human expectations is critical, especially when proficiency is below human expectations. Therefore, more knowledge is needed on how systems should communicate specifics about current and future task competence.
Due to COVID-19, this workshop has moved to a set of online talks. Some authors have agreed to publish their papers and we have posted them in a structured arXiv collection. Direct links to videos and papers are below.
Regular Papers
Robotic Self-Assessment of Competence by Gertjan J. Burghouts, Albert Huizing, and Mark A. Neerincx. Video Paper
“Can you do this?” Self-Assessment Dialogues with Autonomous Robots Before, During, and After a Mission by Tyler Frasca, Evan Krause, Ravenna Thielstrom, Matthias Scheutz. Video Paper
The Need of Verbal Robot Explanations and How People Would Like a Robot To Achieve This by Zhao Han, Elizabeth Phillips, and Holly A. Yanco. Video
Position Papers
Towards Transparency of TD-RL Robotic Systems with a Human Teacher by Marco Matarese, Silvia Rossi, Alessandra Sciutti, and Francesco Rea. Video Paper
Trust Consideration for Explainable Robots: A Human Factors Perspective by Lindsay Sanneman and Julie Shah. Video Paper
Automated Failure-Mode Clustering and Labeling for Informed Car-To-Driver Handover in Autonomous Vehicles by Aaquib Tabrez, Matthew B. Luebbers, and Bradley Hayes. Video Paper
Submissions (closed)
We welcome contributions focused on assessing, explaining, and conveying robot proficiency to human teammates.
We are requesting 2-page (position) and 6-page (regular) papers using the HRI 2020 format. Anonymization is not required. When emailing a paper, please include the author list, affiliations, and email addresses in the body of the email.
Regular papers should focus on research results. All submissions will be peer-reviewed and authors of accepted papers will be asked to do either a poster or podium presentation at the workshop. At least one author of each accepted paper must register for the workshop.
Acceptance notifications were sent by February 21. If you submitted a paper and have not received an email, please contact us via the submission email below.
Authors of accepted papers have the option of having their papers uploaded to a workshop-specific archive on arxiv.org. Inclusion in this archive will not be mandatory since it may create problems for authors who wish to submit follow-on work to venues with strict prior publication rules.
Submission Deadline (Extended; now closed): February 9, 2020 (originally January 30), 23:59 Anywhere on Earth (AoE)
Submit via email to: steinfeld+hri2020@cmu.edu
Organizers
Aaron Steinfeld, Carnegie Mellon University, steinfeld@cmu.edu
Michael Goodrich, Brigham Young University, mike@cs.byu.edu
Acknowledgements
Organizational effort is supported under the SUCCESS MURI, a project funded by the Office of Naval Research (N00014-18-1-2503).