Sensory Integration and Augmentation

Human Preferred Augmented Reality Visual Cues for Remote Robot Manipulation Assistance: from Direct to Supervisory Control

[Cover image: IROS 2023 paper]

When humans control or supervise remote robot manipulation, augmented reality (AR) visual cues overlaid on the remote camera video stream can effectively enhance the operator's perception of task and robot states, and their comprehension of the robot autonomy's capability and intent. In this work, we conducted a user study (N=18) to investigate: (RQ1) which AR cues humans prefer when controlling the robot at various levels of autonomy, and (RQ2) whether this preference is influenced by the way humans learn to use the interface. We provided AR visual cues of various types (e.g., motion guidance, obstacle indicator, target hint, autonomy activation and intent) to assist humans in picking and placing an object around an obstacle in a counter workspace. We found that: 1) participants prefer different types of AR cues depending on the level of robot autonomy; and 2) the AR cues participants preferred after hands-on robot operation converged to the recommendations of experienced users, and may differ substantially from their initial selections based on video instruction.

Comparison of Haptic and Augmented Reality Visual Cues for Assisting Tele-manipulation

Robot teleoperation via human motion tracking has been shown to be easy to learn, intuitive to operate, and to enable faster task execution than existing baselines. However, precise control during dexterous tele-manipulation tasks remains a challenge. In this paper, we implement sensory augmentation in the form of haptic and augmented reality (AR) visual cues to represent four types of information critical to the precision and performance of a tele-manipulation task: (1) target location; (2) constraint alert; (3) grasping affordance; and (4) grasp confirmation. We further conduct two user studies to investigate the effectiveness and preferred modality of the sensory feedback compared with no sensory support, and how this preference is influenced by different types of simulated real-world additional workload. We asked 8 participants to perform a general manipulation task using a KINOVA robotic arm. Our results indicate that: (1) the haptic and AR visual cues can significantly reduce task completion time, occurrences of errors, the total path length traversed by the robot end-effector, and operational effort, while increasing interface usability; (2) haptic feedback trended toward presenting information that requires a prompt response, while AR visual cues are better suited to monitoring system status; (3) participants chose their preferred feedback to reduce cognitive workload, despite the extra effort it required.

Related Publications

  • A. U. Krishnan, T. C. Lin and Z. Li, "Human Preferred Augmented Reality Visual Cues for Remote Robot Manipulation Assistance: from Direct to Supervisory Control", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.

  • T. C. Lin, A. U. Krishnan and Z. Li, "Perception and Action Augmentation for Teleoperation Assistance in Freeform Tele-manipulation", submitted to ACM Transactions on Human-Robot Interaction (THRI), 2022.

  • A. U. Krishnan, T. C. Lin and Z. Li, "Design Interface Mapping for Efficient Free-form Tele-manipulation", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.

  • T. C. Lin, A. U. Krishnan and Z. Li, "Comparison of Haptic and Augmented Reality Visual Cues for Assisting Tele-manipulation", IEEE International Conference on Robotics and Automation (ICRA), 2022.