Authors: Jaime F. Fisac, David Fridovich-Keil, Andrea Bajcsy, Sylvia Herbert, Sampada Deglurkar
DOI:
Keywords:
Abstract: In order to safely share space with people, it is important for robots to reason about future human motion. While predictive models often perform acceptably for a single human, safe navigation among multiple people presents two serious challenges. First, interaction with others is an inherently complex aspect of human behavior, making reliable predictions difficult to obtain. Second, reasoning jointly about many interacting agents faces a combinatorial explosion in computation, making joint distributions over multiple human trajectories highly intractable. Making simplifying assumptions may seem like the only feasible approach, yet inaccurate predictions can compromise the safety of a robot's motion plan. In recent work, we proposed a Bayesian framework for reasoning about the reliability of model predictions in real time, allowing the robot to quickly adapt its uncertainty about future human actions whenever its model performed poorly; by combining these confidence-aware predictions with a robust motion planning and control scheme, the robot could successfully plan and execute probabilistically safe motions around a single human. In this work, we leverage a natural strength of this method to tackle multi-human prediction in a way that allows safe robot navigation: if the robot's predictive model fails to accurately capture a human's behavior while she interacts with others, predictions will naturally become uncertain when interactions significantly affect her motion, and will regain confidence once the effect diminishes. The robot can then use simple but highly scalable predictive models that simplify or even fully neglect the interaction component between multiple …
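The abstract does not spell out the update rule, but the confidence-aware mechanism it describes can be sketched as a Bayesian update over a model-confidence parameter. Below is a minimal sketch, assuming a "noisily rational" (Boltzmann) human action model with rationality coefficient beta; the discrete grid `BETAS` and the helper names `action_probs`, `update_belief`, and `predict` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Discrete grid of candidate model-confidence (rationality) values (assumed).
BETAS = np.array([0.1, 1.0, 10.0])

def action_probs(q_values, beta):
    """Boltzmann action distribution: P(a) proportional to exp(beta * Q(a))."""
    logits = beta * q_values
    logits = logits - logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def update_belief(belief, q_values, observed_action):
    """One Bayesian update of the belief over beta after observing a human action."""
    likelihood = np.array([action_probs(q_values, b)[observed_action] for b in BETAS])
    posterior = belief * likelihood
    return posterior / posterior.sum()

def predict(belief, q_values):
    """Confidence-weighted prediction: marginalize the action model over beta."""
    return sum(w * action_probs(q_values, b) for w, b in zip(belief, BETAS))

# Toy example: the human repeatedly takes an action the model rates poorly,
# so belief mass shifts to low beta and predictions flatten toward uniform.
belief = np.ones(len(BETAS)) / len(BETAS)
q = np.array([1.0, 0.0, -1.0])  # hypothetical model values for 3 actions
for _ in range(5):
    belief = update_belief(belief, q, observed_action=2)
print(belief)              # mass concentrates on the smallest beta
print(predict(belief, q))  # near-uniform -> robot plans more conservatively
```

When observed actions are poorly explained by every high-confidence hypothesis, the posterior shifts toward small beta and the marginal prediction flattens toward uniform, matching the "become uncertain, then regain confidence" behavior the abstract describes.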