Authors: Anish Muthali, Haotian Shen, Sampada Deglurkar, Michael H. Lim, Rebecca Roelofs
DOI:
Keywords:
Abstract: We investigate methods for providing safety assurances to autonomous agents that incorporate learning-based predictions of other, uncontrolled agents' behavior into their own trajectory planning. Given a learning-based forecasting model that predicts agents' trajectories, we introduce a method for providing probabilistic assurances on the model's prediction error via calibrated confidence intervals. Combining quantile regression, conformal prediction, and reachability analysis, our method generates prediction sets that are both probabilistically safe and dynamically feasible. We showcase their utility in certifying the safety of planning algorithms, both in simulations using actual autonomous driving data and in an experiment with Boeing vehicles.
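The calibration step the abstract alludes to can be illustrated with split conformal prediction. The sketch below is an assumption-laden toy, not the authors' implementation: the error data are synthetic placeholders, and the function name and miscoverage level `alpha` are chosen here for illustration. It shows how held-out prediction errors yield a radius that covers a new error with probability at least 1 − alpha under exchangeability.

```python
import numpy as np

def conformal_quantile(calib_errors, alpha=0.1):
    """Return a radius q such that a fresh prediction error is <= q
    with probability >= 1 - alpha, assuming exchangeable errors."""
    n = len(calib_errors)
    # Finite-sample corrected quantile level: ceil((n + 1)(1 - alpha)) / n
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(calib_errors, level, method="higher")

# Placeholder calibration data: per-timestep Euclidean prediction
# errors (meters) of a trajectory forecaster on held-out scenes.
rng = np.random.default_rng(0)
calib_errors = np.abs(rng.normal(0.0, 0.5, size=500))

q = conformal_quantile(calib_errors, alpha=0.1)
# Inflating each predicted position by radius q gives a prediction set
# that contains the true position with >= 90% probability.
print(f"calibrated radius: {q:.3f} m")
```

In the paper's pipeline, such calibrated intervals would be combined with quantile regression and reachability analysis so that the resulting prediction sets are also dynamically feasible, not just statistically valid.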