Authors: Ruicheng Yan, Zhiguo Cao, Jianhu Wang, Sheng Zhong, Dominik Klein
Keywords:
Abstract: In lunar landing missions, it is very important to closely estimate the horizontal velocity of a descending spacecraft in order to achieve a successful and safe landing. The purpose of this paper is to present a novel, vision-aided approach for accurate, efficient, and robust estimation of such velocity. Our algorithm processes images from a downward-looking camera, together with attitude and altitude information from other sensors, to estimate horizontal velocities. During descent, the images vary greatly in scale, orientation, and viewpoint. To begin, the scale-invariant feature transform (SIFT), which copes with such shifts, is applied so that the extracted keypoints can be used to establish correspondences between consecutive descent images. Then, the matched SIFT keypoints are projected onto the level ground plane according to the measured camera state and the collinear equation of central projection imaging. A 1-point random sample consensus (RANSAC) scheme is adopted to remove mismatched keypoints. From each correctly matched keypoint pair, the algorithm derives a hypothesis of the displacement relative to the ground, since the two keypoints represent measurements of the same position on the surface. From this bundle of hypotheses, our method robustly recovers the mode of their distribution; the final estimate is obtained by using the mean shift method to search for an appropriate answer among the hypotheses. In combination with the time interval between shots, the horizontal velocity is estimated. Additionally, a digital signal processor and field-programmable gate array architecture is also presented to implement the algorithm in real time. We evaluate the performance on numerous simulated descent image sequences and flight experiments, comparing the results against a motion capture system and an extended Kalman filter based monocular simultaneous localization and mapping method.
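The following Python sketch illustrates only the hypothesis-filtering stage described in the abstract (one displacement hypothesis per matched keypoint pair, 1-point RANSAC, then mean shift mode seeking). It assumes the keypoints have already been projected onto the level ground plane; the function name, thresholds, bandwidth, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_horizontal_velocity(p_prev, p_curr, dt,
                                 ransac_thresh=0.5, bandwidth=0.2,
                                 n_iters=100, seed=0):
    """Estimate horizontal velocity from ground-plane keypoint pairs.

    p_prev, p_curr : (N, 2) matched keypoint positions (metres) projected onto
                     the level ground plane at times t and t + dt.
    Each pair yields one displacement hypothesis; 1-point RANSAC rejects
    mismatches, and a mean shift iteration recovers the mode of the surviving
    hypotheses. Threshold and bandwidth values are illustrative assumptions.
    """
    hyps = p_curr - p_prev                     # one displacement hypothesis per pair
    rng = np.random.default_rng(seed)

    # 1-point RANSAC: each random sample is a single hypothesis.
    best_inliers = np.zeros(len(hyps), dtype=bool)
    for _ in range(n_iters):
        h = hyps[rng.integers(len(hyps))]
        inliers = np.linalg.norm(hyps - h, axis=1) < ransac_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    inlier_hyps = hyps[best_inliers]

    # Mean shift with a Gaussian kernel: iterate toward the mode of the inliers.
    mode = inlier_hyps.mean(axis=0)
    for _ in range(50):
        w = np.exp(-np.sum((inlier_hyps - mode) ** 2, axis=1) / (2 * bandwidth ** 2))
        new_mode = (w[:, None] * inlier_hyps).sum(axis=0) / w.sum()
        if np.linalg.norm(new_mode - mode) < 1e-6:
            break
        mode = new_mode

    return mode / dt                           # horizontal velocity (m/s)


if __name__ == "__main__":
    # Toy usage: 100 keypoints drifting by (1.0, -0.5) m with noise and 20% mismatches.
    rng = np.random.default_rng(1)
    p0 = rng.uniform(0, 50, size=(100, 2))
    p1 = p0 + np.array([1.0, -0.5]) + rng.normal(0, 0.05, size=(100, 2))
    p1[:20] += rng.uniform(-10, 10, size=(20, 2))        # simulated mismatches
    print(estimate_horizontal_velocity(p0, p1, dt=0.5))  # approx. [2.0, -1.0] m/s
```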