Authors: Semih Günel, Helge Rhodin, Daniel Morales, João Campagnolo, Pavan Ramdya
DOI: 10.7554/eLife.48571
Keywords: Software, Deep learning, Level of detail, Pose, Computer science, Computer vision, 3D pose estimation, Appendage, Active learning (machine learning), Artificial intelligence, Tracking (particle physics)
Abstract: Studying how neural circuits orchestrate limbed behaviors requires the precise measurement of the positions of each appendage in three-dimensional (3D) space. Deep networks can estimate two-dimensional (2D) pose in freely behaving and tethered animals. However, the unique challenges associated with transforming these 2D measurements into reliable 3D poses have not been addressed for small animals, including the fly, Drosophila melanogaster. Here, we present DeepFly3D, a software that infers the 3D pose of tethered, adult Drosophila using multiple camera images. DeepFly3D does not require manual calibration, uses pictorial structures to automatically detect and correct pose estimation errors, and uses active learning to iteratively improve performance. We demonstrate more accurate unsupervised behavioral embedding using 3D joint angles rather than commonly used 2D pose data. Thus, DeepFly3D enables automated behavioral measurements at an unprecedented level of detail for a variety of biological applications.
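As an illustration of the 2D-to-3D lifting step the abstract refers to, the sketch below triangulates a single 2D keypoint seen by several calibrated cameras into one 3D point using the standard direct linear transform (DLT). This is a minimal, generic example, not the DeepFly3D implementation; the function name, camera matrices, and pixel coordinates are hypothetical placeholders.

```python
# Illustrative sketch (not the DeepFly3D code): DLT triangulation of one
# 2D keypoint from multiple calibrated camera views into a 3D point.
import numpy as np


def triangulate_point(proj_mats, points_2d):
    """Estimate a 3D point from its 2D observations in multiple views.

    proj_mats: list of 3x4 camera projection matrices P = K [R | t]
    points_2d: list of (u, v) pixel coordinates, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view gives two linear constraints on the homogeneous point X:
        #   u * (P[2] @ X) - (P[0] @ X) = 0
        #   v * (P[2] @ X) - (P[1] @ X) = 0
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize


# Hypothetical usage: two toy cameras observing the same joint.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # shifted 1 unit along x
X_true = np.array([0.2, -0.1, 4.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point([P1, P2], [uv1, uv2]))  # ~ [0.2, -0.1, 4.0]
```

In the paper's pipeline, per-view 2D keypoints come from a deep network and multi-view consistency (e.g., pictorial structures) is used to detect and correct outlier detections before such a lifting step; the snippet above only shows the geometric core of triangulation.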