From Virtual to Real World: Applying Animation to Design the Activity Recognition System

Authors: Yuta Sugiura, Chengshuo Xia

DOI: 10.1145/3411763.3451677

Keywords:

Abstract: Following the conventional pipeline, the training dataset of a human activity recognition system relies on detecting regions of significant signal variation. Such position-specific classifiers give users little flexibility to alter sensor positions. In this paper, we propose employing simulated sensor data generated from motion animation as the training dataset. By visualizing items of the real world, the user can determine the sensor's placement arbitrarily and obtain accuracy feedback as well as a classifier from the interface, relieving the cost of building the model. Case validations show that a classifier trained on the simulated data can effectively recognize real-world activities.
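
The abstract outlines a pipeline in which sensor signals are simulated from motion animation and then used to train an activity classifier. The following is a minimal sketch of that idea, not the authors' implementation: joint positions (generated synthetically here as stand-ins for animation exports) are differentiated into accelerometer-like signals, windowed into statistical features, and fed to a standard classifier. The sampling rate, window length, feature set, and toy "walk"/"wave" trajectories are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): simulate accelerometer
# signals from animated joint positions and train an activity classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 50        # assumed animation sampling rate (Hz)
WINDOW = 100   # samples per classification window (2 s at 50 Hz)

def simulated_acceleration(positions, fs=FS):
    """Second-order finite difference of a (T, 3) position track -> (T-2, 3) acceleration."""
    dt = 1.0 / fs
    return (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt ** 2

def windows_to_features(acc, window=WINDOW):
    """Split the signal into fixed windows and compute simple per-axis statistics."""
    feats = []
    for start in range(0, len(acc) - window + 1, window):
        seg = acc[start:start + window]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0),
                                     seg.min(axis=0), seg.max(axis=0)]))
    return np.array(feats)

def synthetic_track(kind, seconds=20, fs=FS):
    """Toy stand-in for a joint position trajectory exported from a motion animation."""
    t = np.arange(0, seconds, 1.0 / fs)
    if kind == "walk":   # slow forward motion with a low-frequency vertical bounce
        return np.stack([0.5 * t,
                         0.02 * np.sin(2 * np.pi * 2 * t),
                         np.zeros_like(t)], axis=1)
    # "wave": faster lateral oscillation, e.g. of a hand joint
    return np.stack([np.zeros_like(t),
                     0.1 * np.sin(2 * np.pi * 4 * t),
                     0.05 * np.cos(2 * np.pi * 4 * t)], axis=1)

# Build a training set entirely from simulated signals.
X, y = [], []
for label, kind in enumerate(["walk", "wave"]):
    acc = simulated_acceleration(synthetic_track(kind))
    feats = windows_to_features(acc)
    X.append(feats)
    y.extend([label] * len(feats))
X, y = np.vstack(X), np.array(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In the paper's setting, the position track would come from the animated character at the user-chosen sensor placement rather than from these synthetic trajectories, and evaluation would be against real-world recordings rather than the training data itself.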
