The multi-relationship evaluation design framework: creating evaluation blueprints to assess advanced and intelligent technologies

Authors: Brian A. Weiss, Linda C. Schmidt

DOI: 10.1145/2377576.2377603

Keywords: Law enforcement, Manufacturing, Formative assessment, Intelligent decision support system, Emerging technologies, Artificial intelligence, Data collection, Blueprint, Key (cryptography), Systems engineering, Computer science

Abstract: Technological evolutions are constantly occurring across advanced and intelligent systems in a range of fields, including the military, law enforcement, automobile, and manufacturing industries. Testing the performance of these technologies is critical to (1) update system designers on areas for improvement, (2) solicit end-user feedback during formative tests so that modifications can be made in future revisions, and (3) validate the extent of a technology's capabilities so that sponsors, purchasers, and end-users know exactly what they are receiving. Evaluation events can be minimally designed to include a few basic key technology elements, or can evolve into extensive tests that emphasize multiple components along with the complete system itself. Tests of the latter type typically occur frequently owing to system complexity. Numerous evaluation design frameworks have been produced to create designs that appropriately assess such systems. While most allow broad plans to be created, each framework has focused on addressing specific project and/or technological needs and therefore has bounded applicability. This paper presents and expands the current development of the Multi-Relationship Evaluation Design (MRED) framework. Development of MRED is motivated by the desire to automatically create an evaluation design tool capable of producing detailed evaluation blueprints while receiving uncertain input information. The authors build upon their previous work developing MRED through an initial discussion of its elements. Additionally, they elaborate on the previously-defined relationships among evaluation personnel and define structural relationships pertaining to scenarios, environments, and data collection methods. These terms are demonstrated with an example emerging technology.
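To make the abstract's notion of an evaluation blueprint concrete, the following is a minimal, hypothetical sketch of the kind of structured output such a framework might produce: goals, personnel roles, scenarios with environments, and data collection methods, derived from sparse input (only a technology name and its maturity). All names and the goal/environment heuristics here are illustrative assumptions, not MRED's actual specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    """One test scenario and the environment it runs in (e.g. lab vs. field)."""
    name: str
    environment: str

@dataclass
class EvaluationBlueprint:
    """A coarse evaluation plan: what is tested, by whom, where, and how data is captured."""
    technology: str
    goals: List[str]
    personnel_roles: List[str]
    scenarios: List[Scenario] = field(default_factory=list)
    data_collection: List[str] = field(default_factory=list)

def draft_blueprint(technology: str, maturity: str) -> EvaluationBlueprint:
    """Draft a blueprint from uncertain input: only a technology name and a
    rough maturity level. Early-stage technologies get formative lab tests;
    mature ones get validating field tests (an illustrative heuristic)."""
    if maturity == "early":
        goals = ["formative end-user feedback", "identify areas for improvement"]
        environment = "laboratory"
    else:
        goals = ["validate capabilities for sponsors and purchasers"]
        environment = "field"
    return EvaluationBlueprint(
        technology=technology,
        goals=goals,
        personnel_roles=["evaluation designer", "end-user", "sponsor"],
        scenarios=[Scenario(name="baseline task", environment=environment)],
        data_collection=["quantitative performance metrics", "end-user questionnaires"],
    )

blueprint = draft_blueprint("speech translation system", "early")
print(blueprint.scenarios[0].environment)  # laboratory
```

The point of the sketch is the shape of the output, not the heuristics: a framework that accepts uncertain input must still emit a complete, internally consistent plan covering personnel, scenarios, environment, and data collection.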
