Evaluating the Use of Sound in Static Program Comprehension

Authors: Lewis Berman, Keith Gallagher, Suzanne Kozaitis

DOI: 10.1145/3129456

Keywords: Auditory display, Sound design, Comprehension, Program comprehension, Human–computer interaction, Software, Context (language use), Empirical research, Sonification, Computer science

Abstract: Comprehension of computer programs is daunting, due in part to the clutter in the software developer's visual environment and the need for frequent context changes. Previous research has shown that nonspeech sound can be useful in understanding the runtime behavior of a program. We explore the viability and advantages of using sound within an ecological framework to help understand the static structure of software. We describe a novel concept for an auditory display of program elements in which sounds indicate characteristics of, and relationships among, a Java program's classes, interfaces, and methods. An empirical study employing this display was used to evaluate 24 sighted professionals and students performing maintenance-oriented tasks in a 2×2 crossover design. Viability was strong for differentiation and characterization of program entities, less so for identification. The results suggest that sonification is advantageous under certain conditions, though they do not show an overall advantage in terms of task duration at the 5% level of significance. We also uncover other findings, such as differences in comprehension strategy based on the available tool environment. Participants reported enthusiasm for the idea of sonification, mitigated by lack of familiarity with, and the brittleness of, the tool. Limitations of the present work include the restriction to particular types of tasks, a single sound mapping, a single programming language, and limited training time, but the use of sound shows sufficient promise for continued research.
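The abstract does not specify the concrete sound mapping used in the study, but the general idea of an auditory display for static structure can be illustrated with a small sketch. The Java program below is hypothetical (the element kinds, size metric, instrument choices, and pitch rule are assumptions for illustration, not the authors' mapping): it renders each program element as a short MIDI tone whose instrument encodes whether the element is a class, interface, or method, and whose pitch rises with a rough size metric.

import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

// Hypothetical illustration of sonifying static program structure:
// each element (class, interface, or method) becomes a short tone whose
// instrument encodes the element kind and whose pitch rises with size.
public class ElementSonifier {
    enum Kind { CLASS, INTERFACE, METHOD }

    public static void main(String[] args) throws Exception {
        try (Synthesizer synth = MidiSystem.getSynthesizer()) {
            synth.open();
            MidiChannel channel = synth.getChannels()[0];

            // Example "program map": element kind plus a rough size metric.
            play(channel, Kind.CLASS, 12);      // a class with 12 members
            play(channel, Kind.INTERFACE, 4);   // an interface with 4 methods
            play(channel, Kind.METHOD, 30);     // a method 30 lines long
        }
    }

    static void play(MidiChannel channel, Kind kind, int size)
            throws InterruptedException {
        // Instrument encodes the element kind (General MIDI program numbers).
        int instrument = switch (kind) {
            case CLASS -> 0;       // acoustic grand piano
            case INTERFACE -> 40;  // violin
            case METHOD -> 73;     // flute
        };
        channel.programChange(instrument);

        // Pitch rises with size, clamped to a comfortable register.
        int note = Math.min(60 + size, 96);
        channel.noteOn(note, 80);
        Thread.sleep(400);         // fixed duration per element
        channel.noteOff(note);
    }
}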
