Authors: Jonggi Hong, Christine Vaing, Hernisa Kacorri, Leah Findlater
DOI: 10.1145/3382039
Keywords:
Abstract: Speech input is a primary method of interaction for blind mobile device users, yet the process of dictating and reviewing recognized text through audio only (i.e., without access to visual feedback) has received little attention. A recent study found that sighted users could identify only about half of automatic speech recognition (ASR) errors when listening to text-to-speech output of the ASR results. Blind screen reader users, in contrast, may be better able to identify such errors due to their greater use of speech output and increased ability to comprehend synthesized speech. To compare blind and sighted users' experiences with speech input errors, as well as with audio-only interaction more broadly, we conducted a lab study with 12 blind participants. The study included a semi-structured interview portion to qualitatively understand participants' experiences with ASR, followed by a controlled task to quantitatively assess participants' ability to review dictated text. Findings revealed differences between blind and sighted participants, for example in their level of concern about errors (blind participants were more highly concerned). In the controlled task, blind participants identified 40% of ASR errors, which, counter to our hypothesis, was not significantly different from sighted users' performance. In-depth analysis of the speech input process revealed strategies for identifying errors and how closely participants scrutinized text as they entered and reviewed it. Our findings indicate the need for future work on supporting blind users in confidently using speech input to generate accurate, error-free text.