Authors: Paul Vickers, Robert Höldrich
DOI:
Keywords:
Abstract: Sonification and audification create auditory displays of datasets. Audification translates data points into digital audio samples, and the display's duration is determined by the playback rate. Like audification, auditory graphs maintain the temporal relationships of the data while using parameter mappings (typically data-to-frequency) to represent ordinate values. Such direct approaches have the advantage of presenting the data stream 'as is' without the imposed interpretations or accentuation of particular features found in indirect approaches. However, datasets can often be subdivided into short, non-overlapping, variable-length segments that each encapsulate a discrete unit of domain-specific significant information, and current direct approaches cannot represent these. We present Direct Segmented Sonification (DSSon) for highlighting the segments' data distributions as individual sonic events. Using domain knowledge to segment the data, DSSon presents each segment as its own sonic gestalt while retaining the overall temporal regime of the dataset. The method's structural decoupling of the display's pace from the sound stream's formation means the overall speed is independent of individual event durations, thereby offering highly flexible time compression/stretching that allows zooming into or out of the data. Demonstrated by three models applied to biomechanical data, DSSon displays possess high directness, letting the data 'speak' for themselves.
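The abstract does not give the paper's three sonification models, but the core mechanism it describes can be sketched: segments chosen from domain knowledge are each rendered as a short pitch-mapped sonic event, and a time factor scales only the spacing of event onsets, not the events themselves. The following minimal Python sketch is illustrative under those assumptions; `dsson_sketch`, its parameter names, the sine-oscillator event, and the frequency range are all hypothetical choices, not taken from the paper.

```python
import numpy as np

def dsson_sketch(data, segments, sr=44100, display_dur=10.0,
                 time_factor=1.0, f_lo=220.0, f_hi=1760.0,
                 event_dur=0.25):
    """Illustrative DSSon-style display (not the paper's actual models).

    data        1-D NumPy array of ordinate values
    segments    (start, end) index pairs chosen from domain knowledge
    time_factor scales the spacing of event onsets (zooming) while
                leaving the duration of each sonic event unchanged
    """
    d_min, d_max = float(data.min()), float(data.max())
    n_ev = int(sr * event_dur)                        # samples per sonic event
    out = np.zeros(int(sr * display_dur * time_factor) + n_ev)
    for start, end in segments:
        seg = data[start:end]
        # Parameter mapping: normalised segment values -> frequency trajectory
        norm = (seg - d_min) / (d_max - d_min + 1e-12)
        freqs = f_lo + norm * (f_hi - f_lo)
        traj = np.interp(np.linspace(0.0, len(freqs) - 1.0, n_ev),
                         np.arange(len(freqs)), freqs)
        phase = 2.0 * np.pi * np.cumsum(traj) / sr
        event = np.sin(phase) * np.hanning(n_ev)      # enveloped sonic gestalt
        # Onset preserves the segment's temporal position in the dataset,
        # scaled by time_factor; the event's own duration is untouched.
        onset = int((start / len(data)) * sr * display_dur * time_factor)
        out[onset:onset + n_ev] += event
    return out / max(1.0, float(np.abs(out).max()))   # normalise to [-1, 1]
```

In this sketch, lowering `time_factor` compresses the gaps between events while every event keeps its full duration, which mirrors the structural decoupling of display speed from event duration that the abstract claims enables flexible zooming.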