Authors: Shane Griffith, Vladimir Sukhoy, Todd Wegter, Alexander Stoytchev
DOI:
Keywords:
Abstract: This paper explores whether auditory and proprioceptive information can be used to bootstrap learning about how objects interact with water. Our results demonstrate that a robot can categorize objects into "containers" and "noncontainers" based on how the objects sound and feel when water is flowing onto them. Using a behavior-grounded approach, the robot performed five different exploratory behaviors on the objects and captured auditory and proprioceptive data as the behaviors changed the spatial configuration between the objects and the water stream. Using this data, the robot first learned perceptual outcome classes for each behavior-modality combination. Functionally meaningful object categories were then formed based on the frequency with which different outcome classes occurred with each object.
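The final step of the abstract, forming object categories from the frequency with which outcome classes occur, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the object names, the outcome-class labels ("fill", "splash", "deflect"), and the trial data are all hypothetical, and a tiny 2-means clustering stands in for whatever categorization method the paper actually uses.

```python
from collections import Counter
import math

# Hypothetical per-trial outcome classes observed for each object
# (in the paper these classes are learned, not hand-labeled).
observations = {
    "cup":   ["splash", "splash", "fill", "fill", "fill", "fill"],
    "bowl":  ["fill", "fill", "fill", "splash", "fill", "fill"],
    "block": ["deflect", "deflect", "splash", "deflect", "deflect", "deflect"],
    "plate": ["deflect", "splash", "deflect", "deflect", "deflect", "splash"],
}

CLASSES = ["fill", "splash", "deflect"]

def freq_vector(trials):
    """Normalized frequency of each outcome class over an object's trials."""
    counts = Counter(trials)
    n = len(trials)
    return [counts[c] / n for c in CLASSES]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans2(vecs, iters=20):
    """Tiny 2-means clustering over the objects' frequency vectors."""
    names = list(vecs)
    c0, c1 = vecs[names[0]], vecs[names[-1]]  # seed centroids with two objects
    groups = {0: names, 1: []}
    for _ in range(iters):
        groups = {0: [], 1: []}
        for name in names:
            v = vecs[name]
            groups[0 if dist(v, c0) <= dist(v, c1) else 1].append(name)
        def mean(members, fallback):
            if not members:
                return fallback
            return [sum(vecs[m][i] for m in members) / len(members)
                    for i in range(len(CLASSES))]
        c0, c1 = mean(groups[0], c0), mean(groups[1], c1)
    return groups

vecs = {name: freq_vector(trials) for name, trials in observations.items()}
clusters = kmeans2(vecs)
print(clusters)  # objects grouped by similar outcome-class frequencies
```

With this toy data, objects whose trials are dominated by "fill" outcomes end up in one cluster and those dominated by "deflect" in the other, mirroring the container/noncontainer split described in the abstract.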