Unsupervised Learning of Visuomotor Associations
MPI Series in Biological Cybernetics, Vol. 11
188 pages, year of publication: 2005
Price: 40.50 EUR
Keywords: shape and space perception, visual system, sensorimotor control, machine learning, robotics
Several scientists have suggested that perception in biological agents is not a purely bottom-up process, but partly results from the interaction between motor commands and their sensory consequences. So far, this hypothesis has been only a theoretical possibility backed by some behavioral experiments. The present work supports the concept by demonstrating that it can be realized in the control of robotic agents.
The building blocks for shape and space perception are visuomotor associations. This thesis introduces methods to learn these associations in an unsupervised way. First, an agent collects motor and sensory data by random exploration. These data are distributed in a high-dimensional space whose dimensions comprise all sensory and all motor variables. Second, in the training process, the distribution of these data is described by a mixture of local principal component analyzers (local PCA) or, alternatively, by a non-linear extension of PCA, here kernel PCA. Finally, based on such a description, a recall mechanism completes a partially given pattern: for example, given visual information about an object, the visuomotor completion contains a robot-arm posture suitable for grasping the object. This completion is analogous to a recall in a recurrent neural network. The new method shares two advantages with recurrent networks: input and output dimensions can be chosen after training, and the association does not fail if the training data contain many alternative output patterns for a given input pattern.
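To make the recall step concrete, the following is a minimal sketch of pattern completion by iterated projection onto a principal subspace, with the known (sensory) dimensions clamped. It uses a single global PCA on toy data rather than the thesis's mixture of local analyzers or kernel PCA; the variables and the linear toy relation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy visuomotor data: one motor variable m and one sensory variable s,
# linked by a noisy linear relation, gathered by "random exploration".
m = rng.uniform(-1, 1, size=(500, 1))
s = 0.8 * m + 0.05 * rng.standard_normal((500, 1))
data = np.hstack([s, m])             # joint sensory-motor patterns

# Fit a single PCA (one analyzer, standing in for the mixture).
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:1]                  # keep the first principal axis

def complete(pattern, known):
    """Fill in missing dimensions by iterated projection onto the
    principal subspace, clamping the known dimensions each step."""
    x = mean.copy()
    x[known] = pattern[known]
    for _ in range(50):
        proj = mean + (x - mean) @ components.T @ components
        proj[known] = pattern[known]  # re-clamp the observed values
        x = proj
    return x

# Given only the sensory value s = 0.4, recall a matching motor value.
query = np.array([0.4, np.nan])      # motor dimension unknown
result = complete(query, known=[0])
print(result)                        # motor component near 0.4 / 0.8
```

Because the recall treats all dimensions symmetrically, the same trained model can be queried in either direction: which dimensions serve as input and which as output is decided only at recall time, as stated above.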
In addition, this thesis demonstrates that some perceptual tasks can be solved by using a series of forward models. A forward model predicts the sensory input for the next time step given the current sensory input and motor command. Thus, the sensory result of a sequence of motor commands can be computed without executing any motor command. Movements can be simulated "mentally" and used for perception: for example, a mobile robot can perceptually judge that it is standing in the middle of a circle if the visual input stays constant during a simulated turn.
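The circle example can be sketched in a few lines. Here the forward model is hand-coded from the geometry of a circular wall rather than learned, and the "visual input" is reduced to the distance to the wall along the current heading; these simplifications are assumptions for illustration, not the thesis's learned model.

```python
import math

R = 2.0  # radius of the circular wall (assumed toy environment)

def wall_distance(x, y, heading):
    """Distance from (x, y) along `heading` to the circle of radius R:
    the positive root t of |(x, y) + t * (cos h, sin h)| = R."""
    dx, dy = math.cos(heading), math.sin(heading)
    b = x * dx + y * dy
    c = x * x + y * y - R * R
    return -b + math.sqrt(b * b - c)

def simulate_turn(x, y, steps=8):
    """Mentally simulate a full turn: predict the sensor reading after
    each rotation step without executing any motor command."""
    return [wall_distance(x, y, 2 * math.pi * k / steps)
            for k in range(steps)]

center = simulate_turn(0.0, 0.0)  # predicted readings all equal R
off = simulate_turn(1.0, 0.0)     # predicted readings vary with heading
print(max(center) - min(center))  # constant input: "I am at the center"
print(max(off) - min(off))        # varying input: not at the center
```

The perceptual judgment is made entirely on predicted sensory input: if the simulated turn leaves the reading constant, the robot concludes it stands at the center, without ever moving.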