Interacting With Space: Body, Objects, and Motor Control

To interact flexibly with the space around us, suitable representations of the body in space and of goals relative to the body are required. We present the Modular Modality Frame model (MMF), which represents the body in its surrounding space. MMF modularizes the body into parts, and each body part is represented redundantly in multiple frames of reference. We show that MMF can effectively integrate multiple sources of sensory information and can even detect temporarily inaccurate sensory sources, appropriately decreasing the estimated information content of those sensors. Goal-directed motor control commands can also be issued. Currently, we are extending the model to encode the modularized spatial representations with neural population codes, further increasing motor control flexibility. Moreover, we are drawing on current results on object manipulation to guide motor-oriented attention and to generate object-oriented spatial interactions. In future research, we intend to integrate these results into one encompassing cognitive model.
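The two core ideas mentioned in the abstract, integrating redundant sensory estimates and down-weighting temporarily inaccurate sensors, can be illustrated with a minimal sketch. This is not the actual MMF implementation; it assumes a simple one-dimensional setting in which each sensor reports a mean and a variance, fuses them by precision weighting, and inflates the variance of any sensor whose report deviates strongly from the consensus (all function names are hypothetical):

```python
import numpy as np

def fuse_estimates(means, variances):
    """Precision-weighted fusion of redundant estimates.

    Each sensor reports a mean and a variance; the fused estimate
    weights each sensor by its inverse variance (its precision).
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_mean = np.sum(precisions * means) / np.sum(precisions)
    fused_var = 1.0 / np.sum(precisions)
    return fused_mean, fused_var

def downweight_inconsistent(means, variances, z_thresh=3.0):
    """Inflate the variance of sensors that disagree with the consensus.

    The median of all reports serves as a robust consensus; any sensor
    whose z-score against that consensus exceeds z_thresh has its
    variance inflated, which reduces its weight in the fusion step.
    This mimics, in a crude way, lowering the estimated information
    content of a temporarily inaccurate sensory source.
    """
    consensus = float(np.median(means))
    adjusted = []
    for m, v in zip(means, variances):
        z = abs(m - consensus) / np.sqrt(v)
        adjusted.append(v * z ** 2 if z > z_thresh else v)
    return adjusted

# Example: three redundant sensors, the third temporarily inaccurate.
means = [1.0, 1.1, 5.0]
variances = [0.01, 0.01, 0.01]
adjusted = downweight_inconsistent(means, variances)
fused_mean, fused_var = fuse_estimates(means, adjusted)
```

After down-weighting, the fused estimate stays close to the two consistent sensors instead of being dragged toward the outlier, which is the qualitative behavior the abstract attributes to MMF's handling of inaccurate sensory sources.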

Information

Speaker

Prof. Dr. Martin V. Butz
Computer Science, Cognitive Modeling
Eberhard Karls Universität Tübingen

Date

Monday, 21 January 2013, 4 p.m. c.t.

Location

Universität Ulm, N27, Room 2.033 (video link to Otto-von-Guericke-Universität Magdeburg, G26.1-010)

Video recording

A video recording of the talk is available.