Reinforcement learning models of visually guided behavior

Human behavior in extended visuomotor tasks is not well understood. This talk considers the visuomotor task of navigating along a walkway while avoiding obstacles and approaching targets. Behavioral data from humans executing this task are presented together with a model of sidewalk navigation based on the reinforcement-learning framework. The connection between the model and the empirical data is made with a new inverse reinforcement learning algorithm that estimates the parameters of the reward model so as to best match the observed human behavior. This work thus proposes to understand human visuomotor behavior in terms of learned solutions to specific tasks. If human vision is understood as an active process that has to learn how to select relevant information over time, then algorithms for solving complex visuomotor control tasks have to be developed. To this end, the talk presents a credit assignment algorithm for modular reinforcement learning that allows a virtual agent to solve the same walking tasks. The main feature of this algorithm is that it allows multiple component tasks to learn their respective contributions to a single global reward signal. A further implication of the active vision framework is that the internal representations used in visuomotor computations depend on the agent's task. The talk presents results from learning such representations with classic algorithms and demonstrates that different task execution policies lead to different representations. Specific visuomotor policies are shown to yield a distribution of preferred orientations of the model's learned receptive fields similar to that found in primates.
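To make the modular credit-assignment idea concrete, the following is a minimal Python sketch of modular reinforcement learning with a shared global reward: each component task ("module") keeps its own Q-function and reward estimate, and the single scalar reward is apportioned among modules. The module count, state sizes, learning rates, and the proportional apportioning rule are illustrative assumptions, not the specific algorithm presented in the talk.

```python
import numpy as np

# Sketch: several modules share one scalar global reward. Each module keeps its
# own tabular Q-function over its own state variable; the global reward is split
# among modules in proportion to the reward each module currently predicts for
# the chosen action. All names and the apportioning rule are assumptions.

rng = np.random.default_rng(0)

n_modules, n_states, n_actions = 2, 5, 3
alpha, gamma, epsilon = 0.1, 0.95, 0.1

Q = np.zeros((n_modules, n_states, n_actions))      # per-module action values
R_hat = np.ones((n_modules, n_states, n_actions))   # per-module reward estimates

def select_action(states):
    """Epsilon-greedy over the summed module values (a simple arbitration rule)."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    combined = sum(Q[m, states[m]] for m in range(n_modules))
    return int(np.argmax(combined))

def update(states, action, global_reward, next_states):
    """Split the single global reward among modules, then do per-module Q-learning."""
    pred = np.array([R_hat[m, states[m], action] for m in range(n_modules)])
    shares = pred / pred.sum() if pred.sum() > 0 else np.full(n_modules, 1.0 / n_modules)
    for m in range(n_modules):
        r_m = shares[m] * global_reward
        # Update the module's reward estimate and its Q-function.
        R_hat[m, states[m], action] += alpha * (r_m - R_hat[m, states[m], action])
        td_target = r_m + gamma * Q[m, next_states[m]].max()
        Q[m, states[m], action] += alpha * (td_target - Q[m, states[m], action])

# Toy demonstration on a random environment (purely illustrative).
states = [int(rng.integers(n_states)) for _ in range(n_modules)]
for _ in range(1000):
    a = select_action(states)
    next_states = [int(rng.integers(n_states)) for _ in range(n_modules)]
    global_reward = float(rng.random())   # stand-in for the task's single reward signal
    update(states, a, global_reward, next_states)
    states = next_states
```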

Information

Speaker

Constantin A. Rothkopf, PhD
Frankfurt Institute for Advanced Studies
Goethe-University, Frankfurt

Date

Wednesday, 15 July 2009, 2 p.m.

Location

Universität Ulm, Oberer Eselsberg, N27, Room 2.033
Universität Magdeburg, Room G26.1-010 (video broadcast)