Semantic-based regularization

While the never-ending discussions at the boundary between neuroscience and artificial intelligence on discovery vs. invention increasingly provide strong motivations for biologically inspired models of learning, the remarkable difficulty of any reasonable attempt to reverse-engineer cognition from biology has left the door open to purely artificial models that neglect even apparently insightful information from nature. On the other hand, artificial models of learning processes typically neglect that the most intriguing human learning skills are due, to a large extent, to the acquisition of relevant semantic attributes of the task. Starting from Tikhonov regularization theory, in this talk I discuss learning processes that are related to semantic attributes of the learning task, represented by corresponding real-valued functions. In so doing, the development of those functions is not merely the result of a smoothness assumption, but turns out to reflect the semantics that the agent acquires about the attributes. Hence, unlike classic regularization, the adopted scheme of learning from examples and constraints yields a semantic-based regularization, where the constraints represent a general linguistic specification of prior knowledge, ranging from the requirement to belong to convex domains to high-level logic specifications given in First-Order Logic. Unlike the examples of the learning environment, the rich structure of the constraints is transferred to novel kernel-based learning models, in which high-level cognitive features emerge, such as learning under the active role of both the learner and the teacher, and the induction/deduction reinforcement principle.
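As a rough sketch of the idea (an illustration under assumed notation, not a formula taken from the talk): in classic Tikhonov regularization the risk balances a data-fit term against a smoothness penalty, while semantic-based regularization adds one penalty per constraint, with each logic formula mapped, for instance via a t-norm, to a real-valued degree of satisfaction \Phi_h(f) \in [0,1]:

  Classic Tikhonov:         E[f] = \sum_k V(y_k, f(x_k)) + \lambda \|f\|_K^2

  Semantic-based (sketch):  E[f] = \sum_k V(y_k, f(x_k)) + \lambda \|f\|_K^2 + \sum_h \lambda_h (1 - \Phi_h(f))

A fully satisfied constraint (\Phi_h(f) = 1) contributes no penalty, so minimization trades off data fit, smoothness, and constraint satisfaction; the loss V, the kernel norm \|f\|_K, and the weights \lambda, \lambda_h are assumed notation.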

Information

Speaker

Prof. Dr. Marco Gori
Università di Siena, Italy

Date

Monday, 13 July 2009, 2 p.m.

Location

Universität Ulm, Oberer Eselsberg, N27, Room 2.033
Universität Magdeburg, Room G26.1-010 (video transmission)