Proseminar Artificial Intelligence

Artificial Intelligence is nowadays a central discipline in computer science. Research at the Institute of Artificial Intelligence focuses on knowledge-based techniques, i.e., on how we can formalize and reason about knowledge within intelligent systems. Knowledge-based techniques complement learning-based techniques, e.g., by providing far-ranging foresight based on map knowledge for autonomous vehicles, or by planning the actions that an intelligent agent needs to perform in order to reach a certain goal. The seminar will introduce students to selected research topics in this area.

As part of this course, meetings with mandatory attendance will take place at irregular intervals on Tuesdays from 10 a.m. to 12 noon.

Topics

With the proliferation of learning content on the web, finding suitable material has become a difficult and complicated task for online learners. Recommender systems can be a solution to this problem; however, they have not been used in e-learning as extensively as in other fields (e.g., commerce or medicine). In this paper, a semantic recommender system for e-learning is proposed, by means of which learners can find and choose the learning materials suited to their field of interest. The proposed web-based recommendation system combines an ontology with Web Ontology Language (OWL) rules and uses rule filtering as its recommendation technique. Its architecture consists of two subsystems: the semantic-based system and the rule-based system. The paper provides a practical example of applying Semantic Web technology in an e-learning environment.
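
As a toy illustration of the rule-filtering idea (not the paper's OWL-based system), the following Python sketch filters learning materials against simple attribute rules; all item attributes, titles, and rule contents are invented for this example:

```python
# Toy rule-filtering recommender; items and rules are invented examples.

def matches(item, rule):
    """An item matches a rule if it agrees on every attribute the rule constrains."""
    return all(item.get(key) == value for key, value in rule.items())

def recommend(items, learner_rules):
    """Recommend items that satisfy at least one of the learner's rules."""
    return [item for item in items if any(matches(item, rule) for rule in learner_rules)]

materials = [
    {"title": "Intro to OWL", "topic": "semantic-web", "level": "beginner"},
    {"title": "Deep RL", "topic": "machine-learning", "level": "advanced"},
]
interest_rules = [{"topic": "semantic-web"}]
```

In the real system, such rules would be expressed in OWL and evaluated by a reasoner rather than by plain dictionary lookups.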

References:

  1. A Proposed Semantic Recommendation System for E-Learning (only available via Uni-Ulm internal network or VPN)

Supervisor: Michael Welt

Solving games with AI has been a longstanding research benchmark in the field. While chess was effectively “solved”, Go has been elusive due to its much larger search space. In this work we want to take a look at how large search spaces can be searched efficiently with learned heuristics.
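
A minimal sketch of how a learned prior can steer tree search, loosely following the PUCT selection rule used in AlphaGo-style systems; the dictionary layout and numeric values are invented for illustration, and real systems obtain `prior` from a neural network:

```python
import math

def puct_score(value, visits, prior, parent_visits, c_puct=1.0):
    """Mean action value plus an exploration bonus scaled by the learned prior."""
    return value + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

def select_child(children, parent_visits):
    """During tree descent, pick the child action maximizing the PUCT score."""
    return max(children, key=lambda c: puct_score(c["value"], c["visits"],
                                                  c["prior"], parent_visits))
```

A child with a high learned prior is explored even before it has been visited, which is how the network focuses the search on promising moves.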

References:

  1. Mastering the game of Go with deep neural networks and tree search

Supervisor: Jakob Karalus

As creating computer-processable knowledge representation systems is a sometimes tedious task that needs the help and background knowledge of domain experts, such as medical doctors, engineers, or pizza bakers, automatic ontology population is an ongoing topic in contemporary research. In order to extract ontological knowledge from unstructured data, i.e., large text corpora, it is necessary to find algorithms that can reliably detect concepts and the relations between them. In 1992, Marti A. Hearst presented an algorithm to automatically detect and extract a certain linguistic relation from large collections of English text. The pattern-based approach presented in this paper was widely recognized and adopted for automatic relation extraction, and became even more prominent with the rise of the World Wide Web and the massive amounts of text data that came with it.
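
One of Hearst's lexico-syntactic patterns ("X such as Y, Z and W") can be approximated with a regular expression; the sketch below is a simplification, since the original approach operates over (partially) parsed text rather than raw strings:

```python
import re

# One Hearst pattern: "<hypernym> such as <hyponym>(, <hyponym>)* (and <hyponym>)?"
PATTERN = re.compile(r"(\w+) such as (\w+(?:, \w+)*(?:,? and \w+)?)")

def extract_hyponyms(sentence):
    """Return (hypernym, hyponym list) pairs matched by the 'such as' pattern."""
    results = []
    for match in PATTERN.finditer(sentence):
        hyponyms = re.split(r",? and |, ", match.group(2))
        results.append((match.group(1), hyponyms))
    return results
```

Each extracted pair is a candidate hyponymy (is-a) relation that could populate an ontology.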

References:

  1. Automatic Acquisition of Hyponyms from Large Text Corpora

Supervisor: Michael Welt

While learning with a reward function gives great results, it is unfortunately not always possible to define a good reward function. In this topic we want to take a look at human-in-the-loop reinforcement learning and investigate one approach to replacing the reward function with human feedback.
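
The underlying preference model in this line of work is the Bradley-Terry model: the probability that a human prefers one trajectory segment over another is derived from the summed predicted rewards of each segment. A sketch, with a linear reward model standing in for the paper's neural network:

```python
import math

def segment_return(weights, segment):
    """Predicted return of a segment (list of feature vectors) under a linear reward model."""
    return sum(sum(w * x for w, x in zip(weights, state)) for state in segment)

def preference_probability(weights, seg1, seg2):
    """Bradley-Terry probability that a human prefers seg1 over seg2."""
    r1 = segment_return(weights, seg1)
    r2 = segment_return(weights, seg2)
    return math.exp(r1) / (math.exp(r1) + math.exp(r2))
```

Training then minimizes the cross-entropy between these predicted probabilities and the humans' recorded choices.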

References:

  1. Deep Reinforcement Learning from Human Preferences

Supervisor: Jakob Karalus

In classical planning, the task is to drive a system from a given initial state into a goal state by applying actions whose effects are deterministic and known. Classical planning can be formulated as a search problem whose nodes represent the states of the system or environment, and whose edges capture the state transitions that the actions make possible. State-of-the-art methods in classical planning search towards the goal using heuristic functions that are automatically derived from the problem.
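
This search formulation can be sketched directly: states are sets of facts, actions are precondition/add/delete triples, and A* is guided by a (deliberately simple) heuristic counting unsatisfied goal facts. The toy domain is invented for illustration:

```python
import heapq
import itertools

def h(state, goal):
    """Goal-count heuristic: number of goal facts not yet satisfied."""
    return len(goal - state)

def plan(initial, goal, actions):
    """A* forward search; actions are (name, preconditions, add, delete) tuples."""
    tie = itertools.count()                    # tie-breaker so the heap never compares states
    frontier = [(h(initial, goal), 0, next(tie), initial, [])]
    seen = set()
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if goal <= state:                      # all goal facts satisfied
            return path
        if state in seen:
            continue
        seen.add(state)
        for name, pre, add, delete in actions:
            if pre <= state:                   # action applicable in this state
                nxt = (state - delete) | add
                heapq.heappush(frontier, (g + 1 + h(nxt, goal), g + 1,
                                          next(tie), nxt, path + [name]))
    return None

# Invented toy domain: pick up an item at location a, then move to b.
actions = [
    ("pick", frozenset({"at-a"}), frozenset({"holding"}), frozenset()),
    ("move", frozenset({"at-a"}), frozenset({"at-b"}), frozenset({"at-a"})),
]
```

Real planners derive far more informed heuristics (e.g., delete relaxations) automatically from the problem description.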

References:

  1. A Concise Introduction to Models and Methods for Automated Planning (only parts of Chapter 2)

Supervisor: Conny Olz

While the question of how an agent has to balance exploration vs. exploitation is an essential one, the question of how an agent can effectively explore an unknown high-dimensional space (with potentially sparse rewards) is also highly complex. In this topic we want to take a look at one particular method that lets agents perform efficient exploration: “First return, then explore”.
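
The core loop of “first return, then explore” (Go-Explore) can be sketched as: keep an archive of trajectories to visited cells, deterministically replay a trajectory to *return* to a promising cell, then *explore* from there. The 1-D toy environment and the greedy cell-selection rule below are invented stand-ins for the paper's components:

```python
import random

def step(state, action):
    """Toy deterministic environment: walk left or right on a line of integers."""
    return state + (1 if action == "right" else -1)

def go_explore(start=0, goal=5, iterations=200, seed=0):
    rng = random.Random(seed)
    archive = {start: []}                       # cell -> best trajectory found so far
    for _ in range(iterations):
        cell = max(archive)                     # greedy stand-in for cell selection
        trajectory = list(archive[cell])
        state = start
        for action in trajectory:               # first, return by deterministic replay
            state = step(state, action)
        for _ in range(3):                      # then, explore from there
            action = rng.choice(["left", "right"])
            trajectory.append(action)
            state = step(state, action)
            if state not in archive or len(trajectory) < len(archive[state]):
                archive[state] = list(trajectory)
            if state == goal:
                return archive[goal]
    return None
```

The archive guarantees that hard-won progress is never forgotten, which is what makes the method effective under sparse rewards.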

References:

  1. First return, then explore

Supervisor: Jakob Karalus

As an example of a structured knowledge representation system, the WordNet database has been established and well cited in the literature for over three decades now. Although first created in the 1980s, it is still being developed and extended by Princeton University today. WordNet is an on-line lexical reference system whose design was inspired by psycholinguistic theories of human lexical memory. English nouns, verbs, and adjectives are organized into synonym sets, each representing one underlying lexical concept. Different relations link the synonym sets.
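
WordNet's core structure, synonym sets (synsets) linked by relations such as hypernymy, can be illustrated with a tiny hand-made excerpt; the entries and identifiers below are invented for this sketch, not taken from the actual database:

```python
# Miniature synset store: each synset has member lemmas and a hypernym link.
synsets = {
    "dog.n.01": {"lemmas": ["dog", "domestic dog"], "hypernym": "canine.n.01"},
    "canine.n.01": {"lemmas": ["canine"], "hypernym": "carnivore.n.01"},
    "carnivore.n.01": {"lemmas": ["carnivore"], "hypernym": None},
}

def hypernym_chain(synset_id):
    """Walk the hypernym relation from a synset up to the root concept."""
    chain = []
    while synset_id is not None:
        chain.append(synset_id)
        synset_id = synsets[synset_id]["hypernym"]
    return chain
```

Because relations hold between synsets rather than between words, every lemma in "dog.n.01" inherits the same hypernym chain.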

References:

  1. Introduction to WordNet: An On-line Lexical Database (only available via Uni-Ulm internal network or VPN)
  2. WordNet: A Lexical Database for English (princeton.edu)

Supervisor: Michael Welt

Games have always been a fertile ground for advancements in computer science, operations research, and artificial intelligence. Solitaire card games, and FreeCell in particular, have been the subject of study in both the academic literature, where they are used as a benchmark for planning heuristics, and in popular literature. Here, an approach that provides provably optimal solutions to solitaire games shall be studied. It uses A* search together with an admissible heuristic function that is based on analyzing a directed graph whose cycles represent deadlock situations in the game state.
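
The deadlock graph can be illustrated in miniature: if an edge u → v means "u must be moved before v can be played", a cycle encodes a deadlock that forces extra moves. Below is a standard DFS cycle check as one ingredient such a heuristic could build on; the paper's actual heuristic is considerably more involved:

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(node):
        color[node] = GRAY                      # node is on the current DFS path
        for succ in graph.get(node, ()):
            c = color.get(succ, WHITE)
            if c == GRAY or (c == WHITE and dfs(succ)):
                return True                     # back edge found: a cycle (deadlock)
        color[node] = BLACK                     # node fully explored
        return False

    return any(color.get(node, WHITE) == WHITE and dfs(node) for node in graph)
```

In the studied approach, analyzing such cycles yields a lower bound on the remaining moves, which keeps the A* heuristic admissible.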

References:

  1. Optimal Solitaire Game Solutions using A∗ Search and Deadlock Analysis

Supervisor: Conny Olz

Dots-And-Boxes is a well-known and widely played combinatorial game. While the rules of play are very simple, the state space of even very small games is extremely large, and finding the outcome under optimal play is correspondingly hard. In this paper we introduce a Dots-And-Boxes solver. Our approach uses Alpha-Beta search and applies a number of techniques that reduce the search space to a manageable size.

References:

  1. Solving Dots-And-Boxes

Supervisor: Conny Olz

The unification of statistical (data-driven) and symbolic (knowledge-driven) methods is widely recognized as one of the key challenges of modern AI. While there are several hybrid neuro-symbolic AI systems around, the field is highly diverse, research is mostly empirical, and a unifying view of the large variety of these hybrid systems is lacking. Following a systematic literature review, Bekkum et al. propose a set of modular design patterns for such hybrid, neuro-symbolic systems. The proposed design patterns are applied in two realistic use cases for hybrid AI systems, and the patterns revealed similarities between systems that had not been recognized before. Prerequisites: The paper gives a high-level overview; no deep technical knowledge is required to follow the contents.

References:

  1. Modular design patterns

Supervisor: Birte Glimm

In relational and deductive database systems, a view describes a (virtual) relation derived from stored base relations, which can be defined in the form of Datalog rules, for example. By materializing such a view, the derived tuples are explicitly stored in the database so that they can be accessed directly. To efficiently update a materialized view after changes to the database, this paper introduces two algorithms: the first is based on counting and tracks the number of alternative derivations (counts) for each tuple derived by a non-recursive view; the second, called Delete and Rederive (DRed), can also incrementally maintain recursive views.
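
The counting idea can be sketched for a single non-recursive view, the Datalog rule `path(x, z) :- edge(x, y), edge(y, z)`: each derived tuple records how many derivations support it and is deleted only when that count drops to zero. This is a simplified illustration, not the paper's general algorithm:

```python
from collections import Counter

def derive(edges):
    """Count the derivations of path(x, z) :- edge(x, y), edge(y, z)."""
    counts = Counter()
    for (x, y1) in edges:
        for (y2, z) in edges:
            if y1 == y2:
                counts[(x, z)] += 1
    return counts

def delete_edge(edges, counts, removed):
    """Counting-based incremental maintenance after deleting one base tuple."""
    a, b = removed
    remaining = [e for e in edges if e != removed]
    for (y, z) in edges:            # lost derivations: removed edge as first hop
        if y == b:
            counts[(a, z)] -= 1
    for (x, y) in remaining:        # lost derivations: removed edge as second hop
        if y == a:
            counts[(x, b)] -= 1
    for tup in [t for t, c in counts.items() if c <= 0]:
        del counts[tup]             # the tuple's last alternative derivation is gone
    return remaining, counts
```

Because (a, c) below has two alternative derivations, deleting one supporting edge decrements its count without removing it.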

References:

  1. Maintaining Views Incrementally (https://dl.acm.org/doi/pdf/10.1145/170036.170066)

Supervisor: Moritz Illich

The Resource Description Framework (RDF) is a standardized data model for representing machine-processable knowledge within the Semantic Web. Stream reasoning deals with reasoning over streams of RDF data. This article presents a technique for incrementally maintaining logical consequences over windows of RDF data streams. The technique exploits time information to determine expired and new logical consequences. The provided experimental evidence shows that the approach significantly reduces the time required to compute valid inferences at each window change.
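
The time-annotation idea can be sketched as follows: each derived fact carries the timestamp of its oldest supporting triple, so it can be dropped exactly when that support leaves the window, without recomputation. The transitive `subClassOf` rule, the window length, and the data are invented for illustration, and the sketch assumes each fact has a single derivation:

```python
WINDOW = 10  # a triple stays in the window for 10 time units (invented value)

def derive(timestamped_triples):
    """Close subClassOf under transitivity; each derived fact is annotated with
    the timestamp of its oldest support, i.e., the support that expires first."""
    facts = {(s, o): t for (s, p, o, t) in timestamped_triples if p == "subClassOf"}
    changed = True
    while changed:
        changed = False
        for (a, b), t1 in list(facts.items()):
            for (c, d), t2 in list(facts.items()):
                if b == c and (a, d) not in facts:
                    facts[(a, d)] = min(t1, t2)   # consequence inherits oldest support
                    changed = True
    return facts

def expire(facts, now):
    """Drop every consequence whose oldest support has left the window."""
    return {fact: t for fact, t in facts.items() if now - t < WINDOW}
```

When the window slides, expired consequences are identified purely from their annotations; only consequences of newly arrived triples need to be derived.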

References:

  1. Incremental Reasoning on Streams and Rich Background Knowledge

Supervisor: Moritz Illich