UIB-1994-01 On the Complexity of Consistency Problems for Neurons with Binary Weights

Autoren: Michael Schmitt

We inquire into the complexity of training a neuron with binary weights when the training examples are Boolean and required to have bounded coincidence and heaviness. The coincidence of an example set is the maximum inner product of two of its elements; its heaviness is the maximum Hamming weight of an element. We use both as parameters to define classes of restricted consistency problems and ask for which values they are NP-complete or solvable in polynomial time. The consistency problem is shown to be NP-complete when the example sets are allowed to have coincidence at least 1 and heaviness at least 4. On the other hand, we give linear-time algorithms for solving consistency problems with coincidence 0 or heaviness at most 3. Moreover, these results remain valid when the threshold of the neuron is bounded by a constant of value at least 2, whereas consistency can be decided in linear time for neurons with threshold at most 1. We also study maximum consistency problems and obtain NP-completeness for example sets of coincidence at least 1 and heaviness at least 2, whereas we again find linear-time algorithms for the complementary cases. The same is shown to be true for neurons with bounded thresholds, the bound being at least 1. On the other hand, maximum consistency can be decided in linear time for a neuron with threshold 0.
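The two parameters have a direct computational reading. As a quick illustration (the code and names are ours, not the report's), coincidence and heaviness of a Boolean example set can be computed as follows:

```python
def coincidence(examples):
    """Maximum inner product over all pairs of distinct examples.

    Each example is a 0/1 tuple; assumes at least two examples.
    """
    return max(
        sum(a & b for a, b in zip(x, y))
        for i, x in enumerate(examples)
        for y in examples[i + 1:]
    )

def heaviness(examples):
    """Maximum Hamming weight (number of 1s) of any example."""
    return max(sum(x) for x in examples)

# This set has coincidence 1 and heaviness 3, i.e. it lies just below
# the heaviness-4 boundary where consistency becomes NP-complete.
S = [(1, 0, 1, 0), (0, 1, 1, 0), (1, 1, 0, 1)]
```

Note that coincidence 0 means the examples have pairwise disjoint supports, which gives an intuition for why that restriction admits a linear-time algorithm.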


UIB-1994-02 A Pumping Lemma for Output Languages of Attributed Tree Transducers

Autoren: Armin Kühnemann, Heiko Vogler

The concept of attributed tree transducer is a formal model for studying properties of attribute grammars. In this paper, for output languages of noncircular, producing, and visiting attributed tree transducers, we introduce and prove a pumping lemma. We apply this pumping lemma to gain two results: (1) there is no noncircular, producing, and visiting attributed tree transducer which computes the set of all monadic trees with exponential height as output and (2) there is a hierarchy of noncircular, producing, and visiting attributed tree transducers with respect to their number of attributes.


UIB-1994-03 On Functions Computable with Nonadaptive Queries to NP

Autoren: Harry Buhrman, Jim Kadin, Thomas Thierauf

We study FP|NP, the class of functions that can be computed with nonadaptive queries to an NP oracle. We show that optimization problems stemming from the known NP-complete sets, where the optimum is taken over a polynomially bounded range, are hard for FP|NP. This is related to (and, in some sense, extends) work of Chen and Toda. In addition, it turns out that these optimization problems are all equivalent under a certain functional reducibility. By studying the question of whether these function classes are complete for FP|NP, i.e., whether it is possible to compute an optimal value for a given optimization problem in FP|NP, we show that this is exactly as hard as computing membership proofs for NP-complete sets in FP|NP. On the other hand, FP|NP can be characterized as the class of functions that reduce to the above-mentioned optimization functions. We call this property quasi-completeness. A subclass of FP|NP is NPSV, the class of functions that can be computed by single-valued NP transducers. We exhibit function classes that are quasi-complete for NPSV but not complete unless the polynomial-time hierarchy collapses.


UIB-1994-04 Implementation of a Deterministic Partial E-Unification Algorithm for Macro Tree Transducers

Autoren: Heinz Faßbender, Heiko Vogler, Andrea Wedel

During the execution of functional logic programs, particular E-unification problems have to be solved quite frequently. In this paper we contribute to the efficient solution of such problems in the case where E is induced by particular term rewriting systems called macro tree transducers. We formalize the implementation of a deterministic partial E-unification algorithm on a deterministic abstract machine, called the twin unification machine. The unification algorithm is based on a particular narrowing strategy which combines leftmost outermost narrowing with a local constructor consistency check and a particular occur check. The twin unification machine uses two runtime stacks; it is an extension of an efficient leftmost outermost reduction machine for macro tree transducers. The feasibility of the presented implementation technique is demonstrated by an implementation developed on a SPARCstation SLC.


UIB-1994-05 On Helping and Interactive Proof Systems

Autoren: V. Arvind, J. Köbler, R. Schuler

We investigate the complexity of honest provers in interactive proof systems. This corresponds precisely to the complexity of oracles helping the computation of robust probabilistic oracle machines. Further, interactive protocols with provers that are reducible to sets of low information content are considered. Specifically, if the verifier communicates only with provers in P/poly, then the accepted language is low for Sigma_2^p. The case that the provers are polynomial-time reducible to sparse sets is also considered.


UIB-1994-06 Incorporating record subtyping into a relational data model

Autoren: Christian Kalus, Peter Dadam

Most of the current proposals for new data models support the construction of heterogeneous sets. One of the major challenges for such data models is to provide strong typing in the presence of heterogeneity, which requires including as much information as possible about the legal structural variants. We argue that the shape of some part of a heterogeneous scheme is often determined by the contents of some other part of the scheme. This relationship can be formalized by a certain type of integrity constraint we call attribute dependency. Attribute dependencies combine the expressive power of general sums with a notation that fits into relational models. We show that attribute dependencies can be used, besides their application in type and integrity checking, to incorporate record subtyping into a relational model. Moreover, the notion of attribute dependency yields a stronger assertion than the traditional record subtyping rule, as it considers some refinements to be caused by others. To examine the differences between attribute dependencies and traditional record subtyping, and to be able to predict how attribute dependencies behave under transformations such as query language operations, we develop an axiom system for their derivation and prove it to be sound and complete. We further investigate the interaction between functional and attribute dependencies and examine an extended axiom system capturing both forms of dependencies.


UIB-1994-07 A Classification of Multi-Database Languages

Autoren: Markus Tresch, Marc H. Scholl

This paper defines a formal classification of multi-database languages into five levels of database integration with increasing degree of global control and decreasing degree of local autonomy. First, the fundamental interoperability concepts and mechanisms are identified for each of these levels. Their consequences on local autonomy as well as implementation draw-backs are discussed. Second, various multi-database languages are classified into these categories. In addition to our own language COOL*, other proposals are analyzed, including SQL*Net, Multibase, Superviews, VODAK, Pegasus, and O*SQL.


UIB-1994-08 Arbeitstreffen Typtheorie: Zusammenfassung der Beiträge

Autoren: Friedrich von Henke, Harald Rueß

This report contains summaries and abstracts of talks presented at the type theory workshop at the University of Ulm in April 1994.


UIB-1994-09 Construction and Deduction Methods for the Formal Development of Software

Autoren: F.W. von Henke, A. Dold, H. Ruess, D. Schwier, M. Strecker

In this paper we present an approach towards a framework based on the type theory ECC (Extended Calculus of Constructions) in which specifications, programs and operators for modular development by stepwise refinement can be formally described and reasoned about. We show that generic software development steps can be expressed as higher-order functions and demonstrate that proofs about their asserted effects can be carried out in the underlying logical calculus. For transformations requiring syntactic manipulations of objects, a two-level system comprising a Meta- and an Object-level is provided, and it is shown how transformations can be formalized that faithfully represent operators on the object level.


UIB-1994-10 Formalisierung schematischer Algorithmen

Autoren: Axel Dold

This work formalizes and verifies a generic software development step by means of higher-order functions in a type-theoretic framework, using the schematic algorithm Global Search as an example. We show that both the development step itself and its correctness can be expressed in one and the same (type-theoretic) formalism (QED). The applicability of this development step is illustrated by an example (binary search). The work presented here is essentially based on the work of Doug Smith, who has investigated numerous algorithm theories and implemented them in his KIDS system, a very powerful environment for transformational software development. Some aspects, however, are captured there only informally; these are formalized more precisely here. In his habilitation thesis, Christoph Kreitz has formalized, besides numerous elementary theories, in particular programming concepts, development steps, and existing approaches to program synthesis, among them the schematic algorithm Global Search considered here. The present work, however, originated in a different context and independently of it. The verification of the development step is carried out with the specification and proof system PVS, which offers adequate proof support by means of a Gentzen-style prover for higher-order logic.


UIB-1994-11 New Collapse Consequences of NP Having Small Circuits

Autoren: Johannes Köbler, Osamu Watanabe

We show that if a self-reducible set has polynomial-size circuits, then it is low for the probabilistic class ZPP(NP). As a consequence we get a deeper collapse of the polynomial-time hierarchy PH to ZPP(NP) under the assumption that NP has polynomial-size circuits. This improves on the well-known result of Karp, Lipton, and Sipser (1980) stating a collapse of PH to its second level under the same assumption. As a further consequence, we derive new collapse consequences under the assumption that complexity classes like UP, FewP, and C=P have polynomial-size circuits. Finally, we investigate the circuit-size complexity of several language classes. In particular, we show that for every fixed polynomial s, there is a set in ZPP(NP) which does not have O(s(n))-size circuits.


UIB-1994-12 On Average Polynomial Time

Autoren: Rainer Schuler

An important question in average-case complexity is whether all, or at least some significant part, of NP can be solved efficiently on average for all reasonable distributions. Let P-p-comp be the class of problems which are solvable in average polynomial time for every polynomial-time computable input distribution. Following the framework of Levin [Lev84,Lev86], the above question can be formulated as: is NP included in P-p-comp? In this paper, it is shown that P-p-comp contains P as a proper subset. Thus, even if P is not equal to NP, it is still possible that NP is a subset of P-p-comp. As a further step it is shown that P-p-comp is different from NP. Therefore, either NP is not solvable efficiently on average, or (since P-p-comp is a subset of E) E is not a subset of NP and hence NP is a proper subset of Exp.


UIB-1994-13 Towards Average-Case Complexity Analysis of NP Optimization Problems

Autoren: Rainer Schuler und Osamu Watanabe

For the worst-case complexity measure, if P = NP then P = OptP, i.e., all NP optimization problems are polynomial-time solvable. On the other hand, it is not clear whether a similar relation holds when considering average-case complexity. We investigate the relationship between the complexity of NP decision problems and that of NP optimization problems under polynomial-time computable distributions, and study what makes them (seemingly) different. It is shown that the difference between P(NP)-truth-table-samplable and P(NP)-samplable distributions is crucial.


UIB-1994-14 Linking Reactive Software to the X-Window System

Autoren: Wolfram Schulte und Ton Vullinghs

We discuss our experience with linking (existing) reactive applications to X11-based graphical user interfaces. For implementing the user interface we chose the Tcl/Tk toolkit, whereas the application itself may be written in any language (even a declarative one) that provides means to perform primitive I/O. The application and the graphical user interface run as separate processes and communicate in either a synchronous or an asynchronous way over a bidirectional communication channel. The proposed approach separates software engineering concerns, is easy to use, is reasonably efficient, and enables the linking of arbitrary languages to graphical user interfaces.
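The core of the scheme is a request/reply exchange over a bidirectional channel between the GUI process and the application process. A minimal Python sketch of the synchronous variant (our own illustration; the report uses Tcl/Tk on the GUI side, and the echoing child below is a hypothetical stand-in for a real reactive application):

```python
import subprocess
import sys
import textwrap

# Stand-in "application": reads one request per line on stdin and
# writes one reply per line on stdout. Purely hypothetical.
APP = textwrap.dedent("""
    import sys
    for line in sys.stdin:
        sys.stdout.write(line.strip().upper() + "\\n")
        sys.stdout.flush()
""")

def start_app():
    """Launch the application as a separate process connected by pipes."""
    return subprocess.Popen(
        [sys.executable, "-c", APP],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        text=True, bufsize=1,  # line-buffered text channel
    )

def ask(app, request):
    """Synchronous round trip: send one request, block for the reply."""
    app.stdin.write(request + "\n")
    app.stdin.flush()
    return app.stdout.readline().strip()
```

In the asynchronous variant described in the abstract, the GUI would poll or select on the channel instead of blocking in `ask`.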


UIB-1994-15 Namensverwaltung und Adressierung in Distributed Shared Memory-Systemen

Autoren: Alfred Lupper  

Distributed Shared Memory (DSM) is, alongside message passing and tuple spaces, one possible method for distributed computing on workstations. DSM allows individual machines to transparently extend their available memory onto the memory of other machines and to communicate with one another through the shared memory. The advantage of DSM over other methods lies in the ease of programming such a system: for the developer of distributed applications, DSM is considerably easier to handle than the message-passing paradigm, which demands explicit partitioning of the data and management of the communication. So far, distributed shared memory has mainly been used in tightly coupled multiprocessor systems. With the increasing reliability and speed of networks, especially in the local area, this method of communication is becoming interesting for loosely coupled systems as well. This article addresses questions of name management and address mapping for Distributed Heap Storage, a flexible DSM system for loosely coupled machines that realizes a shared heap. It is shown how a naming service for DSM systems should be designed in order to support distributed memory management and to exploit the positive properties of Distributed Heap Storage for the realization of the naming system itself. Placing the name tables in the logical address space makes it possible to realize the naming system without direct access to network services and without explicit knowledge of the distribution.


UIB-1994-16 Verteilte Unix-Betriebssysteme

Autoren: Robert Regn

While commercial operating systems for shared-memory machines already exist, operating systems for distributed machine configurations are still at the research stage. This article first explains the goals of a distributed operating system and then develops the design concepts, organized by communication, file service, and process management. Approaches to distribution, such as Sun's NFS, are also discussed. The second half of the article describes the two UNIX-compatible operating systems Locus and Chorus in detail.


UIB-1994-17 Again on Recognition and Parsing of Context-Free Grammars: Two Exercises in Transformational Programming

Autoren: Helmuth Partsch

This paper deals with two aspects of transformational programming, viz. the formal derivation of logic programs from non-operational specifications and the construction of algorithms to solve problems formally specified by "inverse properties". Both aspects are illustrated by sample derivations of standard algorithms for recognition and parsing of context-free grammars.
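A compact reference point for the recognition side is the classical CYK recognizer for grammars in Chomsky normal form (the sketch and grammar encoding below are ours, not the paper's derivation):

```python
def cyk_recognize(word, grammar, start="S"):
    """Decide membership for a CFG in Chomsky normal form.

    grammar maps each nonterminal to a list of right-hand sides, each
    either a 1-tuple (terminal,) or a 2-tuple (B, C) of nonterminals.
    """
    n = len(word)
    if n == 0:
        return False
    # table[i][l] = set of nonterminals deriving word[i : i+l+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        for lhs, rhss in grammar.items():
            if (ch,) in rhss:
                table[i][0].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if (len(rhs) == 2
                                and rhs[0] in table[i][split - 1]
                                and rhs[1] in table[i + split][length - split - 1]):
                            table[i][length - 1].add(lhs)
    return start in table[0][n - 1]

# Example CNF grammar for { a^n b^n | n >= 1 }
G = {"S": [("A", "X"), ("A", "B")],
     "X": [("S", "B")],
     "A": [("a",)],
     "B": [("b",)]}
```

The tabular structure of the recognizer is exactly the kind of operational detail that a transformational derivation makes explicit step by step.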


UIB-1994-18 Transformational Development of Data-Parallel Algorithms: an Example

Autoren: Helmuth Partsch

This paper deals with a case study in the formal derivation of data-parallel algorithms by means of program transformations. Particular emphasis is on the observation that a careful choice of suitable operations on abstract data structures and a thorough investigation of their algebraic properties can reduce substantial parts of the development activities to pure algebraic calculation.