UIB-1992-01 On Bounded Truth-Table and Conjunctive Reductions to Sparse and Tally Sets
Authors: Vikraman Arvind, Johannes Köbler and Martin Mundhenk
In this paper we study the consequences of the existence of sparse hard sets for different complexity classes under certain types of deterministic, randomized and nondeterministic reductions. We show that if an NP-complete set is bounded-truth-table reducible to a set that conjunctively reduces to a sparse set then P=NP. Relatedly, we show that if an NP-complete set is bounded-truth-table reducible to a set that co-rp reduces to some set that conjunctively reduces to a sparse set then RP=NP. We also prove similar results under the (apparently) weaker assumption that some solution of the promise problem (1-SAT,SAT) reduces via the mentioned reductions to a sparse set. Finally we consider nondeterministic polynomial time many-one reductions to sparse and co-sparse sets. We prove that if a co-NP-complete set reduces via a nondeterministic polynomial time many-one reduction to a co-sparse set then PH=Theta_2. On the other hand, we show that nondeterministic polynomial time many-one reductions to sparse sets are as powerful as nondeterministic Turing reductions to sparse sets.
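For orientation, the reduction notions used in this abstract can be stated in their standard textbook form (these definitions are not specific to the report):

    S is sparse        iff  there is a polynomial p with |S \cap \Sigma^{\le n}| \le p(n) for all n;
    T is tally         iff  T \subseteq \{0\}^*;
    A \le^p_{ctt} B    iff  some f \in FP maps every x to a list \langle y_1, ..., y_k \rangle such that  x \in A \iff \{y_1, ..., y_k\} \subseteq B;
    A \le^p_{btt} B    iff  A truth-table reduces to B in polynomial time asking at most a constant number of queries.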
UIB-1992-02 Top-down Parsing with Simultaneous Evaluation of Noncircular Attribute Grammars
Authors: Thomas Noll, Heiko Vogler
This paper introduces a machinery called attributed top-down parsing automaton which performs top-down parsing of strings and, simultaneously, the evaluation of arbitrary noncircular attribute grammars. The strategy of the machinery is based on a single depth-first left-to-right traversal over the syntax tree. There is no need to traverse parts of the syntax tree more than once, and hence the syntax tree itself does not have to be maintained. Attribute values are stored in a graph component, and values of attributes which are needed but not yet computed are represented by particular nodes. Values of attributes which refer to such uncomputed attributes are represented by trees over operation symbols whose leaves maintain pointers to those particular nodes. When a needed attribute value is eventually computed, it is glued into the graph at the appropriate nodes.
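To make the placeholder idea concrete, here is a minimal Python sketch (purely illustrative, not the authors' automaton): attribute occurrences that are needed but not yet computed are represented by pending nodes, expressions over them are kept as small operation trees, and once the pending value arrives it is glued in and the expression can be evaluated.

    class Pending:
        """Graph node standing for an attribute value that is not yet known."""
        def __init__(self, name):
            self.name, self.value = name, None      # value is glued in later

    class OpTree:
        """Operation applied to arguments that may still contain Pending leaves."""
        def __init__(self, op, *args):
            self.op, self.args = op, args

        def evaluate(self):
            vals = []
            for a in self.args:
                if isinstance(a, Pending):
                    if a.value is None:
                        return None                 # still blocked on a placeholder
                    vals.append(a.value)
                elif isinstance(a, OpTree):
                    v = a.evaluate()
                    if v is None:
                        return None
                    vals.append(v)
                else:
                    vals.append(a)                  # already a plain value
            return self.op(*vals)

    # usage: an attribute value is needed before it has been computed
    length = Pending("length")
    expr = OpTree(lambda a, b: a + b, 1, length)    # "1 + length", blocked for now
    assert expr.evaluate() is None
    length.value = 4                                # the attribute is eventually computed ...
    assert expr.evaluate() == 5                     # ... and the expression collapses to a value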
UIB-1992-03 17. Workshop über Komplexitätstheorie, effiziente Algorithmen und Datenstrukturen
Authors: Fakultät für Informatik
UIB-1992-04 Lowness and the Complexity of Sparse and Tally Descriptions
Authors: Vikraman Arvind, Johannes Köbler and Martin Mundhenk
We investigate the complexity of obtaining sparse descriptions for sets in various reduction classes to sparse sets. Let A be a set in a certain reduction class R(SPARSE) to the class of sparse sets. Then we are interested in finding upper bounds for the complexity (relative to A) of sparse sets S such that A is in R(S). By establishing such upper bounds we are able to derive the lowness of A. In particular, we show that the closure of sparse sets under bounded truth-table reductions (as well as the closure under disjunctive reductions) is located in the third theta level of the extended low hierarchy. Finally, we construct for every set A in IC[log,poly] a tally set T in P(A+SAT) such that A is in P_ctt(T) and in P_dtt(T). This implies that the class IC[log,poly] of sets with low instance complexity is contained in the first level of the extended low hierarchy.
UIB-1992-05 Locating P/poly Optimally in the Extended Low Hierarchy
Authors: Johannes Köbler
The low and high hierarchies inside NP were introduced by Schöning and have turned out to be an important structural tool for classifying decision problems in NP not known to be NP-complete or in P. This idea was extended by Balcázar, Book, and Schöning, who defined the extended low and high hierarchies in order to classify decision problems and language classes not contained in NP. Various classes of sets reducible to sparse and tally sets have been shown to be included in the low hierarchies. Allender and Hemachandra and Long and Sheu proved the optimality of the location of almost all these classes. However, until now, the exact location of P/poly remained an open problem. Part of the interest in the class P/poly stems from the fact that it consists exactly of the languages that can be computed by circuits of polynomial size. Ko and Schöning showed that all NP sets in P/poly are contained in the L_3^p,Sigma level of the low hierarchy. This result was extended by Balcázar, Book, and Schöning, who located all of P/poly in the EL_3^p,Sigma level of the extended low hierarchy. In this paper we show that P/poly is contained in the third theta level EL_3^p,Theta of the extended low hierarchy. Since there exist sparse sets that are not contained in EL_2^p,Sigma, P/poly is not contained in the next lower level of the extended low hierarchy, and therefore the location of P/poly in EL_3^p,Theta is optimal. As a consequence of the containment of P/poly in EL_3^p,Theta we can locate all NP sets in P/poly in the third theta level L_3^p,Theta of the low hierarchy. In order to prove our main result we also show that for every set A in P/poly correct advice can be computed in FP(NP(A) oplus Sigma^p_2). The previous upper bound for the complexity of advice functions for a set A in P/poly was FDelta^p_3(A). Very recently, Gavaldà obtained an FP(NP(A) oplus Sigma^p_3) upper bound which is incomparable to the previously known upper bound but is also subsumed by our result.
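For reference, the advice characterization of P/poly underlying these results is the standard one (not specific to this report):

    A \in P/poly  iff  there exist B \in P and an advice function h with |h(n)| \le p(n) for some polynomial p
                       such that for all x:  x \in A  \iff  \langle x, h(|x|) \rangle \in B.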
UIB-1992-06 Synthesized and inherited functions - a new computational model for syntax-directed semantics
Authors: Armin Kühnemann and Heiko Vogler
In this paper we introduce a new formal model for the concept of syntax-directed semantics, called macro attributed tree transducer (for short: mat tree transducer). This model is based on (noncircular) attributed tree transducers and on macro tree transducers. In the first type of transducer, semantic values are computed by means of meaning names called synthesized attributes, and by means of context names called inherited attributes. Both synthesized and inherited attributes represent basic semantic values. In the second type of transducer, semantic values are computed by meaning names only, which are called states. However, in order to have a means of handling context information, states represent functions over semantic values. The new model integrates attributed tree transducers and macro tree transducers by allowing both meaning names and context names to represent functions over semantic values. In analogy to the terminology of attributed tree transducers, we also call such meaning names and context names synthesized functions and inherited functions, respectively. We present an inductive characterization of the tree transformation computed by a mat tree transducer. We prove that mat tree transducers are more powerful than both attributed tree transducers and macro tree transducers. We characterize mat tree transducers by the two-fold composition of attributed tree transducers. This characterization has three consequences: (1) the height of output trees of mat tree transducers is bounded exponentially in the size of the input tree, (2) the composition hierarchy of mat tree transducers is strict, and (3) mat tree transducers are closed under right-composition with top-down tree transducers, but not under left-composition. Moreover, we prove that the addition of inherited attributes does not increase the computational power of macro tree transducers.
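The idea of a state that represents a function over semantic values can be illustrated with a small Python sketch (illustrative only; the tree-to-list translation is an assumed toy example, not taken from the report): the "state" flatten carries a context parameter rest, in the spirit of macro tree transducers.

    class Leaf:
        def __init__(self, label):
            self.label = label

    class Node:
        def __init__(self, left, right):
            self.left, self.right = left, right

    def flatten(tree, rest):
        """A 'state' with one context parameter: it maps the parameter rest
        (output already produced to the right) to the full output list."""
        if isinstance(tree, Leaf):
            return [tree.label] + rest                        # rule for leaves
        return flatten(tree.left, flatten(tree.right, rest))  # inner nodes thread the context along

    t = Node(Node(Leaf("a"), Leaf("b")), Leaf("c"))
    print(flatten(t, []))                                     # ['a', 'b', 'c']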
UIB-1992-07 A Universal Unification Algorithm Based on Unification-Driven Leftmost Outermost Narrowing
Authors: Heinz Fassbender and Heiko Vogler
We formalize a universal unification algorithm for the class of equational theories which is induced by the class of canonical, totally-defined, not strictly subunifiable term rewriting systems (for short: ctn-trs). For a ctn-trs R and for two terms t and s, the algorithm computes a ground-complete set of E_R-unifiers of t and s, where E_R is the set of rewrite rules of R viewed as equations. The algorithm is based on the unification-driven leftmost outermost narrowing relation (for short: ulo narrowing relation) which is introduced in this paper. The ulo narrowing relation combines usual leftmost outermost narrowing steps and unification steps. Since the unification steps are applied as early as possible, some of the nonsuccessful derivations can be stopped earlier than in other approaches to E_R-unification. Furthermore, we formalize a deterministic version of our universal unification algorithm that is based on a depth-first left-to-right traversal through the narrowing trees.
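As a generic textbook illustration of narrowing (this is not the ulo narrowing relation defined in the report), consider the rewrite system R = { plus(0, y) -> y,  plus(s(x), y) -> s(plus(x, y)) } and the E_R-unification problem plus(z, s(0)) =? s(s(0)):

    plus(z, s(0))  ~~>_{[z := s(z')]}  s(plus(z', s(0)))  ~~>_{[z' := 0]}  s(s(0)),

so the accumulated substitution z := s(0) is an E_R-unifier of the two terms; a leftmost outermost strategy fixes which redex is narrowed at each step.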
UIB-1992-08 On Random Reductions from Sparse Sets to Tally Sets
Authors: Uwe Schöning
We show that every sparse set S can be many-one reduced to an appropriate tally set T by a polynomial-time, randomized reduction (see formal definitions below). Since T is in NP if S is in NP, this result can be used to show that there is a tally set in NP that is randomized many-one complete for all sparse sets in NP. This partially answers an open problem posed by Hartmanis and Yesha [Ha].
UIB-1992-09 Consistency in Stochastic Networks
Authors: Hermann von Hasseln, Laura Martignon
Stochastic networks have been proposed as exact inference machines in expert systems dealing with uncertain reasoning by Pearl and his school. When modelling a real-life situation as a Markov or Bayes network, one of the problems is that the confidence numbers describing the dependencies between facts are obtained statistically from experiments and may have little to do with exact probabilistic dependencies. The dilemma of establishing criteria for the consistency of confidence numbers is of an epistemological nature. Choosing stochastic networks as models of reality means that we are (explicitly or implicitly) assuming that every phenomenon we are treating can be described in terms of a certain ``expected value'' associated with it. This means that our systems are governed by probability distributions (and therefore allow no ``dissipation'' or ``contraction'') and that consistency of data has to be defined as stochastic coherence or compatibility with those probability distributions. In our work we propose a natural definition of stochastic consistency of confidence numbers and introduce an algorithm that corrects inconsistent data while producing coherent stochastic models. We discuss updating in Markov networks, present a slight modification of the Gibbs sampler introduced by Geman and Geman, and give a short, simple proof of their theorem on stochastic relaxation. We discuss the question of consistency of confidence numbers and define consistency in terms of the local characteristics of Markov networks. The Confidence Correcting Algorithm we introduce is a strongly ergodic inhomogeneous Markov chain whose convergence is guaranteed by theorems on positive matrices. In the case that a given set of confidence numbers is already consistent, the algorithm coincides with the Gibbs sampler. We also present results of computer simulations on small networks and list some conjectures and open problems.
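For readers unfamiliar with the sampler being modified, the following Python sketch shows a plain Gibbs sampler on a tiny binary Markov network (a three-variable chain with an assumed agreement potential); the report's modification and the Confidence Correcting Algorithm themselves are not reproduced here.

    import math
    import random

    edges = [(0, 1), (1, 2)]   # chain X1 - X2 - X3
    coupling = 1.5             # strength of the pairwise agreement potential (illustrative value)

    def local_prob_one(state, i):
        """P(X_i = 1 | all other variables), read off the local characteristics."""
        field = sum(coupling * (2 * state[j] - 1)
                    for a, b in edges if i in (a, b)
                    for j in (a, b) if j != i)
        return 1.0 / (1.0 + math.exp(-2.0 * field))

    def gibbs(n_sweeps=10000, seed=0):
        random.seed(seed)
        state = [random.randint(0, 1) for _ in range(3)]
        counts = [0, 0, 0]
        for _ in range(n_sweeps):
            for i in range(3):   # resample each variable from its local characteristic
                state[i] = 1 if random.random() < local_prob_one(state, i) else 0
            for i in range(3):
                counts[i] += state[i]
        return [c / n_sweeps for c in counts]   # estimated marginals P(X_i = 1)

    print(gibbs())   # close to [0.5, 0.5, 0.5], since the model is symmetric in 0/1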
UIB-1992-10 A Slightly Improved Upper Bound on the Size of Weights Sufficient to Represent Any Linearly Separable Boolean Function
Authors: Michael Schmitt
The maximum absolute value of integral weights sufficient to represent any linearly separable Boolean function is investigated. It is shown that the upper bounds exhibited by Muroga (1971) for rational weights satisfying the normalized system of inequalities also hold for integral weights. Thereby, the previously best known upper bound for integers is improved by approximately a factor of 1/2.
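As a small illustration of such a representation (an assumed toy example, not taken from the report), the following Python snippet verifies by brute force that the linearly separable function f(x1, x2, x3) = x1 OR (x2 AND x3) is represented by the integral weights (2, 1, 1) with threshold 2:

    from itertools import product

    w, t = (2, 1, 1), 2                 # integral weights and threshold

    def f(x1, x2, x3):
        return x1 or (x2 and x3)        # a linearly separable Boolean function

    for x in product((0, 1), repeat=3):
        threshold_value = int(sum(wi * xi for wi, xi in zip(w, x)) >= t)
        assert threshold_value == f(*x), x

    print("w =", w, "and t =", t, "represent f on all 8 inputs")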
UIB-1992-11 On the Power of Generalized MOD-Classes
Authors: Johannes Köbler and Seinosuke Toda
We investigate the computational power of the new counting class ModP which generalizes the classes Mod_pP, p prime. We show that ModP is polynomial-time truth-table equivalent in power to #P and that ModP is contained in the class AmpMP. As a consequence, the classes PP, ModP and AmpMP are all Turing equivalent, and thus AmpMP and ModP are not low for MP unless the counting hierarchy collapses to MP. Furthermore, we show that every set in C=P is reducible to some set in ModP via a random many-one reduction that uses only logarithmically many random bits. Hence, ModP and AmpMP are not closed under polynomial-time conjunctive reductions unless the counting hierarchy collapses.
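For orientation, the classes being generalized are the standard modular counting classes; for a fixed prime p the textbook definition is:

    L \in Mod_pP  iff  there is an f \in #P such that for all x:  x \in L  \iff  f(x) \not\equiv 0 \pmod{p}.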
UIB-1992-12 Reliable Reductions, High Sets and Low Sets
Authors: Vikraman Arvind, Johannes Köbler and Martin Mundhenk
Measuring the information content of a set by the space-bounded Kolmogorov complexity of its characteristic sequence, we investigate the (non-uniform) complexity of sets A in EXPSPACE/poly that reduce to some set having very high information content. Specifically, we show that if the reducibility used has a certain property, called `reliability', then A is in fact reducible to a sparse set (under the same reducibility). As a consequence, the existence of hard sets (under `reliable' reducibilities) of very high information content is unlikely for various complexity classes such as NP, PP, and PSPACE.
UIB-1992-13 On a monotonic semantic path ordering
Authors: Alfons Geser
The semantic path ordering preceq_spo is an ordering that can be used to prove termination of term rewriting systems. Unlike other such orderings, it is not monotonic. We construct two monotonic suborderings preceq_cspo and preceq_mspo of preceq_spo. Both orderings rely on reasonable assumptions on the underlying semantic ordering and mirror Kamin/Lévy's termination proof method. Moreover, preceq_mspo is shown to cover preceq_spo up to the subterm property. In the case that the semantic ordering is a simplification quasiordering, the three orderings even coincide. Thus the Knuth/Bendix ordering turns out to be a special case of the semantic path ordering.
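The two ordering properties referred to here are the standard ones for term orderings (standard definitions, not specific to this report), stated for an ordering \succ, terms s, t and a function symbol f:

    monotonicity:      s \succ t  implies  f(..., s, ...) \succ f(..., t, ...)  (replacement in any argument position);
    subterm property:  f(..., t, ...) \succ t  (every term is strictly greater than each of its proper subterms).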
UIB-1992-14 The Translation Power of Top-Down Tree-To-Graph Transducers
Authors: Joost Engelfriet, Heiko Vogler
We introduce a new syntax-directed translation device called the top-down tree-to-graph transducer. Such transducers are very similar to the usual top-down tree transducers, except that the right-hand sides of their rules are hypergraphs rather than trees. Since we are aiming at a device which also allows trees to be translated into objects different from graphs, we focus our attention on so-called tree-generating top-down tree-to-graph transducers. Then the result of every computation is a hypergraph which represents a tree, and this tree in turn can be interpreted in any algebra of appropriate signature. Although for both devices, top-down tree transducers and tree-generating top-down tree-to-graph transducers, the translation of a subtree of an input tree does not depend on its context, the latter transducers have much more transformational power than the former. In this paper we prove that tree-generating top-down tree-to-graph transducers are equivalent to macro tree transducers, which are transducers for which the translation of a subtree may depend on its context.