Interest Group in Pure and Applied Logics (IGPL)
European Association for Logic, Language and Information (FoLLI)
Association for Symbolic Logic (ASL)
Sociedade Brasileira de Computação (SBC)
Sociedade Brasileira de Lógica (SBL)
Centro de Informática, Universidade Federal de Pernambuco (CIn-UFPE)
Departamento de Matemática, Universidade de Brasília (MAT-UnB)
(title to be announced)
by Gianluigi Bellin (Facoltà di Scienze, Università di Verona)
(abstract to be announced)
(title to be announced)
by Walter Carnielli (Centro de Lógica, Epistemologia e História da Ciência, Universidade Estadual de Campinas, Brazil)
(abstract to be announced)
Graphs, Trees and Monadic Second-Order Logic: Tutorial
by Bruno Courcelle (Université Bordeaux-1, LaBRI, France)
By considering a graph as a relational structure consisting of the set of vertices as domain and of a binary relation representing the edges, one can express graph properties in logical languages like First-Order Logic or Second-Order Logic. The purpose of Descriptive Complexity is to relate the complexity of graph properties (or more generally of properties of finite relational structures) with the syntax of their logical expressions, and to characterize complexity classes in logical terms, independently of computation models like Turing machines.
As main logical language, we will consider Monadic Second-Order Logic, i.e., the extension of First-Order Logic with variables denoting sets of elements of the considered structures. Despite the fact that it does not correspond exactly to any complexity class, this language enjoys a number of interesting properties.
First, it is rich enough to express nontrivial graph properties like planarity, vertex $k$-colorability (for fixed $k$), connectivity, and many others (that are not expressible in First-Order Logic).
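For instance (an illustration added here, not part of the abstract), 3-colorability of a graph presented as such a relational structure is expressed by the monadic second-order sentence

```latex
\exists X_1\,\exists X_2\,\exists X_3\;\Big[\,
  \forall v\,(v \in X_1 \lor v \in X_2 \lor v \in X_3)
  \;\land\;
  \bigwedge_{i=1}^{3} \forall u\,\forall v\,
    \big(\mathrm{edge}(u,v) \land u \in X_i \;\rightarrow\; v \notin X_i\big)
\,\Big]
```

where the set variables $X_1, X_2, X_3$ denote the three colour classes.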
Second, the verification of a property expressed in Monadic Second-Order Logic can be done in linear time on certain graphs constructed in an appropriate hierarchical way (for example, the graphs of tree-width at most $k$). This linearity result extends to optimization problems consisting in computing, for a given graph, the minimum (or maximum) cardinality of a set of vertices satisfying a fixed monadic second-order formula with one free set variable.
Counting problems consisting in counting the number of sets satisfying a given formula can also be handled efficiently. This applies to NP-complete problems like Hamiltonicity or to the Traveling Salesman Problem; to take another example, structured ("goto"-less) programs can be represented by graphs of tree-width at most 6, and register allocation can be done efficiently for them (M. Thorup). However, although the algorithms are linear for fixed tree-width, the constant factors depend exponentially or more upon the tree-width and the sizes of the formulas.
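To give the flavour of such linear-time algorithms, here is a small illustrative sketch (ours, not from the abstract): on trees, i.e., the tree-width-1 case of the general scheme, the MSO-expressible optimization problem of computing a maximum independent set reduces to a bottom-up dynamic program with two states per vertex.

```python
# Maximum independent set on a rooted tree: linear-time bottom-up DP.
# Each vertex carries two values: the best size of an independent set
# in its subtree with, respectively without, that vertex included.
# This is the tree-width-1 special case of the general dynamic-programming
# scheme for MSO-definable optimization problems on bounded tree-width graphs.

def max_independent_set(tree, root):
    """tree: dict mapping each vertex to the list of its children."""
    def solve(v):
        with_v, without_v = 1, 0
        for c in tree.get(v, []):
            cw, cwo = solve(c)
            with_v += cwo              # if v is taken, children must be out
            without_v += max(cw, cwo)  # otherwise take the better option
        return with_v, without_v
    return max(solve(root))

# A path on 4 vertices, 1 - 2 - 3 - 4, rooted at 1: the optimum is 2.
path = {1: [2], 2: [3], 3: [4]}
print(max_independent_set(path, 1))  # → 2
```

Each vertex is visited once and contributes constant work per child, hence the linear running time.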
Finally, Monadic Second-Order Logic is an essential tool for studying context-free graph grammars.
These interesting algorithmic and grammatical properties are based on the equivalence between monadic second-order logic and finite automata on finite trees (i.e., trees that represent finite well-formed terms written over a finite set of operation symbols). A graph constructed from "small" basic graphs by means of graph operations (generalizing the concatenation of words) can be considered as the value of a finite term. We are interested in graph operations such that the corresponding mapping from trees (representing terms) to graphs is MS-compatible, which means that for every monadic second-order formula $P$ expressing a graph property, one can construct an "equivalent" monadic second-order formula over trees expressing that the graph that is the value of the considered tree satisfies $P$.
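The notion of a finite automaton on finite trees can be illustrated by a tiny example (illustrative code, ours, not from the abstract): a deterministic bottom-up automaton over the operation symbols and/or/0/1 that runs on a term and accepts exactly the terms evaluating to true.

```python
# A deterministic bottom-up finite tree automaton with states {0, 1},
# running on terms over the signature {and/2, or/2, 0/0, 1/0}.
# Terms are nested tuples: ('and', t1, t2), ('or', t1, t2), ('0',), ('1',).
# The transition function assigns a state to each node from the states
# of its children; the term is accepted if the root state is accepting.

TRANSITIONS = {
    '0': lambda: 0,
    '1': lambda: 1,
    'and': lambda a, b: a & b,
    'or':  lambda a, b: a | b,
}
ACCEPTING = {1}

def run(term):
    op, *children = term
    return TRANSITIONS[op](*(run(c) for c in children))

def accepts(term):
    return run(term) in ACCEPTING

t = ('or', ('and', ('1',), ('0',)), ('1',))
print(accepts(t))  # → True
```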
In many cases we have a stronger property than MS-compatibility: the graph can be defined "in the tree" by monadic second-order formulas. We say that the graph is obtained from the tree by a Monadic Second-Order definable transduction, an MS-transduction in short.
We will review the various operations on graphs for which the evaluation mapping is an MS-transduction. They are the binary operation of union of two disjoint graphs, a unary operation that fuses all vertices of a graph satisfying some predicate $p$, and unary operations like the edge-complement that redefine the edge relation by quantifier-free formulas.
The two main classes of context-free graph grammars, the HR Grammars (HR stands for hyperedge replacement) and the VR Grammars (VR stands for vertex replacement) can be defined by systems of recursive set equations over these operations. The corresponding classes of sets of graphs are closed under MS-transductions, and are generated from the set of binary trees by such transductions. Hence, the classes HR and VR are robust and have characterizations independent of the graph operations used to build the corresponding systems of equations. This establishes a strong connection between context-free graph grammars and monadic second-order logic. Certain "classical" families of graphs can be described by context-free grammars. However, many interesting families like that of all finite planar graphs cannot be represented by a single context-free grammar. This is unavoidable unless P = NP.
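As a concrete example of such a system of recursive set equations (a standard illustration, added here, not taken from the abstract), the set $S$ of series-parallel graphs with two distinguished terminals is the least solution of a single equation over the series and parallel composition operations:

```latex
S \;=\; (S \parallel S) \,\cup\, (S \cdot S) \,\cup\, \{e\}
```

where $e$ is the graph consisting of a single edge, $\cdot$ is series composition (fusing the second terminal of one graph with the first terminal of the other) and $\parallel$ is parallel composition (fusing corresponding terminals).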
The graph operations on which VR-grammars are based yield the notion of clique-width of a graph. This is a complexity measure comparable to tree-width, but stronger in the sense that, for a set of finite graphs, bounded tree-width implies bounded clique-width, but not vice versa. It serves as a parameter in the sense of Parameterized Complexity, defined by Downey and Fellows. Problems expressible in monadic second-order logic are thus parametrically tractable.
We will discuss a still open conjecture by D. Seese saying that if a set of finite or countable graphs has a decidable satisfiability problem for monadic second-order formulas, then it is obtained from a set of trees by an MS-transduction. We will present the main special cases (that of planar graphs for instance) for which this conjecture is known to hold. This will be the occasion to review some applications to this field of the deep results of Robertson and Seymour on graph minors.
After having presented these situations where graphs are defined from trees by MS-compatible mappings, we will then consider cases where a tree is defined from a finite or countably infinite directed graph, by means of an MS-compatible mapping, namely the Unfolding. Intuitively, the graph represents a process and the unfolded tree, formally defined as the (usually infinite) tree of finite paths originating at a specified initial vertex, represents its behaviour. We will present methods for defining new MS-compatible mappings on graphs motivated by formalization of semantics and program verification.
We will conclude with the presentation of some open problems related to this topic.
Note: Most of these results, stated here for "graphs" for simplicity, actually hold for relational structures.
B. Courcelle: The expression of graph properties and graph transformations in monadic second-order logic, Chapter 5 of the Handbook of Graph Grammars and Computing by Graph Transformation, Vol. 1: Foundations, G. Rozenberg ed., World Scientific (New Jersey, London), 1997, pp. 313-400.
B. Courcelle, J. Makowsky: Operations on relational structures and their compatibility with monadic second-order logic, 2000, submitted.
B. Courcelle, J. Makowsky, U. Rotics: Linear time solvable optimization problems on certain structured graph families, Theory of Computing Systems (formerly Mathematical Systems Theory) 33 (2000) 125-150.
B. Courcelle, S. Olariu: Upper bounds to the clique-width of graphs, Discrete Applied Mathematics 101 (2000) 77-114.
B. Courcelle, I. Walukiewicz: Monadic Second-Order Logic, graph coverings and unfoldings of transition systems, Annals of Pure and Applied Logic 92 (1998) 35-62.
R. Downey, M. Fellows: Parameterized Complexity, Springer-Verlag, 1999.
D. Seese: The structure of the models of decidable monadic theories of graphs, Annals of Pure and Applied Logic 53 (1991) 169-195.
Full texts of submitted or unpublished papers and other references can be found at:
A list of results on clique-width is maintained on the page: http://www.laria.u-picardie.fr/~vanherpe/cwd/cwd.html
Mixing deductions and computations: an overview
by Gilles Dowek (INRIA, France)
The goal of this tutorial is to give an overview of various formalisms (equational resolution, higher-order logic, intuitionistic type theory, the Calculus of Constructions, deduction modulo) that share a common idea of mixing deduction rules and computation rules.
Controlling the exchange rule
by Arnaud Fleury (Facoltà di Scienze, Università degli Studi di Verona, Italy)
The most important rules of logic are probably the structural rules. Most of the time, formal languages leave them implicit, and for a very good reason: to avoid drowning the intuitive meaning under syntactic complications. But even from a user's perspective, there are at least two reasons to be more explicit.
First reason: in some contexts the use of those rules is extremely restricted, natural languages being the most typical example.
Second reason: making them explicit allows a great increase in expressive power. Usual linear logic, which uses a modal operator to control contraction and weakening, is now able to represent programs through formalizations which begin to look like real programming languages and can speak adequately about many of their properties.
The third rule, the exchange rule, is probably the least understood of the three. Syntactically, we have on one side the Lambek Calculus: no structural rules at all; one can capture context-free grammars and that is all. This calculus remained isolated from the mainstream of proof theory for years. On the other side, in usual Gentzen-type systems, exchange is completely free and in fact handled at the metalevel: one can say, for example, that a sequent is a multiset of formulae, and then there is no more need to speak about this rule. At the semantic level, for a non-trivial control of exchange, we have to deal with non-commutative algebra and perhaps with non-commutative geometry. This mathematical semantics is not well understood today. The exchange rule fundamentally speaks about space. To deal with parallel computing (see for example the work of Paul Ruet on models of concurrent constraint programming), its study will probably be unavoidable.
Mathematical fuzzy logic - state of the art 2001
by Petr Hájek (Institute of Computer Science, Academy of Sciences, Czech Republic)
In the lecture, basic notions of mathematical fuzzy logic (or: fuzzy logic in the narrow sense) will be surveyed: starting from the notion of a continuous t-norm, the syntax and semantics of the fuzzy propositional calculus BL and the corresponding predicate calculus will be defined and the main results will be stated (completeness etc.). Particular stronger fuzzy logics (Łukasiewicz, Gödel and product logic) will be presented. As applications we present e.g. axiomatizations of the fuzzy notions "very true" and "probably" as well as a result on the liar paradox. The following topics will also be surveyed: computational complexity of the propositional calculi, arithmetical complexity of the predicate calculi, combined systems, and results on the underlying algebraic structures (BL-algebras).
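The three basic continuous t-norms underlying these logics can be stated concretely; the following small sketch (ours, not part of the abstract; function names are our own) also shows the residuum that yields the implication connective.

```python
# The three basic continuous t-norms on [0, 1], whose logics are
# Lukasiewicz, Goedel and product logic respectively.  Every continuous
# t-norm is an ordinal sum of copies of these three.

def lukasiewicz(x, y):
    return max(0.0, x + y - 1.0)

def goedel(x, y):          # the minimum t-norm
    return min(x, y)

def product(x, y):
    return x * y

# The residuum x => y = sup{ z : t(x, z) <= y } interprets implication;
# for the Lukasiewicz t-norm it works out to min(1, 1 - x + y).
def lukasiewicz_residuum(x, y):
    return min(1.0, 1.0 - x + y)

print(lukasiewicz(0.7, 0.6))   # ≈ 0.3 (up to floating point)
print(goedel(0.7, 0.6))        # → 0.6
print(product(0.5, 0.5))       # → 0.25
```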
P. Hájek: Metamathematics of fuzzy logic. Kluwer 1998.
P. Hájek: Mathematical logic - state of art. In: S. Buss et al., eds., Logic Colloquium '98. Association for Symbolic Logic 2000, 197-205.
P. Hájek: On the metamathematics of fuzzy logic. In: V. Novák et al., eds., Discovering the World with Fuzzy Logic. Physica-Verlag 2000, 155-174.
The evolution of types and functions in the 20th century: A journey through Frege and Russell incorporating the important developments in the 20th century and the influence on logic, mathematics and computer science
by Fairouz Kamareddine (Heriot-Watt University, Scotland)
In this tutorial, we study the prehistory of type theory up to 1910 and its development between Russell and Whitehead's Principia Mathematica (1910--1912) and Church's simply typed $\lambda$-calculus of 1940. We first argue that the concept of types has always been present in mathematics, though nobody was incorporating types explicitly as such before the end of the 19th century. Then we describe how the logical paradoxes entered the formal systems of Frege, Cantor and Peano, concentrating on Frege's Grundgesetze der Arithmetik, to which Russell applied his famous paradox; this led him to introduce the first theory of types, the Ramified Type Theory RTT. We present RTT formally, using the modern notation for type theory, and discuss how Ramsey, Hilbert and Ackermann removed the orders from RTT, leading to the Simple Theory of Types STT. We present STT and Church's own simply typed lambda calculus, and finish by comparing RTT, STT and Church's simply typed lambda calculus. Then we will introduce the cube of type systems, which contains eight of the most influential type theories of the 20th century.
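Church's simply typed $\lambda$-calculus can be made concrete with a minimal type checker (an illustrative sketch of ours, not from the tutorial; the term and type encodings are our own choices).

```python
# A minimal type checker for the simply typed lambda-calculus.
# Types: a base type 'o' and arrow types encoded as ('->', s, t).
# Terms: ('var', x), ('lam', x, type, body), ('app', f, a).

def typecheck(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == 'var':
        return env[term[1]]               # look up the variable's type
    if tag == 'lam':
        _, x, ty, body = term
        body_ty = typecheck(body, {**env, x: ty})
        return ('->', ty, body_ty)        # abstraction builds an arrow type
    if tag == 'app':
        _, f, a = term
        f_ty, a_ty = typecheck(f, env), typecheck(a, env)
        if f_ty[0] != '->' or f_ty[1] != a_ty:
            raise TypeError('ill-typed application')
        return f_ty[2]                    # application consumes the arrow
    raise ValueError('unknown term: %r' % (tag,))

# The identity on the base type o, written \x:o. x, has type o -> o.
identity = ('lam', 'x', 'o', ('var', 'x'))
print(typecheck(identity))  # → ('->', 'o', 'o')
```

Note that self-application, the engine of Russell-style paradoxes, is rejected: applying `identity` to itself raises a TypeError, since `o -> o` does not match the expected argument type `o`.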
Kleene Algebra with Tests
by Dexter Kozen (Department of Computer Science, Cornell University, USA)
Kleene algebra with tests (KAT) is an equational system that captures axiomatically the properties of a natural class of structures arising in logic and computer science. A Kleene algebra with tests consists of a Kleene algebra with an embedded Boolean subalgebra. Kleene algebra is named for Stephen Cole Kleene (1909-1994), who among his many other achievements invented finite automata and regular sets, structures of fundamental importance in computer science. Kleene algebra is the algebraic theory of these objects, although it has many other natural and useful interpretations. Augmented with a Boolean algebra of tests, the system can be used to reason equationally about traditional programming language constructs such as conditional tests and while loops. KAT subsumes propositional Hoare logic (PHL); more generally, it is complete over relational interpretations for Hoare-style rules of inference involving partial correctness assertions (PHL is not). It is also PSPACE-complete, the same as PHL.
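The encodings of programming constructs in KAT mentioned above take a standard form (added here for illustration): with $b, c$ ranging over tests and $p, q$ over programs,

```latex
\mathbf{if}\ b\ \mathbf{then}\ p\ \mathbf{else}\ q \;=\; b\,p + \bar{b}\,q
\qquad\qquad
\mathbf{while}\ b\ \mathbf{do}\ p \;=\; (b\,p)^{*}\,\bar{b}
```

and the partial correctness assertion $\{b\}\,p\,\{c\}$ of Hoare logic becomes the equation $b\,p\,\bar{c} = 0$ (equivalently, $b\,p = b\,p\,c$): no execution of $p$ from a state satisfying $b$ can end in a state falsifying $c$.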
In this tutorial I will introduce KAT. I will give a brief history of developments of the algebraic theory, introduce some models of interest in computer science as motivating examples, and prove some basic properties. I will then survey recent results on expressiveness, completeness, and complexity. Finally, I will discuss some recent applications of KAT in protocol verification and compiler optimization.
Model theory of extensions of first order logic
by Jouko Väänänen (Department of Mathematics, Helsinki University, Finland)
The two basic model-theoretic properties of first order logic are the compactness theorem and the Löwenheim-Skolem theorem. Extensions of first order logic can be obtained by adding generalized quantifiers and infinitary conjunctions and disjunctions. Such extensions have been studied both on finite structures and on infinite structures. Lindström's famous theorem characterizes first order logic as the maximal logic on infinite structures with the compactness theorem and the Löwenheim-Skolem theorem. No such characterization is known on finite structures. In this tutorial I will give the basic proofs leading to Lindström's Theorem and an introduction to the more subtle model theory of extensions of first order logic.
Last modified: July 11, 2002, 09:37:48 GMT-0300.