# No.096 Logic and Computational Complexity

**NII Shonan Meeting**

**@ Shonan Village Center, September 18 – 22, 2017**

## Organizers

- Yijia Chen, Fudan University, China
- Rodney Downey, Victoria University of Wellington, New Zealand
- Jörg Flum, Albert-Ludwigs-Universität Freiburg, Germany

## Overview

**Description of the meeting**

The discipline of theoretical computer science has its early roots in the pioneering work of Church, Turing, and Gödel. Two important branches of theoretical computer science were visible right from the beginning: one oriented toward computational complexity and algorithms, the other toward logic, semantics, and formal methods. The two branches have quite different goals and problems, each has developed methods of its own, and they partly use different mathematical tools. Even though the gap between them has widened steadily over the last 30 years, the two branches come together from time to time, as witnessed by work in areas such as descriptive complexity, proof complexity, and, more recently, parameterized complexity. The main focus of the meeting is on these areas.

The theorem of Büchi and Trahtenbrot characterizing the languages accepted by finite automata in terms of monadic second-order logic (MSO) can arguably be viewed as the first main result in **descriptive complexity**. However, the systematic development of descriptive complexity (and of finite model theory at large) started with Fagin’s seminal work, which shows that the complexity class NP consists precisely of the problems definable in existential second-order logic (ESO). It is well known that all major complexity classes have such a characterization in terms of an appropriate logic; for example, the class P corresponds to least fixed-point logic (LFP) on ordered structures. Therefore, separating P and NP, the central problem in computational complexity, amounts to showing that LFP and ESO have different expressive power on ordered structures. Although this has not yet panned out as hoped, finite model theorists pursue the long-term goal of settling major complexity problems using methods from logic. In the converse direction, computational complexity has proved very useful for resolving difficult questions in finite model theory. One example is Rossman’s result that on ordered structures the expressive power of k-variable first-order logic strictly increases with k; his proof uses Håstad’s Switching Lemma, a major tool from circuit complexity.
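In symbols, the two logical characterizations mentioned above (Fagin’s theorem, and the Immerman–Vardi theorem for P) read:

```latex
\mathrm{NP} \;=\; \mathrm{ESO},
\qquad\qquad
\mathrm{P} \;=\; \mathrm{LFP} \quad \text{on ordered structures}.
```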

In **proof complexity**, too, the central open problem is the P versus NP question, i.e., whether there exists a polynomial-time method for recognizing tautologies. A closely related research area concerns proof lengths: here the central questions ask for the minimum lengths of proofs of tautologies in given proof systems. A proof system *P* is polynomially optimal if for every other proof system *P′* there is a polynomial-time algorithm transforming every proof in *P′* of a tautology into a proof in *P* of the same tautology. In particular, the length of the proof in *P* is polynomially bounded in the length of the proof in *P′*. Recently, it was shown that the existence of a polynomially optimal proof system for tautologies is equivalent to a certain logic, introduced by Blass and Gurevich, being a logic for P.
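In the standard Cook–Reckhow setting, where a proof system is a polynomial-time computable surjection *P* from strings onto the set of tautologies, polynomial optimality can be written as:

```latex
P \text{ is polynomially optimal}
\iff
\forall P' \;\exists f \in \mathrm{FP} \;\forall w :\;
P(f(w)) = P'(w),
```

i.e., the polynomial-time function *f* translates every *P′*-proof *w* of a tautology into a *P*-proof of the same tautology.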

**Parameterized complexity theory** provides a framework for a refined analysis of hard algorithmic problems. A specific structural property of a given problem, called the parameter, is identified; it is expected to be small in typical instances of the problem. The (parameterized) complexity of the problem is then measured in terms of both the parameter and the input length.
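To make this concrete, here is a minimal sketch (an illustration, not part of the meeting description) of the classic bounded-search-tree algorithm for Vertex Cover, a standard first example of a fixed-parameter tractable problem: its running time is O(2^k · m), exponential only in the parameter k, not in the input size.

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given by its edge list has a vertex
    cover of size at most k (bounded search tree, O(2^k * m) time)."""
    if not edges:     # no edges left: the empty set covers them all
        return True
    if k == 0:        # edges remain but the budget is spent: fail this branch
        return False
    u, v = edges[0]   # any cover must contain u or v, so branch on both
    return (has_vertex_cover([e for e in edges if u not in e], k - 1)
            or has_vertex_cover([e for e in edges if v not in e], k - 1))

# A triangle needs two vertices to cover all three edges.
triangle = [(1, 2), (2, 3), (1, 3)]
print(has_vertex_cover(triangle, 1))  # False
print(has_vertex_cover(triangle, 2))  # True
```

For a fixed small k this decides instances of any size in linear time per branch, which is exactly the kind of behaviour the parameterized framework is designed to capture.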

Logic shows up in this area in many different ways. For example, logic yields the framework for algorithmic meta-theorems. These theorems give sweeping explanations for the existence of many efficient algorithms on special graph classes. For instance, Courcelle’s Theorem yields linear time algorithms for all problems definable by MSO on graphs of bounded tree-width. The area of algorithmic meta-theorems uses deep tools from both model theory and structural graph theory.
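As an illustration, 3-colorability is MSO-definable by quantifying over vertex sets, so Courcelle’s Theorem yields a linear-time algorithm for it on graphs of bounded tree-width:

```latex
\exists R\, \exists G\, \exists B \;\Bigl[
  \forall x\, (Rx \lor Gx \lor Bx) \;\land\;
  \forall x\, \forall y\, \bigl( Exy \rightarrow
      \neg(Rx \land Ry) \land \neg(Gx \land Gy) \land \neg(Bx \land By) \bigr)
\Bigr].
```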

Furthermore, computational problems from logic, e.g., weighted satisfiability problems for propositional logic and model-checking problems for first-order logic, are used to define (or to characterize) classes of parameterized intractability.
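For concreteness, the weighted satisfiability problem for a class Γ of propositional formulas, where the weight of an assignment is the number of variables it sets to true, is the following parameterized problem:

```latex
p\text{-}\mathrm{WSat}(\Gamma)
\qquad
\begin{array}{ll}
\textit{Instance:} & \text{a formula } \alpha \in \Gamma \text{ and } k \in \mathbb{N}.\\
\textit{Parameter:} & k.\\
\textit{Question:} & \text{Does } \alpha \text{ have a satisfying assignment of weight } k?
\end{array}
```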

In this Shonan meeting, we aim to bring together researchers from both communities, complexity and logic, working in the areas mentioned above, so that they can share their recent work and discuss research problems. The meeting will consist of a number of tutorials, survey talks, and research talks.