NO.151 Higher-order Complexity Theory and its Applications
October 7 - 10, 2019 (Check-in: October 6, 2019)
- Bruce M. Kapron
- University of Victoria, Canada
- Akitoshi Kawamura
- Kyushu University, Japan
- Florian Steinberg
- INRIA Sophia Antipolis, France
Description of the meeting
The theory of higher-order complexity is an emerging field. Researchers across a broad range of areas are asking questions that may best be addressed in this framework, or are using its established tools and results, but in an isolated way. The objective of this workshop is both to strengthen the foundations and to forge stronger connections between the foundations and the various areas in which the theory is now being applied. While there is quite a broad range of application areas, we will focus on three: computable analysis, programming language theory, and NP search complexity. We are also inviting researchers who work in related areas (e.g. type theory, logic, computability, security).
Traditional computational complexity theory is concerned with computation by Turing machines over objects with finite representations, such as finite graphs or natural numbers. Higher-order complexity extends this to infinitary domains, for example real numbers, while retaining finitary characteristics of the standard theory. This is achieved through the use of oracle Turing machines (OTMs), which allow oracle access to infinitary inputs while keeping finitary internal state and finite computations. OTMs are a well-understood model of computation and have been widely applied in computability theory, computable analysis, and ordinary complexity. Once we introduce complexity notions into this model, however, the situation becomes more subtle.
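The OTM idea can be illustrated with a small sketch: a type-two functional receives an infinitary input as an oracle (here, a total function on the naturals) together with a finite input, and performs only finitely many queries. The function name below is illustrative, not from the literature.

```python
# A type-two functional, sketched in Python: the "machine" receives an
# oracle f (an infinitary input, modeled as a total function on the
# naturals) and a finite input n, and queries f only finitely often.

def max_below(f, n):
    """Return max(f(0), ..., f(n)) using exactly n + 1 oracle queries."""
    best = f(0)
    for i in range(1, n + 1):
        best = max(best, f(i))
    return best

# The oracle may describe an infinite object, but each run of the
# functional makes only finitely many queries to it.
print(max_below(lambda i: (7 * i) % 5, 10))  # prints 4
```

Note that the number of queries, and hence the running time, depends on the finite input n; how to bound it also in terms of the oracle's "size" is exactly where complexity notions become subtle.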
One particular goal of higher-order complexity has been to provide a notion of feasibility, or poly-time computability, in this setting. While early work in the 1970s by Constable and Mehlhorn made partial progress, there was no simple OTM-based characterization. A breakthrough came with the work of Kapron and Cook in the 1990s, which uses second-order polynomials and defines an appropriate size measure for function inputs. This work is the basis for much of the subsequent work in type-two complexity, as well as for generalizations to higher levels of the type hierarchy. The model has been applied in a range of areas, including computable analysis, the theory of programming languages, and ordinary complexity theory, especially the complexity of search problems.
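For readers unfamiliar with the Kapron-Cook framework, the two key definitions can be stated briefly (a standard formulation; details vary across presentations):

```latex
% Size of a function input: the length of f at n is the maximal
% output length over inputs of length at most n.
|f|(n) \;=\; \max_{|x| \le n} |f(x)|

% Second-order polynomials are generated by the grammar
P \;::=\; c \;\mid\; n \;\mid\; P + P \;\mid\; P \cdot P \;\mid\; L(P)
% where L denotes an application of the length |f| of a function
% argument. A type-two functional F(f, x) is then feasible when some
% OTM computes it in time bounded by a second-order polynomial
% evaluated at |f| and |x|.
```

The essential point is that the running-time bound may itself query the size function |f|, so the bound is a higher-order object, mirroring the higher-order input.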
Complexity in Computable Analysis
Computable analysis is the accepted theory of computability for continuous structures. It is well developed and by nature closely related to numerical analysis. Theorems from computable analysis often reflect what is known heuristically in numerical computing but has not been proved there. However, numerics is not only interested in the correctness of algorithms but usually aims to optimize their performance. Computable analysis relies on higher-order computability theory to model computations on uncountable structures, and developing a complexity theory for computable analysis therefore necessitates the use of higher-order complexity theory.
The most popular framework for complexity considerations in computable analysis is the framework of second-order representations introduced by Kawamura and Cook. This framework relies on second-order complexity theory, but to avoid technical complications that arise from the necessity to use partial operators, it uses only a fragment of the theory. New methods are required for dealing with the partiality that seems unavoidable in applications to computable analysis.
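The basic mechanism can be sketched very simply (this is a simplification, not the Kawamura-Cook formalism itself, which uses length-monotone string functions): a real number is presented as a name function mapping a precision n to a rational within 2^-n of the real, and operations on reals become type-two functionals acting on such names.

```python
from fractions import Fraction

# A real number x is presented by a name phi with |phi(n) - x| <= 2^-n.
# Operations on reals are then type-two functionals on names.

def real_add(phi, psi):
    """Name of the sum: query both summands at precision n + 1 so the
    combined error stays below 2^-n."""
    return lambda n: phi(n + 1) + psi(n + 1)

def sqrt2(n):
    """A name for sqrt(2), computed by bisection to precision 2^-n."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

s = real_add(sqrt2, sqrt2)     # a name of 2 * sqrt(2)
print(float(s(20)))            # close to 2.8284271...
```

The cost of evaluating s(n) depends on how expensive the input names are at precision n + 1, which is precisely the kind of dependence that second-order complexity bounds are designed to express.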
Programming Language Theory
The tools of traditional complexity theory have been quite successful in the complexity analysis of small, low-level programs, especially those written in an imperative style. However, extensions to larger-scale programs lead in a natural way to a more abstract and compositional approach, which, while making reasoning about correctness more tractable, can obscure information required for complexity analysis. Moreover, alternative paradigms such as object-oriented or functional programming require a different approach than imperative programs do, and naturally lead to models involving higher-type computation. There is a rich theory in this area involving techniques from implicit complexity, type theory, rewriting systems, and semantics, among others. Higher-type complexity theory is an essential component in addressing the challenge of providing good complexity models in this setting.
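A minimal sketch of why functional programs push the analysis to higher types: the cost of a higher-order combinator cannot be stated as a first-order bound, because it depends on the cost of its function argument. The helper below is purely illustrative.

```python
# The running time of a higher-order combinator depends on its function
# argument, so any honest cost bound must take that argument's cost as
# an input -- a higher-type quantity.

def count_calls(f):
    """Wrap f so that each call is counted (a crude cost measure)."""
    calls = [0]
    def g(x):
        calls[0] += 1
        return f(x)
    return g, calls

def apply_twice(f, x):
    return f(f(x))

g, c = count_calls(lambda x: x + 1)
apply_twice(g, 0)
print(c[0])  # prints 2: apply_twice charges its argument's cost twice
```

A second-order bound would express the cost of apply_twice as a function of the cost (and size behavior) of f, rather than as a single polynomial in the input length.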
NP Search Complexity
Traditional complexity theory deals with computation over finite structures, but even in this setting there are situations where finite inputs are "too large" to be accessed efficiently, in particular in the study of NP search problems. The class TFNP, introduced by Papadimitriou in the 1990s, consists of search problems for which a small, efficiently verifiable solution is always guaranteed to exist. The question that arises in this setting is the difficulty of efficiently finding a solution. This is a natural model for search algorithms having local access to a large object (e.g. the edges of a graph). One approach is to provide access via a circuit representing the graph. Beame et al. subsequently noted that these problems may be modeled in a type-two setting, where the large object is presented as a function. This has led to approaches from a diversity of perspectives, including proof theory, the theory of generic oracles, and cryptography.
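A toy instance of the type-two view, in the style of the pigeonhole-based TFNP problems (the concrete code is illustrative, not taken from the literature): the large object is a function f from {0, ..., N} to {0, ..., N-1}, given only as an oracle. By pigeonhole a collision f(i) = f(j) with i != j always exists, so the problem is total, and a solution is small and easy to verify; the interesting question is how many oracle queries finding one requires when N is exponentially large.

```python
# TFNP-style search in the type-two setting: the input is an oracle
# f: {0,...,N} -> {0,...,N-1}. A collision is guaranteed by pigeonhole,
# and any claimed solution (i, j) is cheap to verify.

def find_collision(f, N):
    seen = {}                  # value -> first index mapped to it
    for i in range(N + 1):
        v = f(i)
        if v in seen:
            return seen[v], i  # a small, verifiable certificate
        seen[v] = i

i, j = find_collision(lambda x: (x * x) % 10, 10)
assert i != j and (i * i) % 10 == (j * j) % 10
```

The naive search above makes up to N + 1 queries; oracle separations between TFNP subclasses are, in this presentation, lower bounds on the query complexity of such type-two functionals.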
Aims of the Workshop
Our aim is to bring together researchers doing basic research in higher-order complexity with those applying it in different areas. The goal is to clarify the foundations of the field and to provide a common framework and definition of the basic models, as a way of building bridges and searching for common approaches to problems in diverse areas. Possible activities towards achieving this aim include
- Tutorial sessions by experts in the field, providing an introduction to the foundations of higher-type complexity, as well as sessions on the main application areas described above
- Workshop-style talks (20-30 minutes) by individual researchers describing new work in the area. All participants, including students, will be given the opportunity to present their work.
- Problem sessions which will provide a forum for new ideas and open questions. Participants from diverse backgrounds will address problems from one area that impact another, or that are based on a synthesis of ideas from different sub-areas. For example:
– Daskalakis and Papadimitriou recently introduced the class CLS of continuous local search problems. Can we obtain oracle separations for this class along the lines of those obtained for other TFNP subclasses? This involves search complexity, feasible analysis, and foundations.
– Higher-order functional languages such as Haskell are a natural setting for computable analysis. What sort of complexity theory can we do in this setting?
– There has been recent interest in using probabilistic higher-order programming languages (e.g. Anglican, Church) for machine learning. What sort of complexity models can we develop in this setting? This and the previous question involve programming languages and feasible analysis.