No.115 Intensional and extensional aspects of computation: From computability and complexity to program analysis and security

NII Shonan Meeting Seminar 115

Overview

Models of computation branched into extensional and intensional in 1936, almost as soon as they were invented: Alonzo Church’s negative solution of the Entscheidungsproblem was based on his lambda calculus, whereas Alan Turing’s solution was based on his machines. For Church, function abstraction was given as a basic operation, the lambda abstraction. For Turing, program abstraction was a process of constructing a machine for some required function, and then describing that machine as a program to be fed to a universal machine. There are, of course, many different machines implementing the same function, and many different programs describing the same machine to different universal machines. So there are many programs for every function; in fact, infinitely many. Much later, the binary relation identifying the programs that describe the same set-theoretic function came to be called extensional equivalence. The terminology goes back to Hermann Grassmann, who construed sets as the extensions of mathematical objects. On the other hand, the theory and practice of computation where the differences between the various algorithms or programs that implement the same function do matter (e.g. because some take more time, or require longer programs) came to be called intensional. The term goes back to Grassmann again, but the meaning that he assigned to it was different from the present use.
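
For concreteness, here is a minimal illustration (in Haskell; the example and the names fibSlow and fibFast are purely illustrative, not part of the seminar text): two programs that compute the same function and are therefore extensionally equivalent, yet differ intensionally, since one takes exponential time and the other linear time.

    -- Two extensionally equivalent programs: both compute the Fibonacci function,
    -- so the extensional view identifies them. Intensionally they differ:
    -- fibSlow recomputes subproblems and takes exponential time,
    -- while fibFast accumulates and takes linear time.
    fibSlow :: Integer -> Integer
    fibSlow 0 = 0
    fibSlow 1 = 1
    fibSlow n = fibSlow (n - 1) + fibSlow (n - 2)

    fibFast :: Integer -> Integer
    fibFast n = go n 0 1
      where
        go 0 a _ = a
        go k a b = go (k - 1) b (a + b)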

The simple distinction between the extensional and the intensional views of computation is thus that in the extensional view there is a single program for each function, whereas in the intensional view there are many. To add a wrinkle, Alonzo Church initially did not impose on his abstraction operation the conversion rule that precludes the possibility that, besides the lambda-abstraction of a function, there might be other lambda terms whose applications yield the same function. The extensionality rule that identifies all such terms was imposed later, leading to what is now known as the extensional lambda calculus. The non-extensional lambda calculus, lacking the extensionality rule, is sometimes used as a model of intensional computation. While such uses are surely justified by the interesting results that they make possible, the mere presence of an operation that selects an abstraction, albeit one among many, factors out many intensional properties. A sequence of results from the 1980s (Scott, Koymans, Hayashi…) shows that this is unavoidable, since every non-extensional model contains an extensional model as a computable retract. On the other hand, identifying the extensionally equivalent programs is, of course, not computable in general.
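
In standard notation (a textbook formulation, not spelled out in the text above), the rule in question is the η-rule; adding it to the β-rule yields the extensional lambda calculus λβη, and over the other conversion rules it is equivalent to the extensionality rule (ext):

    (\beta)   (\lambda x.\,M)\,N = M[N/x]
    (\eta)    \lambda x.\,M\,x = M,   provided x \notin \mathrm{FV}(M)
    (ext)     if M\,x = N\,x for some variable x \notin \mathrm{FV}(M\,N), then M = N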

But just as the lambda calculus factors out the intensional properties of computations, the intensional models factor out the extensional properties. Which extensional properties of computations can be recognized on the corresponding programs? Only the trivial ones, says Rice’s Theorem, assuming that “recognized” is taken to mean “decided”. Only the finitely approximable ones, say the theorems of Myhill-Shepherdson, Rice-Shapiro, and Kreisel-Lacombe-Shoenfield, assuming that “recognized” is taken to mean “computable”.
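
For instance, in one standard formulation (writing \varphi_e for the partial computable function computed by program e, and \mathcal{F} for a set of such functions; this is the textbook phrasing, not quoted from the text above):

    Rice:          if \emptyset \neq \{ e : \varphi_e \in \mathcal{F} \} \neq \mathbb{N},
                   then \{ e : \varphi_e \in \mathcal{F} \} is undecidable.

    Rice-Shapiro:  \{ e : \varphi_e \in \mathcal{F} \} is computably enumerable iff there is a
                   computably enumerable set F of finite functions such that
                   \varphi_e \in \mathcal{F} \iff \exists \theta \in F.\ \theta \subseteq \varphi_e.

The second statement is the “finitely approximable” condition: membership of \varphi_e in \mathcal{F} is witnessed by a finite fragment of \varphi_e.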

Hence the gap between the extensional and the intensional models of computation: neither subsumes the other. Hence the schism between the Western and the Eastern theories of computation, algorithmics vs. semantics, as diagnosed by Yuri Gurevich in the mid-1990s.

A schism is not necessarily a bad thing. Scott’s domain theory used the negative result that computable extensional properties must be finitely approximable as a fulcrum to lift the world of denotational semantics of programming languages. Cousot’s abstract interpretation extended it further into extensional semantics of programs. On the other hand, the problems of capturing the intensional properties, such as time complexity or the precision of program analysis and verification, in extensional models generated solutions that enriched both sides of the gap, both technically and conceptually.
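
As a toy illustration of the abstract-interpretation idea (a sketch with our own naming, not Cousot’s formulation): interpret arithmetic on a finite abstraction of the integers, here their signs; the abstract computation is decidable, at the price of precision.

    -- A toy abstract interpretation in Haskell: integers abstracted by their sign.
    -- AnySign is the "unknown" element used when precision is lost.
    data Sign = Neg | Zero | Pos | AnySign deriving (Eq, Show)

    -- Abstraction map from concrete integers to signs.
    alpha :: Integer -> Sign
    alpha n
      | n < 0     = Neg
      | n == 0    = Zero
      | otherwise = Pos

    -- Abstract multiplication: exact on signs.
    mulS :: Sign -> Sign -> Sign
    mulS Zero _    = Zero
    mulS _ Zero    = Zero
    mulS AnySign _ = AnySign
    mulS _ AnySign = AnySign
    mulS a b       = if a == b then Pos else Neg

    -- Abstract addition: Pos + Neg could have any sign, so precision is lost.
    addS :: Sign -> Sign -> Sign
    addS Zero s  = s
    addS s Zero  = s
    addS Pos Pos = Pos
    addS Neg Neg = Neg
    addS _ _     = AnySign

    -- Soundness: for all integers x and y, mulS (alpha x) (alpha y) and
    -- addS (alpha x) (alpha y) approximate alpha (x * y) and alpha (x + y),
    -- where a sign is approximated by itself and by AnySign.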

Yet the bridge has not been built. Program semantics is still high-level and abstract; algorithmics is still low-level and very concrete. Is this the way it should be, or are we missing something by keeping the Church-Turing legacies so far apart?
