Theoretical Computer Science
computability lambda-calculus turing-machines
Updated Mon, 20 Jun 2022 10:49:13 GMT

Realizability theory: difference in power between Lambda calculus and Turing Machines

I have three related subquestions, which are highlighted by bullet points below (no, they could not be split, if you are wondering). Andrej Bauer wrote, here, that some functions are realizable through a Turing machine, but not through lambda-calculus. A key step of his reasoning is:

However, if we use the lambda calculus, then [the program] c is supposed to compute a numeral representing a Turing machine out of a lambda term representing a function f. This cannot be done (I can explain why, if you ask it as a separate question).

  • I would like to see an explanation/informal proof.

I don't see how to apply Rice's theorem here; it would apply to the problem "are this Turing machine T and this lambda-term L equivalent?", because that predicate gives the same answer on equivalent terms. However, the required function might compute different, but equivalent, TMs for different, but equivalent, lambda-terms.

  • Moreover, if the problem is with introspection of a lambda-term, I think that passing a Gödel encoding of a lambda-term would also be acceptable, wouldn't it?

On the one hand, given that his example involves computing, in the lambda calculus, the number of steps needed by a Turing Machine to complete a given task, I'm not very surprised.

  • But since here the lambda calculus can't solve a Turing-machine-related problem, I wonder whether one can define a similar problem for the lambda calculus and prove it unsolvable for Turing machines, or whether there is actually a difference in power in favor of Turing Machines (which would surprise me).


John Longley has a very extensive survey article discussing the issues involved, "Notions of Computability at Higher Types".

The basic idea is that the Church-Turing thesis is only about functions from $\mathbb{N} \to \mathbb{N}$ -- and there's more to computation than that! In particular, when we write programs, we make use of functions of higher type (such as $(\mathbb{N} \to \mathbb{N}) \to \mathbb{N}$).

In order to fully define a model of higher-type computation, we need to specify the calling convention for functions, so that one function can call another function it receives as an argument. In the lambda calculus, the standard calling convention is that we represent functions by lambda-terms, and the only thing you can do with a lambda-term in the lambda calculus is to apply it. In typical encodings with Turing machines, we fix a particular Gödel numbering, and pass a function as an argument by passing a string representing the index of the machine.
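To make the TM-style convention concrete, here is a minimal Haskell sketch (the `Code` datatype and the `run`/`size` functions are hypothetical stand-ins of my own, not anything from the answer): the consumer receives syntax standing in for a Gödel encoding, so it can both run it and pattern-match on its structure.

```haskell
-- Hypothetical miniature "Gödel encoding": programs as a syntax tree.
data Code = Id | ConstZ | Succ Code | Compose Code Code
  deriving Show

-- Run a code on an input: the semantic view of the encoding.
run :: Code -> Int -> Int
run Id n            = n
run ConstZ _        = 0
run (Succ c) n      = run c n + 1
run (Compose f g) n = run f (run g n)

-- A TM-style consumer may also inspect the syntax of its argument,
-- e.g. measure its size -- something no lambda-calculus consumer can do.
size :: Code -> Int
size Id            = 1
size ConstZ        = 1
size (Succ c)      = 1 + size c
size (Compose f g) = 1 + size f + size g
```

For example, `size` distinguishes `Succ Id` from `Compose (Succ Id) Id` even though they denote the same function on numbers.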

The difference in encoding means that you can analyze the syntax of the argument with a TM-style encoding, and you cannot with a standard lambda-calculus representation. So if you receive a lambda-term for a function of type $\mathbb{N} \to \mathbb{N}$, you can only test its behavior by passing it particular $n$'s -- you can't analyze the structure of the term in any way. This is just not enough information to figure out the code of the lambda term.
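By contrast, a lambda-calculus-style consumer receives an opaque value of function type and can do nothing but apply it. A small Haskell sketch of this point (the names `probe`, `doubleByAdd`, `doubleByMul` are hypothetical illustrations): two intensionally different terms for the same function are indistinguishable to any consumer that can only apply them.

```haskell
-- A lambda-calculus-style consumer: all it can do is apply f.
probe :: (Int -> Int) -> Int
probe f = f 0 + f 1 + f 2

-- Two different "source programs" for the same function.
doubleByAdd, doubleByMul :: Int -> Int
doubleByAdd n = n + n
doubleByMul n = 2 * n
```

Since `probe` observes only input/output behavior, `probe doubleByAdd == probe doubleByMul`; no amount of such testing can recover which source term was actually passed in.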

One thing worth noting is that with higher types, if a language is less expressive at one order, it is more expressive one order up, because functions are contravariant. So similarly there are functions you can write in LC that you can't with a TM-style encoding (because they rely on the fact that you can pass functional arguments and know that the receiver can't look inside the function you give it).

EDIT: Here's an example of a function definable in PCF, but not in TM+Gödel encodings. I'll declare the isAlwaysTrue function

 isAlwaysTrue : ((unit → bool) → bool) → bool

which should return true if its argument ignores its argument and always returns true, should return false if its argument returns false on some input, and should go into a loop if its argument goes into a loop on some input. We can define this function pretty easily, as follows:

isAlwaysTrue p = p (λ(). true) ∧ p (λ(). false) ∧ p (λ(). ⊥)

where ⊥ is the looping computation and ∧ is the and-operator on booleans. This works because there are only three inhabitants of unit → bool in PCF, and so we can exhaustively enumerate them. However, in a TM+Gödel-encoding style model, p could test how long its argument takes to return an answer, and return different answers based on that. So the implementation of isAlwaysTrue with TMs would fail to meet the spec.
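For what it's worth, the definition above transcribes almost verbatim into Haskell, which is lazy like PCF: `undefined` plays the role of the looping term ⊥, and the left-to-right short-circuiting `&&` plays the role of PCF's ∧. This is a sketch of the same idea, not part of the original answer:

```haskell
-- Transcription of the PCF definition: exhaustively test the argument p
-- on the three PCF inhabitants of unit -> bool.
isAlwaysTrue :: ((() -> Bool) -> Bool) -> Bool
isAlwaysTrue p = p (\() -> True)
              && p (\() -> False)
              && p (\() -> undefined)  -- undefined stands in for the looping term
```

For instance, `isAlwaysTrue (\f -> f ())` evaluates to False: the second conjunct fails and `&&` short-circuits, so the `undefined` case is never forced; while `isAlwaysTrue (\_ -> True)` evaluates to True.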

Comments (5)

  • +2 – this is an excellent survey. thanks for the link ! — Sep 09, 2010 at 21:56  
  • +0 – I just realized I had forgot to accept an answer, though I meant to accept yours. Sorry! — Oct 05, 2013 at 21:02  
  • +0 – "The difference in encoding means that you can analyze the syntax of the argument with a TM-style encoding, and you cannot with a standard lambda-calculus representation": but what if you have representations for function composition? Also, what you say seems to suggest that HOL is more than a theory of a typed lambda calculus? — Feb 14, 2016 at 07:00  
  • +0 – Also, what about this: . Is this wrong in some way? — Feb 14, 2016 at 07:13  
  • +0 – Dear Neel, do you have an example for a function which can be realized in the lambda calculus model but not in the Turing model? — Sep 14, 2016 at 10:33  
