Theoretical Computer Science

# Proof assistant usage in complexity theory research?

Considering the topics covered at a conference like STOC, are any algorithms or complexity researchers actively using Coq or Isabelle? If so, how are they using it in their research? I assume most people wouldn't use such tools because the proofs would be too low-level. Is anybody using these proof assistants in a way that is critical to their research, as opposed to a nice supplement?

I am interested because I might start learning one of those tools and it would be fun to learn about them in the context of proofs of reductions, correctness, or run time.

## Solution

A general rule of thumb is that the more abstract/exotic the mathematics you want to mechanize, the easier it gets. Conversely, the more concrete/familiar the mathematics is, the harder it will be. So (for instance) rare animals like predicative point-free topology are vastly easier to mechanize than ordinary metric topology.

This might seem surprising at first, but it is basically because concrete objects like real numbers participate in a wild variety of algebraic structures, and proofs involving them can use any property from any view of them. So to be able to do the ordinary reasoning that mathematicians are accustomed to, you have to mechanize all of these structures. In contrast, highly abstract constructions have a (deliberately) small and restricted set of properties, so you have to mechanize much less before you can get to the good bits.

Proofs in complexity theory and algorithms/data structures tend (as a rule) to use sophisticated properties of simple gadgets like numbers, trees, or lists. E.g., combinatorial, probabilistic, and number-theoretic arguments routinely show up all at the same time in theorems in complexity theory. Getting proof assistant library support to the point where this is pleasant to do is quite a lot of work!
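To give a flavor of what mechanized reasoning about such "simple gadgets" looks like, here is a minimal sketch in Lean 4. The `len` function and the `len_append` lemma are local toy definitions (Lean's standard library already has equivalents); the point is that even a textbook one-liner about lists becomes an explicit induction:

```lean
-- Toy length function on lists, defined by recursion.
def len : List α → Nat
  | []      => 0
  | _ :: xs => 1 + len xs

-- A mechanized proof that length distributes over append.
-- On paper this is "obvious"; here it is an induction on xs,
-- with simp discharging each case using the equations for len,
-- the induction hypothesis, and associativity of addition.
theorem len_append (xs ys : List α) :
    len (xs ++ ys) = len xs + len ys := by
  induction xs with
  | nil => simp [len]
  | cons x xs ih => simp [len, ih, Nat.add_assoc]
```

A typical complexity-theoretic argument stacks many layers of facts like this (counting, probability bounds, number theory) on top of each other, which is why the library-building effort is the real bottleneck.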

One context where people are willing to put in the work is cryptographic algorithms. These algorithms obey very subtle constraints for deep mathematical reasons, and because crypto code runs in an adversarial environment, even the slightest error can be disastrous. So, for example, the CertiCrypt project has built a lot of verification infrastructure for the purpose of building machine-checked proofs of the correctness of cryptographic algorithms.