Theoretical Computer Science
reference-request computability big-list undecidability

Research on systematically attacking multiple instances of undecidable problems


This question is inspired by a recent popular question [1] about the boundary between decidable and undecidable problems (i.e., open problems in this area), and is intended as a sort of counterpoint.

There are at least some papers that attack multiple instances of undecidable problems. Individual instances of the general problem are not known a priori to be decidable or undecidable; the paper describes strategies/techniques for sorting some of them into decidable cases, and provides algorithms that decide the problem whenever an instance is found to be among the decidable cases.

What are papers of this genre, systematically attacking multiple instances of undecidable problems?

[1] "A simple problem whose decidability is not known", Reyzin, tcs.se




Solution

Let's say an algorithm $A$ solves a "special case" of the decision problem $L$ if, on every input $x$, $A(x)$ either outputs the correct answer $L(x)$ or outputs "?".
These algorithms (which may be randomized) are sometimes called "errorless heuristics" and have been studied in complexity theory, especially average-case complexity; see, e.g., this paper of Watson and references therein.
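
To make the definition concrete, here is a minimal sketch (my own toy illustration, not taken from the papers mentioned) of an errorless heuristic for a halting-type problem, with programs modeled as Python generators and an invented step budget:

```python
def errorless_halting_heuristic(program, budget=10_000):
    """Answer "does program() halt?" with True or "?", never incorrectly."""
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True       # observed halting, so the answer is certainly correct
    return "?"                # budget exhausted: abstain rather than guess

def halts_quickly():          # toy "yes" instance: halts after 3 steps
    yield from range(3)

def loops_forever():          # toy "no" instance: never halts
    while True:
        yield

print(errorless_halting_heuristic(halts_quickly))   # True
print(errorless_halting_heuristic(loops_forever))   # ?
```

Note that this particular heuristic never answers "no": abstaining on every non-halting program is permitted by the definition; those inputs are simply not among the special cases it solves.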

A basic question you can ask about a very difficult decision problem is whether it contains any nontrivial easy cases. E.g., if $L$ is undecidable, is there any (deterministic) errorless heuristic $A$ for $L$ that solves $L(x)$ on infinitely many inputs? This has been explored via the notion of immunity (along with co-immunity, bi-immunity, simplicity, and others) in computability theory. See, e.g., this wiki article. Complexity-theoretic analogues have been studied as well.
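
For reference, the standard definition (my paraphrase, in the notation above) is

$$L \text{ is immune} \iff L \text{ is infinite, yet no infinite computably enumerable set } W \text{ satisfies } W \subseteq L.$$

Bi-immunity (both $L$ and its complement $\overline{L}$ are immune) matches the question above exactly: if $L$ is bi-immune, then any deterministic errorless heuristic answers "?" on all but finitely many inputs, since the inputs on which it answers "yes" form a decidable (hence c.e.) subset of $L$, and those on which it answers "no" form one of $\overline{L}$.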

Since you mention multiple instances, I might also interpret the question as asking about algorithms which, given instances $x^1, \ldots, x^m$ of a decision problem (i.e., language) $L$, compute some partial information about the membership or non-membership of the given strings in $L$.

One of the earliest and most influential works of this type was by Jockusch, who in his dissertation and this 1968 paper defined "semi-recursive" languages. $L$ is semi-recursive if there is a computable "selector" function that, given instances $x^1, x^2$, selects one of the two strings as being "logically more likely" to lie in $L$. That is, we require that whenever at least one of $x^1, x^2$ lies in $L$, the string selected by the selector also lies in $L$.
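
The textbook example (standard, though not tied to any one of the papers above) is the left cut of a real number $r$: the set of rationals below $r$ is semi-recursive no matter how complicated $r$ is, because a downward-closed set contains the smaller of two candidates whenever it contains either one. A toy Python sketch:

```python
from fractions import Fraction
import math

# Standard example of a semi-recursive set: the left cut
# L = {rationals q : q < r} of a real r. Membership can be arbitrarily
# hard to decide for a suitably wild r, yet the selector is trivial:
# since L is downward closed, if either candidate lies in L, so does min.

def selector(q1: Fraction, q2: Fraction) -> Fraction:
    """Pick the candidate "logically more likely" to lie in the left cut."""
    return min(q1, q2)

# Sanity check against r = sqrt(2) (chosen only so we can test;
# the selector itself never needs to know r):
r = math.sqrt(2)
in_L = lambda q: q < r
q1, q2 = Fraction(3, 2), Fraction(7, 5)   # 1.5 (not in L), 1.4 (in L)
assert in_L(q1) or in_L(q2)
assert in_L(selector(q1, q2))             # the selected string is in L
```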

The study of polynomial-time analogues of this concept was initiated by Selman in this 1979 paper. Since then there has been a great deal of work on "p-selective" languages (the polynomial-time version of semi-recursive languages), along with "membership-comparable" languages and many other related concepts. I can't summarize all this work here, but consult, e.g., this article of Beigel, Fortnow, and Pavan, and this book of Hemaspaandra and Torenvliet. The concept of bounded queries in computability and complexity is also related; see this survey by Gasarch.
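
To connect this back to the multiple-instances reading of the question: a selector sharply limits the possible membership patterns of a batch of instances, which is one way bounded queries enter the picture. For a set with a linearly ordered selector, such as the left cut above, the members among any $m$ instances form a prefix of the selector-sorted order, so only $m+1$ of the $2^m$ conceivable answer vectors can occur (about $\log_2(m+1)$ queries' worth of information); I believe counting arguments of this flavor underlie results in the bounded-queries literature cited above. A sketch under those assumptions:

```python
from fractions import Fraction
from functools import cmp_to_key

# My own toy illustration (not from the cited papers): with the
# min-selector for a left cut, sorting instances by the selector makes
# the set's members a prefix of the order, so a batch of m instances
# admits only m+1 consistent membership patterns instead of 2**m.

def selector(q1, q2):
    return min(q1, q2)

def candidate_patterns(instances):
    """All member-sets consistent with the selector's ordering."""
    cmp = lambda a, b: -1 if selector(a, b) == a else 1
    order = sorted(instances, key=cmp_to_key(cmp))
    return [order[:k] for k in range(len(order) + 1)]  # the m+1 prefixes

qs = [Fraction(3, 2), Fraction(7, 5), Fraction(1, 1), Fraction(2, 1)]
for pattern in candidate_patterns(qs):
    print([str(q) for q in pattern])
# 5 candidate member-sets for 4 instances, rather than 2**4 = 16.
```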