Theoretical Computer Science
cc.complexity-theory sat lower-bounds space-bounded space-complexity
Updated Wed, 22 Jun 2022 17:48:27 GMT

Best current space lower bound for SAT?

Following on from a previous question,

what are the best current space lower bounds for SAT?

By a space lower bound I here mean the number of worktape cells used by a Turing machine with a binary worktape alphabet. A constant additive term is unavoidable, since a TM can use its internal states to simulate any fixed number of worktape cells. However, I am interested in controlling the multiplicative constant that is usually left implicit: the usual setup allows arbitrary constant-factor compression via larger alphabets, so the multiplicative constant is not meaningful there, but with a fixed alphabet it should be possible to take it into account.

For instance, SAT requires more than $\log\log n + c$ space: if not, this space upper bound would yield a time upper bound of $n^{1+o(1)}$ by simulation, violating the combined $n^{1.801+o(1)}$ time-space lower bound for SAT (see the linked question). It also seems possible to push this argument further, to show that SAT requires at least $\delta\log n + c$ space for some small positive $\delta$ that is something like $0.801/C$, where $C$ is the constant in the exponent when a space-bounded TM is simulated by a time-bounded TM.
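To spell out the simulation step behind this (a sketch, using the standard configuration count for a machine with $q$ states, input length $n$, and $s$ worktape bits):

```latex
\begin{aligned}
\#\{\text{configurations}\} &\;\le\; q \cdot n \cdot s \cdot 2^{s} \\
&\;=\; q \cdot n \cdot (\log\log n + c) \cdot 2^{c}\log n
  \qquad (s = \log\log n + c)\\
&\;=\; O(n \log n \log\log n) \;=\; n^{1+o(1)} .
\end{aligned}
```

Since the running time can be bounded by the number of configurations (looping computations can be cut off), the time-space product would then be at most $n^{1+o(1)} \cdot O(\log\log n) = n^{1+o(1)}$.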

Unfortunately $C$ is usually quite large (and certainly at least 2 in the usual simulation, where the tapes of a TM are first encoded on a single tape via a larger alphabet). Such bounds with $\delta \ll 1$ are rather weak, and I would be especially interested in a space lower bound of $\log n + c$. An unconditional time lower bound of $\Omega(n^d)$ steps, for some large enough constant $d > 1$, would imply such a space lower bound via simulation. However, time lower bounds of $\Omega(n^d)$ for $d>1$ are not currently known, let alone for large $d$.
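A back-of-the-envelope version of that implication, assuming the simulation of an $s$-space machine runs in time $2^{Cs+o(s)}$ for $s = \Omega(\log n)$:

```latex
n^{d} \;\le\; \mathrm{time} \;\le\; 2^{C s + o(s)}
\quad\Longrightarrow\quad
s \;\ge\; \frac{d}{C}\,(1-o(1))\,\log n ,
```

so any constant $d > C$ would already force $s \ge \log n + c$ for all large enough $n$.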

Put differently, I'm looking for something that would be a consequence of superlinear time lower bounds for SAT, but which might be possible to obtain more directly.


Looks like the best bound known (for multitape Turing machines) is logarithmic.

Suppose $\delta\log n$ bits of binary worktape suffice to decide whether an $n$-bit CNF formula is satisfiable, for all large enough $n$. By the standard counting argument, a TM with $q$ states that uses at most $s$ bits of space has at most $qns2^s = 2^{s+\log n + \log s + \log q}$ distinct configurations. Whenever the machine accepts, there is a sequence of (nondeterministic) moves reaching an accepting state whose length is at most this number of configurations. With $s \le \delta\log n$, the number of configurations is at most $2^{(1+\delta)\log n + O(\log\log n)} = n^{1+\delta+o(1)}$ (note that $q$ stays the same for all input lengths $n$). On a separate counter tape, the simulating machine can maintain this quantity as a binary counter of $O(\log n)$ bits, decrement it at each step of the simulation, and terminate the computation if the counter ever reaches zero. The decrements cost an amortized constant factor of overhead, which is absorbed by the $o(1)$ term in the exponent, and the counter adds only $O(\log n)$ bits of space. Hence $n^{1+\delta+o(1)}$ steps are sufficient.

By assumption $s \le \delta\log n$, and the counter adds only $O(\log n)$ further bits, so the time-space product is at most $O(\log n) \cdot n^{1+\delta+o(1)} = n^{1+\delta+o(1)}$.

Rahul Santhanam showed in 2001 (see doi:10.1016/S0020-0190(00)00227-1) that the time-space product of a Turing machine deciding SAT must be at least $\Omega(n^{2-o(1)})$; his argument applies also to nondeterministic machines. Hence $1+\delta \ge 2$, so $\delta \ge 1$, and at least $\log n$ bits of binary worktape are needed.
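As a quick numerical sanity check of the configuration-count identity $qns2^s = 2^{s+\log n+\log s+\log q}$ (the values of $q$, $n$, $s$ below are made up for illustration):

```python
import math

# Illustrative (made-up) parameters: q states, n-bit input, s worktape bits.
q, n, s = 100, 10**6, 20

# Configuration count q * n * s * 2^s ...
count = q * n * s * 2**s

# ... equals 2^(s + log n + log s + log q), by taking log2 of each factor.
exponent = s + math.log2(n) + math.log2(s) + math.log2(q)
assert math.isclose(math.log2(count), exponent)
```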

More generally, additional worktapes and a larger worktape alphabet only change the exponent in the counting argument by a constant factor. This ultimately reduces the constant $\delta$, but the space lower bound remains $\Omega(\log n)$.
