- cc.complexity-theory pcp
- Updated Wed, 22 Jun 2022 10:57:59 GMT

What follows might seem stupid (and probably reflects my poor understanding, so please bear with me).

I have a query about the PCP theorem. We know that after the first three steps, viz. degree reduction, expanderization, and gap amplification, we have a constraint graph $G$ with an improved gap but a huge alphabet (of size like $|\Sigma|^{d^t}$). It is this problem that the alphabet reduction step addresses.

My question is this: as outlined in Venkat Guruswami's lecture notes, Introduction to Composition, it seems to me that the high-level idea is to express the constraint $c_e$ over an edge $e$ as a Boolean constraint over Boolean variables. This by itself achieves nothing, and we also need to apply the PCP reduction, $P_e$, to this edge. This "looks like" a recursive invocation of the PCP, and this is where I start getting a little worried: it seems as if this recursive invocation would blow up the alphabet size all over again.

The authors offer some explanation by observing that this recursion has a "base case", namely, the "inner" PCP reduction applies only to constraints of constant size.

(By this I understand that the inner recursion is invoked only when we are looking at a constraint $c_e$ over a single edge, which is a binary constraint, but I still have not overcome the fear that we might somehow blow up the alphabet size instead of shrinking it.) To me, it still seems that recursively repeating the gap amplification step would only make matters worse by blowing up the alphabet size, unless we incorporate measures to handle the base case differently.

I hope my query (silly as it may be) is clear. Please let me know what essential part I am missing (or have misunderstood).

You are asking about Dinur's proof of the PCP theorem. The alphabet reduction step does use a PCP, but that PCP has very different parameters from the one you are constructing, and you do not necessarily have to use recursion to construct it. In particular, in Dinur's proof, because this inner PCP is applied only to constant-sized inputs, we do not care if it has a huge (say, exponential or even larger) blowup, which makes it comparatively easy to give a direct construction of a good enough PCP.
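To see why this is not a problem, here is a rough sketch of the parameter bookkeeping (the constants are illustrative, not the exact ones from Dinur's paper):

```latex
% After amplification the alphabet is large but constant, independent of n:
\[
  \Sigma' = \Sigma^{d^t}, \qquad |\Sigma'| = |\Sigma|^{d^t} = O(1).
\]
% The inner PCP sees one constraint at a time, an object of size
% poly(|\Sigma'|) = O(1), so even an exponential blowup is still a constant:
\[
  2^{\mathrm{poly}(|\Sigma'|)} = O(1).
\]
% Net effect: the instance grows by only a constant factor per edge, and the
% alphabet drops to a fixed \Sigma_0 (e.g. \{0,1\}^{O(1)}) that does not
% depend on \Sigma'.
```

So the "recursion" bottoms out immediately: the inner PCP is applied once, to constant-sized objects, and its output alphabet is fixed once and for all.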

The entire proof, including this stage, is described in several places (see the answer to this question), so you may find a different description that you like better. In particular, in my complexity textbook with Sanjeev Arora this is covered in Chapters 11 and 22, where we offer two alternative ways to achieve the alphabet reduction step. One uses a Hadamard-based PCP in the main text, but perhaps the simplest self-contained variant is the construction worked out in Exercise 22.5. We also have a table, in Section 22.2.1, that shows exactly what every step of the proof does to the alphabet size (and to other parameters such as the soundness error, size, and number of queries), which may allay your concern.
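To give a flavor of the machinery inside a Hadamard-based inner PCP, here is a small Python sketch of the Blum–Luby–Rubinfeld (BLR) linearity test on which it rests. This is only the testing primitive, not the full PCP, and the function names are mine:

```python
import itertools
import random

def hadamard_encode(a):
    """Hadamard codeword of a in {0,1}^n: the table of all parities <a, x> mod 2."""
    n = len(a)
    return {x: sum(ai & xi for ai, xi in zip(a, x)) % 2
            for x in itertools.product((0, 1), repeat=n)}

def blr_linearity_test(f, n, trials=100, rng=random):
    """BLR test: accept iff f(x) + f(y) = f(x XOR y) on random pairs x, y.

    A linear function (a Hadamard codeword) always passes; a function
    far from linear is rejected with probability growing in `trials`.
    """
    for _ in range(trials):
        x = tuple(rng.randrange(2) for _ in range(n))
        y = tuple(rng.randrange(2) for _ in range(n))
        z = tuple(xi ^ yi for xi, yi in zip(x, y))
        if (f[x] + f[y]) % 2 != f[z]:
            return False
    return True
```

The point relevant to your question: the table `f` has size $2^n$, exponential in the input, which would be disastrous globally, but since the inner PCP only ever encodes constant-sized objects, this cost is a constant per edge.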
