- cc.complexity-theory approximation-hardness pcp max-cut unique-games-conjecture
- Updated Sun, 19 Jun 2022 22:20:06 GMT

I am studying the Unique Games Conjecture and the famous reduction to Max-Cut of Khot et al. From their paper and elsewhere on the internet, most authors use (what to me is) an implicit equivalence between the MAX-CUT reduction and constructing particular tests for long codes. Because of my own lack of clarity about that equivalence, I struggle to follow this train of thought.

It also seems clear from these expositions that one could describe the reduction purely in terms of graphs, but by coincidence or preference nobody chooses to do it that way. For example, in O'Donnell's lecture notes he hints that the long-code test corresponds to a natural definition of the edges in the graph being constructed, but since it is not spelled out, the rule seems to depend on the choice of a cut to define the boolean function being tested, which has left me rather confused.

So I'm asking for someone to explain the reduction "just" graph-theoretically. I think this will help me understand the equivalence between the two viewpoints.

Let me see if I can clarify this, on a high level. Assume the UG instance is a bipartite graph $G = (V \cup W, E)$ together with bijections $\{\pi_e\}_{e \in E}$, where $\pi_e\colon \Sigma \to \Sigma$ and $|\Sigma| = m$. You want to construct a new graph $H$ so that if the UG instance is $(1-\delta)$-satisfiable, then $H$ has a large cut, and if the UG instance is not even $\delta$-satisfiable, then every cut of $H$ is small.

The graph $H$ contains, for each vertex in $W$, a cloud of $2^m$ points, each labeled by some $x \in \{-1, 1\}^\Sigma$. The intention is that you should be able to interpret a long code encoding of the labels of $W$ as a cut of $H$. Recall that to encode some $\sigma \in \Sigma$ with the long code, you use a boolean function $f\colon \{-1, 1\}^\Sigma \to \{-1, 1\}$; in particular it is the dictator function $f(x) = x_\sigma$. Let's produce a cut $S\cup T$ (i.e. a bipartition of the vertices) from the long code encoding as follows. If $w \in W$ has a label encoded by the boolean function $f$, go to the cloud of vertices in $H$ corresponding to $w$, and put in $S$ all vertices in the cloud that are labeled by some $x$ for which $f(x) = 1$. All others go to $T$. You can do this backwards to assign boolean functions to all $w \in W$ based on a cut of $H$.
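As a concrete sanity check, here is a small Python sketch of this cloud-to-cut correspondence for a single vertex $w$, with an assumed small value of $m$ (the names `cloud`, `dictator`, and `cut_from_function` are illustrative, not from the paper):

```python
from itertools import product

m = 3  # |Sigma|; assumed small so the cloud can be enumerated explicitly

# The cloud for a vertex w of the UG instance: one point per x in {-1, 1}^Sigma.
cloud = list(product([-1, 1], repeat=m))  # 2^m labeled points

def dictator(sigma):
    """Long-code encoding of the label sigma: the function f(x) = x_sigma."""
    return lambda x: x[sigma]

def cut_from_function(f):
    """Turn a boolean function on the cloud into a bipartition (S, T):
    S gets the points where f(x) = 1, T gets the points where f(x) = -1."""
    S = [x for x in cloud if f(x) == 1]
    T = [x for x in cloud if f(x) == -1]
    return S, T

S, T = cut_from_function(dictator(0))
# A dictator function splits each cloud exactly in half.
assert len(S) == len(T) == 2 ** (m - 1)
```

Going backwards, any bipartition of the cloud defines a boolean function on $\{-1,1\}^\Sigma$ in the same way, which is what the soundness analysis decodes.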

In order for the reduction to work, you need to be able to tell *only by looking at the value of a cut $S\cup T$* whether the boolean functions corresponding to the cut are close to a long code encoding of some assignment of labels to $W$ that satisfies a lot of the UG constraints of $G$. So the question is: what information do we get from the *value* of a cut $S \cup T$? Consider any vertex $a$ with label $x$ in the cloud corresponding to $w$, and any vertex $b$ with label $y$ in the cloud corresponding to $w'$ (in the reduction we only look at pairs from different clouds, i.e. $w \neq w'$). We said that the cut can be used to derive boolean functions $f_w$ and $f_{w'}$. Now if there is an edge $(a,b)$ in $H$, then $(a, b)$ is cut if and only if $f_w(x) \neq f_{w'}(y)$. Therefore, using only the value of a cut to tell if the boolean functions it induces are "good" is the same as having a test that, given boolean functions $\{f_w\}_{w \in W}$, only asks for what fraction of some specified list of pairs $((w, x), (w', y))$ we have $f_w(x) \neq f_{w'}(y)$.

In other words, whenever Ryan says in the notes "test if $f_w(x) \neq f_{w'}(y)$", what he really means is "in $H$, add an edge between the vertex in the cloud of $w$ labeled by $x$ and the vertex in the cloud of $w'$ labeled by $y$". That is, for every $v\in V$, every two of its neighbors $w, w'$, and every $x,y \in \{-1, 1\}^m$, include the edge between the vertex in the cloud of $w$ labeled by $x\circ \pi_{v,w}$ and the vertex in the cloud of $w'$ labeled by $y \circ \pi_{v,w'}$, and assign the edge weight $(({1-\rho})/{2})^d(({1+\rho})/{2})^{m-d}$, where $d$ is the Hamming distance between $x$ and $y$. In this way the value of a cut divided by the total edge weight is exactly equal to the success probability of the test.
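To see the last claim in action, here is a brute-force Python sketch for a single pair of clouds, with both permutations taken to be the identity and assumed small values of $m$ and $\rho$. For matching dictator cuts, the cut fraction should come out to exactly $(1-\rho)/2$, the probability that the noise flips the dictated coordinate:

```python
from itertools import product

m = 3          # |Sigma|; assumed small so all 2^m x 2^m edges can be enumerated
rho = 0.5      # noise/correlation parameter of the test (assumed value)

points = list(product([-1, 1], repeat=m))

def weight(x, y):
    """Edge weight between cloud points labeled x and y:
    ((1-rho)/2)^d * ((1+rho)/2)^(m-d), where d is the Hamming distance."""
    d = sum(1 for xi, yi in zip(x, y) if xi != yi)
    return ((1 - rho) / 2) ** d * ((1 + rho) / 2) ** (m - d)

# One pair of clouds; a cut given by the same dictator f(x) = x_0 on both.
f = lambda x: x[0]

total = sum(weight(x, y) for x in points for y in points)
cut   = sum(weight(x, y) for x in points for y in points if f(x) != f(y))

# Cut value divided by total edge weight = success probability of the test,
# which for matching dictators is the chance the noise flips coordinate 0.
assert abs(cut / total - (1 - rho) / 2) < 1e-9
```

The weights over each fixed $x$ sum to $1$ (they are exactly the probabilities of the noisy $y$), so dividing by the total edge weight recovers the probability that the test rejects, i.e. that the edge is cut.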

- +0 – This is an excellent answer, which I will need to study in some more depth. I have a minor follow-up question: should I be suspicious that a reduction I expect to be deterministic still has this randomized component of generating a $\mu$? — Sep 10, 2014 at 13:57
- +0 – Sorry, this is simulated by adding the edges for all vectors in the support of the distribution of $x\mu$ and assigning the edge weights proportional to their probabilities. Fixed. — Sep 10, 2014 at 14:30
