In the $k$-center problem, you're given a set $V$ of points in Euclidean space, and you're asked to find a subset $C\subseteq V, |C|=k$ such that $\max_{v\in V}\min_{c\in C} d(v, c)$ is minimized, i.e., the maximum distance from any point to its closest center.
Now I am aware of the greedy $2$-approximation (farthest-first traversal), where you start with a random center, $C=\{c_1\}$, then at each step add the point farthest from $C$, repeating until $|C|=k$.
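To pin the greedy algorithm down, here is a minimal Python sketch (my own rendering, not from any reference); `dist` is assumed to be any metric, and the optional `first` argument just makes the random starting center reproducible:

```python
import random

def greedy_k_center(points, k, dist, first=None):
    """Farthest-first greedy: start from one (random) center, then
    repeatedly add the point farthest from the current center set."""
    centers = [first if first is not None else random.choice(points)]
    while len(centers) < k:
        # the point whose distance to its closest center is largest
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers
```

For example, on the line $\{0, 1, 10, 11\}$ with $k=2$, starting from $0$ it picks $11$ as the second center, for a cost of $1$.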
However, my question is about the local search approach. Namely, start with a set of $k$ random centers $C=\{c_1, ..., c_k\}$, then at each step, for every pair $(c_i, v)$ with $c_i\in C$ and $v\in V\setminus C$, check whether $(C\setminus\{c_i\})\cup\{v\}$ improves the cost. There are $O(nk)$ such pairs, and it takes $O(n)$ time to compute the cost of each swap, for a total of $O(n^2k)$ time per iteration. Repeat this until no swap improves the cost; the number of iterations $I$ is trivially $O(n^k)$, since there are only $\binom{n}{k}$ distinct solutions and the cost strictly decreases each iteration.
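A sketch of that single-swap local search (again my own Python rendering of the procedure described above): each pass scans all $O(nk)$ swaps, takes the best strictly improving one, and stops at a local optimum:

```python
def cost(points, centers, dist):
    """Max over all points of the distance to the closest center."""
    return max(min(dist(p, c) for c in centers) for p in points)

def local_search_k_center(points, k, dist, centers=None):
    """Single-swap local search for k-center: repeatedly replace one
    center c_i by a non-center v if that lowers the cost; terminate
    when no single swap helps."""
    if centers is None:
        centers = list(points[:k])  # arbitrary initial solution
    while True:
        best_cost, best_swap = cost(points, centers, dist), None
        for i in range(k):
            for v in points:
                if v in centers:
                    continue
                candidate = centers[:i] + [v] + centers[i + 1:]
                c = cost(points, candidate, dist)
                if c < best_cost:       # strictly improving swap
                    best_cost, best_swap = c, candidate
        if best_swap is None:           # local optimum reached
            return centers
        centers = best_swap
```

Since the cost strictly decreases each pass and there are finitely many solutions, this always terminates.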
My question is: once the algorithm terminates, does the local search guarantee any approximation ratio on the maximum distance? Is there any literature on this? I read in a paper that for the $k$-median problem, this approach gives a $5$-approximation.
Local search (with a single swap) doesn't give you a good approximation factor in the worst case for $k$-center, as illustrated by the following example.
Take a regular simplex in $\mathbb{R}^{k-1}$, and put $k$ points at each of the $k$ vertices, for a total of $k^2$ points.
Note that the distance between each pair of simplex vertices is 1. The optimal solution takes one point from each vertex, obtaining cost zero, and that's the only way to obtain cost zero: any other type of solution has cost 1.
So if you start local search with $C$ being any solution that is more than one swap away from such an optimal solution, no local improvement (in one step) is possible. For example, start local search with $C$ containing all $k$ of the points at a single vertex. That solution has cost 1, and no single swap (or even any sequence of $k-2$ swaps) can reduce the cost below 1, since the centers still miss at least one vertex.
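To make the example concrete, here's a small Python check (my own construction, with $k=4$; the geometry only matters through the metric, so the simplex is represented abstractly by "distance 0 within a vertex, 1 across vertices"):

```python
k = 4
points = list(range(k * k))                  # point p sits at vertex p // k
dist = lambda a, b: 0 if a // k == b // k else 1

def cost(centers):
    return max(min(dist(p, c) for c in centers) for p in points)

assert cost([v * k for v in range(k)]) == 0  # one point per vertex: optimal
start = list(range(k))                       # all k points at vertex 0
assert cost(start) == 1
# every single swap still leaves some simplex vertex uncovered:
for i in range(k):
    for v in points:
        if v not in start:
            assert cost(start[:i] + [v] + start[i + 1:]) == 1
```

So `start` is a local optimum for single-swap local search, yet its cost is 1 while the optimum is 0, i.e., the approximation ratio is unbounded.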