Theoretical Computer Science
cc.complexity-theory randomness derandomization expanders
Updated Wed, 15 Jun 2022 13:35:50 GMT

Error reduction with expanders and derandomization


In "standard" error reduction with an expander, if a randomize algorithm uses $n^d$ random bits, we need $n^d+O(n)$ random bits to achieve $2^{-O(n)}$ error probability. Now, if the algorithm has a good error probability to begin with, say 1/n, without expanders we need to run it only O(n/log n) times to achieve the same error and use $n^d\times O(n/\log n)$ random bits. If I'm correct with expanders it doesn't help (beside a little in the first step), meaning we still need $n^d+O(n)$ random bits. There exist some method, such that a good algorithm does have an affect on it (uses say $n^d+O(n/\log n)$ random bits)? There exist some barrier for achieving such a result?

*The question treats the randomized algorithm in a black-box manner.
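
For concreteness, here is a toy sketch (in Python) of the two random-bit budgets being compared. The function names, the majority-vote bound $(4p)^{t/2}$, the degree-8 walk constant, and the toy parameters are illustrative assumptions, not part of the question.

```python
import math

def bits_independent_repetition(seed_bits: int, start_error: float, target_exponent: int) -> int:
    """Random bits for naive amplification: run the black-box algorithm t times
    with fresh randomness and take a majority vote.  With per-run error p < 1/4,
    the majority of t runs errs with probability at most (4p)^(t/2), so error
    2^(-k) needs roughly t = 2k / log2(1/(4p)) runs, each costing seed_bits."""
    assert 0 < start_error < 0.25
    t = math.ceil(2 * target_exponent / math.log2(1 / (4 * start_error)))
    return seed_bits * t

def bits_expander_walk(seed_bits: int, target_exponent: int, bits_per_step: int = 3) -> int:
    """Random bits for expander-walk amplification: one seed of seed_bits picks
    the starting vertex of a constant-degree expander on {0,1}^seed_bits, and
    each of the O(k) walk steps costs only log(degree) bits (taken here as 3,
    i.e. a degree-8 expander), independent of the algorithm's initial error."""
    return seed_bits + bits_per_step * target_exponent

if __name__ == "__main__":
    n, d = 1024, 2  # toy parameters: n^d seed bits, target error 2^(-n)
    print(bits_independent_repetition(n ** d, 1 / n, n))  # n^d * O(n / log n) bits
    print(bits_expander_walk(n ** d, n))                  # n^d + O(n) bits
```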




Solution

You can't expect this to be true, since you can reduce the error of a constant-error algorithm to $1/n$ using only $O(\log n)$ additional bits via expanders (and, by the way, you don't really even need these extra bits to do that). So if an error of $1/n$ could be brought down to $\exp(-n)$ using only $O(n/\log n)$ additional bits, then you could always go from constant error to $\exp(-n)$ error using $O(\log n)+O(n/\log n)=O(n/\log n)$ additional bits.
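
To spell out the counting (the final lower-bound claim is the standard fact about black-box amplification, stated here as an assumption rather than proved):

$$\text{constant error} \;\xrightarrow{\;+\,O(\log n)\text{ bits}\;}\; 1/n \text{ error} \;\xrightarrow{\;+\,O(n/\log n)\text{ bits (hypothetical)}\;}\; 2^{-\Omega(n)} \text{ error},$$

for a total of only $O(\log n)+O(n/\log n)=O(n/\log n)$ extra random bits, whereas any black-box amplification from constant error down to $2^{-\Omega(n)}$ is known to require $\Omega(n)$ extra random bits.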





Comments (1)

  • +0 – Thanks. This means that even if we have a very good algorithm, say with subexponential error probability, we still need $O(n)$ additional random bits. Hence, for error reduction it is pretty much useless, right? (Besides the fact that it can reduce the number of times the original algorithm is run.) — Apr 24, 2011 at 16:49