Theoretical Computer Science
ds.algorithms randomized-algorithms derandomization
Updated Wed, 22 Jun 2022 17:45:50 GMT

Efficient and simple randomized algorithms where determinism is difficult

I often hear that for many problems we know very elegant randomized algorithms, but no deterministic solutions, or only more complicated ones. However, I know only a few examples of this. Most prominently:

  • Randomized Quicksort (and related geometric algorithms, e.g. for convex hulls)
  • Randomized Mincut
  • Polynomial Identity Testing
  • Klee's Measure problem

Among these, only polynomial identity testing seems to be really hard without the use of randomness.

Do you know more examples of problems where a randomized solution is very elegant or very efficient, but deterministic solutions are not? Ideally, the problems should be easy to motivate for laymen (unlike e.g. polynomial identity testing).


Sorting nuts and bolts

The following problem was suggested by Rawlins in 1992: Suppose you are given a collection of n nuts and n bolts. Each bolt fits exactly one nut, and otherwise, the nuts and bolts have distinct sizes. The sizes are too close to allow direct comparison between pairs of bolts or pairs of nuts. However, you can compare any nut to any bolt by trying to screw them together; in constant time, you will discover whether the bolt is too large, too small, or just right for the nut. Your task is to discover which bolt fits each nut, or equivalently, to sort the nuts and bolts by size.

A straightforward variant of randomized quicksort solves the problem in $O(n \log n)$ time with high probability. Pick a random bolt; use it to partition the nuts; use the matching nut to partition the bolts; and recurse. However, finding a deterministic algorithm that even runs in $o(n^2)$ time is nontrivial. Deterministic $O(n\log n)$-time algorithms were finally found in 1995 by Bradford and independently by Komlós, Ma, and Szemerédi. Under the hood, both algorithms use variants of the AKS parallel sorting network, so the hidden constant in the $O(n\log n)$ time bound is quite large; the hidden constant for the randomized algorithm is 4.
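The partition-and-recurse idea above can be sketched in Python. Here `compare` stands in for the physical trial-fit oracle, and sizes are modeled as distinct integers purely for illustration; this is a minimal sketch of the randomized algorithm, not any of the cited implementations:

```python
import random

def compare(nut, bolt):
    """Trial-fit oracle (modeled): -1 if the bolt is too small for the nut,
    0 if it fits exactly, +1 if it is too large."""
    return (bolt > nut) - (bolt < nut)

def match_nuts_and_bolts(nuts, bolts):
    """Return the matched (nut, bolt) pairs in increasing size order,
    using only nut-vs-bolt comparisons."""
    if not nuts:
        return []
    pivot_bolt = random.choice(bolts)
    # Partition the nuts with the pivot bolt, locating its matching nut.
    small_nuts, large_nuts, pivot_nut = [], [], None
    for nut in nuts:
        c = compare(nut, pivot_bolt)
        if c == 0:
            pivot_nut = nut          # the unique nut this bolt fits
        elif c < 0:
            large_nuts.append(nut)   # bolt too small => nut is larger
        else:
            small_nuts.append(nut)   # bolt too large => nut is smaller
    # Partition the remaining bolts with the matching nut.
    small_bolts, large_bolts = [], []
    for bolt in bolts:
        if bolt == pivot_bolt:
            continue
        if compare(pivot_nut, bolt) < 0:
            small_bolts.append(bolt)  # bolt too small for the pivot nut
        else:
            large_bolts.append(bolt)
    return (match_nuts_and_bolts(small_nuts, small_bolts)
            + [(pivot_nut, pivot_bolt)]
            + match_nuts_and_bolts(large_nuts, large_bolts))

print(match_nuts_and_bolts([5, 1, 4, 2, 3], [3, 5, 2, 1, 4]))
# -> [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
```

Note that no nut is ever compared to another nut, and no bolt to another bolt; the random pivot gives the usual quicksort recurrence, hence $O(n \log n)$ expected comparisons.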

  • Noga Alon, Manuel Blum, Amos Fiat, Sampath Kannan, Moni Naor, and Rafail Ostrovsky. Matching nuts and bolts. Proc. 5th Ann. ACM-SIAM Symp. Discrete Algorithms, 690–696, 1994.
  • Noga Alon, Phillip G. Bradford, and Rudolf Fleischer. Matching nuts and bolts faster. Inform. Proc. Lett. 59(3):123–127, 1996.
  • Phillip G. Bradford. Matching nuts and bolts optimally. Tech. Rep. MPI-I-95-1-025, Max-Planck-Institut für Informatik, 1995.
  • Phillip G. Bradford and Rudolf Fleischer. Matching nuts and bolts faster. Proc. 6th Int. Symp. Algorithms Comput., 402–408, 1995. Lecture Notes Comput. Sci. 1004.
  • János Komlós, Yuan Ma, and Endre Szemerédi. Matching nuts and bolts in $O(n\log n)$ time. SIAM J. Discrete Math. 11(3):347–372, 1998.
  • Gregory J. E. Rawlins. Compared To What?: An Introduction to the Analysis of Algorithms. Computer Science Press/W. H. Freeman, 1992.

Comments (5)

  • +2 – This is a beautiful example, but it's an oracle problem. Is there some way to remove the oracle from it? — Nov 24, 2012 at 21:40  
  • +0 – Got a link to the 98 Szemeredi paper? How is this hard? In parallel compare each bolt to a unique nut and put each pair in sorted order; removing matched elements. In log(n) steps merge the sorted nbnbnbnbnb sequences, kicking out matches as they arise. EDIT: Yeah the noncomparability of nnn and bbbb strings is annoying on the merge step. — Nov 29, 2012 at 23:06  
  • +0 – @ChadBrewbaker Suppose in every pair but one, the bolt is smaller than the nut. (Yes, this is possible.) Now what does your algorithm do? In other words, "annoying" = "the whole problem". — Nov 30, 2012 at 05:29  
  • +0 – I was looking for the Szemeredi paper and thinking out loud how it is hard. Yes, I concur that a merge based approach is nontrivial; but Vishkin's papers on parallel graph connectivity leave a gut feeling it is not impossible. — Nov 30, 2012 at 20:08  
  • +0 – With each comparison from a nut and bolt you get a directed edge added to the graph or a match which removes both vertices. The goal would be to merge connected components in such a way that they collapse all matches between them with a linear amount of work and keep the storage size of edges in a connected component bounded to linear space. — Nov 30, 2012 at 20:14  
