General Computing
hard-drive performance dd secure-erase random-number-generator
Updated Sun, 12 Jun 2022 15:04:19 GMT

Why is my /dev/random so slow when using dd?

I am trying to semi-securely erase a bunch of hard drives. The following works at 20-50 MB/s:

dd if=/dev/zero of=/dev/sda


dd if=/dev/random of=/dev/sda 

does not seem to work. Also, when I type

dd if=/dev/random of=stdout

it only gives me a few bytes, regardless of what I pass for bs= and count=.

Am I using /dev/random wrong? What other info should I look for to move this troubleshooting forward? Is there some other way to do this with a script or something like

makeMyLifeEasy | dd if=stdin of=/dev/sda

Or something like that...


Both /dev/random and /dev/urandom use an "entropy pool". When the pool runs out, /dev/random waits for it to refill, which requires monitoring system behavior (keyboard input, mouse movement, etc.), whereas /dev/urandom will continue to give you pseudo-random data. /dev/random is theoretically higher quality, but /dev/urandom is almost certainly good enough for your purposes. (But even /dev/urandom is likely to be slower than some other methods. A faster, but lower quality, generator is probably good enough for erasing hard drives. It's not clear that an attacker would gain any advantage from knowing the sequence that's going to be generated, or that random numbers are better for this purpose than a sequence like 0, 1, 2, 3, 4, ....)
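On Linux you can see the kernel's entropy estimate directly, and check /dev/urandom's non-blocking behavior for yourself. A quick sketch (writing to /dev/null, so it's harmless to run):

```shell
# The kernel's current entropy estimate; on modern kernels (5.18+)
# this is pinned at 256 once the pool is initialized:
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy_avail: $entropy"

# /dev/urandom never blocks, so this completes immediately
# regardless of the pool estimate (16 MiB to /dev/null):
dd if=/dev/urandom of=/dev/null bs=1M count=16 2>&1 | tail -n 1
```

The same `dd` invocation against /dev/random on an old kernel would stall as soon as the pool estimate hit zero, which is exactly the symptom in the question.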

Quoting the random(4) man page:

If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.

UPDATE: The random(4) man page has been updated since I wrote that. It now says:

The /dev/random interface is considered a legacy interface, and /dev/urandom is preferred and sufficient in all use cases, with the exception of applications which require randomness during early boot time; for these applications, getrandom(2) must be used instead, because it will block until the entropy pool is initialized.

See also "Myths about /dev/urandom" by Thomas Hühn.

But /dev/urandom, even though it won't block, is likely to be too slow if you want to generate huge amounts of data. Take some measurements on your system before trying it.
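One way to take that measurement, plus a commonly used faster alternative: generate an AES-256-CTR keystream with openssl, keyed from /dev/urandom. This is a sketch — the output file below is just for illustration, and you would point the final redirect (or a `dd of=`) at your target device instead:

```shell
# First, measure /dev/urandom throughput on this machine
# (16 MiB to /dev/null is enough for a rough number):
dd if=/dev/urandom of=/dev/null bs=1M count=16

# Faster alternative: encrypt /dev/zero under a throwaway random key.
# The keystream is cryptographically strong but generated at AES
# speed, which is usually much faster than /dev/urandom on old
# kernels. 'head -c' caps the output; 1 MiB here for demonstration.
key=$(head -c 32 /dev/urandom | base64)
openssl enc -aes-256-ctr -pbkdf2 -nosalt -pass pass:"$key" \
    < /dev/zero 2>/dev/null | head -c 1048576 > random.bin
```

To wipe a disk you would replace `head -c 1048576 > random.bin` with something like `dd of=/dev/sdX bs=1M` (device name is a placeholder — double-check it before running).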

EDIT: The following is a digression on "true" random numbers vs. pseudo-random numbers. If all you're interested in is a practical answer to the question, you can stop reading now.

I've seen claims (including in other answers here) that /dev/random implements a "true" random number generator, as opposed to a pseudo-random number generator (PRNG). For example, the Wikipedia article makes such a claim. I don't believe that's correct. There's some discussion of it here which refers to hardware random number generators, but I see no evidence that /dev/random typically uses such a device, or that typical computers even have such a device. Generators like /dev/random do differ from PRNGs like the C rand() function in that they're not deterministic, since they harvest entropy from sources that are practically unpredictable.

I'd say there are three classes of "random" number generators:

  1. Deterministic PRNGs, like C's rand() function, which use an algorithm to generate repeatable sequences that have (more or less) the statistical properties of a truly random sequence. These can be good enough for games (given a good way of seeding them), and are necessary for applications that require repeatability, but they're not suitable for cryptography.

  2. Generators like /dev/random and /dev/urandom that harvest entropy from some practically unpredictable source like I/O activity (this is why pounding on the keyboard or moving the mouse can cause /dev/random to produce more data). It's not clear (to me) whether these satisfy the definition of a PRNG (I've seen definitions that say a PRNG is deterministic), but neither are they true random number generators.

  3. Hardware random number generators that are physically unpredictable even with complete knowledge of their initial state, and that additionally use mathematical techniques to ensure the right statistical properties.
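The difference between classes 1 and 2 is easy to demonstrate from the shell. Bash's $RANDOM is a class-1 deterministic PRNG (assigning to RANDOM seeds it), while /dev/urandom is class 2:

```shell
# Class 1: the same seed reproduces the same sequence
# (within one bash version):
seq_a=$(bash -c 'RANDOM=42; echo $RANDOM $RANDOM $RANDOM')
seq_b=$(bash -c 'RANDOM=42; echo $RANDOM $RANDOM $RANDOM')
echo "seeded runs: $seq_a vs $seq_b"

# Class 2: /dev/urandom mixes in environmental entropy, so two
# reads are never expected to repeat:
read_a=$(head -c 16 /dev/urandom | od -An -tx1)
read_b=$(head -c 16 /dev/urandom | od -An -tx1)
```

The seeded runs match each other exactly; the two /dev/urandom reads differ (with overwhelming probability), even though no one "reseeded" anything between them.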

Comments (3)

  • +2 – Even /dev/urandom is kind of slowish if you have to fill in huge amounts of disk space (like whole partitions, before creating encrypted filesystems on them). This should be taken as small addition to the excellent answer and detailed explanation. — Nov 24, 2011 at 05:55  
  • +0 – Since you cannot compute/derive/create/... more than one bit of entropy from one random bit, anything which generates/outputs more 'random' bits than it received as input is by definition pseudo-random at best. Hence, /dev/urandom clearly is pseudo-random. /dev/random differs in that it tries to make a conservative estimate of its input's entropy and does not output more entropy than it (thinks it) actually can. This is independent of the presence of a dedicated TRNG device, because true entropy can also be obtained from independent events of any kind, like keyboard or network IO vs. time. — Feb 12, 2016 at 12:35  
  • +0 – Nowadays, many computers will have hardware random number generators (e.g. modules or on-chip generators) that contribute to the "/dev/random" entropy pool. Try running rngd --list to list the entropy sources used, it may list a TPM or something like "Intel RDRAND Instruction RNG". — Jul 22, 2020 at 16:52