Developers Club geek daily blog


Many of you have surely run into myths about /dev/urandom and /dev/random more than once. Perhaps you even believe some of them. In this post we will debunk these myths and sort out the real strengths and weaknesses of these random number generators.

Myth 1: /dev/urandom is insecure. For cryptography, always use /dev/random

This is usually claimed about fairly recent Linux-based systems, not about UNIX-like systems in general. In fact, /dev/urandom is the preferred random number generator for cryptographic tasks on UNIX-like systems.

Myth 2: /dev/urandom is a pseudorandom number generator (PRNG), while /dev/random generates "truly" random numbers

In fact, both are cryptographically secure pseudorandom number generators (CSPRNGs). The differences between them are very small and have nothing to do with their degree of randomness.

Myth 3: using /dev/random for cryptographic tasks is unambiguously better. There is no point in using /dev/urandom, even if it were just as secure

In fact, /dev/random has one very unpleasant problem: it blocks.

But that's a good thing! The more entropy in the /dev/random pool, the higher the level of randomness. /dev/urandom, on the other hand, keeps producing insecure random numbers even after the entropy is exhausted!

No. "Entropy exhaustion" is a bogeyman, even leaving aside the availability problems and the user workarounds that follow from them. About 256 bits of entropy are quite enough to generate computationally secure random numbers for a VERY long time.
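To get a feel for why 256 bits is "enough forever", here is a back-of-the-envelope calculation. The attacker's rate of 10**18 guesses per second is an assumption of mine, chosen to be absurdly generous:

```python
# Rough arithmetic: how long would it take to exhaust a 256-bit seed space?
# Assumption: a hypothetical attacker testing 10**18 seeds per second.
seeds = 2 ** 256                      # number of possible 256-bit seeds
guesses_per_second = 10 ** 18         # an extremely generous attacker
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = seeds / (guesses_per_second * seconds_per_year)
print(f"{years_to_exhaust:.2e} years")  # on the order of 10**51 years
```

For comparison, the universe is roughly 1.4e10 years old, so the search would outlast it by dozens of orders of magnitude.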

Far funnier is something else: how can /dev/random possibly know how much entropy it still has in stock?

But cryptographers keep going on about constantly refreshing the generator's internal state (re-seeding). Doesn't that contradict your last statement?

Partly true. Yes, the random number generator constantly refreshes its internal state from the various entropy sources available to it. But the reasons for that are (partly) different. I am not claiming that using entropy is bad. Not at all. I am only talking about the harm of blocking when the entropy estimate drops.

That's all very well, but even the man page for /dev/(u)random contradicts your statements. Does anybody at all share your point of view?

I am not disputing the man page at all. My guess is that you don't yet fully grasp all this cryptographic jargon, and so you read the man page as confirming that /dev/urandom is insecure for cryptographic tasks. In fact, the man page only advises against using /dev/urandom in certain cases, and in my view those cases are not critical. At the same time, the man page recommends using /dev/urandom for "normal" cryptographic tasks.

Appealing to authority is nothing to be proud of, either. In cryptography you should weigh questions carefully, listening to the specialists in each specific area.

And yes, many experts share my view that /dev/urandom is the best random number generator for cryptography on UNIX-like systems. As you can guess, their collective opinion shaped mine, and not the other way around.

* * *
Probably many of you find all this hard to believe. "He must be mistaken!" Well, let's go through everything said above in detail, and you can decide whether I am right or not. But before we start, ask yourself: what is randomness? More precisely, what kind of randomness are we talking about here? I am not trying to be condescending. This text was written so that I can point to it the next time a discussion about random number generators flares up, so here I am honing my arguments. Besides, I am interested in other opinions. It is not enough to simply declare that "/dev/urandom is bad". You need to pin down exactly what you disagree with and engage with it.

"He is an idiot!"

I categorically disagree! I too once believed that /dev/urandom is insecure. All of us were practically forced to believe it, because a great many respected programmers and developers keep repeating it on forums and social networks. It even seems to many that the man page says the same thing. And who are we to argue with their convincing "entropy exhaustion" argument?

This deeply mistaken opinion took root not because people are stupid, but because very few people seriously understand cryptography (in particular the foggy concept of "entropy"). So the authorities convince us easily. Even our intuition agrees with them. Unfortunately, intuition understands nothing about cryptography, just like most of us.

True randomness

What makes a number "truly random"? Let's not wander too deep into the weeds here, because the discussion would quickly drift into philosophy. In debates like these, it very quickly turns out that everyone argues from their own favourite model of randomness, without listening to anyone else or even caring to be understood.

I believe the benchmark for "true randomness" is quantum effects: for example, a photon passing through a half-silvered mirror, or the emission of alpha particles by radioactive material. That is, ideal randomness does occur in certain physical phenomena. Some may argue that even these are not truly random, or that nothing in the world can be random at all. Well, whatever.

Cryptographers usually stay out of such philosophical debates, because they don't deal in the concept of "truth". They work with the concept of unpredictability: if nobody has any information about the next random number, then everything is fine. I believe that is the standard to apply when using random numbers in cryptography.

In general, I am not much bothered by all these "philosophically secure" random numbers, as I like to call the "truly" random ones.

Of the two kinds of security, only one matters

But let's assume you have managed to obtain some "truly" random numbers. What will you do with them? Print them out and hang them on your bedroom wall, admiring the beauty of the quantum universe? Why not; I can sympathize with that.

But surely you will actually use them, and for cryptographic purposes? That looks less rosy. You see, your truly random numbers, granted by the grace of quantum effects, end up feeding the very down-to-earth algorithms of the real world. And the problem is that almost none of the cryptographic algorithms we use are information-theoretically secure. They guarantee "only" computational security. Only two exceptions come to mind: Shamir's secret sharing and the Vernam cipher (the one-time pad). And if the first can serve as a counterexample (if you really intend to use it), the second is utterly impractical. All the other algorithms, AES, RSA, Diffie-Hellman, elliptic curves, and crypto packages such as OpenSSL, GnuTLS, Keyczar, and the various cryptographic APIs, are merely computationally secure.
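For the curious, the Vernam cipher mentioned above fits in a few lines. This is a minimal sketch of the one-time pad: it is information-theoretically secure only if the key is truly random, as long as the message, kept secret, and never reused, which is exactly what makes it so impractical:

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    """Vernam cipher (one-time pad): XOR every byte with a key byte.
    The same operation both encrypts and decrypts."""
    assert len(key) == len(data), "the pad must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))       # one fresh random key byte per message byte
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message   # XOR with the same key decrypts
```

Note the cost: every message consumes key material equal to its own length, which is why practical systems use computationally secure ciphers instead.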

What's the difference? An information-theoretically secure algorithm is secure, full stop, while all the other algorithms offer no guarantee against an adversary with unlimited computing power who tries every possible key. We use these algorithms only because, even if you combined all the computers in the world, an exhaustive search would take longer than the universe has existed. That is the level of "insecurity" we are talking about.

But that holds only until some very clever person breaks the next algorithm with far more modest computing power. Every cryptanalyst dreams of such success: the fame of breaking AES, RSA, and so on. And once the "ideal" hash functions or "ideal" block ciphers are broken, it no longer matters at all that you had your "philosophically secure" random numbers. You simply have nowhere left to use them securely. So you are better off using computationally secure random numbers with your computationally secure algorithms. In other words: use /dev/urandom.

The structure of the Linux random number generator: the wrong picture

Most likely, this is roughly how you imagine the random number generator built into the kernel works:


"True" randomness, though surely somewhat distorted, enters the system. Its entropy is calculated and immediately added to the internal entropy counter. After correction and whitening, the resulting entropy goes into a kernel pool, from which both /dev/random and /dev/urandom draw random numbers. /dev/random takes them straight from the pool, provided the entropy counter shows enough is available; the counter decreases accordingly. Otherwise /dev/random blocks until a new portion of entropy enters the system.

The important point is that data read from /dev/random is supposedly always whitened before output. With /dev/urandom the story is the same, except for the moment when the system lacks the necessary amount of entropy: instead of blocking, it hands the application "low-quality" random numbers from a CSPRNG that sits outside the system we've just described. The seed for that CSPRNG is chosen only once (or re-chosen every time, it doesn't matter) from the data available in the pool, and it cannot be considered secure.

For many, this is a compelling reason to avoid /dev/urandom on Linux: when there is enough entropy, you get the same data as from /dev/random, and when there isn't, an external CSPRNG that almost never sees high-entropy data kicks in. Terrible, isn't it? Unfortunately, everything described above is quite misleading. In reality, the internal structure of the random number generator looks different:


This is still a rather rough simplification: in fact there is not one entropy pool but three:
  • the primary pool,
  • the pool for /dev/random,
  • the pool for /dev/urandom.

The last two pools are fed from the primary one. Each pool has its own counter, but the counters of the last two stay close to zero. "Fresh" entropy flows in from the primary pool as needed, decreasing its counter. Mixing and feeding data back into the system also happens, but those details don't matter for our discussion.

Notice the difference? The CSPRNG is not bolted on beside the main generator; it is an integral part of the random number generation process. /dev/random is not handed "pure and good" random data that merely gets whitened. The input from every source is thoroughly mixed and hashed inside the CSPRNG, and only then emitted as random numbers, whether to /dev/random or to /dev/urandom.
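To make the corrected picture concrete, here is a toy sketch of that structure. This is emphatically not the kernel's actual algorithm (the class, its constants, and SHA-256 as the mixing function are all illustrative assumptions); it only shows the shape of the design: all entropy inputs are folded into one internal state, and every consumer reads hashed output of that same state:

```python
import hashlib

class ToyCSPRNG:
    """Toy hash-based generator illustrating the real architecture:
    inputs are mixed into internal state; output is always a hash of it."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def mix(self, entropy: bytes) -> None:
        # Fresh input is hashed into the state, never handed out directly.
        self.state = hashlib.sha256(self.state + entropy).digest()

    def read(self, n: int) -> bytes:
        # Both /dev/random-style and /dev/urandom-style consumers would
        # call this same method; the state advances after every block.
        out = b""
        while len(out) < n:
            out += hashlib.sha256(self.state + b"out").digest()
            self.state = hashlib.sha256(self.state + b"next").digest()
        return out[:n]

rng = ToyCSPRNG(b"initial entropy")
rng.mix(b"interrupt timing")   # e.g. an entropy source reporting in
print(rng.read(16).hex())
```

The point of the sketch: there is no separate "good path" and "bad path"; everything flows through the same mix-and-hash core.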

Another important difference is that entropy here is not counted but estimated. The amount of entropy delivered by a given source is not self-evident, the way a byte count is. Remember that your dearly beloved /dev/random hands out only as many random numbers as the available entropy permits. Unfortunately, estimating the amount of entropy is quite hard. The Linux kernel takes this approach: it takes the arrival times of certain events, interpolates a polynomial over them, and computes how "surprising" each event was according to a certain model. Whether this is a good way to estimate entropy is open to question. Hardware delays affecting the event timings cannot be dismissed either, and the sampling rate of the hardware components also plays a role, since it directly affects the values and granularity of the event times.
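The idea of an entropy *estimate* can be illustrated with a deliberately crude toy. The scoring rule below (difference of consecutive deltas, capped at a few bits) is my own simplification, far simpler than the kernel's polynomial model; it only conveys the principle of crediting an event by how surprising its timing is:

```python
def estimate_entropy_bits(timestamps):
    """Toy entropy estimator: score each event by how much its
    inter-arrival time deviates from the previous one."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bits = 0
    for prev, cur in zip(deltas, deltas[1:]):
        surprise = abs(cur - prev)
        # Credit at most a few bits per event, conservatively.
        bits += min(surprise.bit_length(), 4)
    return bits

regular = [0, 10, 20, 30, 40, 50]   # perfectly periodic: no surprise
jittery = [0, 13, 21, 45, 52, 90]   # irregular timings: some credit
print(estimate_entropy_bits(regular), estimate_entropy_bits(jittery))  # → 0 15
```

A perfectly periodic source earns zero credit, because an observer could predict it exactly; only the unpredictable jitter counts.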

All we really know is that the kernel's entropy estimate is implemented rather well, meaning conservatively. One can argue about just how well, but that is beyond the scope of our discussion. You may worry about running short of entropy for random number generation, but personally I accept the current estimation mechanism.

To sum up: /dev/random and /dev/urandom are fed by the same CSPRNG. They differ only in their behaviour when the entropy pool is exhausted: /dev/random blocks, /dev/urandom does not.

What's so bad about blocking?

Have you ever had to wait for /dev/random to produce random numbers? Say, while generating PGP keys inside a virtual machine? Or while connecting to a web server that is waiting on a portion of random bytes to create an ephemeral session key? That is exactly the problem. Blocking is, in effect, an availability failure: your system temporarily stops working. It doesn't do what it is supposed to do.

On top of that, it has a nasty psychological effect: people don't like it when something gets in their way. I, for example, work on the security of industrial automation systems. What do you think causes security breaches there most often? Deliberate actions by users. Some measure intended to provide protection simply takes too long, in the employee's opinion, or is too inconvenient. And when "unofficial solutions" are needed, people show miracles of resourcefulness. They will hunt for workarounds and invent elaborate tricks to make the system work. People who don't understand cryptography. Normal people.

Why not patch out the call to random()? Why not ask on a forum how to use some strange ioctl to bump the entropy counter? Why not just turn SSL off entirely? In the end, you are simply teaching your users to do idiotic things that compromise your security system without their even knowing it. You can treat availability, usability, and other such "unimportant" matters as contemptuously as you like. Security above all, right? Better to be inconvenient, unavailable, or useless than to simulate security.

But this is a false dichotomy. Security can be had without blocking: /dev/urandom gives you exactly the same random numbers as /dev/random.
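In practice, the non-blocking path is already what high-level APIs use. Python's `os.urandom`, for instance, reads from the kernel CSPRNG (the /dev/urandom path on older kernels, getrandom(2) on newer ones) and returns immediately with cryptographically secure bytes:

```python
import os

# Generate a 256-bit session key without ever blocking: exactly the
# situation where a web server must not stall waiting on /dev/random.
session_key = os.urandom(32)
print(len(session_key), session_key.hex())
```

Calls like this return in microseconds regardless of the kernel's entropy estimate, which is the whole point of the argument above.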

There is nothing wrong with a CSPRNG

But now the picture looks truly bleak. If even the high-quality numbers from /dev/random come out of a CSPRNG, how can they be used for tasks demanding the highest level of security? Well, it turns out that "looking random" is the core requirement for most of our cryptographic building blocks. For cryptographers to accept a cryptographic hash, its output must be indistinguishable from a random string. And the output of a block cipher, without knowledge of the key, must be indistinguishable from random data.

Don't be afraid that someone will exploit some weakness in the CSPRNG and break your cryptographic modules. You will have to make your peace with it anyway, because block ciphers, hashes, and everything else rest on the same mathematical foundations as the CSPRNG. So relax.

What about entropy exhaustion?

It doesn't matter. The underlying cryptographic primitives are designed so that an attacker cannot predict the output, provided there was enough randomness (entropy) at the start. The usual lower bound for "enough" is 256 bits, no more. So forget about entropy already. As we saw above, the random number generator built into the kernel cannot even count precisely how much entropy enters the system. It can only estimate it, and it is not even clear how accurate the estimate is.

Re-seeding

But if entropy matters so little, why is fresh entropy constantly fed into the generator? To be clear, surplus entropy does no harm. There is, however, another important reason to re-seed the generator. Imagine that an attacker has learned the internal state of your generator. That is the most dreadful security situation you can imagine: the attacker has full access to the system. You are completely exposed; from this moment on, the attacker can compute all future output.

But over time, fresh portions of entropy arrive and are mixed into the internal state, and its randomness starts to grow again. This is a kind of self-healing mechanism built into the generator's architecture. Note, though: the entropy is simply added to the internal state; it has nothing to do with blocking the generator.
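The recovery property can be demonstrated in a few lines. This is a sketch under obvious simplifying assumptions (SHA-256 as the state-transition function, a known string standing in for "fresh entropy the attacker cannot see"): once unknown input is mixed in, the attacker's snapshot of the state stops predicting the output.

```python
import hashlib

def step(state: bytes, entropy: bytes = b"") -> bytes:
    """Advance the generator state, optionally mixing in fresh entropy."""
    return hashlib.sha256(state + entropy).digest()

state = hashlib.sha256(b"seed").digest()
leaked = state                                  # attacker snapshots the state

state = step(state)                             # no re-seed yet...
assert step(leaked) == state                    # ...so prediction still works

state = step(state, entropy=b"fresh entropy")   # unknown to the attacker
assert step(step(leaked)) != state              # prediction now fails
print("recovered after re-seed")
```

This is exactly why re-seeding is valuable even though "entropy exhaustion" is not: it limits the damage of a state compromise, rather than topping up some depletable fuel tank.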

The man pages on random and urandom

Nothing rivals the man page's power of suggestion over the minds of programmers:

A read from /dev/urandom will not block waiting for the necessary entropy. As a result, if there is not sufficient entropy in the pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. How to mount such an attack is not described in the open literature, but it is theoretically possible that one exists. If this is a concern in your application, use /dev/random instead.

No such attacks are described anywhere, but the NSA surely has something up its sleeve, right? And if this concerns you (it should!), /dev/random will solve all your problems. Except that even if intelligence agencies, script kiddies, or the boogeyman knew how to mount such an attack, actually bothering to do so would simply be irrational. I'll say more: the open literature likewise describes no practical attacks on AES, SHA-3, or any other similar ciphers and hashes. Will you refuse to use those too? Of course not. The advice to "use /dev/random" is especially touching in light of what we now know about the common source behind it and /dev/urandom. If you really do need information-theoretically secure random numbers (you don't!) and therefore cannot use a CSPRNG, then /dev/random is useless to you as well! The man page is simply talking nonsense here, that's all. But its authors at least try to make amends:

If you are unsure whether you should use /dev/random or /dev/urandom, then you probably want the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.

Fair enough. If you want to use /dev/random for long-lived keys, be my guest, though I don't think it's necessary. You'll wait a few seconds before you can type something; you'll survive. But I beg you: don't make people reconnect to the mail server over and over just because you "want to be safe".

Dedicated to the orthodox

Below are some interesting statements I found on the Internet. If you badly want someone to back you up on /dev/random versus /dev/urandom, turn to these cryptographers.

Daniel Bernstein, aka djb:

Cryptographers have nothing to do with this superstition. Think about it: whoever wrote the man page on /dev/random seems to simultaneously believe that
  1. we don't know how to deterministically expand one 256-bit number from /dev/random into an endless stream of unpredictable keys (which is exactly what we need from urandom), and yet
  2. we do know how to use a single key to securely encrypt many messages (which is exactly what we need from SSL, PGP, and so on).

No cryptographer would even crack a smile at this.

Thomas Pornin, one of the most helpful users I have ever encountered on Stack Exchange:

The short answer is yes. The long answer is also yes. /dev/urandom yields data that is indistinguishable from true randomness given existing technology. There is no point in aiming for "better" randomness than what /dev/urandom provides, unless you are using one of the few "information-theoretic" cryptographic algorithms, and you definitely aren't, because you would know it. The urandom man page is misleading where it suggests that /dev/random should be used instead because of "entropy running low".

Thomas Ptacek is not a cryptographer in the sense of designing algorithms or building cryptosystems, but he founded a security consultancy that earned a solid reputation through numerous penetration tests and by breaking low-quality cryptography:

Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.

Nothing in this world is perfect

/dev/urandom is not ideal, for two reasons. Unlike on FreeBSD, on Linux it never blocks. And remember, the whole security story rests on some initial reserve of randomness, the seed. At machine start, while the computer is booting, before the kernel has had any chance to gather entropy, Linux's /dev/urandom cheerfully hands you numbers that are not random at all. FreeBSD arranges things more sensibly: there is no difference between /dev/urandom and /dev/random, they are one and the same. Except that at boot /dev/random blocks once, until enough entropy has been gathered; after that it never blocks again.

On Linux, things are not as bad as they look at first sight, either. Every distribution saves a quantity of random numbers into a seed file, which is read back at the next boot. The file is written only after enough entropy has accumulated, since the script does not run the instant you press the power button. So you carry over responsibility for entropy accumulated during the previous session. Of course, that is not quite as good as having the shutdown scripts write the seed as the system goes down, since the entropy has to be preserved much longer. But in return you don't depend on the system shutting down cleanly and running the right scripts (resets and crashes won't hurt you). This still doesn't help at the machine's very first boot; however, Linux distributions write the seed file during installation, so on the whole you'll be fine.
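The seed-file mechanism is simple enough to sketch. The real scripts and file locations differ by distribution; the path and sizes below are hypothetical, and a real boot script would write the carried-over bytes into /dev/urandom so the kernel mixes them into its pools:

```python
import os
import tempfile

# Hypothetical seed-file location; distros use paths like /var/lib/... instead.
seed_path = os.path.join(tempfile.gettempdir(), "random-seed-demo")

def save_seed(path: str, nbytes: int = 512) -> None:
    """Run once enough entropy has accumulated: stash CSPRNG output on disk."""
    with open(path, "wb") as f:
        f.write(os.urandom(nbytes))

def load_seed(path: str) -> bytes:
    """Run at boot: read the carried-over seed to feed back into the pool."""
    with open(path, "rb") as f:
        return f.read()

save_seed(seed_path)
carried_over = load_seed(seed_path)
print(len(carried_over))  # 512
os.remove(seed_path)
```

The design trade-off described above is visible here: the scheme survives crashes (the file from the previous session is still there), but it cannot help on the very first boot, before any seed file exists.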

Meanwhile, Linux has implemented a new system call, getrandom(2), which first appeared in OpenBSD as getentropy(2). It blocks until a sufficient initial amount of entropy has been gathered, and after that never blocks again. However, it is a system call rather than a character device, so it is not so easy to reach from the terminal or from scripting languages.
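From Python, getrandom(2) is exposed as `os.getrandom` on Linux (Python 3.6+). Since it is a system call rather than a device file, a portable sketch needs a fallback; the wrapper name below is my own:

```python
import os

def get_random_bytes(n: int) -> bytes:
    """Prefer the getrandom(2) system call where available; it blocks only
    until the pool is initially seeded, then never again."""
    if hasattr(os, "getrandom"):
        return os.getrandom(n)   # Linux: the getrandom(2) system call
    return os.urandom(n)         # elsewhere: the device-file path

key = get_random_bytes(32)
print(len(key))  # 32
```

On modern Linux kernels this gives you exactly the semantics argued for in this article: one initial wait for seeding, and non-blocking CSPRNG output ever after.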

One more problem involves virtual machines. People like to clone them or roll them back to earlier snapshots, and in those cases a seed file won't help you. But the solution is not to switch everything to /dev/random; it is to choose a fresh seed very carefully for each virtual machine after cloning, after restoring a snapshot, and so on.

tl;dr

Just use /dev/urandom.

This article is a translation of the original post.
