We can define both deterministic and probabilistic encryption: in the deterministic case, a given key and message always produce one ciphertext that decrypts to the message; in the probabilistic case, many ciphertexts per message are to be expected.
From the correctness requirement, we want
$$\Pr[D(k,E(k,m)) = m] = 1.$$
Probabilistic encryption is preferable since it can provide semantic security, or its equivalent and easier-to-use formulation, indistinguishability.
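As a quick illustration of the one-vs-many ciphertext distinction, here is a minimal Python sketch using HMAC-SHA256 as a stand-in PRF; the construction is a toy for illustration, not a secure scheme:

```python
import hmac, hashlib, os

def prf(key, data):
    # HMAC-SHA256 modeled as a PRF (standard heuristic assumption).
    return hmac.new(key, data, hashlib.sha256).digest()

def det_encrypt(key, m):
    # Deterministic: the same (key, message) always yields the same ciphertext.
    pad = prf(key, b"const")[:len(m)]
    return bytes(a ^ b for a, b in zip(m, pad))

def prob_encrypt(key, m):
    # Probabilistic: a fresh random nonce r gives many ciphertexts per message.
    r = os.urandom(16)
    pad = prf(key, r)[:len(m)]
    return r + bytes(a ^ b for a, b in zip(m, pad))

key = os.urandom(16)
m = b"attack at dawn"                                # <= 32 bytes in this toy
print(det_encrypt(key, m) == det_encrypt(key, m))    # True: one ciphertext
print(prob_encrypt(key, m) == prob_encrypt(key, m))  # False (w.h.p.): many
```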
Now, what if we relax the definition for probabilistic decryption beyond
$$\Pr[D(k,E(k,m)) = m] = 1 - \mathrm{negl}(k),$$
instead making it
$$\Pr[D(k,E(k,m)) = m] = p\,?$$
An example of this is the Rabin cryptosystem (see note 2), where $p = 1/4$ since decryption yields four possible values of $m$; it is suggested to add some auxiliary data to the plaintext to resolve the ambiguity. Note that the Rabin cryptosystem is not randomized encryption, for which an IV/nonce is usually needed; rather, its decryption is what is randomized.
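To make the four candidates concrete, here is a toy Rabin sketch in Python with tiny illustrative primes (real parameters need large primes plus redundancy/padding; the modular inverse via `pow` needs Python 3.8+):

```python
# Toy Rabin cryptosystem: encryption is squaring mod n = p*q; decryption
# recovers the four square roots, only one of which is the real plaintext.
p, q = 7, 11                # toy primes, both = 3 (mod 4)
n = p * q

def encrypt(m):
    return (m * m) % n

def decrypt(c):
    # Square roots mod each prime; this exponent trick works since p, q = 3 (mod 4).
    rp = pow(c, (p + 1) // 4, p)
    rq = pow(c, (q + 1) // 4, q)
    # Combine the four sign choices with the CRT (Python 3.8+ modular inverse).
    yp = pow(p, -1, q)
    yq = pow(q, -1, p)
    return {(sp * q * yq + sq * p * yp) % n
            for sp in (rp, p - rp) for sq in (rq, q - rq)}

m = 20
print(sorted(decrypt(encrypt(m))))   # four candidates; 20 is among them
```

With the toy parameters above, this prints `[13, 20, 57, 64]`: four square roots of the ciphertext, and only external knowledge tells us which one is the intended $m$.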
Are there any other examples of randomized decryption schemes besides the half-randomized Rabin cryptosystem?
Is there any advantage or usage of randomized decryption schemes?
Notes:
1. This is asked as a fundamental question to clarify probabilistic decryption, which textbooks simply gloss over.
2. Actually, Rabin did not define an encryption scheme in his seminal paper; he defined the first secure signature scheme, of which hashing is a part. The Wikipedia article is misleading about this.
3. One should never use AES-GCM-SIV with fixed nonces; one should use unique/random nonces. Still, AES-GCM-SIV does not fail the way AES-GCM does if a nonce repeats under the same key (see the sketch below).
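A minimal sketch of that nonce handling, assuming the `AESGCMSIV` class available in recent versions of the Python `cryptography` package:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV

key = os.urandom(32)              # 256-bit key
aead = AESGCMSIV(key)

nonce = os.urandom(12)            # fresh random nonce per message
ct = aead.encrypt(nonce, b"message", b"header")
assert aead.decrypt(nonce, ct, b"header") == b"message"
# If a nonce ever repeats under the same key, AES-GCM-SIV leaks only
# message equality, whereas AES-GCM loses confidentiality and authenticity.
```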
Since we always want decryption to return the correct message, there is no reason to make it ambiguous. Ambiguity has no security advantage (if the adversary can guess the message with any non-negligible probability, you have already lost, so ambiguous decryption cannot make that harder). Thus, unlike probabilistic encryption, which is needed for security, probabilistic decryption is an undesired property that sometimes arises when constructing schemes. For example, with naive Rabin it just happens to be a property of the number theory used, and the additional work required to make Rabin a trapdoor permutation is necessary to make it useful.
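Continuing the toy Rabin sketch above, here is one hedged way that additional work can look: embed a fixed redundancy tag in the plaintext so that, with high probability, only one of the four roots carries it (the two-bit tag is illustrative; real schemes use far more redundancy):

```python
# Requires encrypt/decrypt from the toy Rabin sketch above.
TAG = 0b11                        # 2 redundancy bits (toy-sized)

def encrypt_tagged(m):
    return encrypt((m << 2) | TAG)

def decrypt_tagged(c):
    # Keep only roots whose low bits match the tag; usually a single survivor.
    return [r >> 2 for r in decrypt(c) if r & 0b11 == TAG]

print(decrypt_tagged(encrypt_tagged(5)))   # [5] with the toy parameters
```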
Thus, the only reason why randomized decryption (with a probability of error) exists is that this is what some constructions unfortunately give us. (This can be viewed in some sense as an advantage: if we weren't able to allow decryption errors, we wouldn't be able to build these schemes, some of which have advantages of their own.) There are some examples of this: NTRU and the lattice-based Ajtai-Dwork scheme. As a result, there has been some theoretical work on removing errors from encryption schemes that have them. Most notably, I recommend reading the paper Immunizing Encryption Schemes from Decryption Errors by Dwork, Naor and Reingold.
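To see why a noticeable error probability can still be workable, here is a minimal simulation of the classic repetition-plus-majority amplification; it assumes decryption errors are independent across copies, and the hard case where errors correlate with the message is exactly what the Dwork-Naor-Reingold transformation addresses:

```python
import random
from collections import Counter

def noisy_decrypt(m, p_err=0.25):
    # Stand-in for a scheme whose decryption errs with probability p_err.
    return m if random.random() > p_err else random.randrange(256)

def amplified_decrypt(m, t=15):
    # Decrypt t independently encrypted copies of m; output the majority value.
    votes = Counter(noisy_decrypt(m) for _ in range(t))
    return votes.most_common(1)[0][0]

trials = 10_000
errors = sum(amplified_decrypt(42) != 42 for _ in range(trials))
print(errors / trials)   # far below the 25% per-copy error rate
```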