What it is
Computational Security is a branch of security in Cryptography that analyzes cryptographic constructions from a practical point of view, meaning that such constructions are examined to see if they're secure enough to withstand attacks in a real-world environment.
Computational Security's main goal is to prove (or disprove) that an attacker has no way of breaking a cryptosystem given access to only a specified, limited set of resources.
The notion of security that's derived can be understood as a continuous scale. E.g. if the scale ranges from 0 to 1, we could define a value of 0 to mean that a system is completely insecure, whereas a value of 1 would mean that it's unbreakable.
The scale is continuous because resources are limited in practice, yet in theory it might still be possible to break a construction given access to more resources.
A cryptosystem is analyzed by assuming that an attacker has access to a predefined set of limited resources to break it.
For a brute-force key search this would mean that while it's theoretically possible to try out all keys, it's practically impossible because no attacker has the resources to try every single one.
Computational Security therefore focuses on the “Practical Impossibility” of breaking a scheme under the aforementioned resource constraints.
A common way to express Computational Security is as a combination of two values, t and ε.
The expression is oftentimes written as the tuple (t, ε), where t is the maximum number of operations an attacker runs (i.e. the upper limit) and ε is the limit on the probability of success such an attack has.
To express a brute-force key search in which an attacker tries out t of the potential keys for an n-bit key, we'd write that it's (t, t/2^n)-secure for values of t that range from 1 to 2^n.
The following are two examples that examine the probability of success given a 256-bit key.
- An attacker tries exactly 1 key, therefore the probability of success for such an attack is 1/2^256.
- An attacker tries all 2^256 keys, therefore the probability of success for such an attack is 1.
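The two bullet points above can be checked with a small sketch. The function name `brute_force_success` is illustrative; it simply evaluates ε = t/2^n exactly, using `Fraction` to avoid floating-point underflow with numbers this small:

```python
from fractions import Fraction

def brute_force_success(t: int, n: int) -> Fraction:
    """Success probability of trying t keys out of a keyspace of 2^n keys."""
    return Fraction(t, 2 ** n)

# Trying exactly 1 key out of the 2^256 keyspace:
print(brute_force_success(1, 256))          # an astronomically small fraction
# Trying all 2^256 keys guarantees success:
print(brute_force_success(2 ** 256, 256))   # 1
```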
We can simplify the (t, ε)-secure notion by assuming that the success probability ε will always be close to 1, which results in what's called the t-secure notion.
Using the aforementioned t-secure notion, we can derive yet another metric that's called n-bit security.
With n-bit security we assume that about 2^n operations are necessary to break a construction.
This n-bit security defines the upper bound when it comes to the security level, as this is the maximum number of operations an attacker needs to perform to have a 100% success guarantee (remember that all it takes is to try out every possible key there is).
If only the number of operations necessary to break a cryptosystem is known, then the binary logarithm (logarithm to the base 2) can be used to derive the n-bit security level.
If, for example, it takes 2^128 operations, then the n-bit security level would be 128.
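The conversion from an operation count to an n-bit security level is a one-liner (the function name here is just for illustration):

```python
import math

def security_level_bits(operations: int) -> float:
    """n-bit security level: the binary logarithm of the number of
    operations needed to break the construction."""
    return math.log2(operations)

print(security_level_bits(2 ** 128))  # 128.0
```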
A common practice to derive a Computational Security guarantee is to rely on proofs which show that breaking the construction would require solving a known, hard problem.
Those hard problems can be categorized into mathematical and cryptographic problems. Furthermore, Provable Security is complementary to Probable Security (see below).
A lot of security assumptions for asymmetric cryptography rely on reductions to known, hard mathematical problems.
RSA, for example, is built on the assumption that while it's easy to multiply two large prime numbers to get a third, even larger number, it's computationally infeasible to recover the prime factors of such a large number.
A second example is the Discrete Logarithm problem, in which, given g and y = g^x, one should find x. This problem is the backbone of many Elliptic Curve Cryptography cryptosystems.
Both problems are known to be easy to compute in one direction (i.e. when a solution is present it's easy to verify it) but hard to compute in the other direction. Such a function is commonly referred to as a One-way Function; if knowledge of a secret makes the inversion easy (as with RSA's private key), it's called a Trapdoor Function.
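The asymmetry of the Discrete Logarithm problem can be felt even with deliberately tiny, made-up numbers (a real cryptosystem would use a group with around 2^256 elements, making the brute-force loop below hopeless):

```python
# Toy parameters, chosen only for illustration -- far too small to be secure.
p, g = 2039, 7      # small prime modulus and base
x = 1234            # the secret exponent
y = pow(g, x, p)    # forward direction: fast modular exponentiation

def brute_force_dlog(y: int, g: int, p: int) -> int:
    """Invert y = g^x mod p by trying every exponent: O(p) work,
    versus O(log x) for the forward direction."""
    for candidate in range(p):
        if pow(g, candidate, p) == y:
            return candidate
    raise ValueError("no discrete log found")

# Verifying a claimed solution is easy: one modular exponentiation.
assert pow(g, brute_force_dlog(y, g, p), p) == y
```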
Security assumptions may also be based on the relation the scheme in question has to existing, well-studied problems from the Cryptography field.
For example, a symmetric cipher that relies heavily on a specific substitution and bit-permutation implementation can be proven secure as long as both the substitution and the bit-permutation remain secure.
Another approach to define Computational Security guarantees is the reliance on Probable Security (sometimes called Heuristic Security).
Notions of Probable Security rely on the Lindy Effect in which a construction is considered secure if a lot of very smart people did cryptanalysis in an attempt to break it but thus far were unsuccessful.
A common way to conduct such cryptanalysis on a cryptosystem is to start with a simplified version (e.g. one round of a symmetric cipher) and then slowly increase the complexity until the cipher can't be broken anymore.
A Security Margin, calculated as the difference between the total number of rounds of the cipher and the number of rounds that have been broken, is then used to describe how secure a system currently is.
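The Security Margin calculation itself is trivial; the round numbers below are purely hypothetical, not the published state of cryptanalysis for any real cipher:

```python
def security_margin(total_rounds: int, broken_rounds: int) -> int:
    """Rounds of headroom between the full cipher and the deepest
    reduced-round version that cryptanalysts have managed to break."""
    return total_rounds - broken_rounds

# Hypothetical example: a 10-round cipher whose best known attack
# reaches 7 rounds still has a margin of 3 rounds.
print(security_margin(10, 7))  # 3
```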
Note that Probable Security is complementary to Provable Security (see above), meaning that both can go together when analyzing constructions.
Costs of attacks
One aspect that’s worthwhile to discuss is the cost of an attack and how it evolves over time.
The infamous Moore's Law states that computing power doubles roughly every 2 years (note that this is an oversimplification; the original observation concerns transistor counts), which means that what might be hard to break right now can become significantly cheaper and faster to break in the future.
While this is true in general, it's important to note that Moore's Law is expected to hit a ceiling and can't continue forever. Furthermore, even if there's still progress to be made, we should be fine for the foreseeable future if we keep n-bit security in mind and choose key sizes that are reasonably large.
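As a rough back-of-the-envelope model (an assumption of this sketch, not a rigorous result): if attacker compute doubles every 2 years, each doubling erodes about one bit of effective security, which shows why a comfortably large key size buys decades of headroom:

```python
def effective_security_bits(initial_bits: float, years: float,
                            doubling_period_years: float = 2.0) -> float:
    """Simplified model: every doubling of attacker compute (Moore's Law)
    erodes one bit of effective security."""
    return initial_bits - years / doubling_period_years

# Under this model, a 128-bit key still offers ~108-bit security
# after 40 years of hardware doublings.
print(effective_security_bits(128, 40))  # 108.0
```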