Cryptography Tricks Make a Hard Problem a Little Easier
What’s the best way to solve hard problems? That’s the question at the heart of a subfield of computer science called computational complexity theory. It’s a hard question to answer, but flip it around and it becomes easier. The worst approach is almost always trial and error, which involves plugging in possible solutions until one works. But for some problems, it seems there simply are no alternatives — the worst approach is also the best one.

Researchers have long wondered whether that’s ever really the case, said Rahul Ilango, a graduate student studying complexity theory at the Massachusetts Institute of Technology. “You could ask, ‘Are there problems for which guess-and-check is just optimal?’”

Complexity theorists have studied many computational problems, and even the hard ones often admit some kind of clever procedure, or algorithm, that makes finding a solution a little bit easier than pure trial and error. Among the few exceptions are so-called compression problems, where the goal is to find the shortest description of a data set.

But last November, two groups of researchers independently discovered another algorithm for compression problems — one that’s ever so slightly faster than checking all the possible answers. The new approach works by adapting an algorithm invented by cryptographers 25 years ago for attacking a different problem. There’s just one restriction: You need to tailor the algorithm to the size of your data set.

“They’re really beautiful and important results,” said Eric Allender, a theoretical computer scientist at Rutgers University.

Defining Hardness

The new results are the latest to investigate a question first studied in the Soviet Union, well before the advent of complexity theory. “Before I was in grade school, people in Russia were formulating this,” Allender said.

The specific computational problem that those Soviet researchers studied, called the minimum circuit size problem, is akin to one that designers of computer hardware face all the time. If you’re given complete specifications of how an electronic circuit should behave, can you find the simplest circuit that will do the job? Nobody knew how to solve this problem without “perebor” — a Russian word roughly equivalent to “exhaustive search.”

The minimum circuit size problem is an example of a compression problem. You can describe a circuit’s behavior with a long string of bits — 0s and 1s — and then ask whether there’s a way to reproduce that same behavior using fewer bits. Checking all possible circuit layouts would take time that grows exponentially with the number of bits in the string.
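To make that concrete, here is a minimal Python sketch of the exhaustive search, under illustrative assumptions: an n-input function's truth table is packed into a single integer, and the search tries every straight-line circuit of AND, OR and NOT gates, fewest gates first. All names are hypothetical; real circuit-synthesis tools are far more sophisticated.

```python
def min_circuit_size(target, n, max_gates=4):
    """Exhaustive search for the fewest AND/OR/NOT gates that
    reproduce `target`, an n-input truth table packed into an int
    (bit r = the function's value on input row r)."""
    rows = 1 << n
    mask = (1 << rows) - 1
    # Bitmask of each input variable across all 2**n input rows.
    wires0 = [sum(((r >> i) & 1) << r for r in range(rows)) for i in range(n)]

    def reachable(wires, remaining):
        if target in wires:
            return True
        if remaining == 0:
            return False
        for a in wires:
            for b in wires:
                # Try every gate type on every pair of existing wires.
                for v in (a & b, a | b, ~a & mask):
                    if v not in wires and reachable(wires + [v], remaining - 1):
                        return True
        return False

    # Try circuit sizes in increasing order, so the first hit is minimal.
    for gates in range(max_gates + 1):
        if reachable(wires0, gates):
            return gates
    return None  # not expressible within max_gates
```

On two inputs, the truth table `0b1000` (true only when both inputs are 1) is matched by a single AND gate, and the search reports 1. But the runtime balloons as `n` and `max_gates` grow, which is exactly the exponential blowup described above.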

This sort of exponential growth is the defining feature of a hard computational problem. But not all hard problems are equally hard — some have algorithms that are faster than exhaustive search, though their runtimes still grow exponentially. In modern terms, the perebor question is whether any such algorithms exist for compression problems.

In 1959, a prominent researcher named Sergey Yablonsky claimed to have proved that exhaustive search really was the only way to solve the minimum circuit size problem. But his proof left some loopholes. Some researchers noticed the flaws at the time, but Yablonsky was influential enough to discourage most others from working on the perebor question.

In the decades that followed, few researchers studied compression problems, and the perebor question was known mostly as a footnote in the prehistory of complexity theory. Widespread attention to the question came only recently, after researchers discovered a curious link between compression problems and the foundations of cryptography.

One-Way Traffic

Modern cryptography uses hard computational problems to safeguard secret messages. But computational hardness is only useful if it’s asymmetric — if it’s hard to decipher coded messages but not hard to encode messages in the first place.

In every cryptography scheme, the origin of this asymmetry is a mathematical object called a one-way function, which transforms input bit strings into output strings of the same length. Given an input to a one-way function, it’s easy to compute the output, but given an output it’s hard to invert the function — that is, to reverse-engineer it and recover the input.

“Cryptographers really would like to have very, very efficiently computable one-way functions that are really, really hard to invert,” Allender said. Many mathematical functions seem to have this property, and the difficulty of inverting these functions stems from the apparent difficulty of solving different computational problems.
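One classic candidate, shown here purely for illustration and not drawn from the work described in this article, is modular exponentiation: computing g^x mod p is fast, while recovering x (the discrete logarithm) seems to require something close to brute force. A minimal Python sketch, with a modulus far too small for real security:

```python
P = 2**61 - 1   # a Mersenne prime; laughably small for real cryptography
G = 5           # illustrative base, not a vetted generator

def forward(x):
    """Easy direction: modular exponentiation, about log2(x) multiplications."""
    return pow(G, x, P)

def invert_by_search(y, limit=10**6):
    """Hard direction: brute-force discrete log, up to `limit` multiplications."""
    acc = 1  # G**0
    for x in range(limit):
        if acc == y:
            return x
        acc = acc * G % P
    return None  # exponent lies beyond the search limit
```

Computing `forward(123456)` is instant; `invert_by_search` recovers the exponent only by grinding through candidates one at a time, and no proof rules out a shortcut.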

Unfortunately, cryptographers don’t know for sure whether any of these functions are truly hard to invert — indeed, it’s possible that true one-way functions don’t exist. This uncertainty persists because complexity theorists have struggled for 50 years to prove that seemingly hard problems really are hard. If they aren’t, and if researchers discover super-fast algorithms for these problems, that would be disastrous for cryptography — akin to suddenly routing speeding cars in both directions on a one-way street.

Even though a comprehensive understanding of computational hardness remains elusive, cryptographers have recently made exciting progress toward a unified theory of one-way functions. One big step was taken in 2020, when the Cornell Tech cryptographer Rafael Pass and his graduate student Yanyi Liu proved that one-way functions are intimately connected to a specific compression problem called the time-bounded Kolmogorov complexity problem.

If that one problem really is hard to solve for most inputs, then Pass and Liu’s result yields a recipe for how to construct a provably hard one-way function — one that’s guaranteed to be secure even if other computational problems turn out to be far easier than researchers expected. But if there’s a fast algorithm for solving the time-bounded Kolmogorov complexity problem, then cryptography is doomed, and any function can be easily inverted. A one-way function based on the hardness of this problem is the most secure function possible — a one-way function to rule them all.

Building on Data Structures

Pass and Liu’s discovery was the latest chapter in a long line of research that uses complexity theory to better understand the foundations of cryptography. But it also suggested a way to invert that relationship: The equivalence between the time-bounded Kolmogorov complexity problem and function inversion implies that insights about either problem can reveal more about the other. Cryptographers have been studying function inversion algorithms for decades to better understand the weak points of their encryption methods. Researchers began to wonder whether those algorithms could help answer age-old questions in complexity theory.

Like many computational problems, function inversion can be solved by exhaustive search. Given an output string, simply plug every possible input into the function until you find the one that yields the right answer.
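In code, that exhaustive search is just a loop over every candidate input. The sketch below is illustrative: SHA-256 stands in for an arbitrary function from n-bit strings to n-bit strings, and the helper names are hypothetical.

```python
import hashlib
from itertools import product

def f(bits):
    """An arbitrary n-bit-to-n-bit function: hash the input string
    and keep the first n bits of the digest."""
    digest = hashlib.sha256(bits.encode()).hexdigest()
    return bin(int(digest, 16))[2:].zfill(256)[: len(bits)]

def invert_by_trial(output):
    """Exhaustive search: plug in every possible n-bit input until
    one maps to `output`. Cost grows as 2**n."""
    for candidate in product("01", repeat=len(output)):
        bits = "".join(candidate)
        if f(bits) == output:
            return bits
    return None  # `output` has no preimage of this length
```

Because several inputs may collide on the same output, the search returns some preimage, not necessarily the original one, and checking all 2^n candidates quickly becomes infeasible as n grows.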
