The traveling salesperson problem is one of the oldest known computational questions. It asks for the shortest round trip through a given list of cities, visiting each city exactly once. Despite seeming simple, the problem is notoriously difficult. You can use brute force, checking every possible route until you find the shortest one, but that strategy becomes untenable after just a handful of cities. Instead, you can apply a rigorous mathematical framework called linear programming, which approximates the problem as a set of linear equations and inequalities and methodically homes in on the best solution.
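To see why brute force breaks down, here is a minimal Python sketch (not from the article; the four-city distance table is invented purely for illustration) that checks every possible round trip:

```python
from itertools import permutations

# Hypothetical symmetric distances between four cities (made up for illustration).
cities = ["A", "B", "C", "D"]
dist = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def d(u, v):
    """Look up the distance between two cities in either order."""
    return dist[(u, v)] if (u, v) in dist else dist[(v, u)]

def route_length(route):
    """Total length of a round trip that returns to the starting city."""
    return sum(d(route[i], route[(i + 1) % len(route)]) for i in range(len(route)))

# Brute force: fix the starting city and try every ordering of the rest.
# With n cities that means (n - 1)! routes, which explodes quickly.
best = min(permutations(cities[1:]), key=lambda rest: route_length(("A",) + rest))
print(("A",) + best, route_length(("A",) + best))
```

With 4 cities only 6 orderings need checking, but with 20 cities there are already 19!, roughly 10^17, which is why the brute-force approach collapses so quickly.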
But sometimes, you need to optimize for problems involving whole-number amounts. What good is a factory optimization plan that manufactures 500.7 couches? For this, researchers often turn to a variant of linear programming called integer linear programming (ILP). It’s popular in applications that involve discrete decisions, including production planning, airline crew scheduling and vehicle routing. “Basically, ILP is the bread and butter of operations research both in theory and practice,” said Santosh Vempala, a computer scientist at the Georgia Institute of Technology.
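The difference is easy to see with an off-the-shelf solver. The sketch below (toy numbers only, using SciPy's linprog, which accepts an integrality flag in versions 1.9 and later) solves a made-up production problem twice: once as ordinary linear programming, which happily returns a fractional plan, and once as ILP, which insists on whole numbers:

```python
from scipy.optimize import linprog

# Toy factory plan (numbers invented for illustration): maximize profit
# 5 * couches + 4 * chairs, subject to limited wood and labor.
c = [-5, -4]                      # linprog minimizes, so negate the profit
A_ub = [[6, 4],                   # wood used per couch / chair
        [1, 2]]                   # labor used per couch / chair
b_ub = [24, 6]                    # available wood and labor

# Plain linear programming: variables may be fractional.
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, integrality=[0, 0])
print("LP plan:", lp.x)           # e.g. [3.  1.5] -- half a chair

# Integer linear programming: every variable must be a whole number.
ilp = linprog(c, A_ub=A_ub, b_ub=b_ub, integrality=[1, 1])
print("ILP plan:", ilp.x)         # e.g. [4. 0.] -- a plan you can actually build
```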
Since ILP was first formulated over 60 years ago, researchers have discovered various algorithms for solving it, but they've all been relatively slow in terms of the number of steps required. The best runtime they could hope for, a kind of speed limit, comes from the trivial case where the problem's variables (such as whether a salesman visits a city or not) can take only the binary values zero and 1. In this case, ILP can be solved in a runtime that scales exponentially with the number of variables, also called the dimension. (If there's only one variable, it takes just two steps to test every possible combination and solve the problem; two variables mean four steps, three mean eight steps, and so on.)
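That exponential count is nothing more than exhaustive enumeration. A minimal sketch of the trivial binary-case solver (the tiny instance at the bottom is hypothetical, for illustration only):

```python
from itertools import product

def solve_binary_ilp(c, A, b):
    """Brute-force a 0/1 ILP: maximize c.x subject to A.x <= b, x in {0,1}^n.

    Tries all 2**n assignments, so the runtime doubles with each added variable.
    """
    n = len(c)
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=n):          # 2**n candidate solutions
        if all(sum(row[i] * x[i] for i in range(n)) <= bi for row, bi in zip(A, b)):
            val = sum(c[i] * x[i] for i in range(n))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Tiny made-up instance: 3 variables, so 2**3 = 8 assignments are checked.
print(solve_binary_ilp(c=[3, 1, 2], A=[[1, 1, 1]], b=[2]))
```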
Unfortunately, once the variables can take values beyond just zero and 1, the best known algorithms become far slower. Researchers have long wondered whether they could get closer to that trivial ideal. Progress has been slow: the record was set in the 1980s, and only incremental improvements have been made since.
But recent work by Victor Reis, currently at the Institute for Advanced Study, and Thomas Rothvoss, at the University of Washington, has made the biggest runtime leap in decades. By combining geometric tools to limit the possible solutions, they created a new, faster algorithm for solving ILP in almost the same time as the trivial binary case. The result received a best-paper award at the 2023 Foundations of Computer Science conference.
“This new algorithm is extremely exciting,” said Noah Stephens-Davidowitz, a mathematician and computer scientist at Cornell University. “It represents the first [major] improvement to ILP solvers in nearly 40 years.”
ILP works by translating a given problem into a set of variables that must take whole-number values and satisfy a collection of linear inequalities, together with a linear quantity to maximize or minimize. The specific inequalities depend on the details of the original problem. But while these details may differ, the basic makeup of ILP problems remains the same, giving researchers a single way to attack a multitude of problems.
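Written out in a standard textbook form (a conventional presentation, not spelled out in the article itself), the common skeleton looks like this:

```latex
\max_{x \in \mathbb{Z}^n} \; c^{\top} x \quad \text{subject to} \quad A x \le b
```

Only the matrix A and the vectors b and c change from one application to the next; the requirement that the variables x be integers is what separates ILP from ordinary linear programming.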