Thore Husfeldt

Overview of Exponential Time Algorithms, Early 2009


This is an overview of the state of research in exponential time algorithms, which we submitted as motivation for a Dagstuhl seminar in April 2009; see also my longer organiser's report.

The seminar, 10441, is now over, but I think the proposal is a competent snapshot of a rapidly evolving field.

A decade before NP-completeness became the lens through which Computer Science views computationally hard problems, beautiful algorithms were discovered that are much better than exhaustive search, for example Bellman’s 1962 dynamic programming treatment of the Traveling Salesman problem and Ryser’s 1963 inclusion–exclusion formula for the permanent.
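Bellman's dynamic program (found independently by Held and Karp) beats the trivial O*(n!) enumeration by tabulating, for every subset S of cities and every endpoint j in S, the cheapest path from the start city through exactly S. The following Python sketch is my own illustration of that idea, not code from the proposal:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP by Bellman/Held-Karp dynamic programming, O*(2^n) time.
    dist: n x n matrix of pairwise distances; the tour starts and ends
    at city 0."""
    n = len(dist)
    # C[(S, j)]: cheapest path from city 0 that visits exactly the set S
    # (a bitmask over cities 1..n-1) and ends at city j in S.
    C = {}
    for j in range(1, n):
        C[(1 << (j - 1), j)] = dist[0][j]
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << (j - 1) for j in subset)
            for j in subset:
                prev = S & ~(1 << (j - 1))       # S with j removed
                C[(S, j)] = min(C[(prev, i)] + dist[i][j]
                                for i in subset if i != j)
    full = (1 << (n - 1)) - 1                    # all cities 1..n-1
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```

The table has 2^(n-1) · n entries, each filled in polynomial time, which is where the O*(2^n) bound comes from.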

Today we know that all NP-hard problems are unlikely to admit polynomial-time algorithms, yet NP-hard problems must be solved, in some way, for everything from manufacturing and supply-chain optimization to railroad timetabling. Approaches include approximation algorithms, heuristics, average-case analysis, and exact exponential-time algorithms: all are essential. While all NP-complete problems are equivalent from the polynomial-time perspective, their exponential-time properties vary widely. Which problems are easiest or hardest? What are the promising algorithmic techniques? What are the connections with parameterized complexity? How fast an algorithm can we find? What about complexity lower bounds?

Work addressing such questions, both from the algorithmic and the complexity-theoretic side, has become known as exact complexity. Despite significant progress, the area is still fairly new and many fundamental problems remain open. Where the approximation algorithms field, for example, has unifying algorithmic techniques such as LP rounding and semidefinite programming, and hardness techniques from probabilistically checkable proofs and the Unique Games conjecture, much exact algorithms work is still specific to a particular NP-complete problem: powerful unified techniques are just emerging.

Exciting new results and directions have been established by scientists on several continents, with important contributions coming from young researchers such as Williams and Traxler. The purpose of this workshop is to accelerate developments in this late-blooming field. Below, we outline several new results and promising directions.

The Tutte polynomial of an n-vertex, m-edge graph can trivially be evaluated in time O*(2^m), but no vertex-parameterized algorithm is obvious. The Potts (q-coloring) partition function can trivially be evaluated in time O*(q^n), but it is not obvious whether one can remove the dependence on q. The Fortuin–Kasteleyn model from statistical physics generalizes both, and a breakthrough result of Björklund, Husfeldt, Kaski, and Koivisto [FOCS 2006, STOC 2007, FOCS 2008] shows how to evaluate it using the inclusion–exclusion method in time O*(2^n). It is an intriguing question how far these techniques can be extended.
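The flavour of the inclusion–exclusion method can be seen already in the q-coloring special case: the number of ordered covers of the vertex set by k independent sets equals an alternating sum, over all vertex subsets X, of s(X)^k, where s(X) counts the independent sets contained in X, and this count is nonzero exactly when the graph is k-colorable. The sketch below is my own simplified illustration of that idea (graphs are given as per-vertex neighbour bitmasks), not the full Fortuin–Kasteleyn algorithm:

```python
def count_independent_sets(adj, n):
    """s[X] = number of independent sets (including the empty set)
    contained in the vertex subset X, for all 2^n subsets X.
    adj[v] is a bitmask of v's neighbours."""
    s = [0] * (1 << n)
    s[0] = 1
    for X in range(1, 1 << n):
        v = (X & -X).bit_length() - 1        # lowest-indexed vertex in X
        without_v = X & ~(1 << v)
        # sets avoiding v, plus sets containing v (which must avoid N(v))
        s[X] = s[without_v] + s[without_v & ~adj[v]]
    return s

def is_k_colorable(adj, n, k):
    """Inclusion-exclusion count of ordered covers of V by k independent
    sets; nonzero iff the graph is k-colorable. Runs in time O*(2^n),
    independent of k."""
    s = count_independent_sets(adj, n)
    total = 0
    for X in range(1 << n):
        sign = -1 if (n - bin(X).count("1")) % 2 else 1
        total += sign * s[X] ** k
    return total > 0
```

Note that k enters only as an exponent in the sum, which is exactly how the dependence on the number of colors disappears from the running time.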

Recently, the color-coding technique of Alon, Yuster, and Zwick [JACM 1995] has been extended by introducing algebraic structures that yield faster fixed parameter tractable algorithms. Koutis [ICALP 2008] uses “vector coding” for a randomized O*(2^(3k/2)) algorithm for the k-Path problem, and Williams [IPL 2009] improves this to O*(2^k). Such algorithms from group algebra are a promising direction for further exploration.
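To fix intuitions, here is a sketch of the original color-coding method itself (Alon–Yuster–Zwick, not the newer algebraic variants): color the vertices randomly with k colors, then look for a path using all k colors by dynamic programming over color subsets; a fixed k-path becomes "colorful" with probability k!/k^k, so repeating with fresh colorings gives a randomized algorithm with one-sided error. The code is my own illustration:

```python
import math
import random

def colorful_path_exists(adj, n, k, coloring):
    """Given a k-coloring of the vertices, decide by DP over color
    subsets (time O*(2^k)) whether some path visits k distinct colors.
    Distinct colors force distinct vertices, so such a walk is a
    simple path. adj[v] is a list of v's neighbours."""
    # reach[v] = set of color-sets S such that some walk ending at v
    # uses exactly the colors in S, one vertex per color
    reach = [{1 << coloring[v]} for v in range(n)]
    for _ in range(k - 1):
        new = [set() for _ in range(n)]
        for v in range(n):
            for u in adj[v]:
                for S in reach[u]:
                    if not S & (1 << coloring[v]):
                        new[v].add(S | (1 << coloring[v]))
        reach = new
    return any(reach[v] for v in range(n))

def k_path(adj, n, k, trials=None):
    """Randomized color-coding: a k-path survives one random coloring
    with probability k!/k^k >= e^(-k), so O(e^k) trials suffice with
    high probability. No false positives; 'False' may rarely be wrong."""
    if trials is None:
        trials = int(math.e ** k * 3) + 1
    for _ in range(trials):
        coloring = [random.randrange(k) for _ in range(n)]
        if colorful_path_exists(adj, n, k, coloring):
            return True
    return False
```

The algebraic improvements of Koutis and Williams replace the color-subset DP by multiplication in a group algebra, cutting the base of the exponent.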

Branch-and-reduce is one of the most frequently used methods for solving NP-hard problems, but current analyses of such algorithms may be overly pessimistic. Fomin, Grandoni and Kratsch [ICALP 2005, SODA 2006] used a measure and conquer framework to establish simple and fast algorithms for the Minimum Dominating Set and Maximum Independent Set problems. By now measure and conquer analysis has had an enormous impact. This and related methods, Eppstein’s quasiconvex analysis [SODA 2004] and Scott and Sorkin’s linear programming method [Random 2005], have become indispensable, but a need remains for further improvements.
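The shape of a branch-and-reduce algorithm is easy to convey on Maximum Independent Set: apply reduction rules as long as they fire (here, a vertex of degree at most one can always be taken into the solution), and otherwise branch on a high-degree vertex. The sketch below is a minimal illustration of the method of my own devising, not the Fomin–Grandoni–Kratsch algorithm; measure and conquer concerns how such recursions are analyzed, not how they are written:

```python
def mis_size(adj):
    """Maximum independent set size by branch-and-reduce on a graph
    given as {vertex: set_of_neighbours}."""
    if not adj:
        return 0
    # reduction rule: a vertex of degree <= 1 is in some optimal solution
    for v, nbrs in adj.items():
        if len(nbrs) <= 1:
            removed = {v} | nbrs
            rest = {u: adj[u] - removed for u in adj if u not in removed}
            return 1 + mis_size(rest)
    # branch on a maximum-degree vertex v
    v = max(adj, key=lambda u: len(adj[u]))
    # case 1: v is not in the solution
    rest = {u: adj[u] - {v} for u in adj if u != v}
    best = mis_size(rest)
    # case 2: v is in the solution, so all of N(v) is excluded
    removed = {v} | adj[v]
    rest = {u: adj[u] - removed for u in adj if u not in removed}
    return max(best, 1 + mis_size(rest))
```

A naive analysis charges each branch one vertex; measure and conquer instead assigns weights to vertices by degree, so that the second branch, which deletes v and all its neighbours, is credited with a much larger decrease in the measure.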

Faster algorithms, notably for Maximum Independent Set, have resulted from computer-produced graphical reductions and case analysis. Can these automated techniques be put on a more general theoretical level, and improved? Can similar automation be applied to logic-based branching rules such as the “clause learning” of Kulikov and Kutzkov [CSR 2007]? What about lower bounds on such local methods?

Exponential-time and other approaches may be combined. Scott and Sorkin’s [CPC 2006] average-case analysis of an exponential-time algorithm shows that Max 2-CSP is solvable in expected linear time on random constraint graphs below the giant-component threshold. It would be interesting to obtain similar results for other problems. Cygan, Kowalik, Pilipczuk and Wykurz [2008] explore exponential-time approximation algorithms trading off the running time and approximation ratio for various problems, an approach well worth further investigation. Hybrid algorithms, introduced by Vassilevska, Williams and Woo [SODA 2006], are exemplified for Minimum Bandwidth: the fastest exact algorithm known takes time O*(5^n) and the best polynomial-time approximation ratio is roughly O(log^3 n), but there is a hybrid algorithm that, depending on the input, produces either the exact minimum in time roughly O*(4^n) or an O(log^2.5 n) approximation in polynomial time.

Some other approaches merit mention. Horowitz and Sahni’s O*(2^(n/2)) Knapsack algorithm [JACM 1974] is an early example of the approach of reducing a hard problem to one that is easy but exponentially large. Williams’ O*(1.74^n) algorithm for Max 2-CSP [ICALP 2004] is the only nontrivial algorithm known for dense instances. Koivisto’s ring extension of this algorithm [IPL 2006], the Tutte and k-Path work previously mentioned, and Scott and Sorkin’s ring extension of other Max 2-CSP algorithms [TALG 2009] show the power of algebraic approaches.
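The Horowitz–Sahni idea, now usually called meet in the middle, is simplest on Subset Sum: split the items in two halves, enumerate all 2^(n/2) subset sums of each half, sort one list, and match the halves by binary search, for O*(2^(n/2)) total time instead of O*(2^n). A minimal sketch of this idea, my own illustration rather than the original Knapsack formulation:

```python
from bisect import bisect_right

def subset_sum_max(weights, capacity):
    """Largest achievable subset sum <= capacity, by meet in the middle
    (Horowitz-Sahni style): O*(2^(n/2)) time and space."""
    def all_sums(items):
        # all 2^len(items) subset sums, including the empty sum 0
        sums = [0]
        for w in items:
            sums += [s + w for s in sums]
        return sums

    half = len(weights) // 2
    left = all_sums(weights[:half])
    right = sorted(all_sums(weights[half:]))
    best = 0
    for s in left:
        if s > capacity:
            continue
        # largest right-half sum that still fits next to s
        i = bisect_right(right, capacity - s) - 1
        if i >= 0:
            best = max(best, s + right[i])
    return best
```

The "easy but exponentially large" problem here is searching a sorted list of 2^(n/2) numbers, which binary search handles in polynomial time per query.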

Despite these algorithmic successes, there may be limits to what is possible. Impagliazzo, Paturi and Zane [FOCS 1998] initiate a complexity theory of exact algorithms, using subexponential-time Turing reductions. They introduce the Exponential-Time Hypothesis (ETH), viz., that the best exact algorithm for 3-SAT has exponential complexity 2^(cn) for some c > 0. Assuming ETH they prove that the best exponent sequence ck for k-SAT must be an increasing sequence [CCC 1999]. Using similar techniques, Traxler [IWPEC 2008] shows that for 2-CSPs over k-valued variables, assuming ETH, any algorithm’s running time must depend strongly on k; this is in sharp contrast to the earlier-mentioned result that k-coloring can be solved in time O*(2^n), independent of k. These results call for a vigorous investigation of the complexity-theoretic limitations of exact algorithms.

The authors are Dieter Kratsch, Ramamohan Paturi, Gregory Sorkin, and me.


Written by thorehusfeldt

November 17, 2010 at 00:05

Posted in Uncategorized
