Additional Exercise 5
The Good Approximation Theorem.
Assume the following result (due to Berman and couched here in terms of
the Clique problem, but trivially generalizable to any NP-complete
problem):
Theorem. If there exist a language L, a function f: N -> N, and a
transformation t such that:
- x is a yes-instance of Clique if and only if t(x) is in L.
  (That is, t is a standard many-one transformation from Clique to L.)
- t(x) can be computed in deterministic f(|x|) time.
- t is f-sparse, that is, the number of distinct strings produced by t
  when applied to strings of length at most n is bounded above by f(n).
Then the deterministic time complexity of Clique is bounded above
by a polynomial or by a function of the form p(f(n)), for some
polynomial p.
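To make the sparsity condition concrete, here is a minimal Python sketch
(not tied to Clique or to any actual reduction): the toy transformation t
below records only the number of 1s in its input, written in unary, so on
inputs of length at most n it produces at most n+1 distinct strings and is
therefore f-sparse for f(n) = n+1.

    from itertools import product

    def t(x: str) -> str:
        """Toy transformation: keep only the number of 1s, written in unary."""
        return "0" * x.count("1")

    def distinct_images(n: int) -> int:
        """Count the distinct strings t produces on inputs of length <= n."""
        images = set()
        for length in range(n + 1):
            for bits in product("01", repeat=length):
                images.add(t("".join(bits)))
        return len(images)

    for n in range(6):
        assert distinct_images(n) <= n + 1   # f-sparse with f(n) = n + 1
        print(n, distinct_images(n))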
Use this theorem in answering the following questions.
- Prove that the existence of an NP-complete subset of 0* implies P=NP
  (our alphabet is of course just {0,1}).
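As a hint toward the counting argument, the short Python sketch below
records the relevant gap: over {0,1} there are 2^(n+1) - 1 strings of
length at most n, but only n+1 of them lie in 0*, so any transformation
whose outputs all land in a subset of 0* is automatically very sparse.

    def binary_strings_up_to(n: int) -> int:
        """Number of strings over {0,1} of length at most n."""
        return 2 ** (n + 1) - 1

    def unary_strings_up_to(n: int) -> int:
        """Number of strings in 0* of length at most n."""
        return n + 1

    for n in (5, 10, 20):
        print(n, binary_strings_up_to(n), unary_strings_up_to(n))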
- Let L be an NP-complete problem. Prove that, if there exist a
  polynomial p and a (deterministic) solution algorithm A for L such
  that the number f(n) of instances of size no larger than n on which
  A does not run in time p(n) grows subexponentially (slower than any
  exponential function of n; see the sketch after this question), then
  the deterministic time complexity of any NP-complete problem is
  polynomially related to max{f(n), n}.
  Such a solution algorithm would be a pretty good approximation
  algorithm, since it would always solve the problem and would run in
  polynomial time in the majority of cases. In fact, this result is
  sometimes known as "the good approximation theorem."
- What exactly would be the consequences of the existence of a "good
  approximation" algorithm for an NP-complete problem? Relate the
  "good approximation theorem" to your discussion of the complexity
  class SUBEXP (Exercise 6.24).