2019-05-01
2
2.3
Definition 2.3.1.2
Let $\Sigma$ be an alphabet. A word over $\Sigma$ is any finite sequence of symbols of $\Sigma$. The empty word $\lambda$ is the only word consisting of zero symbols. The set of all words over the alphabet $\Sigma$ is denoted by $\Sigma^{*}$
Definition 2.3.1.3
The length of a word $w$ over an alphabet $\Sigma$, denoted by $|w|$, is the number of symbols in $w$. For every word $w\in\Sigma^{*}$ and every symbol $a\in\Sigma$, $\sharp_{a}(w)$ is the number of occurrences of the symbol $a$ in the word $w$
Definition 2.3.1.4
Let $\Sigma$ be an alphabet. Then, for any $n\in\mathbb{N}$
$$ \Sigma^n=\{x\in\Sigma^{*}\mid|x|=n\} $$
We define $\Sigma^+=\Sigma^{*}-\{\lambda\}$
Definition 2.3.1.5 (concatenation)
For every word $w\in\Sigma^{*}$, we define
- $w^0=\lambda$, and
- $w^{n + 1}=w\cdot w^n=ww^n$ for every $n\in\mathbb{N}$
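The word operations above can be sketched directly with Python strings (an illustrative encoding of my own, not part of the definitions):

```python
# Words over an alphabet modeled as Python strings (a sketch; the
# definitions above are purely mathematical, strings are one encoding).

LAMBDA = ""  # the empty word


def length(w):
    """|w|: the number of symbols in w."""
    return len(w)


def count(a, w):
    """#_a(w): the number of occurrences of symbol a in w."""
    return w.count(a)


def power(w, n):
    """w^n: w concatenated with itself n times; w^0 = lambda."""
    return w * n


print(length("abba"))            # 4
print(count("a", "abba"))        # 2
print(power("ab", 3))            # ababab
print(power("ab", 0) == LAMBDA)  # True
```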
Definition 2.3.1.9 (language)
Let $\Sigma$ be an alphabet. Every set $L\subseteq\Sigma^{*}$ is called a language over $\Sigma$. The complement of the language $L$ according to $\Sigma$ is $L^C=\Sigma^{*}-L$
Let $\Sigma_1$ and $\Sigma_2$ be alphabets, and let $L_1\subseteq\Sigma_1^{*}$ and $L_2\subseteq\Sigma_2^{*}$ be languages. The concatenation of $L_1$ and $L_2$ is
$$ L_1L_2=L_1\circ L_2=\{uv\in(\Sigma_1\cup\Sigma_2)^{*}\mid u\in L_1\mbox{ and }v\in L_2\} $$
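For finite languages, the concatenation $L_1L_2$ can be computed literally (a small sketch; languages of interest are usually infinite):

```python
from itertools import product


def concat_languages(L1, L2):
    """L1 . L2 = { uv | u in L1, v in L2 } for finite languages."""
    return {u + v for u, v in product(L1, L2)}


L1 = {"", "a"}
L2 = {"b", "bb"}
print(sorted(concat_languages(L1, L2)))  # ['ab', 'abb', 'b', 'bb']
```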
Definition 2.3.1.10
Let $\Sigma=\{s_1,s_2,\dots,s_m\}$, $m\ge1$, be an alphabet, and let $s_1 < s_2 < \dots < s_m$ be a linear ordering on $\Sigma$. We define the canonical ordering on $\Sigma^{*}$ as follows. For all $u,v\in\Sigma^{*}$
$$u < v\mbox{ if }|u| < |v|,\mbox{ or }|u|=|v|,u=xs_iu',\mbox{ and }v=xs_jv'\mbox{ for some }x,u',v'\in\Sigma^{*},\mbox{ and }i < j$$
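The canonical ordering is what is often called the shortlex order; a small Python sketch (names are my own) realizes it as a sort key:

```python
from itertools import product


def canonical_key(order):
    """Sort key realizing the canonical (shortlex) ordering: first by
    length, then lexicographically w.r.t. the given symbol ordering."""
    rank = {s: i for i, s in enumerate(order)}
    return lambda w: (len(w), [rank[s] for s in w])


# All words over {a, b} of length <= 2, listed in canonical order.
words = ["".join(t) for n in (0, 1, 2) for t in product("ab", repeat=n)]
print(sorted(words, key=canonical_key("ab")))
# ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```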
2.3.2
Definition 2.3.2.1
A decision problem is a triple $(L, U, \Sigma)$ where $\Sigma$ is an alphabet and $L\subseteq U\subseteq\Sigma^{*}$. An algorithm $A$ solves (decides) the decision problem $(L, U, \Sigma)$ if, for every $x\in U$
- $A(x)=1$ if $x\in L$, and
- $A(x)=0$ if $x\in U-L$ (i.e., $x\not\in L$)
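A minimal concrete instance of Definition 2.3.2.1 (a toy example of my own): $\Sigma=\{a,b\}$, $U=\Sigma^{*}$, and $L$ the words with an even number of $a$'s:

```python
# Toy decision problem (L, U, Sigma): Sigma = {a, b}, U = Sigma*,
# L = { x | #_a(x) is even }, and an algorithm A deciding it.


def A(x):
    """Returns 1 iff x is in L, 0 otherwise."""
    return 1 if x.count("a") % 2 == 0 else 0


print(A("abab"))  # 1  (x in L)
print(A("ab"))    # 0  (x in U - L)
```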
Definition 2.3.2.2
An optimization problem is a 7-tuple $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$, where
- $\Sigma_I$ is an alphabet, called the input alphabet of $U$
- $\Sigma_O$ is an alphabet, called the output alphabet of $U$
- $L\subseteq\Sigma_I^{*}$ is the language of feasible problem instances
- $L_I\subseteq L$ is the language of the(actual) problem instances of $U$
- $M$ is a function from $L$ to $Pot(\Sigma_O^{*})$ (the power set of $\Sigma_O^{*}$), and, for every $x\in L$, $M(x)$ is the set of feasible solutions for $x$
- cost is the cost function that, for every pair $(u,x)$, where $u\in M(x)$ for some $x\in L$, assigns a positive real number cost($u,x$)
- goal$\in${minimum,maximum}
For every $x\in L_I$, a feasible solution $y\in M(x)$ is called optimal for $x$ and $U$ if
$$ cost(y,x)=goal\{cost(z,x)|z\in M(x)\} $$
For an optimal solution $y\in M(x)$, we denote cost($y,x$) by Opt$_U(x)$. $U$ is called a maximization problem if goal=maximum, and $U$ is a minimization problem if goal=minimum. In what follows, Output$_U(x)\subseteq M(x)$ denotes the set of all optimal solutions for the instance $x$ of $U$
An algorithm $A$ is consistent for $U$ if, for every $x\in L_I$, the output $A(x)\in M(x)$. We say that an algorithm $B$ solves the optimization problem $U$ if
- $B$ is consistent for $U$, and
- for every $x\in L_I$, $B(x)$ is an optimal solution for $x$ and $U$
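A brute-force sketch (my own example) of an algorithm that solves, in the sense above, the minimum vertex cover problem on tiny instances; it is consistent and always outputs an optimal solution, but runs in exponential time:

```python
from itertools import combinations

# Brute-force algorithm B for MIN-VERTEX-COVER on small instances:
# B is consistent (always outputs a cover, i.e. an element of M(x))
# and returns an optimal solution; goal = minimum, cost = |cover|.


def feasible(cover, edges):
    """Is `cover` a feasible solution, i.e. does it touch every edge?"""
    return all(u in cover or v in cover for u, v in edges)


def B(vertices, edges):
    """Try covers in order of increasing size; the first feasible one
    is optimal. Exponential time -- only a sketch of Def 2.3.2.2."""
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            if feasible(set(cand), edges):
                return set(cand)


edges = [(1, 2), (2, 3), (3, 4)]
print(B([1, 2, 3, 4], edges))  # an optimal cover of cost 2
```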
Definition 2.3.2.3
Let $U_1=(\Sigma_I,\Sigma_O,L,L_{I,1},M,cost,goal)$ and $U_2=(\Sigma_I,\Sigma_O,L,L_{I,2},M,cost,goal)$ be two optimization problems. We say that $U_1$ is a subproblem of $U_2$ if $L_{I,1}\subseteq L_{I,2}$
Theorem 2.3.3.3
There is a decision problem $(L,\Sigma_{bool})$ such that, for every algorithm $A$ deciding $L$, there exists another algorithm $B$ deciding $L$, such that
$$ Time_B(n)=\log_2(Time_A(n)) $$
for infinitely many positive integers $n$
Definition 2.3.3.5
We define the complexity class $P$ of languages decidable in polynomial-time by
$$ P=\{L=L(M)|M\mbox{ is a TM (an algorithm) with }Time_M(n)\in O(n^c)\mbox{ for some positive integer }c\} $$
A language (decision problem) $L$ is called tractable (practically solvable) if $L\in P$. A language $L$ is called intractable if $L\not\in P$
Definition 2.3.3.6
Let $M$ be a nondeterministic TM (algorithm). We say that $M$ accepts a language $L$, $L = L(M)$, if
- for every $x\in L$, there exists at least one computation of $M$ that accepts $x$, and
- for every $y\not\in L$, all computations of $M$ reject $y$
For every input $w\in L$, the **time complexity $Time_M(w)$ of $M$ on $w$** is the time complexity of the shortest accepting computation of $M$ on $w$. The time complexity of $M$ is the function Time$_M$ from $\mathbb{N}$ to $\mathbb{N}$ defined by
$$ Time_M(n)=\max\{Time_M(x)|x\in L(M)\cap\Sigma^n\} $$
Definition 2.3.3.7
Let $L\subseteq\Sigma^{*}$ be a language. An algorithm $A$ working on inputs from $\Sigma^{*}\times\{0,1\}^{*}$ is called a verifier for $L$, denoted $L=V(A)$, if
$$ L=\{w\in\Sigma^{*}|A\mbox{ accepts }(w,c)\mbox{ for some }c\in\{0,1\}^{*}\} $$
If $A$ accepts $(w,c)\in\Sigma^{*}\times\{0,1\}^{*}$, we say that $c$ is a proof (certificate) of the fact $w\in L$
A verifier $A$ for $L$ is called a polynomial-time verifier if there exists a positive integer $d$ such that, for every $w\in L$, Time$_A(w,c)\in O(|w|^d)$ for a proof $c$ of $w\in L$
We define the class of polynomially verifiable languages as
$$ VP=\{V(A)|A\mbox{ is a polynomial-time verifier}\} $$
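As an illustration of Definition 2.3.3.7 (my own example, not from the text): a polynomial-time verifier for SUBSET-SUM, where the certificate $c\in\{0,1\}^{*}$ names the chosen subset:

```python
# Polynomial-time verifier sketch for SUBSET-SUM: the input w encodes
# (numbers, target); the certificate c in {0,1}^* marks which numbers
# to pick. A accepts (w, c) iff c selects a subset summing to target.


def A(w, c):
    numbers, target = w
    if len(c) != len(numbers):       # malformed certificate
        return False
    chosen = sum(n for n, bit in zip(numbers, c) if bit == "1")
    return chosen == target


w = ([3, 5, 7, 11], 12)
print(A(w, "0110"))  # True:  5 + 7 = 12, so "0110" proves w in L
print(A(w, "1100"))  # False: 3 + 5 = 8 != 12
```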
Definition 2.3.3.10
Let $L_1\subseteq\Sigma_1^{*}$ and $L_2\subseteq\Sigma_2^{*}$ be two languages. We say that $L_1$ is polynomial-time reducible to $L_2, L_1\le_pL_2$, if there exists a polynomial-time algorithm $A$ that computes a mapping from $\Sigma_1^{*}$ to $\Sigma_2^{*}$ such that, for every $x\in\Sigma_1^{*}$
$$ x\in L_1\Leftrightarrow A(x)\in L_2 $$
$A$ is called a polynomial-time reduction from $L_1$ to $L_2$
- A language $L$ is called NP-hard if, for every $U\in NP,U\le_pL$
- A language $L$ is called NP-complete if
- $L\in NP$, and
- $L$ is NP-hard
Lemma 2.3.3.11
If $L$ is NP-hard and $L\in P$, then P=NP
Lemma 2.3.3.15
Sat$\le_{p}$Clique
Lemma 2.3.3.16
Clique$\le_{p}$VC
Lemma 2.3.3.19
3Sat$\le_{p}$Sol-0/1-LP
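The reduction behind Lemma 2.3.3.19 maps each clause to a linear constraint over 0/1 variables; below is a sketch of the clause translation (assuming the usual encoding, with literals as signed integers):

```python
# Sketch of the reduction behind 3SAT <=_p Sol-0/1-LP: a clause
# (l1 v l2 v l3) maps to the 0/1 constraint e(l1)+e(l2)+e(l3) >= 1,
# where e(x_i) = x_i and e(not x_i) = 1 - x_i. The formula is
# satisfiable iff the resulting system has a 0/1 solution.


def clause_to_constraint(clause):
    """clause: literals as signed ints (i means x_i, -i means not x_i).
    Returns (coeffs, constant) meaning sum(coeffs[i]*x_i) + constant >= 1."""
    coeffs, constant = {}, 0
    for lit in clause:
        i = abs(lit)
        if lit > 0:
            coeffs[i] = coeffs.get(i, 0) + 1
        else:
            coeffs[i] = coeffs.get(i, 0) - 1   # 1 - x_i contributes -x_i ...
            constant += 1                      # ... plus the constant 1
    return coeffs, constant


def satisfies(assignment, cnf):
    """Check the derived constraints under a 0/1 assignment {i: 0 or 1}."""
    for clause in cnf:
        coeffs, constant = clause_to_constraint(clause)
        if sum(c * assignment[i] for i, c in coeffs.items()) + constant < 1:
            return False
    return True


cnf = [(1, 2, -3), (-1, 2, 3), (1, -2, 3)]
print(satisfies({1: 1, 2: 0, 3: 1}, cnf))  # True
```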
Definition 2.3.3.21
NPO is the class of optimization problems, where $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)\in$NPO if the following conditions hold:
- $L_I\in$P,
- there exists a polynomial $p_U$ such that
  - for every $x\in L_I$ and every $y\in M(x)$, $|y|\le p_U(|x|)$, and
  - there exists a polynomial-time algorithm that, for every $y\in\Sigma_O^{*}$ and every $x\in L_I$ such that $|y|\le p_U(|x|)$, decides whether $y\in M(x)$, and
- the function cost is computable in polynomial time
Definition 2.3.3.23
PO is the class of optimization problems $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ such that
- $U\in$NPO, and
- there is a polynomial-time algorithm that, for every $x\in L_I$, computes an optimal solution for $x$
Definition 2.3.3.24
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ be an optimization problem from NPO. We define the threshold language of $U$ as
$$ Lang_U=\{(x,a)\in L_I\times\Sigma_{bool}^{*}|Opt_U(x)\le Number(a)\} $$
if goal=minimum, and as
$$ Lang_U=\{(x,a)\in L_I\times\Sigma_{bool}^{*}|Opt_U(x)\ge Number(a)\} $$
if goal=maximum
We say that $U$ is NP-hard if Lang$_U$ is NP-hard
Lemma 2.3.3.25
If an optimization problem $U\in$PO, then Lang$_U\in$P
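The direction used in Lemma 2.3.3.25 is easy to make concrete: a polynomial-time optimizer for $U$ immediately decides Lang$_U$. A toy sketch of my own, with goal=minimum:

```python
# Illustration of Lemma 2.3.3.25: if Opt_U(x) is computable in polynomial
# time, then Lang_U is decidable in polynomial time. Toy minimization
# problem: an instance x is a list of numbers and Opt_U(x) = min(x).


def opt(x):
    """Stands in for a polynomial-time optimizer computing Opt_U(x)."""
    return min(x)


def in_lang(x, a):
    """Decides (x, a) in Lang_U for goal = minimum: Opt_U(x) <= a."""
    return opt(x) <= a


print(in_lang([7, 4, 9], 5))  # True:  Opt = 4 <= 5
print(in_lang([7, 4, 9], 3))  # False: Opt = 4 > 3
```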
4
4.2
Definition 4.2.1.1
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ be an optimization problem, and let $A$ be a consistent algorithm for $U$. For every $x\in L_I$, the relative error $\varepsilon_A(x)$ of $A$ on $x$ is defined as
$$ \varepsilon_A(x)=\dfrac{|cost(A(x))-Opt_U(x)|}{Opt_U(x)} $$
For any $n\in\mathbb{N}$, we define the relative error of $A$ as
$$ \varepsilon_A(n)=\max\{\varepsilon_A(x)|x\in L_I\cap(\Sigma_I)^n\} $$
For every $x\in L_I$, the approximation ratio $R_A(x)$ of $A$ on $x$ is defined as
$$ R_A(x)=\max\left\{\dfrac{cost(A(x))}{Opt_U(x)},\dfrac{Opt_U(x)}{cost(A(x))}\right\} $$
For any $n\in\mathbb{N}$, we define the approximation ratio of $A$ as
$$ R_A(n)=\max\{R_A(x)|x\in L_I\cap(\Sigma_I)^n\} $$
For any positive real $\delta > 1$, we say that $A$ is a $\delta$-approximation algorithm for $U$ if $R_A(x)\le\delta$ for every $x\in L_I$
For every function $f:\mathbb{N}\to\mathbb{R}^+$, we say that $A$ is an $f(n)$-approximation algorithm for $U$ if $R_A(n)\le f(n)$ for every $n\in\mathbb{N}$
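A standard example of a $\delta$-approximation algorithm (not developed in the text above) is the matching-based 2-approximation for minimum vertex cover:

```python
# Classical 2-approximation for MIN-VERTEX-COVER: take both endpoints
# of a greedily built maximal matching. Every optimal cover must contain
# at least one endpoint of each matched edge, so |cover| <= 2 * Opt.


def vc_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge not yet covered: take both ends
    return cover


edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(vc_2approx(edges))  # covers every edge, cost at most 2 * Opt
```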
Definition 4.2.1.6
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ be an optimization problem. An algorithm $A$ is called a polynomial-time approximation scheme (PTAS) for $U$ if for every input pair $(x,\varepsilon)\in L_I\times\mathbb{R}^+, A$ computes a feasible solution $A(x)$ with a relative error at most $\varepsilon$, and $Time_A(x,\varepsilon^{-1})$ can be bounded by a function that is polynomial in $|x|$.
If $Time_A(x,\varepsilon^{-1})$ can be bounded by a function that is polynomial in both $|x|$ and $\varepsilon^{-1}$, then we say that $A$ is a fully polynomial-time approximation scheme (FPTAS) for $U$
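The classical FPTAS for the knapsack problem illustrates Definition 4.2.1.6; the sketch below (my own illustrative code) uses the usual value-scaling scheme:

```python
# FPTAS sketch for MAX-KNAPSACK via value scaling: round values down to
# multiples of K = eps * v_max / n, then run the exact DP over scaled
# profits. Running time is polynomial in n and 1/eps.


def knapsack_fptas(values, weights, capacity, eps):
    """Returns a lower bound on the profit of the solution found,
    which is at least (1 - eps) * Opt."""
    n = len(values)
    K = eps * max(values) / n               # scaling factor
    scaled = [int(v // K) for v in values]  # v'_i = floor(v_i / K)
    INF = float("inf")
    dp = {0: 0}  # dp[p] = minimum weight achieving scaled profit p
    for v, w in zip(scaled, weights):
        for p, wt in list(dp.items()):      # snapshot: each item used once
            if wt + w <= capacity and wt + w < dp.get(p + v, INF):
                dp[p + v] = wt + w
    return max(dp) * K


print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.5))  # 220.0
```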
Definition 4.2.3.1
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ and $\bar{U}=(\Sigma_I,\Sigma_O,L,L,M,cost,goal)$ be two optimization problems with $L_I\subset L$. A distance function for $\bar{U}$ according to $L_I$ is any function $h_L:L\to\mathbb{R}^{\ge0}$ satisfying the properties
- $h_L(x)=0$ for every $x\in L_I$, and
- $h_L$ is polynomial-time computable
Let $h$ be a distance function for $\bar{U}$ according to $L_I$. We define, for any $r\in\mathbb{R}^+$
$$ Ball_{r,h}(L_I)=\{w\in L|h(w)\le r\} $$
Let $A$ be a consistent algorithm for $\bar{U}$, and let $A$ be an $\varepsilon$-approximation algorithm for $U$ for some $\varepsilon\in\mathbb{R}^{> 1}$. Let $p$ be a positive real. We say that $A$ is $p$-stable according to $h$ if, for every real $0 < r \le p$, there exists a $\delta_{r,\varepsilon}\in\mathbb{R}^{> 1}$ such that $A$ is a $\delta_{r,\varepsilon}$-approximation algorithm for $U_r=(\Sigma_I,\Sigma_O,L,Ball_{r,h}(L_I),M,cost,goal)$
$A$ is stable according to $h$ if $A$ is $p$-stable according to $h$ for every $p\in\mathbb{R}^+$. We say that $A$ is unstable according to $h$ if $A$ is not $p$-stable for any $p\in\mathbb{R}^+$
For every positive integer $r$, and every function $f_r:\mathbb{N}\to\mathbb{R}^{> 1}$, we say that $A$ is $(r,f_r(n))$-quasistable according to $h$ if $A$ is an $f_r(n)$-approximation algorithm for $U_r=(\Sigma_I,\Sigma_O,L,Ball_{r,h}(L_I),M,cost,goal)$
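A concrete distance function in the sense of Definition 4.2.3.1 (the standard example for TSP): $h$ measures the worst relative violation of the triangle inequality, so $h(x)=0$ exactly on the metric instances $L_I$:

```python
# Distance function sketch for TSP: L_I = metric instances (triangle
# inequality holds), L = all instances with positive symmetric costs.
# h(x) = max(0, max_{i,j,k distinct} c(i,j)/(c(i,k)+c(k,j)) - 1).


def h(cost):
    """cost: symmetric matrix of positive edge costs (zero diagonal)."""
    n = len(cost)
    worst = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) == 3:
                    worst = max(worst, cost[i][j] / (cost[i][k] + cost[k][j]))
    return max(0.0, worst - 1.0)


metric = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
bad = [[0, 1, 5], [1, 0, 1], [5, 1, 0]]
print(h(metric))  # 0.0 -- inside Ball_{r,h}(L_I) for every r > 0
print(h(bad))     # 1.5 -- only in Ball_{r,h}(L_I) for r >= 1.5
```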
Definition 4.2.3.6
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ and $\bar{U}=(\Sigma_I,\Sigma_O,L,L,M,cost,goal)$ be two optimization problems with $L_I\subset L$. Let $h$ be a distance function for $\bar{U}$ according to $L_I$, and let $U_r=(\Sigma_I,\Sigma_O,L,Ball_{r,h}(L_I),M,cost,goal)$ for every $r\in\mathbb{R}^+$. Let $A=\{A_{\varepsilon}\}_{\varepsilon > 0}$ be a PTAS for $U$
If for every $r > 0$ and every $\varepsilon > 0, A_{\varepsilon}$ is a $\delta_{r,\varepsilon}$-approximation algorithm for $U_r$, we say that the PTAS $A$ is stable according to $h$
If $\delta_{r,\varepsilon}\le f(\varepsilon)\cdot g(r)$, where
- $f$ and $g$ are some functions from $\mathbb{R}^{\ge0}$ to $\mathbb{R}^+$, and
- $\lim_{\varepsilon\to0}f(\varepsilon)=0$
then we say that the PTAS $A$ is superstable according to $h$
5
5.2
Definition 5.2.2.10
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ be an optimization problem. For any positive real $\delta > 1$, a randomized algorithm $A$ is called a randomized $\delta$-approximation algorithm for $U$ if
- $Prob(A(x)\in M(x))=1$, and
- $Prob(R_A(x)\le\delta)\ge1/2$
for every $x\in L_I$
For every function $f:\mathbb{N}\to\mathbb{R}^+$, we say that $A$ is a randomized $f(n)$-approximation algorithm for $U$ if
$$ Prob(A(x)\in M(x))=1\mbox{ and }Prob(R_A(x)\le f(|x|))\ge1/2 $$
for every $x\in L_I$
A randomized algorithm $A$ is called a randomized polynomial-time approximation scheme (RPTAS) for $U$ if there exists a function $p:\mathbb{N}\times\mathbb{R}^+\to\mathbb{N}$ such that, for every input $(x,\delta)\in L_I\times\mathbb{R}^+$:
- $Prob(A(x,\delta)\in M(x))=1$ (for every random choice, $A$ computes a feasible solution of $U$),
- $Prob(\varepsilon_A(x,\delta)\le\delta)\ge1/2$ (a feasible solution whose relative error is at most $\delta$ is produced with probability at least 1/2), and
- $Time_A(x,\delta^{-1})\le p(|x|,\delta^{-1})$, where $p$ is a polynomial in $|x|$
If $p(|x|,\delta^{-1})$ is polynomial in both its arguments $|x|$ and $\delta^{-1}$, then we say that $A$ is a randomized fully polynomial-time approximation scheme (RFPTAS) for $U$
Definition 5.2.2.11
Let $U=(\Sigma_I,\Sigma_O,L,L_I,M,cost,goal)$ be an optimization problem. For any positive real $\delta > 1$, a randomized algorithm $A$ is called a randomized $\delta$-expected approximation algorithm for $U$ if
- $Prob(A(x)\in M(x))=1$
- $E[R_A(x)]\le\delta$
for every $x\in L_I$
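A textbook example of a randomized expected approximation algorithm (a standard fact, not from the text above): a uniformly random assignment for MAX-E3SAT satisfies each clause with probability 7/8, so $E[cost]\ge(7/8)\cdot Opt$ and hence $E[R_A(x)]\le8/7$. An empirical sketch:

```python
import random

# Random assignment for MAX-E3SAT: each clause over 3 distinct variables
# is left unsatisfied by exactly one of the 8 assignments to its
# variables, so it is satisfied with probability 7/8.


def random_assignment_sat(cnf, nvars, rng):
    """Draw a uniform assignment; return the number of satisfied clauses.
    Literals are signed ints: i means x_i, -i means not x_i."""
    a = {i: rng.random() < 0.5 for i in range(1, nvars + 1)}
    return sum(any(a[abs(l)] == (l > 0) for l in c) for c in cnf)


cnf = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
rng = random.Random(0)
trials = 20000
avg = sum(random_assignment_sat(cnf, 3, rng) for _ in range(trials)) / trials
print(round(avg, 2))  # close to 4 * 7/8 = 3.5
```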