Theory Of Computation

LN1 Alphabets, Strings, and Languages

Alphabets and Strings

An alphabet is a set of symbols
String: a sequence of symbols from some alphabet
Language: a set of strings
Unary numbers alphabet $\Sigma=\{1\}$
Unary numbers: $1, 11, 111, \ldots$

String Operations

  • Concatenation $wv$
  • Reverse $w^R$
  • String Length $|w|$
  • Empty String: a string with no letters, denoted $\varepsilon$ or $\lambda$
  • Substring: a subsequence of consecutive characters
  • Prefix and Suffix: if $w=uv$, then $u$ is a prefix and $v$ is a suffix
  • Exponent Operation: $w^n=\underbrace{ww\cdots w}_{n}$ and $w^0=\varepsilon$
  • The * Operation
    $\Sigma^*$: the set of all possible strings over alphabet $\Sigma$
  • The + Operation
    $\Sigma^+=\Sigma^*-\{\varepsilon\}$
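
A quick Python sketch of these operations (my own illustration, using ordinary Python strings; the variable names are arbitrary):

```python
# String operations from the list above, illustrated with Python strings.
w, v = "abc", "de"
print(w + v)                # concatenation wv   -> "abcde"
print(w[::-1])              # reverse w^R        -> "cba"
print(len(w))               # length |w|         -> 3
print(w * 3)                # exponent w^3       -> "abcabcabc"
print(w * 0 == "")          # w^0 = ε            -> True
print(w.startswith("ab"), w.endswith("bc"))   # prefix u / suffix v -> True True
```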

Languages

A language over alphabet $\Sigma$ is any subset of $\Sigma^*$

Empty language: $\{\}$ or $\varnothing$

Operations on Languages

  • The usual set operations: union, intersection, difference, complement
  • Reverse: $L^R=\{w^R : w\in L\}$
  • Concatenation: $L_1L_2=\{xy : x\in L_1, y\in L_2\}$
    $L^0=\{\varepsilon\}$
  • Star-Closure (Kleene *)
    All strings that can be constructed from $L$
    $L^*=L^0\cup L^1\cup L^2\cup\cdots$
  • Positive Closure
    $L^+=L^1\cup L^2\cup\cdots$
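
A minimal Python sketch of the concatenation and star operations on finite languages (my own illustration; since $L^*$ is infinite for non-trivial $L$, the sketch truncates it to strings of length at most `max_len`):

```python
# Language operations: concatenation L1L2 and (length-bounded) star closure L*.
def concat(L1, L2):
    return {x + y for x in L1 for y in L2}

def star(L, max_len):
    result, power = {""}, {""}                    # L^0 = {ε}
    while True:
        power = {w for w in concat(power, L) if len(w) <= max_len}
        if power <= result:                       # no new strings: union has stabilised
            return result                         # L^0 ∪ L^1 ∪ L^2 ∪ ... (truncated)
        result |= power

L = {"a", "bc"}
print(concat(L, L))     # {'aa', 'abc', 'bca', 'bcbc'}
print(star(L, 3))       # {'', 'a', 'aa', 'aaa', 'bc', 'abc', 'bca'} (in some order)
```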

LN2 Deterministic Finite Automata

For every state, there is a transition for every symbol in the alphabet

Formal Definition
Deterministic Finite Automaton (DFA)
$M=(Q,\Sigma,\delta,q_0,F)$

  • $Q$: set of states
  • $\Sigma$: input alphabet, $\varepsilon\notin\Sigma$ (the input alphabet never contains the empty string)
  • $\delta$: transition function
    $\delta:Q\times\Sigma\rightarrow Q$, e.g. $\delta(q_0,a)=q_1$
    Extended transition function: $\delta^*(q,w)=q'$
    describes the resulting state after scanning string $w$ from state $q$
  • $q_0$: initial state
  • $F$: set of accepting states

Language Accepted by DFA
Language accepted by $M$:
$L(M)=\{w\in\Sigma^* : \delta^*(q_0,w)\in F\}$
Language rejected by $M$:
$\overline{L(M)}=\{w\in\Sigma^* : \delta^*(q_0,w)\notin F\}$

Regular Languages
A language $L$ is regular if there is a DFA $M$ that accepts it, i.e. $L(M)=L$
The languages accepted by all DFAs form the family of regular languages
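
A minimal Python sketch of a DFA and its extended transition function $\delta^*$ (my own example: a DFA over $\{a,b\}$ accepting strings with an even number of $a$'s; the state names and dictionary encoding are assumptions of the sketch):

```python
# Run a DFA M = (Q, Σ, δ, q0, F): δ* is repeated application of δ.
def dfa_accepts(delta, q0, F, w):
    """delta: dict mapping (state, symbol) -> state, total on Q × Σ."""
    q = q0
    for a in w:
        q = delta[(q, a)]        # one step of δ*
    return q in F                # accept iff δ*(q0, w) ∈ F

delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}
print(dfa_accepts(delta, "even", {"even"}, "abba"))   # True  (two a's)
print(dfa_accepts(delta, "even", {"even"}, "ab"))     # False (one a)
```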

LN3 Non-Deterministic Finite Automata

An NFA accepts a string:
if there is a computation path of the NFA that accepts the string, i.e.
all the symbols of the input string are processed and the last state is an accepting state

An NFA rejects a string:
if there is no computation of the NFA that accepts the string.
For each possible computation path:
All the input is consumed and the automaton is in a non-accepting state
or
The input cannot be consumed

  • $\delta$: transition function
    $\delta(q_0,x)=\{q_1,q_2,\cdots,q_k\}$ (can be $\varnothing$)

Note that for any state $q$, $q\in\delta^*(q,\varepsilon)$

Equivalence of Machines
Machine $M_1$ is equivalent to machine $M_2$ if $L(M_1)=L(M_2)$
Languages accepted by NFAs = Regular Languages (aka Languages accepted by DFAs)

NFAs and DFAs have the same computation power, namely, they accept the same set of languages.

Proof:

  1. Languages accepted by NFAs $\supseteq$ Regular Languages, because every DFA is trivially an NFA;
  2. Languages accepted by NFAs $\subseteq$ Regular Languages, because any NFA can be converted to an equivalent DFA (shown below).

Conversion Procedure Steps

  1. The initial state of the NFA is $q_0$ and $\delta^*(q_0,\varepsilon)=\{q_0,\cdots\}$, so the initial state of the DFA is $\{q_0,\cdots\}$;
  2. For every DFA state $\{q_i,\cdots,q_j\}$ and symbol $a$, compute in the NFA $\delta^*(q_i,a)\cup\cdots\cup\delta^*(q_j,a)=\{q_l',\cdots,q_n'\}$, then add the DFA transition $\delta(\{q_i,\cdots,q_j\},a)=\{q_l',\cdots,q_n'\}$;
  3. Repeat Step 2 for every DFA state and every symbol in the alphabet until no more states can be added to the DFA;
  4. For any DFA state $\{q_i,\cdots,q_j\}$, if some $q_j$ is an accepting state in the NFA, then $\{q_i,\cdots,q_j\}$ is an accepting state in the DFA.
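
A Python sketch of this subset construction (my own illustration; the NFA is assumed to be given as a dictionary `nfa[(state, symbol)] -> set of states`, with the empty string `""` marking $\varepsilon$-transitions):

```python
from collections import deque

def eps_closure(nfa, states):
    """All NFA states reachable from `states` using only ε-transitions."""
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for p in nfa.get((q, ""), set()) - closure:
            closure.add(p)
            stack.append(p)
    return frozenset(closure)

def nfa_to_dfa(nfa, q0, F, alphabet):
    start = eps_closure(nfa, {q0})                  # Step 1: DFA initial state
    dfa, accepting = {}, set()
    seen, todo = {start}, deque([start])
    while todo:                                     # Step 3: stop when no new states appear
        S = todo.popleft()
        if S & set(F):                              # Step 4: accepting if it contains an NFA accept state
            accepting.add(S)
        for a in alphabet:                          # Step 2: one DFA transition per symbol
            T = eps_closure(nfa, set().union(*(nfa.get((q, a), set()) for q in S)))
            dfa[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return start, dfa, accepting
```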

LN4 Properties of Regular Languages

We say Regular languages are closed under

  • Union $L_1\cup L_2$
  • Concatenation $L_1L_2$
  • Star $L_1^*$
  • Reversal $L_1^R$
  • Complement $\overline{L_1}$
  • Intersection $L_1\cap L_2$

Every NFA (including an NFA with no accepting state) is equivalent to an NFA with a single accepting state.

LN5 Regular Expressions

Regular Expressions

Regular expressions describe regular languages
$(a+b\cdot c)^*$ describes the language $\{a,bc\}^*=\{\varepsilon,a,bc,aa,abc,bca,bcbc,\cdots\}$

Primitive regular expressions: $\varnothing$, $\varepsilon$, $a$
Given regular expressions $r_1, r_2$, the expressions $r_1+r_2$, $r_1\cdot r_2$, $r_1^*$, $(r_1)$ are also regular expressions

Languages of Regular Expressions

$L(r)$ denotes the language of regular expression $r$
$L(\varnothing)=\varnothing$
$L(\varepsilon)=\{\varepsilon\}$
$L(a)=\{a\}$
$L(r_1+r_2)=L(r_1)\cup L(r_2)$
$L(r_1\cdot r_2)=L(r_1)L(r_2)$
$L(r_1^*)=(L(r_1))^*$
$L((r_1))=L(r_1)$
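
A small Python sketch of these semantics (my own illustration; regular expressions are represented as nested tuples, and each set is truncated to strings of length at most `max_len` so that it stays finite):

```python
# L(r) computed recursively, one case per rule above (length-bounded).
def lang(r, max_len):
    kind = r[0]
    if kind == "empty":                    # L(∅) = ∅
        return set()
    if kind == "eps":                      # L(ε) = {ε}
        return {""}
    if kind == "sym":                      # L(a) = {a}
        return {r[1]}
    if kind == "union":                    # L(r1 + r2) = L(r1) ∪ L(r2)
        return lang(r[1], max_len) | lang(r[2], max_len)
    if kind == "concat":                   # L(r1 · r2) = L(r1) L(r2)
        return {x + y for x in lang(r[1], max_len)
                      for y in lang(r[2], max_len) if len(x + y) <= max_len}
    if kind == "star":                     # L(r1*) = (L(r1))*
        base, result, frontier = lang(r[1], max_len), {""}, {""}
        while frontier:
            frontier = {x + y for x in frontier for y in base
                        if x + y and len(x + y) <= max_len} - result
            result |= frontier
        return result
    raise ValueError(kind)

# (a + b·c)* from the example above
r = ("star", ("union", ("sym", "a"), ("concat", ("sym", "b"), ("sym", "c"))))
print(sorted(lang(r, 3)))   # ['', 'a', 'aa', 'aaa', 'abc', 'bc', 'bca']
```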

Equivalent Regular Expressions

Theorem: Languages Generated by Regular Expressions = Regular Languages

$r=r_1^*r_2(r_4+r_3r_1^*r_2)^*$

When we say we are given a Regular Language $L$, we mean Language $L$ is in a standard representation (DFA, NFA, or Regular Expression)

LN6 Regular Grammars

Grammars

Grammar $G=(V,T,S,P)$

$S\rightarrow aSb$
$S\rightarrow\lambda$

  • $V$: set of variables, $V=\{S\}$
  • $T$: set of terminal symbols, $T=\{a,b\}$
  • $S$: start variable
  • $P$: set of production rules, $P=\{S\rightarrow aSb, S\rightarrow\lambda\}$

Sentential Form: a string that contains variables and terminals

Language of a Grammar

For a grammar $G$ with start variable $S$:
$L(G)=\{w : S\xRightarrow{*} w\}$, where $w$ is a string of terminals
e.g. $A\rightarrow aAb \mid \lambda$

Linear Grammars

Linear Grammars: grammars with at most one variable on the right side of a production

A Non-Linear Grammar
$S\rightarrow SS \mid \lambda \mid aSb \mid bSa$
$L(G)=\{w : n_a(w)=n_b(w)\}$, where $n_a(w)$ is the number of $a$'s in string $w$

Right-Linear Grammars:
All productions have the form $A\rightarrow xB$ or $A\rightarrow x$

Left-Linear Grammars:
All productions have the form $A\rightarrow Bx$ or $A\rightarrow x$

Regular Grammars

A regular grammar is any right-linear or left-linear grammar
Regular grammars generate regular languages
Theorem: Languages Generated by Regular Grammars = Regular Languages
For any transition $q\xrightarrow{a}p$, add the production $q\rightarrow ap$
For any final state $q_f$, add the production $q_f\rightarrow\lambda$
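
A small Python sketch of this construction (my own illustration; the automaton is assumed to be given as a dictionary `delta[(state, symbol)] -> state`, and the hypothetical example reuses the even-number-of-$a$'s DFA idea from earlier):

```python
# Read the right-linear grammar productions off an automaton:
#   q --a--> p   becomes   q → ap,   and every final state q_f gets q_f → λ.
def automaton_to_right_linear(delta, F):
    productions = [(q, a + p) for (q, a), p in delta.items()]   # q → ap
    productions += [(qf, "λ") for qf in F]                      # q_f → λ
    return productions

delta = {("E", "a"): "O", ("E", "b"): "E", ("O", "a"): "E", ("O", "b"): "O"}
for lhs, rhs in automaton_to_right_linear(delta, {"E"}):
    print(lhs, "→", rhs)
```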

LN7-8 Non-regular languages (Pumping Lemma)

Non-regular languages

$\{a^nb^n : n\ge 0\}$
$\{vv^R : v\in\{a,b\}^*\}$

The Pigeonhole Principle and DFAs

If the number of states of the DFA is $p$ and $|w|\ge p$, then the walk of $w$ visits at least $p+1$ states, so some state is repeated in the walk of $w$

The Pumping Lemma

Take an infinite regular language $L$; there exists a DFA that accepts $L$
Take a string $w\in L$ with $|w|\ge p$; then at least one state is repeated in the walk of $w$
Take $q$ to be the first state repeated
We can write $w=xyz$, where $y$ corresponds to the substring between the first and second occurrence of $q$
Note that $|xy|\le p$, because in $xy$ no state is repeated except $q$
Since there is at least one transition in the loop, $|y|\ge 1$
We do not care about the form of string $z$ ($z$ may actually overlap with the paths of $x$ and $y$)
In general, the string $xy^iz$ is accepted for $i=0,1,2,\cdots$
Therefore, $xy^iz\in L$ for $i=0,1,2,\cdots$

The Pumping Lemma:

  • Given an infinite regular language $L$
  • there exists an integer $p$ (critical length)
  • for any string $w\in L$ with length $|w|\ge p$
  • we can write $w=xyz$
  • with $|xy|\le p$ and $|y|\ge 1$
  • such that $xy^iz\in L$ for $i=0,1,2,\cdots$

Applications of the Pumping Lemma

Every language of finite size has to be regular
Therefore, every non-regular language has to be of infinite size

$L=\{a^nb^n : n\ge 0\}$
Assume for contradiction that $L$ is a regular language
Since $L$ is infinite we can apply the Pumping Lemma
Let $p$ be the critical length for $L$
We pick $w=a^pb^p$
Since $|xy|\le p$ and $|y|\ge 1$, we can write $y=a^k$ with $1\le k\le p$
Thus $xy^2z\in L$, i.e. $a^{p+k}b^p\in L$
But $a^{p+k}b^p\notin L$ since $k\ge 1$, CONTRADICTION!

LN9 Context-Free Languages

Context-Free Grammars

Grammar $G=(V,T,S,P)$

$G: S\rightarrow aSb \mid SS \mid \varepsilon$
$L(G)$ describes matched parentheses (reading $a$ as "(" and $b$ as ")"): $((())())(())$

Derivation Order and Derivation Trees

$S\rightarrow AB$, $A\rightarrow aaA \mid \varepsilon$, $B\rightarrow Bb \mid \varepsilon$

leftmost derivation: $S \Rightarrow AB \Rightarrow aaAB \Rightarrow aaB \Rightarrow aaBb \Rightarrow aab$
rightmost derivation: $S \Rightarrow AB \Rightarrow ABb \Rightarrow Ab \Rightarrow aaAb \Rightarrow aab$

They give the same derivation tree

Ambiguity

A context-free grammar $G$ is ambiguous if there is a string $w\in L(G)$ which has:
two different derivation trees or two different leftmost derivations (two different derivation trees give two different leftmost derivations and vice versa)

In general, ambiguity is bad and we want to remove it
Sometimes it is possible to find a non-ambiguous grammar for a language
But, in general, it is difficult to achieve this

LN10 Simplifications of Context-Free Grammars

A Substitution Rule

Nullable Variables and $\varepsilon$-productions
Unit-Productions
Useless Productions

Normal Forms for Context-free Grammars

Chomsky Normal Form: each production has the form $A\rightarrow BC$ or $A\rightarrow a$
Conversion to Chomsky Normal Form
It is easy to find the Chomsky normal form of any context-free grammar (which doesn’t generate $\varepsilon$)

Greibach Normal Form: all productions have the form $A\rightarrow aV_1V_2\cdots V_k$, $k\ge 0$
Greibach normal forms are very good for parsing strings (better than Chomsky Normal Forms)
However, it is difficult to find the Greibach normal form of a grammar

LN11 Properties of Context-Free languages

Closed

  • Union $S\rightarrow S_1 \mid S_2$
  • Concatenation $S\rightarrow S_1S_2$
  • Star Operation $S\rightarrow S_1S \mid \lambda$

Negative Properties of Context-Free Languages (not closed)

  • Intersection
  • Complement

$L_1=\{a^nb^nc^m\}$ and $L_2=\{a^nb^mc^m\}$ are context-free, but $L_1\cap L_2=\{a^nb^nc^n\}$ is not context-free

Intersection of Context-free Languages and Regular Languages (Applications of Regular Closure)

The intersection of a context-free language and a regular language is a context-free language
Prove that $L=\{w : n_a=n_b=n_c\}$ is not context-free
Assume $L$ is context-free. Because $L_2=\{a^*b^*c^*\}$ is regular, $L\cap L_2=\{a^nb^nc^n\}$ would be context-free; but it is not context-free.
So it is impossible that $L$ is context-free

LN12 Pushdown Automata PDAs

Pushdown Automaton -- PDA


Initial Stack Symbol

The States

  • replace: $a, b\rightarrow c$
  • push: $a, \varepsilon\rightarrow c$
  • pop: $a, b\rightarrow\varepsilon$
  • no change: $a, \varepsilon\rightarrow\varepsilon$

PDAs are non-deterministic
Allowed non-deterministic transitions

A string is accepted if there is a computation such that:
All the input is consumed AND The last state is an accepting state
we do not care about the stack contents at the end of the accepting computation

Formalities for PDAs

Pushdown Automaton (PDA)
$M=(Q,\Sigma,\Gamma,\delta,q_0,z,F)$

  • $Q$: set of states
  • $\Sigma$: input alphabet
  • $\Gamma$: stack alphabet
  • $\delta$: transition function, $\delta(q_1,a,w_1)=\{(q_2,w_2),(q_3,w_3)\}$
  • $q_0$: initial state
  • $z$: stack start symbol
  • $F$: accept states

A configuration (instantaneous description) is a triple $(q,u,s)$:

  • $q$: current state
  • $u$: remaining input
  • $s$: current stack contents

$(q_1, bbb, aaa\$) \succ (q_2, bb, aa\$)$

Language $L(M)$ accepted by PDA $M$:

$L(M)=\{w : (q_0,w,z)\overset{*}{\succ}(q_f,\varepsilon,s),\ q_f\in F\}$

LN13 PDAs Accept Context-Free Languages

Theorem: Context-Free Languages (Grammars) = Languages Accepted by PDAs

Convert Context-Free Grammars to PDAs

For each terminal $a$ in $G$, add $a, a\rightarrow\varepsilon$
For each production $A\rightarrow w$ in $G$, add $\varepsilon, A\rightarrow w$
e.g. $G: S\rightarrow aSTb \mid b$, $T\rightarrow Ta \mid \varepsilon$
PDA: $\varepsilon,S\rightarrow aSTb$; $\varepsilon,S\rightarrow b$; $\varepsilon,T\rightarrow Ta$; $\varepsilon,T\rightarrow\varepsilon$; $a,a\rightarrow\varepsilon$; $b,b\rightarrow\varepsilon$

Grammar Leftmost Derivation
$S \Rightarrow aSTb \Rightarrow abTb \Rightarrow abTab \Rightarrow abab$

PDA Computation

$$\begin{array}{l}
(q_0, abab, \$) \\
\succ (q_1, abab, S\$) \\
\succ (q_1, abab, aSTb\$) \\
\succ (q_1, bab, STb\$) \\
\succ (q_1, bab, bTb\$) \\
\succ (q_1, ab, Tb\$) \\
\succ (q_1, ab, Tab\$) \\
\succ (q_1, ab, ab\$) \\
\succ (q_1, b, b\$) \\
\succ (q_1, \varepsilon, \$) \\
\succ (q_2, \varepsilon, \$)
\end{array}$$

Grammar $G$ generates string $w$ ($S\xRightarrow{*}w$) if and only if PDA $M$ accepts $w$ ($(q_0,w,\$)\overset{*}{\succ}(q_2,\varepsilon,\$)$)
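
A small Python sketch of the grammar-to-PDA construction above (my own illustration; the tuple format `(state, input symbol, stack top, next state, pushed string)` and the states `q0`, `q1`, `q2` with stack symbol `$` follow the example computation):

```python
# Build the PDA transition rules for a context-free grammar G.
def grammar_to_pda_rules(productions, terminals, start_variable="S"):
    """productions: list of (variable, right-hand side); ε is given as ''."""
    rules = [("q0", "", "$", "q1", start_variable + "$")]   # push the start variable S
    for A, w in productions:
        rules.append(("q1", "", A, "q1", w))                # ε, A → w   for each production
    for a in terminals:
        rules.append(("q1", a, a, "q1", ""))                # a, a → ε   for each terminal
    rules.append(("q1", "", "$", "q2", "$"))                # stack back to $: accept
    return rules

G = [("S", "aSTb"), ("S", "b"), ("T", "Ta"), ("T", "")]
for rule in grammar_to_pda_rules(G, {"a", "b"}):
    print(rule)
```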

Convert PDAs to Context-Free Grammars

Too long to write down here

LN14 DPDA Deterministic PDA

Definition

$L(M)=\{a^nb^n : n\ge 0\}$ is accepted by a DPDA
$L(M)=\{vv^R : v\in\{a,b\}^*\}$ is not accepted by any DPDA,
because two transitions such as $?, a\rightarrow b$ and $?, a\rightarrow c$ from the same state (same input symbol and stack top, different outcomes) are not allowed in a DPDA

PDAs Have More Power than DPDAs

We will show that there exists a context-free language $L$ which is not accepted by any DPDA,
which means Deterministic Context-Free Languages (DPDAs) $\subsetneq$ Context-Free Languages (PDAs)

e.g. $L=\{a^nb^n\}\cup\{a^nb^{2n}\}$ is context-free but not deterministic context-free
Because if there were a DPDA $M$ accepting $L$, we could get a DPDA accepting $\{a^nb^nc^n\}$ by modifying $M$, replacing $b$ with $c$
The language $\{a^nb^nc^n\}$ is not context-free (proved with the pumping lemma for context-free languages below)

LN15-16 Pumping Lemma for Context-free Languages

The Pumping Lemma:
For any infinite context-free language $L$
there exists an integer $p$ such that
for any string $w\in L$ with $|w|\ge p$
we can write $w=uvxyz$
with $|vxy|\le p$ and $|vy|>0$
and it must be that $uv^ixy^iz\in L$, $\forall i\in\mathbb{N}$
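
As a worked application (my own sketch, using the lemma exactly as stated above and backing the claim from LN14 that $\{a^nb^nc^n\}$ is not context-free):

```latex
% Sketch: L = {a^n b^n c^n : n >= 0} is not context-free.
Assume $L=\{a^nb^nc^n : n\ge 0\}$ is context-free with pumping length $p$ and
pick $w=a^pb^pc^p\in L$, so $w=uvxyz$ with $|vxy|\le p$ and $|vy|>0$.
Since $|vxy|\le p$, the window $vxy$ touches at most two of the three blocks
$a^p$, $b^p$, $c^p$, so $uv^2xy^2z$ changes the count of at most two of the
three symbols and $uv^2xy^2z\notin L$, contradicting the lemma.
Hence $L$ is not context-free.
```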

LN17 Turing Machines

Turing Machines are deterministic
The tape has infinite length
The head at each transition (time step):

  1. Reads a symbol
  2. Writes a symbol
  3. Moves Left or Right
    Turing Machines are deterministic (no $\varepsilon$-transitions are allowed)
    Having no transition for some input symbol is allowed (the machine then halts)

Halting

The machine halts in a state if there is no transition to follow
Accepting states have no outgoing transitions, so in an accepting state the machine halts and accepts
Accept Input String = If machine halts in an accept state
Reject Input String = If machine halts in a non-accept state or If machine enters an infinite loop
In order to accept an input string, it is not necessary to scan all the symbols of the input string

Formal Definitions for Turing Machines

$M=(Q,\Sigma,\Gamma,\delta,q_0,\lozenge,F)$
$\lozenge$: blank symbol
A move: $q_0xayb \succ xq_1ayb$
$L(M)=\{w : q_0w\overset{*}{\succ} x_1q_fx_2,\ q_f\in F\}$
If a language $L$ is accepted by a Turing machine, then we say that $L$ is:

  • Turing Recognizable
  • Turing Acceptable
  • Recursively Enumerable

LN18 Turing’s Thesis

Turing’s thesis (1930s):
Any computation carried out by mechanical means can be performed by a Turing Machine
An algorithm for a problem is a Turing Machine which solves the problem

Variations of the Turing Machine

Turing machines with:

  • Stay-Option
  • Semi-Infinite Tape
  • Multitape
  • Multidimensional
  • Nondeterministic

Same power of two machine classes:
both classes accept the same set of languages
Each new class above has the same power as the Standard Turing Machine (they accept the Turing-Recognizable languages)

Turing Machines with Stay-Option

The head can stay in the same position
L,R,S: possible head moves

  • Stay-Option Machines simulate Standard Turing machines
    because any standard Turing machine is also a Stay-Option machine
  • Standard Turing machines simulate Stay-Option machines
    because any stay transition $q_1\rightarrow q_2\,[a\rightarrow b, S]$ can be replaced with $q_1\rightarrow q_?\,[a\rightarrow b, L]$ and $q_?\rightarrow q_2\,[x\rightarrow x, R]$ for every possible tape symbol $x$

Semi-Infinite Tape

Multiple Track Tape
It is a standard Turing machine, but each tape alphabet symbol describes a pair of actual useful symbols, e.g. $q_1\rightarrow q_2\,[(b,a)\rightarrow(c,d), L]$

The tape extends infinitely only to the right and the initial position is the leftmost cell. When the head tries to move left from the border, it returns to the leftmost position

  • Standard Turing machines simulate Semi-Infinite tape machines
    We can insert a special symbol # at the left of the input string and add a self-loop $q_1\rightarrow q_1\,[\#\rightarrow\#, R]$ to every state (except states with no outgoing transitions)
  • Semi-Infinite tape machines simulate Standard Turing machines
    Choose a reference point and generate a Semi-Infinite tape machine with two tracks (right part and left part)

Multi-tape Turing Machines

$q_1\rightarrow q_2\,[(b,f)\rightarrow(g,d), L, R]$

  • Multi-tape machines simulate Standard Turing machines
    Just use one tape
  • Standard Turing machines simulate Multi-tape machines
    Uses a multi-track tape to simulate the multiple tapes. A tape of the Multi-tape machine corresponds to a pair of tracks

Note that same power doesn’t imply same speed
$L=\{a^nb^n\}$: a Standard Turing machine needs $O(n^2)$ time but a 2-tape machine needs $O(n)$ time (copy $b^n$ to tape 2 and compare tape 1 with tape 2)

Multidimensional Turing Machines

MOVES: L,R,U,D

  • Multidimensional machines simulate Standard Turing machines
    Just use one dimension
  • Standard Turing machines simulate Multidimensional machines
    Use a two track tape; Store symbols in track 1; Store coordinates in track 2

Nondeterministic Turing Machines

  • Nondeterministic machines simulate Standard Turing machines
    every deterministic machine is also nondeterministic
  • Standard (deterministic) Turing machines simulate Nondeterministic machines
    Uses a 2-dimensional tape (equivalent to a standard Turing machine with one tape)

LN19 Universal Turing Machine

A Universal Turing Machine

Tape 1 contents of the Universal Turing Machine: the binary encoding of the simulated machine $M$, i.e. the description of $M$ (different numbers of 1's encode different symbols, while 0's are separators)

A Turing Machine is described with a binary string of 0’s and 1’s
Therefore, the set of Turing machines forms a language: each string of this language is the binary encoding of a Turing Machine

Tape 2 contents of the Universal Turing Machine: tape contents of $M$
Tape 3 contents of the Universal Turing Machine: state of $M$

Countable Sets

A set is countable if there is an injection (one-to-one mapping) from its elements to the positive integers $(1,2,3,\ldots)$
e.g. The set of rational numbers is countable

Let $S$ be a set of strings (a language)
An enumerator for $S$ is a Turing Machine that generates (prints on its tape) all the strings of $S$ one by one, and each string is generated in finite time

Theorem: The set of all Turing Machines is countable
Proof: Any Turing Machine can be encoded with a binary string of 0’s and 1’s.
Find an enumeration procedure for the set of Turing Machine strings
Enumerator: Repeat:

  1. Generate the next binary string of 0’s and 1’s in proper order
  2. Check if the string describes a Turing Machine
    if YES: print string on output tape
    if NO: ignore string
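
A Python sketch of this enumerator (my own illustration; `looks_like_tm_encoding` is a hypothetical placeholder for the real check that a binary string is a valid Turing Machine description):

```python
from itertools import count, product

def looks_like_tm_encoding(s):
    # Hypothetical stand-in for "s is a syntactically valid TM encoding".
    return s.startswith("1") and s.endswith("1")

def enumerate_turing_machines():
    for n in count(1):                           # proper order: all strings of
        for bits in product("01", repeat=n):     # length 1, then 2, then 3, ...
            s = "".join(bits)
            if looks_like_tm_encoding(s):        # Step 2: does it describe a TM?
                yield s                          # YES: print it on the output tape

gen = enumerate_turing_machines()
print([next(gen) for _ in range(5)])   # the first few candidate encodings
```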

Simpler Proof:
Each Turing machine binary string is mapped to the number representing its value

Uncountable Sets

If $S$ is an infinite countable set, then the powerset $2^S$ of $S$ is uncountable.

Since the set of languages accepted by Turing machines is countable, but the set of all possible languages ($2^S$ with $S=\Sigma^*$) is uncountable, there is a language not accepted by any Turing Machine.

LN20 Decidable

Decidable Languages

Turing-Acceptable = Turing-Recognizable = Recursively Enumerable

A language $L$ is decidable if there is a Turing machine (decider) $M$ which accepts $L$ and halts on every input string (also known as a recursive language).

For any input string $w$:
If $w\in L$, $M$ halts in an accept state; else $M$ halts in a non-accept state.

Note that every decidable language is Turing-Acceptable

We can convert any Turing machine to have single accept and reject states
Decider: For each input string, the computation halts in the accept or reject state
Turing Machine for LL: It is possible that for some input string the machine enters an infinite loop

A computational problem is decidable if the corresponding language is decidable. We also say that the problem is solvable.

Problem: Is number $x$ prime?
Corresponding language: $\text{PRIMES}=\{2,3,5,7,11,\cdots\}$
Decider for $\text{PRIMES}$: on input $x$, divide $x$ by all numbers between $2$ and $\sqrt{x}$. If any of them divides $x$, then reject; else accept.
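
The decider written out as a short Python sketch (trial division, exactly as described; returning True plays the role of halting in the accept state):

```python
import math

def is_prime(x):
    """Decider for PRIMES: always halts and answers accept (True) or reject (False)."""
    if x < 2:
        return False                          # 0 and 1 are rejected
    for d in range(2, math.isqrt(x) + 1):     # try every divisor 2 .. √x
        if x % d == 0:
            return False                      # some d divides x: reject
    return True                               # no divisor found: accept

print([n for n in range(2, 20) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```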

Problem: Does DFA $M$ accept the empty language, $L(M)=\varnothing$?
Corresponding language: $\text{EMPTY}_{\text{DFA}}=\{\langle M\rangle : M \text{ is a DFA that accepts the empty language } \varnothing\}$
Decider for $\text{EMPTY}_{\text{DFA}}$: on input $\langle M\rangle$, determine whether there is a path from the initial state to any accepting state
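
A Python sketch of this decider (my own illustration; the DFA is assumed to be encoded as `delta[(state, symbol)] -> state` with accepting set `F`): breadth-first search from the initial state and check whether an accepting state is reachable.

```python
from collections import deque

def dfa_language_is_empty(delta, q0, F):
    """Decider for EMPTY_DFA: L(M) = ∅ iff no accepting state is reachable from q0."""
    seen, todo = {q0}, deque([q0])
    while todo:
        q = todo.popleft()
        if q in F:
            return False                      # reachable accepting state: L(M) ≠ ∅
        for (p, _a), r in delta.items():      # follow every transition out of q
            if p == q and r not in seen:
                seen.add(r)
                todo.append(r)
    return True                               # no accepting state reachable: L(M) = ∅
```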

Problem: Does DFA $M$ accept a finite language?
Corresponding language: $\text{FINITE}_{\text{DFA}}=\{\langle M\rangle : M \text{ is a DFA that accepts a finite language}\}$
Decider for $\text{FINITE}_{\text{DFA}}$: on input $\langle M\rangle$, check whether there is a walk containing a cycle from the initial state to an accepting state (if so, the language is infinite)

$\text{A}_{\text{DFA}}=\{\langle M,w\rangle : M \text{ is a DFA that accepts string } w\}$

$\text{EQUAL}_{\text{DFA}}=\{\langle M_1,M_2\rangle : M_1 \text{ and } M_2 \text{ are DFAs that accept the same language}\}$
Build a DFA $M$ with $L(M)=(L_1\cap\overline{L_2})\cup(\overline{L_1}\cap L_2)$, where $L_i=L(M_i)$, and check whether $L(M)=\varnothing$,
which is a solvable problem for DFAs: $\text{EMPTY}_{\text{DFA}}$

Theorem: If a language $L$ is decidable, then its complement $\overline{L}$ is decidable too

Undecidable Languages

An undecidable language has no decider: any Turing machine that accepts $L$ does not halt on some input string

If $L$ is Turing-acceptable but $\overline{L}$ is not Turing-acceptable, then $L$ is undecidable, e.g. $L=\{a^i : a^i\in L(M_i)\}$

LN21 Undecidable

Undecidable Problems

Membership problem
$\text{A}_{\text{TM}}=\{\langle M,w\rangle : M \text{ is a Turing machine that accepts string } w\}$ is Turing-acceptable but undecidable

Halting Problem
$\text{HALT}_{\text{TM}}=\{\langle M,w\rangle : M \text{ is a Turing machine that halts on input string } w\}$ is Turing-acceptable but undecidable

LN22 Reductions

Reductions

Computable function $f$: there is a deterministic Turing machine $M$ which, for any input string $w$, computes $f(w)$ and writes it on the tape

Problem $A$ is reduced to problem $B$: if we can solve problem $B$ then we can solve problem $A$; formally, there is a computable function $f$ such that $w\in A\Leftrightarrow f(w)\in B$

Theorem 1:
If: Language $A$ is reduced to $B$ and language $B$ is decidable
Then: $A$ is decidable
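
A tiny Python sketch of how the theorem is used (my own illustration; `f` and `decide_B` are hypothetical placeholders for the computable reduction and the decider of $B$):

```python
def decider_for_A(w, f, decide_B):
    # w ∈ A  ⇔  f(w) ∈ B, so the answer is correct; it halts because f is
    # computable and decide_B halts on every input.
    return decide_B(f(w))
```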