
Ackermann's Function in Automata Theory
Ackermann's function was introduced by the German mathematician Wilhelm Friedrich Ackermann. It is a classic example for understanding the limitations of primitive recursive functions: unlike them, Ackermann's function grows extremely rapidly, showing that not every total computable function is primitive recursive.
In this chapter, we will see the basics of Ackermann's function and go through several examples for a better understanding.
Ackermann's Function
Ackermann's function is a popular example in theoretical computer science because it is one of the simplest and earliest-discovered functions that is total and computable but not primitive recursive.
Although its definition is short, the function grows faster than any primitive recursive function. This makes it a fundamental example in computability theory, and it demonstrates the extra power of the μ-recursive functions: Ackermann's function is total and computable, yet it can only be captured in that larger class.
Definition of Ackermann's Function
Ackermann's function A(x, y) is defined for two non-negative integers x and y as follows −
$$A(x,\: y) \:=\: \begin{cases} y \:+\: 1 & \text{if } x \:=\: 0 \\ A(x \:-\: 1,\: 1) & \text{if } x \:>\: 0 \text{ and } y \:=\: 0 \\ A(x \:-\: 1,\: A(x,\: y \:-\: 1)) & \text{if } x \:>\: 0 \text{ and } y \:>\: 0 \end{cases}$$
This recursive definition has three cases −
- If x is 0, the function simply returns y + 1.
- If x is greater than 0 and y is 0, the function returns A(x - 1, 1).
- If both x and y are greater than 0, the function recurses on both arguments, which makes its behavior highly complex.
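The three cases translate directly into code. Below is a minimal Python sketch of the definition (the memoization via `functools.lru_cache` is an optional addition to avoid recomputing subcalls; it is not part of the mathematical definition):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(x: int, y: int) -> int:
    """Ackermann's function A(x, y) for non-negative integers x and y."""
    if x == 0:                # case 1: A(0, y) = y + 1
        return y + 1
    if y == 0:                # case 2: A(x, 0) = A(x - 1, 1)
        return ackermann(x - 1, 1)
    # case 3: A(x, y) = A(x - 1, A(x, y - 1))
    return ackermann(x - 1, ackermann(x, y - 1))

print(ackermann(1, 3))  # 5
print(ackermann(2, 2))  # 7
```

Even with memoization, inputs like `ackermann(4, 2)` are far out of reach of a direct recursion: the result already has 19,729 decimal digits.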
For every pair of non-negative integers x and y, the recursion terminates, so A(x, y) is defined everywhere and A is a total function.
Expanding the second case shows how the recursion unfolds −
A(x, 0) = A(x - 1, 1) = A(x - 2, A(x - 1, 1 - 1)) = A(x - 2, A(x - 1, 0)) [and the expansion continues recursively].
The depth of this recursion depends on values of A itself, so it cannot be bounded in advance by nested loops of fixed depth. This is the intuition behind the formal result: A eventually dominates every primitive recursive function, so it cannot be primitive recursive.
Properties of Ackermann's Function
Ackermann's function is total, meaning it produces a result for every pair of non-negative integers x and y. However, it is not primitive recursive: it cannot be built from the zero function, the successor function, and projections using only composition and primitive recursion.
Its growth eventually exceeds that of every primitive recursive function, which makes it a powerful example for understanding the boundaries of the primitive recursive class.
Examples of Ackermann's Function
To better understand Ackermann's function, let us go through some examples that demonstrate how it works.
Example 1: Calculating A(1, 3)
Let's compute A(1, 3) step by step −
- Start with A(1, 3) = A(0, A(1, 2)) because x > 0 and y > 0.
- Next, we calculate A(1, 2) = A(0, A(1, 1)).
- Then, calculate A(1, 1) = A(0, A(1, 0)).
- Now, A(1, 0) = A(0, 1), which equals 2.
- Substituting back, A(1, 1) = A(0, 2) = 3.
- Substituting again, A(1, 2) = A(0, 3) = 4.
- Finally, A(1, 3) = A(0, 4) = 5.
So, A(1, 3) = 5.
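The substitution steps above can be reproduced mechanically. Here is a small tracing variant of the function that prints each call indented by its recursion depth (the `depth` parameter is an illustrative addition, not part of the definition):

```python
def A(x, y, depth=0):
    """Evaluate Ackermann's A(x, y), printing each call indented by depth."""
    print("  " * depth + f"A({x}, {y})")
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1, depth + 1)
    return A(x - 1, A(x, y - 1, depth + 1), depth + 1)

result = A(1, 3)   # prints the nested calls A(1, 2), A(1, 1), ... in order
print("result =", result)  # result = 5
```

The printed calls mirror the hand substitution: A(1, 3) first forces A(1, 2), which forces A(1, 1), and so on down to A(1, 0) = A(0, 1) = 2.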
Example 2: Calculating A(2, 2)
Now, let's compute A(2, 2) −
- Start with A(2, 2) = A(1, A(2, 1)).
- Compute A(2, 1) = A(1, A(2, 0)).
- Next, A(2, 0) = A(1, 1) = 3 (from the previous example).
- Therefore, A(2, 1) = A(1, 3) = 5 (as calculated earlier).
- Now, A(2, 2) = A(1, 5).
Let's break down A(1,5) −
- A(1, 5) = A(0, A(1, 4))
- A(1, 4) = A(0, A(1, 3)) = A(0, 5) = 6.
- Therefore, A(1, 5) = A(0, 6) = 7.
- Finally, A(2, 2) = 7.
Example 3: Calculating A(3, 2)
Let's compute A(3, 2) −
- A(3, 2) = A(2, A(3, 1)).
- Calculate A(3, 1) = A(2, A(3, 0)).
- Since A(3, 0) = A(2, 1) and we know A(2, 1) = 5, we get A(3, 1) = A(2, 5).
To break it down −
- A(2, 5) = A(1, A(2, 4))
- A(2, 4) = A(1, A(2, 3))
- A(2, 3) = A(1, A(2, 2)) = A(1, 7)
We already know that A(1, 7) will follow the pattern −
$$\mathrm{A(1, 7) \:=\: A(0, 8) \:=\: 9}$$
Therefore,
$$\mathrm{A(2, 3) \:=\: 9,\: A(2, 4) \:=\: A(1, 9) \:=\: 11,\: and\: A(2, 5) \:=\: A(1, 11) \:=\: 13}$$
Finally, A(3, 2) = A(2, A(3, 1)) = A(2, 13) = 29. Even for these small inputs, the amount of recursion grows very quickly, which illustrates the rapid growth of Ackermann's function.
The Growth Rate of Ackermann's Function
The examples above show how quickly Ackermann's function grows, even for very small inputs. This rapid growth is exactly what places it outside the class of primitive recursive functions.
Primitive recursive functions can be computed with loops whose bounds are fixed before the loops start. Ackermann's function requires nested recursion whose depth depends on intermediate results, and this cannot be simplified to a primitive recursive form.
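For small values of the first argument the function has well-known closed forms: A(1, n) = n + 2, A(2, n) = 2n + 3, and A(3, n) = 2^(n+3) − 3. A quick check against the recursive definition confirms them (raising Python's recursion limit, since even A(3, 5) nests deeply):

```python
import sys
sys.setrecursionlimit(100_000)  # A(3, n) nests deeply even for small n

def A(x, y):
    # Direct transcription of the three recursive cases
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

for n in range(6):
    assert A(1, n) == n + 2
    assert A(2, n) == 2 * n + 3
    assert A(3, n) == 2 ** (n + 3) - 3
print("closed forms verified for n = 0..5")
```

Already at x = 4 the closed form becomes a tower of exponentials, which is why no fixed stack of nested loops (i.e., no primitive recursive schema) can keep up.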
Practical Implications
Ackermann's function is more than a theoretical idea; it also serves as a reference point in algorithm analysis. Its inverse, usually written α(n), grows so slowly that it is at most 4 for any practically representable input size. For instance, in analyzing the union-find (disjoint-set) data structure, the amortized time per operation is expressed compactly as O(α(n)).
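To make the union-find connection concrete, here is a minimal sketch of a disjoint-set forest with union by rank and path halving (a form of path compression); with these two heuristics, m operations on n elements run in O(m · α(n)) time, where α is the inverse Ackermann function. Class and method names are illustrative:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path halving."""
    def __init__(self, n: int):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, a: int) -> int:
        while self.parent[a] != a:
            # path halving: point a at its grandparent as we walk up
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:   # union by rank: attach shorter tree
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True
print(uf.find(0) == uf.find(4))  # False
```

Because α(n) ≤ 4 for every n that fits in physical memory, each operation here is, for all practical purposes, constant time.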
Conclusion
In this chapter, we explained Ackermann's function in detail, including its definition and the structure of its recursive cases. We discussed how this function, though total and computable, is not primitive recursive because of its rapid growth rate.
Through several examples, we demonstrated how Ackermann's function operates and how quickly its values escalate even for relatively small inputs.