Ygor Serpa
2 min read · Jun 25, 2021


This is a fundamental result in decidability that dates back to Turing, Church, and Gödel, but it can be deduced from a few basic notions of computability. In short, there are infinitely many problems for each possible algorithm. Here is a more formal explanation:

In computability lingo, problems are languages (L), and a language is a subset of Σ*, the set of all finite strings over the alphabet Σ. Thus, the set of all problems is the power set of Σ*: every subset of Σ* is a different problem.
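To make this concrete, here is a minimal Python sketch. It assumes the binary alphabet Σ = {0, 1} (an illustrative choice, not from the original text): it enumerates the strings of Σ* in length order and treats one particular language as a membership predicate over those strings.

```python
from itertools import count, product

# Enumerate Sigma* for the binary alphabet {0, 1}: "", "0", "1", "00", "01", ...
def all_strings():
    yield ""
    for length in count(1):
        for chars in product("01", repeat=length):
            yield "".join(chars)

# A "problem" (language) is just a subset of Sigma*, given here by a
# membership predicate: the strings containing an even number of 1s.
def in_language(s):
    return s.count("1") % 2 == 0

# Print the first few members of this particular language.
members = (s for s in all_strings() if in_language(s))
print([next(members) for _ in range(8)])
```

Any other subset of Σ* you can describe (or fail to describe) is another problem in exactly the same sense.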

Turing Machines have finite descriptions, so each one can be mapped to a natural number, such as 1, 2, or 3. Therefore, one can enumerate all possible Turing Machines (i.e., algorithms) just by enumerating the natural numbers.
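One way to see this mapping (a sketch of the idea, not the construction used in textbooks, and with hypothetical helper names): a machine's description is a finite string, and any finite string can be read off as a natural number.

```python
# Any Turing Machine has a finite description (states, alphabet, transition
# table). A finite description is a finite byte string, and a byte string can
# be read as a natural number; prefixing a 0x01 byte keeps the mapping injective.
def machine_to_number(description: str) -> int:
    return int.from_bytes(b"\x01" + description.encode("utf-8"), "big")

def number_to_machine(n: int) -> str:
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:].decode("utf-8")  # drop the leading 0x01 marker

tm = "states: q0,q1; transitions: (q0,0)->(q1,1,R); ..."
code = machine_to_number(tm)
assert number_to_machine(code) == tm
print(code)  # a unique natural number identifying this machine's description
```

Since every machine gets its own natural number and every natural number decodes to at most one machine, listing the natural numbers lists all machines.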

This shows that Turing Machines form "a smaller infinity" than problems: Turing Machines are in 1-to-1 correspondence with the natural numbers, while problems are in 1-to-1 correspondence with the reals. By Cantor's diagonal argument, the power set of a countably infinite set is strictly larger than the set itself, so there are uncountably more problems than Turing Machines. In other words, there are infinitely more problems than algorithms, which means almost all problems are uncomputable.
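The diagonal argument can be phrased directly in these terms. The sketch below uses a toy enumeration of languages rather than a real enumeration of Turing Machines (that choice is an assumption for illustration): given any listing L_0, L_1, L_2, …, build a "diagonal" language that disagrees with L_i on the i-th string, so it cannot appear anywhere in the listing.

```python
from itertools import count, islice, product

# Enumerate Sigma* in length order: "", "0", "1", "00", "01", ...
def all_strings():
    yield ""
    for length in count(1):
        for chars in product("01", repeat=length):
            yield "".join(chars)

# A toy enumeration of languages L_0, L_1, L_2, ...; here L_i is
# "strings of length i", but any listing would do for the argument.
def language(i, s):
    return len(s) == i

# The diagonal language flips L_i's answer on the i-th string s_i, so it
# disagrees with every L_i on at least one input and is therefore missing
# from the enumeration: no countable listing of languages can be complete.
def diagonal(i, s_i):
    return not language(i, s_i)

for i, s_i in enumerate(islice(all_strings(), 10)):
    assert diagonal(i, s_i) != language(i, s_i)
print("The diagonal language differs from each of the first 10 languages.")
```

Since algorithms are countable but this construction defeats any countable listing of problems, no enumeration of algorithms can cover all problems.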

If you are unfamiliar with this line of reasoning, you ought to read a bit about Cantor and his whole idea of infinity. A brief introduction would be to realize that we can "count" the natural numbers, even though they are infinite, whereas the real numbers are uncountable. As an intuition: what number comes after 1? Is it 2, or 1.1, or maybe 1.01, or even 1.000001? In the argument above, we show that algorithms can be mapped to natural numbers, so algorithms are countable. Problems, however, are not.

If you wish to explore the subject further, I highly recommend reading Gödel, Escher, Bach, which is a great read on computability theory that isn't overly technical or math-heavy.
