The 75th anniversary of Turing's seminal paper and the centennial of his birth occur in 2011 and 2012, respectively. It is natural to review and assess Turing's contributions in diverse fields in the light of new developments that his thought has triggered in many scientific communities. Here, the main idea is to discuss how the work of Turing allows us to change our views on the foundations of Mathematics, much as quantum mechanics changed our conception of the world of Physics. Basic notions like computability and universality are discussed in a broad context, placing special emphasis on how the notion of complexity can be given a precise meaning after Turing, i.e., not just qualitatively but also quantitatively. Turing's work is given some historical perspective with respect to some of his precursors, contemporaries and mathematicians who took his ideas further.


At the time of this writing, 2011, it has been 75 years since the seminal paper by Alan Mathison Turing (Turing,

Turing made a gigantic effort to understand how the human mind works at the level of finding mechanical procedures to compute things and devise appropriate definitions of what algorithms are (Turing,

He invented the theory of computability. More importantly, this affects the way Mathematics must be understood at a fundamental level: calculation itself. And he did more. His results have revolutionized the way we address axioms, i.e., the very fundamentals of mathematical disciplines. He addressed questions such as when a set of axioms is complete, and what to do when it is not. Ultimately, this leads to the very notion of mathematical creativity.

Turing has achieved considerable recognition in engineering schools, such as those of Computer Science and many others. However, it is rather disappointing to see that the figure of Turing and his work still do not have a central, pivotal role in the curricula of university mathematics departments, a fact which merits a word or two. Firstly, Turing's theory can be regarded as fundamental to what a calculation is in Mathematics. It underlies all previous knowledge on calculus and analysis in a way that was implicit before Turing. After Turing, it is systematized so that it becomes mechanical and algorithmic: the holy grail of any theory. Secondly, it affects the way axioms have to be considered in Mathematics. The big surprise is that Mathematics is not closed in the sense predicted by David Hilbert, but is an open system capable of increasing its amount of knowledge by adding new axioms to a discipline.

And the same goes for university physics departments, where the computer science part of the curriculum is mostly reduced to learning software manuals instead of the fundamentals of computation. Manuals change, version after version, but Turing's foundations of computability theory remain.

Turing's work has led to the development of three major disciplines:

These disciplines are part of computer science as envisaged by Turing and extend to many branches of science like Physics, Mathematics, Engineering, etc. This extension will grow as our understanding of Nature improves, and will reach more descriptive sciences like Biology.

From Turing's work it is apparent that with a finite set of axioms it is not possible to cover Mathematics as a whole. There are irreducible truths, axioms that are not self-evident in the sense of Euclid or Hilbert, and they must be added as independent axioms. This makes a TOE (Theory of Everything) of Mathematics impossible.

Turing was the first to separate software from hardware in a very concrete way. He did that by first focusing on the theoretical problem of having a well-defined notion of a computing machine. In his later years, he also got involved in constructing computing machines in practice (Turing,

Turing's first goal was to scrutinize all steps that a person performs during a calculation, like arithmetic, and separate irrelevant aspects from the relevant properties necessary to carry out the calculation. In doing so, he realized that there were two main ingredients: 'local information' and 'state of mind'. Local information means that at each calculation step, only a small part of the whole operation is being performed. State of mind means that the steps taken after a local calculation is carried out depend on the rules stored in the person's mind, which define the calculation itself. Turing realized that it was enough to use a one-dimensional roll of paper or pad to write the intermediate (local) calculations, and that the rules of the state of mind could also be stored in a table of operations. After this analysis, Turing came up with an abstract construct of his machine.
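Turing's two ingredients can be mimicked in a few lines of modern code: a tape for the local information and a transition table for the "state of mind". A toy sketch (the machine, its rule names and its alphabet are hypothetical illustrations, not Turing's original construction):

```python
# A minimal one-tape machine: the tape holds the local information, and a
# transition table {(state, symbol): (new_state, write, move)} plays the role
# of the state of mind.
def run_tm(table, tape, state='q0', blank=' '):
    """Run the machine until it enters the 'halt' state; return the tape."""
    tape, pos = list(tape), 0
    while state != 'halt':
        sym = tape[pos] if 0 <= pos < len(tape) else blank
        state, write, move = table[(state, sym)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == 'R' else -1
    return ''.join(tape).strip()

# a toy "state of mind": flip every bit, halting at the first blank
flip = {('q0', '0'): ('q0', '1', 'R'),
        ('q0', '1'): ('q0', '0', 'R'),
        ('q0', ' '): ('halt', ' ', 'R')}
print(run_tm(flip, '0110 '))  # 1001
```

Only the table changes from one calculation to another; the scanning mechanism is fixed.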

He was so convinced that this definition of machine represented the most general possible algorithmic procedure that he formulated the basic principle of computation by means of his construction:

Turing named his machine 'a-machine' for automatic machine (Turing,

A basic and fundamental result of the notion of a TM is that the set of TMs is countable, infinitely denumerable. It corresponds to the set of finite bit-strings: let $\mathbb{B} := \{0,1\}$ and $\mathbb{B}^{*} := \bigcup_{n=0}^{\infty} \mathbb{B}^{n}$. A Turing Machine TM is a map $\mathrm{TM} : \mathbb{B}^{*} \rightarrow \mathbb{B}^{*}$.

However, the notion of a TM is tied to the computation of a given function or problem. Changing the function means changing the TM. Here comes the notion of universality as a property of a special TM that can compute what any other TM can do.

It is very remarkable that the definition of a TM allows for this property of universality. The basic idea behind the UTM is the observation that a TM, being given by a finite table of rules, can itself be encoded as a bit-string in $\mathbb{B}^{*}$; a universal machine can then receive this description together with the input and simulate the encoded machine on it.

In doing so, Turing was giving birth to programming and compiling. A universal TM is the notion of a general-purpose programmable computer today. After Turing gave the first construction of a UTM (Turing,
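Universality is exactly what an interpreter provides, so the idea can be mimicked inside any modern language. A toy sketch (the function names and program sources below are hypothetical illustrations):

```python
# Software as data: one fixed routine (the "universal machine") receives the
# description of any other machine and reproduces its behaviour on an input.
def universal(description, x):
    env = {}
    exec(description, env)   # load the described machine from its description
    return env['f'](x)       # run it on the given input

double_src = "def f(x):\n    return 2 * x"
square_src = "def f(x):\n    return x * x"
print(universal(double_src, 21))  # 42
print(universal(square_src, 7))   # 49
```

One fixed interpreter, arbitrarily many behaviours: that is the separation of hardware from software in miniature.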

Von Neumann realized that Turing had achieved the goal of defining the notion of a universal computing machine, and went on to think about practical implementations of this theoretical computer. It was clear that this was the crucial notion of a flexible computer that was needed and had been lacking thus far. Therefore, the distinction between software and hardware is clear in Turing's work and is a consequence of it. Turing did not care about practical implementations at his time because he wanted to isolate, to single out, the very notion of what a computer is, in theory. In doing so, he was inspired by D. Hilbert and his ideas about a formal set of axioms from which theorems would be provable by means of a mechanical procedure. This led to the notion of TM and the (negative) solution of Hilbert's Entscheidungsproblem, the decision problem.

As the title of his

Fortunately, all algebraic numbers, as well as π, e, and many other transcendental numbers, are computable reals.

In addressing non-computable problems in Sect.III, it is useful to introduce a variant of the Turing machine due to Chaitin (Chaitin,

This means that the TM knows when to stop by itself, without needing a special mark indicator or blank character. Formally, it is a map $\mathrm{CM} : \mathbb{B}^{*} \rightarrow \mathbb{B}^{*}$ defined only on a self-delimiting (prefix-free) set of programs.

An explicit construction of a Chaitin machine is as follows. It has three elements: i/ a finite program tape; ii/ a doubly-infinite work tape; iii/ a head with one arrow scanning the program tape and another arrow scanning the work tape. The alphabet is binary, {0, 1}, and no blank space is allowed to mark the halting of the machine. The initial state of a CM is the program

Notice that this construction of a CM is self-delimiting, since the read head cannot move past the right-most square of the finite program tape. Also, in contrast with an ordinary TM, a program that halts on a CM is necessarily prefix-free: it cannot be extended into another program that halts.

Procedures exist to make a given set of bit-strings into a self-delimiting set. For a bit-string $s$, a self-delimiting version $s^{S}$ can be obtained by prefixing $s$ with an encoding of its own length; with the simple unary prefix $s^{S} := 0^{|s|} 1\, s$, the set $S_{2} := \{0, 1, 01, 10\}$ becomes

$S_{2}^{S} = \{010, 011, 00101, 00110\},$

which is prefix-free. With more efficient length encodings, the length increases only by an additive logarithmic term in the transition from a bit-string to its self-delimiting presentation:

$|s^{S}| = |s| + O(\log |s|)$

asymptotically. An important property is that universal Chaitin Machines also exist: the universal CM $U$ reads a prefix $\pi_{c}$ that indicates which CM $C$ to simulate, followed by the binary program $p$ for that machine, so that $U(\pi_{c}\, p) = C(p)$.
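The unary-prefix encoding is easy to implement; the sketch below reproduces exactly the example set {010, 011, 00101, 00110} above (function names are ours):

```python
# Self-delimiting code: prefix a string with its length in unary, then a 1.
# A decoder never needs an end-of-input marker or blank symbol.
def encode(s):
    return '0' * len(s) + '1' + s

def decode(stream):
    n = stream.index('1')              # read the unary length
    return stream[n + 1 : n + 1 + n]   # then exactly n payload bits

for s in ['0', '1', '01', '10']:
    assert decode(encode(s)) == s
print([encode(s) for s in ['0', '1', '01', '10']])
# ['010', '011', '00101', '00110']
```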

It is a twist of fate that in the same paper where Turing shows what we can compute in a very precise and universal way ... he also proves that there are things that we cannot compute at all.

Gödel’s theorem on incompleteness (Gödel,

It is easy to write programs, in pseudocode, that will never halt: a single instruction that jumps back to itself, `loop: goto loop', will loop forever. Another less evident example of a looping program iterates the map $n \mapsto n/2$ for even $n$ and $n \mapsto 3n+1$ for odd $n$, starting from $n = 1$:
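In Python, for instance, iterating the map unconditionally exhibits the cycle (a sketch; variable and function names are ours):

```python
def collatz_step(n):
    """One step of the 3n+1 map: halve if even, else triple and add one."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

n, orbit = 1, []
for _ in range(6):          # iterate with no stopping test at all
    n = collatz_step(n)
    orbit.append(n)
print(orbit)  # [4, 2, 1, 4, 2, 1]
```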

It produces the cycle 1, 4, 2, 1 forever.

Thus, a skilled debugger might envisage the task of finding all possible loops in programs and, with a look-up table, getting rid of them. Or maybe one has to study more, and it becomes necessary to classify families of loops, etc. Turing's proof shows that this dream is impossible, and this does not depend on how smart the debugger is. It lies at the roots of computability theory. In fact, we can guess that the proposed debugger may easily run into unknown territory. For instance, we can use the $3x+1$ map above, now iterated only until the value 1 is reached: whether this program halts for every starting value is the famous Collatz conjecture.
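A minimal sketch of that halting version of the iteration (the function name is ours):

```python
def collatz_steps(n):
    """Iterate the 3n+1 map until reaching 1; return the number of steps.
    Whether this loop terminates for every n >= 1 is an open conjecture."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))    # 8
print(collatz_steps(27))   # 111
```

No one has proved that `collatz_steps` halts on every input; our imagined debugger would be stuck on a five-line program.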

It has been confirmed that this program stops for very large starting values, beyond $2^{58}$ (Oliveira e Silva,

A basic and fundamental result of the notion of a TM is that the set of TMs is countable, infinitely denumerable. It corresponds to the set of bit-strings $\mathbb{B}^{*} := \bigcup_{n=0}^{\infty} \mathbb{B}^{n}$.

Cantor’s diagonal method provides a clever way to see that there are more real numbers R than natural numbers N (Cantor,

As an illustration, consider the following table, where we place eight bit-strings $\{s_{1}, s_{2}, \ldots, s_{8}\} \subset \mathbb{B}^{8}$. Negating the bits along the diagonal produces a string $s_{9} \notin \{s_{1}, \ldots, s_{8}\}$; in the example, $s_{9} := 00000000$, which is new.
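The construction can be sketched in a few lines; the table entries are hypothetical, chosen with all diagonal bits equal to 1 so that the new string is 00000000 as in the text:

```python
def diagonal_new_string(strings):
    """Cantor's diagonal trick: flip the i-th bit of the i-th string."""
    return ''.join('0' if s[i] == '1' else '1' for i, s in enumerate(strings))

# a hypothetical table of eight 8-bit strings (all diagonal entries equal 1)
table = ['10000000', '01000000', '00100000', '00010000',
         '00001000', '00000100', '00000010', '00000001']
s9 = diagonal_new_string(table)
print(s9)                 # 00000000: differs from table[i] at position i
assert s9 not in table    # hence it cannot appear anywhere in the table
```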

The diagonal method is very general. It applies both to finite sets, like the one above, and to $\mathbb{B}^{\infty}$: the set of infinite binary strings. A consequence of this is that $\mathbb{B}^{\infty}$ not only has infinite cardinality, but is uncountable. We prove this by reductio ad absurdum. Assume that $\mathbb{B}^{\infty}$ is countable, so that we can make a table like (7) with infinitely many rows ordered by the integers N. All elements are thus listed, but with the diagonal we can create another bit-string $s_{d}$:

where the $i$-th bit of $s_{d}$ is the negation of the $i$-th bit of the $i$-th listed string. By construction, $s_{d}$ differs from every listed element, yet $s_{d} \in \mathbb{B}^{\infty}$, which is a contradiction. The assumption that $\mathbb{B}^{\infty}$ was a countable set is not true.

In fact, we can go on and prove that the set of real numbers R is uncountable by establishing a bijection between $\mathbb{B}^{\infty}$ and R. Both N and R are infinite sets, but of a different quality. The cardinality of N is denoted by $\aleph_{0}$; R has the cardinality of the continuum.

The continuum hypothesis (CH) states that the cardinality of R is $\aleph_{1}$, the second transfinite cardinal introduced by Cantor, or equivalently, that every infinite subset of R must map bijectively onto either N or R itself: $2^{\aleph_{0}} = \aleph_{1}$.

In other words, there is no set with an intermediate cardinality between N and R; there is a gap. CH was introduced by Cantor, who was nevertheless unable to prove it (Dauben,

The set of reals computable with a TM is quite small. Since a TM is specified by a finite table over a finite state set $S$, there are only finitely many distinct machines for each $|S|$, and only countably many altogether; hence only countably many reals are computable. Using Cantor's diagonal method, Turing was able to prove that there are uncountably many noncomputable numbers. Most of the real numbers, the continuum, are inaccessible to a TM.

The diagonal method has proved extremely useful in fundamental problems of Mathematics. Some instances are Russell's paradox in set theory, Gödel's first incompleteness theorem and Turing's solution to Hilbert's Entscheidungsproblem.

A crucial assumption in Turing’s formulation of this problem is that there is no limit for the running time of the computer. By computer is meant a TM. Under these circumstances, there is no mechanical procedure that can decide in advance whether a computer will ever halt. A more formal statement is the following:

Let $T_{1}, T_{2}, \ldots$ be an enumeration of all TMs, and let $h(r, m) := 1$ if $T_{r}$ halts on input $m$, and $h(r, m) := 0$ otherwise. Then the halting function $h$ is not computable.

In bit-string notation, each machine $T_{r}$ defines an infinite row of bits $h(r, 1)\, h(r, 2)\, h(r, 3) \ldots$, with the integers $r$ labelling TMs. Each row records the halting behaviour of one machine on all inputs.

Now, we are in the situation of applying Cantor's diagonal method. Suppose $h$ were computable, and construct a machine $D$ that, on input $n$, halts if $h(n, n) = 0$ and loops forever if $h(n, n) = 1$.

By construction, $D$ differs from every $T_{n}$ precisely on input $n$, so $D$ cannot appear in the enumeration; since the enumeration contains all TMs, this is a contradiction, and $h$ is not computable.

Turing did not use the terminology of `halting problem’ in his

After Turing found an explicit and crucial example of a non-computable problem, it was natural to ask whether more examples of this kind could be found. In

where the maximum runs over all Turing machines with $N$ states that halt when started on a blank tape, and $\Sigma_{N}$ is defined as the largest number of 1s such a machine leaves on its tape.

This is a well-defined function $\Sigma : \mathbb{N} \rightarrow \mathbb{N}$, but it is noncomputable: it grows faster than any computable function $f$, i.e., $\Sigma_{N} > f(N)$ for all sufficiently large $N$, so $\Sigma_{N}$ cannot be bounded in the form $\Sigma_{N} = O(f(N))$ with $f$ computable.

After Turing’s halting problem, we may ask: can we quantify non-computability on mathematical grounds? G. Chaitin has done a great deal of work (Chaitin,

A first consequence of this definition is the upper bound

$K(x) \leq |x| + O(\log |x|),$

where the algorithmic complexity (14) is defined for programs $p$ running on a universal Chaitin machine $U$ as $K(x) := \min \{ |p| : U(p) = x \}$.

Although non-computable, algorithmic complexity is well-defined and has very useful properties, like subadditivity: the joint complexity is bounded by the sum of the complexities of the individual strings:

$K(x, y) \leq K(x) + K(y) + O(1).$

This allows us to construct big programs out of small ones. Another crucial property follows from a proper definition of relative (conditional) complexity:

$K(x, y) = K(x) + K(y \mid x^{*}) + O(1).$

Thus, the joint complexity of two bit-strings can be computed knowing the absolute complexity of the first one plus the relative complexity of the second given the first one. The key point for this result to hold true is the definition of the relative complexity of $y$: it is conditioned not on $x$ itself but on $x^{*}$, a minimum-size program for $x$.

A fundamental property of Chaitin machines is that they allow us to define halting probabilities for TMs, or the algorithmic probability of a bit-string, also known as the universal probability $P_{U}(x)$:

$P_{U}(x) := \sum_{p \,:\, U(p) = x} 2^{-|p|},$

which is the probability that a program randomly drawn as a sequence of fair coin flips $p_{1} p_{2} \ldots$ will compute the string $x$.

A central theorem relates algorithmic complexities with algorithmic probabilities:

$K(x) = -\log_{2} P_{U}(x) + O(1).$

This relation tells us that near-minimum-size programs for calculating something, elegant programs, are essentially unique. This is a mathematical formulation of Occam's Razor. In essence, this relation tells us that AIT is equivalent to a probability theory, although this probability has to do with randomness in programs rather than statistical randomness; we shall get back to this later.

The idea behind structural or logical randomness is lack of structure or pattern in a program or bit-string. Thus, a program or bit-string is random if it has no pattern or inner structure; consequently, it cannot be compressed. The only way to specify it is by printing the whole program as it is: there is no theory behind it from which it can be derived. By theory, we mean a simpler procedure to recover the bit-string, i.e., something compressible. Now, we can give a precise definition of randomness using information-theoretic notions like algorithmic complexity. This was defended by Chaitin in AIT. It is necessary to distinguish between finite bit-strings $s \in \mathbb{B}^{*}$ and infinite ones $s = (s_{n})_{n=1}^{\infty} \in \mathbb{B}^{\infty}$ (Chaitin,

i/ Random finite bit-strings: $s \in \mathbb{B}^{*}$ is random if its complexity is maximal up to an additive constant, $K(s) \geq |s| - c$.

ii/ Random infinite bit-strings: $s \in \mathbb{B}^{\infty}$ is random if there exists a constant $c$ such that $K(s_{|n}) \geq n - c$ for all $n$. (21)

Notice that $s_{|n} := s_{1} s_{2} \ldots s_{n}$ denotes the length-$n$ prefix of the infinite string $s$.
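A practical, computable stand-in for this idea is compression: a compressor yields an upper bound on $K$, so a string that resists compression shows no detectable pattern. A small sketch (the byte strings below are illustrative):

```python
import zlib
import random

structured = b"01" * 500    # 1000 bytes with an obvious repeating pattern
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # pattern-free bytes

print(len(zlib.compress(structured)))   # far below 1000: compressible
print(len(zlib.compress(noisy)))        # close to 1000: essentially incompressible
```

Compression only ever gives upper bounds on $K$; true algorithmic complexity remains non-computable.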

It is possible to prove that the definition of randomness for infinite strings from AIT (21) is equivalent to the statistical definition of random real numbers in classical probability theory, as introduced by Martin-Löf (Martin-Löf,

We can now define Chaitin's Ω number and use it to assess logical randomness in Information Theory, the issue of non-computability. The motivation is to define the halting probability of a TM, i.e.,

$\Omega := \sum_{p \,:\, U(p)\ \mathrm{halts}} 2^{-|p|},$

where the sum runs over prefix-free programs $p$ and $U$ is the universal computer.

It measures the probability that a randomly chosen program halts when run on the universal computer $U$.

What is behind Ω is a very compact way of encoding the halting problem, or any other non-computable problem.

The Chaitin Ω number is a real number in (0, 1) which is logically random in the sense of (21). Let us truncate it up to programs of bit-size $N$:

$\Omega_{N} := \sum_{|p| \leq N,\; U(p)\ \mathrm{halts}} 2^{-|p|};$

then, it is possible to prove that $K(\Omega_{N}) > N - c$, and the $\Omega_{N}$ are lower bounds to the actual Ω. Computing the successive truncations takes a time that grows faster than any computable function of $N$, which reflects the non-computability of Ω. Knowing the first $N$ bits of Ω, $\Omega_{|N} := 0.\Omega_{1} \Omega_{2} \ldots \Omega_{N}$, enables us to decide the halting of all programs of length $|p| \leq N$, and with it the truth of any conjecture that could be refuted by such a program.
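As an illustration of the truncations $\Omega_{N}$, consider a toy machine whose (assumed prefix-free) halting programs are listed explicitly; for a real universal machine, producing this list is precisely what cannot be computed:

```python
halting_programs = ['0', '10', '110']   # assumed prefix-free halting set

def omega_N(N):
    """Partial halting probability from programs of length at most N."""
    return sum(2.0 ** -len(p) for p in halting_programs if len(p) <= N)

print([omega_N(N) for N in (1, 2, 3)])  # [0.5, 0.75, 0.875]
```

The truncations increase monotonically toward the toy Ω, just as the genuine $\Omega_{N}$ approach Chaitin's Ω from below.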

After having met the limits of computability, the natural question is: can we go beyond them? This depends on what is called the Turing barrier (Feynman,

This notion has originated a line of research called Hypercomputation. It speculates that it is possible to devise theoretical or physical machines that can compute problems that are non-computable in the TM model (Copeland, Proudfoot,

The following list by no means aims to give a full account of everyone who might have been involved directly or indirectly in research touching upon Turing's work; it simply presents some important facts that are interesting in connection with his work and later developments. Due to space constraints we cannot dwell upon the work of figures such as Georg Cantor (the diagonal method (Cantor,

Leibniz made a crucial discovery that today is taken for granted but was a major breakthrough in computational theory: the binary numeral system (base 2), {0, 1}, as a system for calculation. He went on to create a mechanical machine that performed simple multiplication operations with this binary system. He dreamt of human reason reduced to calculation and of powerful mechanical engines to carry out those calculations.

Leibniz asked fundamental questions about what Science and Nature are (Chaitin,

In addressing those questions, Leibniz touched upon the roots of what a physical law must be: simplicity must be the key. To show this, he posed a very concrete mathematical example. Suppose you are given a set of points in a plane that represent the experimental data you want to explain by a law. It is well known from interpolation techniques, like Lagrangian interpolation, which he anticipated, that we can always find a function that fits a given finite number of points. How do we know, then, that a physical law exists behind them? Leibniz's answer is: only if the rule that fits the data is simple enough. His basic principle is Occam's Razor. With Turing, we know how to quantify complexity, for instance by means of the notion of compression.
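Leibniz's point can be made concrete in a few lines: Lagrange interpolation fits any finite data set exactly, so a perfect fit by itself is no evidence of a law (the data points below are hypothetical):

```python
from fractions import Fraction

def lagrange(points):
    """Return the Lagrange interpolating function through the (x, y) points."""
    def f(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return f

data = [(0, 3), (1, 1), (2, 4), (3, 1)]    # arbitrary "experimental" points
f = lagrange(data)
print(all(f(x) == y for x, y in data))     # True: a perfect fit always exists
```

Since some curve always fits, only the simplicity of the fitting rule can distinguish a law from a coincidence.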

Leibniz also stated that the Universe exhibits a duality between complexity and simplicity. On one side, the Universe is extremely diverse and rich: complex. On the other hand, it can be made out of very simple rules that we call fundamental laws. Complexity out of simplicity: like in a Beethoven symphony. In today's computer age we have a typical example of this phenomenon: a laptop computer can produce a fabulous number of complicated images, movies, games, etc. Yet all there is underneath is Leibniz's binary system. In this way, he anticipated the notion of emergent phenomena that is so influential in modern theoretical physics.

Weyl became interested in Mathematical Logic and the foundations of Mathematics, since his thesis supervisor was David Hilbert in Göttingen. He wrote a thorough book (Weyl, 1949) on these topics in which he calls attention to Leibniz's unpublished work (Leibniz,

He states that the problem of simplicity is of central importance for the epistemology of the natural sciences. As an example of the principle of simplicity in physics, he claims that it is a sure sign of being on the wrong scent if one’s theory suffers the fate of Ptolemy’s epicycles whose number had to be increased every time the accuracy of observation improved. The three laws of Kepler were much simpler and yet agreed noticeably better with the observations than the most complicated system of epicycles that had been dreamed up.

Weyl took Leibniz's thoughts about complexity to the extreme and established that if we allow arbitrarily high complexity in a law of physics, then the law ceases to be a law ... because then there is always a law. Thus, some sort of balance has to be reached.

Admitting that the concept of simplicity appears to be inaccessible to objective formulation, he failed to come up with a precise definition of complexity; see Sect.VI.

In 1931, Gödel surprised the great mathematicians of his time by showing that Hilbert's proposal of finding a complete axiomatic formalization of Mathematics was impossible (Gödel,

A common misconception about Gödel's work is that it is destructive towards Mathematics, since it looks like an attack on what Mathematics was understood to be: a well-defined formal system to solve problems. Quite the contrary: this objective is still valid after Gödel's results, but it has to be revised and made precise by considering incompleteness as a key ingredient of Mathematics. Although Gödel's theorem is often regarded as bad news, a closer analysis reveals that it was good news, as it allows creativity to play a key role in the foundations of Mathematics, and it does so in the rigorous way the subject demands.

The heart of Gödel's proof relies on using a self-referential proposition like

or equivalently, the liar’s paradox

to undermine the logical system of Hilbert and his followers. The latter was based on a set of axioms from which the proof of theorems followed like a mechanical checker. Whichever truth value you assign to statements (26), (27), true or false, you get the opposite. Gödel then performed a series of transformations on that initial paradox, some of them involving properties of prime numbers, turning it into definite statements in number theory. This was very clever and imaginative: one cannot ignore a statement in number theory which is not provable. Hence, Gödel's results deserved to be taken seriously.

In year

When time gives more perspective to Gödel's work, it will be regarded similarly to the advent of non-Euclidean geometry in the XIX century or, more plainly, to how the discovery of irrational numbers shocked the Pythagoreans.

The same proviso applies to those who developed Turing's theory further as to his precursors, including the constraint on the number of figures that can be mentioned. For instance, all the recipients of the Turing Award (Association for Computing Machinery,

Radó made a great contribution to the theory of Turing Machines late in his life, in

He invented the Busy Beaver function (Radó,

Gregory J. Chaitin is, together with Ray Solomonoff and Andrei N. Kolmogorov, one of the founding fathers of the subject called Algorithmic Complexity, Kolmogorov Complexity, or Algorithmic Information Theory (AIT) (Solomonoff,

Chaitin approached the two fundamental discoveries by Gödel

Gödel's theorem can be traced back to the `liar's paradox' (27), while Chaitin's halting probability is related to `Berry's paradox':

In principle, that proposition defines a certain positive integer, since the set of phrases with so few words is finite while the set of integers is infinite. However, as that proposition itself has only ten words, the integer it singles out turns out to be definable in ten words, contradicting (28). This is the paradox. A similar situation arises in the definition of algorithmic complexity (14): if algorithmic complexity were computable by a TM, then paradoxes similar to (28) would appear. Berry's paradox was formulated by B. Russell, inspired by a librarian at Oxford whose name was G.G. Berry. Chaitin explains that he wanted to show Gödel in 1974 how he could prove the incompleteness theorem using Berry's paradox instead of the liar's paradox (27), but Chaitin was not able to meet Gödel.

He introduced the Ω number: the halting probability of a Turing machine (24). It is a natural example of a random infinite sequence of bits. Besides providing a connection with the work of Turing, Ω makes randomness in Mathematics more concrete and more believable. Chaitin has shown that this logical randomness is at the very heart of pure Mathematics: provable theorems are islands surrounded by vast oceans of unprovable truths.

David Deutsch culminated the formulation of a quantum computer in such a way that it is a well-established extension of the work of Turing into the quantum world. R.P. Feynman took fundamental steps before him, as did P. Benioff. A precise definition of a quantum TM and its functioning can be found in Galindo and Martin-Delgado (Galindo, Martin-Delgado,

This is a further extension of the Turing hypothesis into the physical world.

Quantum versions of algorithmic complexity, Sect.III, have been formulated (Vitanyi,

Complexity is a word that has proliferated in a large number of scientific disciplines: ... Most of the time, its use is rather vague, volatile and qualitative. After Turing, it is important to realize that a rigorous, mathematical definition of complexity can be given and made quantifiable.

A very primitive and inefficient way to assess complexity in Mathematics is to define it in terms of how long or difficult it is to write the equations of a given theory. Naive as it may look, this use is very widespread in the scientific community. It is not appropriate, since this notion is very dependent on the language we use to write equations, and this may change over time. A proper definition of complexity calls for something more intrinsic.

If we want to quantify the complexity of a theory or discipline, we must look at how it relates to the experimental data that it wants to explain. Thus, we consider the pair formed by a given theory and its experimental data, and map it into another pair formed by a program and the output it produces:

$(\mathrm{theory}, \mathrm{experimental\ data}) \longmapsto (\mathrm{program}, \mathrm{output}).$

This latter pair is related to a computer that takes the program and finds the output. We can call this a computational mapping. With this mapping, we can now apply complexity theory from computer science in order to find the complexity of a certain theory or discipline. This is an information-theoretic approach to studying complexity, using Turing's ideas in order to make things more precise.

In Information Theory (IT), there are two major notions of complexity: algorithmic complexity and computational complexity.

This notion of complexity has no practical applications per se; rather, it is very useful for studying the fundamentals and foundations of Mathematics.

Although algorithmic complexity is somewhat conceptual, it may also be very inspiring in practical cases. There is an example that captures the essence of this complexity: the language used for storing image files. There are two basic procedures: using bitmap graphics or vector graphics. The former takes a brute-force approach, storing all the bits of a given image. The latter is more elegant, since it stores the formula that generates a particular image. This is more efficient and versatile, since it preserves the image under changes of scale.
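The contrast can be made explicit for a simple image, a filled disk; the resolution, the disk parameters and the vector notation below are illustrative:

```python
# Bitmap storage lists every pixel; "vector" storage keeps only the rule.
N = 64
bitmap = [[1 if (x - 32) ** 2 + (y - 32) ** 2 <= 900 else 0
           for x in range(N)] for y in range(N)]
bitmap_bits = N * N                        # 4096 bits stored explicitly
vector_form = "disk(cx=32, cy=32, r=30)"   # the generating rule, a few characters
print(bitmap_bits, len(vector_form))
```

Doubling the resolution quadruples the bitmap but leaves the rule unchanged, which is exactly the compression idea behind algorithmic complexity.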

A recent development by Chaitin is to use AIT concepts and tools to give a mathematical proof of Darwin's theory of evolution (Chaitin,

For more practical purposes, the notion of computational complexity is preferred. Once a problem is declared computable, then we need to know if we can compute it efficiently or we cannot. This leads to the notion of computational complexity.

Many computational tasks can be decomposed into simpler parts called decision problems.

It is very convenient to arrange sets of problems with the same complexity behaviour into complexity classes.

The most important class is the one that defines theoretically what an efficient algorithm is. This is the class P.

P: the class of decision problems solvable by a deterministic TM in a time $t(N)$ that grows at most polynomially with the input size $N$, i.e., $t(N) = O(N^{k})$, for a certain $k$.


The class P is theoretically a natural choice for what an efficient algorithm is. The reason is that it is closed under operations that arise naturally in computation, like the sum, product or composition of polynomials, which are again polynomials. On the contrary, examples of inefficient algorithms are packed into the class EXP.

EXP: the class of decision problems solvable by a deterministic TM in a time bounded by $t(N) = O(2^{p(N)})$, for some polynomial $p(N)$.
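The gap between the two classes is easy to feel numerically; the exponents below are illustrative:

```python
# Steps needed at input size N for a polynomial (P-style) versus an
# exponential (EXP-style) time bound.
for N in (10, 20, 40):
    poly = N ** 3     # t(N) = N^3
    expo = 2 ** N     # t(N) = 2^N
    print(N, poly, expo)
```

Already at N = 40 the exponential cost exceeds a trillion steps while the cubic cost stays at 64 000, which is why P is identified with feasibility.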

A central issue in computer science is the difference between finding a solution to a problem and verifying that a certain candidate is a solution of the problem. For instance, the decision problem `is

With the advent of quantum Turing machines, the field of computational complexity has been revolutionized and enriched. New complexity classes can be defined by substituting a quantum version for the classical TM. For instance, the natural version of the class P for quantum computers is called BQP, the class of bounded-error quantum polynomial-time problems. Scott Aaronson has done systematic studies of a huge number of both classical and quantum complexity classes (Aaronson, Kuperberg, Granade,

An example of a relationship between complexity classes is P ≠ EXP. Others are P ⊆ NP and NP ⊆ EXP.

This is considered the central problem in computational complexity, and in computer science in general. Behind this question lies whether computational creativity can be automated or not. Thus, at first sight it looks like the natural answer is that P ≠ NP. However, there is a proof of neither P ≠ NP nor P = NP.

There is a third way to approach this problem. Notice that this problem is usually regarded as a problem in complexity theory, not in computability. However, this need not be the case. True as it is that deciding whether a given problem is in P or in NP is a complexity problem, settling the P vs. NP question amounts to constructing a mechanical procedure to decide whether it is true or false, and this is a problem of computability. Therefore, we also have to face the possibility that it is non-computable. This would mean that it is an irreducible axiom that one may or may not add to one's theory of computer science, going on to produce different types of theories, both equally valid and sensible. Thus, if this third way were true, then the natural choice P ≠ NP would be like Euclidean geometry, while the non-natural choice P = NP would be like non-Euclidean geometry. But this is also a conjecture.

There is no accepted definition of what a complex system is. Qualitatively, the term usually refers to a system composed of various parts, usually many, interconnected somehow and to a certain degree, such that the behaviour of the whole system cannot be anticipated from the behaviour of its individual parts. Remarkably, this is precisely the situation we have with TMs: working with a simple binary system, they give rise to both computable and non-computable behaviours, Sect.II, III. Thus, when the computational mapping (29) can be applied to a certain system, arbitrary as it may be, we may give a sufficient criterion for complex behaviour by appealing to the notion of a hard problem:

A problem is NP-hard when every problem in NP can be reduced to it by a polynomial-time algorithm. A problem that is both in NP and NP-hard is called NP-complete. These problems are at least as hard as the hardest problems in NP. Examples of NP-hard problems are the `subset sum problem' and the `travelling salesman' problem; both are also NP-complete. If P ≠ NP, then NP and NP-hard differ; otherwise, they coincide.
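The find-versus-verify asymmetry can be sketched for the subset sum problem; this is a toy implementation with illustrative data, not an optimized solver:

```python
from itertools import combinations

def verify(nums, subset, target):
    """Checking a proposed certificate is fast (polynomial time)."""
    pool = list(nums)
    for v in subset:
        if v in pool:
            pool.remove(v)     # each certificate element must come from nums
        else:
            return False
    return sum(subset) == target

def search(nums, target):
    """The known general search is brute force over all 2**n subsets."""
    for r in range(len(nums) + 1):
        for comb in combinations(nums, r):
            if sum(comb) == target:
                return list(comb)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))           # [4, 5]
print(verify(nums, [4, 5], 9))   # True
```

Verification touches each element once; the search may have to examine exponentially many subsets, which is the NP-completeness phenomenon in miniature.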

The notion of an efficient algorithm is defined by means of the class P, as explained in Sect.VI. There we saw that a good theoretical definition for this class is that it is closed under natural operations that occur in computations. However, theoretically well-founded as it may be, it runs into problems when dealing with practical cases and real computers. For instance, an algorithm with a time complexity growing like $N^{100}$ would never attract the interest of any programmer. It would be good to complement that notion of theoretical efficiency with another of `practical efficiency'.

Let us consider the following practical situation. We are given an algorithm whose time complexity is in P as it grows thus:

t(N) ∼ c N^k ,        (30)

where c is a constant, k a positive integer and N the size of the input. Running this algorithm we obtain a set of data

{d_1, d_2, . . . , d_{N_max}} ,        (31)

up to a maximum achievable size N_max, which will depend on the technological resources available when obtaining the data (31). It may so happen, and it is currently the case, that the set of data is not enough to discover a law we are searching for. This is another version of the situation envisaged by Leibniz in Sect.IV. Thus, we need a bigger value of N_max, but we are limited by the technological resources of our time, i.e., the time when the data (31) were obtained. In order to assess how good the time complexity (30) is, we need to compare it with the estimated improvement of the technological resources. An example of this is Moore's law for computers (Moore, ), which relates computing power to the minimum size ℓ_min of the computer chips that run the computations. Thus, the smaller the size the faster the computer:

v ∼ const/ℓ_min^α ,        (32)

where v is the processing speed and α > 0 a constant. In order to discover the law, we need to increase the maximum current size N_max by a certain factor ƒ > 1, such that the set of data up to ƒN_max is now enough to determine the pattern. The question in turn is how much we need to improve our technology in order to achieve this. Thus, we can derive a sort of uncertainty relation between N_max and ℓ_min:

N_max ℓ_min^{α/k} ∼ const ,        (33)

with N_max depending on the relative value of α w.r.t. k through N_max = const/ℓ_min^{α/k}. A possible situation could be that α < k, so that even large technological improvements in ℓ_min yield only modest increases in N_max. Therefore, a practical criterion for the class P is to compare the integer k in (30) with the technology exponent α in (32): a polynomial algorithm is practically efficient when k is not too large compared to α.
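Under the relation N_max ∼ const/ℓ_min^{α/k} above, enlarging the data range by a factor ƒ requires shrinking ℓ_min by ƒ^{k/α}. A toy calculation (all numbers purely illustrative):

```python
def required_shrink(f, k, alpha):
    """Factor by which l_min must shrink for N_max to grow by a factor f,
    assuming N_max ~ const / l_min**(alpha/k)."""
    return f ** (k / alpha)

# Doubling the data range (f = 2) with a cubic-time algorithm (k = 3)
# under a technology law with alpha = 1 needs chips 8 times smaller:
print(required_shrink(2, 3, 1))   # -> 8.0
```

The larger k is relative to α, the more punishing the required technological leap, which is the practical sense in which not all polynomial algorithms are equally `efficient'.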

Another important practical case we may face is the existence of technological barriers. For example, today's computer technology has reached the nanometre scale. Suppose we have a certain set of data like (31), obtained with a class P algorithm, but we need to increase the maximum size by a factor ƒ > 1 such that we would have to go down well below the scale of angstroms. Then, at those smaller sizes, the computer leaves the classical behaviour and enters the realm of quantum mechanics, so that we may well need a quantum computer to expand the range of data and be able to find our law.

The halting problem has many implications, as we have shown. It is a concept of practical use in games, particularly in advanced games like chess. There, it is important to make sure that the rules of the game (the axioms) guarantee that a match will terminate. Until 1929, players were not aware that the set of rules then in force made it possible to produce never-ending chess matches. In that year, Max Euwe, a mathematician who would later become the fifth world chess champion of modern history (1935-37), settled the question by rediscovering the Thue-Morse sequence (Thue, ).

In binary language, the Thue-Morse sequence is defined by the following generating rules:

0 → 01 ,    1 → 10 .

For instance, the first elements of the sequence are

0, 01, 0110, 01101001, 0110100110010110, . . .

A sequence is cube-free if it contains no subsequence of the form XXX, i.e., no block X repeated three times in a row. The Thue-Morse sequence is cube-free.
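These properties can be checked directly. A short Python sketch (our own, not from the text) that generates the Thue-Morse sequence and tests a prefix for cubes:

```python
def thue_morse(n):
    """n-th Thue-Morse bit: parity of the number of 1s in binary n."""
    return bin(n).count("1") % 2

def prefix(length):
    """First `length` bits of the Thue-Morse sequence as a string."""
    return "".join(str(thue_morse(i)) for i in range(length))

def has_cube(s):
    """True if s contains some block repeated three times in a row (XXX)."""
    n = len(s)
    for start in range(n):
        for blk in range(1, (n - start) // 3 + 1):
            x = s[start:start + blk]
            if (s[start + blk:start + 2 * blk] == x
                    and s[start + 2 * blk:start + 3 * blk] == x):
                return True
    return False

print(prefix(16))            # 0110100110010110
print(has_cube(prefix(64)))  # False: the Thue-Morse sequence is cube-free
```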

A chess match is divided into three parts: opening, middle-game and end-game. The end-game is characterized by the presence of very few pieces on the board compared to the opening. Thus, the theory of the end-game in chess is highly developed, as its complexity is reduced. It had long been known that repetition of movements, a loop, may happen in certain situations. Rules were established to declare a draw when the repetition of moves becomes endless. Euwe (Euwe, ) showed that such rules were not enough: using the cube-free property of the Thue-Morse sequence, one can construct a game of chess that goes on forever without ever triggering the repetition rules.

When Bobby Fischer was an active chess player, he would say “Gods put the middle game after the opening”, meaning that the complexity of the middle game was so high that it was unknown territory, where the written manuals for openings were useless and he would feel at his best. After retirement, in the 1980s, Fischer sent out a warning call saying that chess was becoming too technical and mechanical, with little room for creativity. He proposed to change the rules of the opening and, interestingly enough, to introduce some randomness into chess: in particular, by randomizing the starting positions of the main pieces in the first row on each player's side. And this happened well before a computer, Deep Blue, defeated the World Chess Champion G. Kasparov in 1997, something many people thought unbelievable before the year 2000. This does not mean that computers are more intelligent than us, or intelligent at all. It means that their brute calculating force is stronger than ours at playing chess.
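Fischer's proposal survives today as Fischer Random chess (Chess960), where the shuffled back rank must keep the bishops on opposite colours and the king between the two rooks. A minimal sketch of generating such a position (the function name and the rejection-sampling approach are our own choices):

```python
import random

def fischer_random_back_rank(seed=None):
    """One legal Fischer Random (Chess960) back rank: bishops on
    opposite-coloured squares and the king between the two rooks."""
    rng = random.Random(seed)
    while True:
        rank = list("RNBQKBNR")
        rng.shuffle(rank)
        b1, b2 = [i for i, p in enumerate(rank) if p == "B"]
        r1, r2 = [i for i, p in enumerate(rank) if p == "R"]
        if (b1 % 2) != (b2 % 2) and r1 < rank.index("K") < r2:
            return "".join(rank)

print(fischer_random_back_rank(0))
```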

Mozart composed many divertimentos, a musical form very common in the Classical era prior to the consolidation of the sonata form by Haydn. We may produce a divertimento of our own by playing with Turing's ideas in music.

Music is more than a language, but insofar as it is a language, we can apply Turing's results to it and obtain some amusing facts which may surprise music theorists, particularly since they can be proved mathematically.

i/ There is an endless number of different musical compositions.

ii/ There are musical compositions that cannot be composed.

Statement i/ implies that musical creativity is certainly infinite, while ii/ means that it nevertheless has some limits.

To prove i/ we use a code such that the music symbols and the rules of composition are encoded with a given alphabet, which can be binary, for instance. Then, we use the same alphabet to label all known compositions. This can be done in lexicographic order, forming a table like (7). Now, applying the diagonal method we obtain another composition which is certain to be new. Though it is unlikely that this mathematical type of composition would have pleased Mozart or Haydn, it might produce a different reaction in B. Bartók, A. Schoenberg, J. Cage, G. Ligeti, K. Stockhausen, I. Xenakis, P. Boulez, C. Halffter... Nevertheless, what is remarkable, and unimaginable before Turing, is that a computer can help as a composition machine, as computers are indeed used nowadays.
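The diagonal construction in the proof of i/ can be made concrete. A toy sketch (the binary encoding and the example list are ours):

```python
def diagonal_new(compositions):
    """Given equal-length binary strings (encoded compositions), build a
    new string that differs from the i-th one at position i, so it cannot
    appear anywhere in the list: Cantor's diagonal method."""
    return "".join("1" if comp[i] == "0" else "0"
                   for i, comp in enumerate(compositions))

known = ["0101", "1100", "0011", "1010"]   # a "table" of known compositions
new = diagonal_new(known)
print(new)   # differs from every known entry, hence a new composition
```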

To prove ii/ we need to realize that each music composition is like a TM. Thus, it may or may not halt. For instance, we can produce simple scores that repeat themselves forever. Accepting this proviso about endless compositions is essential. Suppose now that we want to write a music composition that, in our language, would be equivalent to a program deciding whether any other musical composition will ever halt. Then, such a score is impossible to write.

Cellular automata are used to produce musical scores (Wolfram, ).
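As a toy illustration of how this can work (the rule, the seed and the note mapping below are our own arbitrary choices, not a method from the text), an elementary cellular automaton can drive a melody:

```python
def step(cells, rule=90):
    """One step of an elementary cellular automaton (wrap-around boundary).
    The new state of cell i is the rule bit indexed by its neighbourhood."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Map each generation to a note: here, the leftmost live cell picks
# a degree of the C major scale.
scale = ["C", "D", "E", "F", "G", "A", "B"]
cells = [0] * 7
cells[3] = 1                     # a single seed in the middle
melody = []
for _ in range(8):
    melody.append(scale[cells.index(1)])
    cells = step(cells)          # rule 90: new cell = left XOR right
print(melody)
```

Rule 90 produces the self-similar Sierpinski-like pattern, so the resulting melody is structured but not periodic in any obvious way.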

In the early 20th century, Arnold Schoenberg assessed the situation of classical music and judged that the tonal system based on major and minor scales, the Greek modes, etc., was completely worn out. Subjective as this may be, he went on to search for new composition systems by relaxing the rigidities of the old one, for instance by allowing all tones of the dodecaphonic scale to play the same role, with no dominant or tonic. This produced non-tonal systems like the twelve-tone method and many others to follow, some even introducing random methods and other tools from Mathematics, like set theory. Again, that situation arose because creativity was judged to be exhausted, and a change or extension of the axioms was proposed instead, leading to controversy. Nevertheless, controversy is unavoidable here, since music is more than a language and personal taste plays a major role.

Turing revolutionized the fundamental roots of what we understand by scientific knowledge, and he will continue to do so as new applications of his work arise. At the same time, his scientific work still lacks the recognition it deserves in his own field, Mathematics. As he also founded modern computer science, recognition came first mainly from Engineering and Physics.

The part of Turing’s

There is a parallelism between intrinsic randomness in Mathematics and in Physics, and we can learn from it. In Physics it appeared in the 1920s with Quantum Mechanics, and it also produced a shocking revolution, removing determinism, the holy grail of classical Physics, from the central status it had been enjoying. Nowadays, Quantum Mechanics is a successful theory, accepted both logically and empirically thanks to unprecedentedly accurate experimental results. In Mathematics, logical randomness appeared in the 1930s and is also becoming accepted.

After the work of Gödel, Turing and Chaitin, it is certain that a TOE of Mathematics is impossible. But what about Physics? Inasmuch as Physics inherits the language of Mathematics to express its laws and work out their consequences, we may immediately deduce that the same applies to Physics and there is no TOE for it. However, Physics is more than a language, and the ultimate word lies with experience, with natural law. Our physical knowledge is like a window on an energy scale, ranging from some point in the infrared to some point in the ultraviolet, i.e., from large distance scales to small distance scales. From this finite window we may bet on two possibilities: i/ that no TOE of Physics exists, since as we enlarge the energy window we will find new laws of Physics that were not anticipated; ii/ that a TOE of Physics does exist and, from our current window of knowledge, or probably a better one, we can deduce the whole range of physical laws over the entire energy scale, i.e., Physics would be finite and closed as a source of knowledge. Following Turing's work, I believe that option i/ is the correct one, and experience will tell. The non-existence of a TOE in Physics is good news for creativity, in contrast to reductionism.

Is it true that true randomness is only quantum? Not quite: the Ω number is a real number whose binary expansion yields bits of information that are true for no reason; they have no structure or pattern, the number is incompressible and its bits are totally random, as Chaitin has shown.

The following question may help us face this situation less dramatically. How can it be that sciences like Physics, Mathematics, etc. have become so successful if we live in a world plagued by intrinsic randomness? A clue is to look at which real numbers are employed in successful theories. We always find numbers like √2, π, e, etc. Although they are irrational, with an infinite number of decimals, we have very short algorithms that generate those decimals very efficiently, i.e., they are actually maximally compressible numbers.
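For instance, the decimals of √2 come out of a few lines of code, which is what `maximally compressible' means in practice. A small sketch of ours, using exact integer arithmetic:

```python
from math import isqrt

def sqrt2_digits(n):
    """The leading digit and first n decimals of sqrt(2), as a string.
    Idea: floor(sqrt(2) * 10**n) == isqrt(2 * 10**(2*n)), computed exactly
    with big integers. A tiny program generating an infinite expansion."""
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_digits(10))   # -> 14142135623
```

The program is a few dozen characters long, yet it describes arbitrarily many digits: the number is infinitely long but algorithmically simple.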

This fact can be extrapolated to the whole structure of successful theories of Nature: they are very simple; they can be compressed, reduced to a small set of axioms or laws of Nature. The rest of the universe remains unknown partly because it is not compressible and because we live in a small region of the whole space of theories or knowledge. We may divide our ‘sphere of knowledge’ into three parts: i/ current science (known); ii/ future science (to be known); and iii/ the unknowable or irreducible.

Physicists are willing to find and adopt new physical principles, laws that expand their knowledge of the universe. Mathematicians, at least standard, formal ones, tend to stick rigidly to their axioms and not to modify them; they should adopt a more experimental attitude. With Turing, the fields of Mathematics and Physics become more unified.