In December 1999, “Time” magazine chose Albert Einstein as “Person of the Century.” This was undoubtedly a reasonable choice, but, as I argue in this article, there are also good reasons for contending that Turing might have received that honor. One such reason considered here is his purely scientific work, which stems from the finest mathematical tradition; I examine how it affected the development of mathematics itself and how it was finally instrumental in shaping a new technological world, both as regards the computation and handling of information and as regards the establishment of new forms of social relations. Related to the foregoing are Turing’s contributions to the deciphering of secret codes during the Second World War, which in a somewhat metaphorical sense may be regarded as a new tool for undermining personal privacy, that civil right whose denial finally ruined his life.


The December 31st, 1999 issue of “Time” announced the magazine’s choice of Albert Einstein as “Person of the Century.”

“So how can we go about choosing the Person of the Century, the one who, for better or worse, personified our times and will be recorded by history as having the most lasting significance?

Let’s begin by noting what our century will be remembered for. Out of the fog of proximity, three great themes emerge:

· The grand struggle between totalitarianism and democracy.

· The ability of courageous individuals to resist authority in order to secure their civil rights.

· The explosion of scientific and technical knowledge that unveiled the mysteries of the universe and helped secure the triumph of freedom by unleashing the power of free minds and free markets.”

Confronted with these grand themes,

The opening article, signed by Walter Isaacson (

“A century that will be remembered foremost for its science and technology”, wrote Isaacson, and indeed I agree with him. Of course, Einstein was a good choice. However, I will argue that another plausible choice, though certainly not so popular, would have been Alan Turing (1912-1954).

As arguments in favor of Einstein,

It is difficult to answer this question. The deep meaning of the special and general theories of relativity, with all they tell us about such basic concepts as space and time, makes it hard to deny their fundamental importance when compared with practically any other scientific theory. In a similar way, although not in the same sense in which relativity deals with space and time, we might also speak of quantum physics. Actually, as far as its social (political and economic) consequences are concerned, quantum physics has proved to be far more important than relativity (one need only think of the transistor), although in this case Einstein was simply one more name alongside others, especially Max Planck, Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

If we look around, a possible answer to the question of what the most important innovation for our XXIst-century society is might be found in all the gadgets and control systems which make globalization possible - in what we call the Information Era - and it is here that Alan Turing’s name stands out very prominently. Of course, I do not mean to say that his is the only name to consider, but his position is reinforced if we take into account the number and diversity of his contributions - to logic, mathematics, cryptanalysis and philosophy, and, formatively, to the areas later known as computer science, cognitive science, artificial intelligence and artificial life.

Actually,

To mention Turing only in connection with computers is a great misunderstanding. He belongs to a mathematical movement, or tradition, that during the second half of the XIXth century dramatically changed the very idea of mathematics. A more detailed presentation would necessarily also include Galois, who was highly influential in converting mathematics into the study of structures; Lobachevskii, Bolyai and Riemann, with their non-Euclidean geometries; Felix Klein, with his Erlangen Program, in which he put forward the idea that there are as many geometries as transformation groups; and Georg Cantor, with his theory of transfinite numbers, which gave a new dimension to set theory. As forerunners of the mathematical tradition to which Turing belonged, I would also mention George Boole (1815-1864) and David Hilbert (1862-1943).

In the opening lines of

“The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolic language of a Calculus, and upon this foundation to establish the science of Logic and construct its method; to make that method itself the basis of a general method for the application of the mathematical doctrine of Probabilities; and, finally, to collect from the various elements of truth brought to view in the course of these inquiries some probable intimations concerning the nature and constitution of the human mind.”

And further on in the book, almost at the end of chapter II:

“Let us conceive […] of an Algebra in which the symbols x, y, z, etc. admit indifferently of the values 0 and 1, and of these values alone. The laws, the axioms, and the processes, of such an Algebra will be identical in their whole extent with the laws, the axioms, and the processes of an Algebra of Logic.”

Boole of course was speaking about what we now know as “digitalization.”
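Boole’s two-valued algebra translates directly into modern code. The following minimal sketch (a present-day illustration of mine, not Boole’s own notation) renders the logical operations as arithmetic restricted to the values 0 and 1, and checks two of the laws Boole had in mind:

```python
# Boole's algebra of 0 and 1: logical operations as ordinary arithmetic
# restricted to the two values, with the laws checked by brute force.
def boole_and(x, y):
    return x * y              # conjunction is multiplication

def boole_or(x, y):
    return x + y - x * y      # disjunction, kept within {0, 1}

def boole_not(x):
    return 1 - x              # negation is complementation

values = (0, 1)
# Idempotent law x*x = x, which among ordinary numbers holds only for 0 and 1:
assert all(boole_and(x, x) == x for x in values)
# De Morgan's law, valid across the whole two-valued algebra:
assert all(boole_not(boole_and(x, y)) == boole_or(boole_not(x), boole_not(y))
           for x in values for y in values)
```
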

As to Hilbert, in 1899 he published a book,

In the paper he read in Bologna, Hilbert addressed a very important problem, the so-called

“In a series of presentations in the course of the last years I have […] embarked upon a new way of dealing with fundamental questions. With this new foundation of mathematics, which one can conveniently call proof theory, I believe the fundamental questions in mathematics are finally eliminated, by making every mathematical statement a concretely demonstrable and strictly derivable formula […]

In mathematics there is no ignorabimus.”

Of course, Hilbert could not imagine that only three years later, in 1931, the Austrian logician Kurt Gödel (1906-1978) would demonstrate that indeed there is an ignorabimus in mathematics.

In 1936, and partially using Gödel’s results, the American logician Alonzo Church (1903-1995) tackled Hilbert’s Entscheidungsproblem, showing that it admits no general solution.

This now famous article, “On computable numbers, with an application to the Entscheidungsproblem” (1936), began as follows: “The ‘computable’ numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth.”

“Although the class of computable numbers is so great, and in many ways similar to the class of real numbers, it is nevertheless enumerable. In § 8 I examine certain arguments which would seem to prove the contrary. By the correct application of one of these arguments, conclusions are reached which are superficially similar to those of Gödel. These results have valuable applications. In particular, it is shown (§11) that the Hilbertian Entscheidungsproblem can have no solution.” Aware of Church’s work, Turing immediately added: “In a recent paper Alonzo Church has introduced an idea of ‘effective calculability’, which is equivalent to my ‘computability’, but is very differently defined. Church also reaches similar conclusions about the Entscheidungsproblem. The proof of equivalence between ‘computability’ and ‘effective calculability’ is outlined in an appendix to the present paper.”

Since Church preceded Turing in the demonstration of Hilbert’s

The following year, in an article published in

“As is well-known, A. M. Turing, using the notion of a computing machine, gave a definition of the computable function of the first order. But, had this notion not already been intelligible, the question of whether Turing’s definition is adequate would be meaningless […]

It is well-known that A. M. Turing has given an elaborate definition of the concept of a

Of course, it is clear why Gödel was interested in Turing’s work: Turing’s contribution gave a more general definition of formal system, something that later allowed Gödel to refine his theorem, showing that incompleteness could “be proved rigorously for every consistent formal system containing a certain amount of finitary number theory.” (Gödel,

In the same year that “On computable numbers” appeared, and following a suggestion by Newman, Turing left his fellowship at King’s College, Cambridge (he had been elected a fellow of King’s in 1935), for the United States and Princeton University, where in addition to Church he found luminaries such as Albert Einstein, John von Neumann and Hermann Weyl, the latter three at the Institute for Advanced Study, as well as Solomon Lefschetz, while Richard Courant and Godfrey H. Hardy were visitors that year. He had hoped to find Gödel, but he was not there. “The mathematics department here,” he wrote home on October 6, 1936, “comes fully up to expectations. There is a great number of the most distinguished mathematicians here. J. v. Neumann, Weyl, Courant, Hardy, Einstein, Lefschetz, as well as hosts of smaller fry. Unfortunately there are not nearly so many logic people here as last year. Church is here of course, but Gödel, Kleene, Rosser and Bernays who were here last year have left. I don’t think I mind very much missing any of these except Gödel. Kleene and Rosser are, I imagine, just disciples of Church and have not much to offer that I could not get from Church. Bernays [I] think is getting rather ‘vieux jeu’: that is the impression I get from his writings, but if I were to meet him I might get a different impression.”

Under Church’s supervision, Turing, who spent the years 1936-1938 at Princeton, wrote a Ph.D. thesis entitled Systems of Logic Based on Ordinals.

In his 1936 paper, Turing introduced his famous “Turing machine.” Basically, a Turing machine consists of a scanner and a limitless memory-tape that moves back and forth past the scanner. The tape is divided into squares, each of which may be blank or may bear a single symbol; for example: 1 or 0. In other words, the Turing machine gave reality to Boole’s theoretical idea of digitalization. As a matter of fact, Turing emphasized this characteristic of the new computing engines. Thus, in a lecture he delivered on February 20, 1947 at the London Mathematical Society he stated (Copeland,
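The tape-and-scanner machine just described can be sketched in a few lines. The instruction table below reproduces the first example machine of Turing’s 1936 paper, which prints 0 and 1 on alternate squares; the simulator itself is of course a modern illustration, not Turing’s notation.

```python
# A minimal Turing machine: a scanner reads one square of an unbounded
# tape; a table maps (state, symbol read) to (symbol to write, move,
# next state). The tape is a dict from square index to symbol, so it
# is "limitless" in both directions.
def run(table, steps, start_state="b"):
    tape, head, state = {}, 0, start_state
    for _ in range(steps):
        symbol = tape.get(head, " ")          # a blank square reads as " "
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).rstrip()

# Turing's first example machine (1936): it prints 0 and 1 on alternate
# squares, leaving the intervening squares blank.
table = {
    ("b", " "): ("0", "R", "c"),
    ("c", " "): (" ", "R", "e"),
    ("e", " "): ("1", "R", "f"),
    ("f", " "): (" ", "R", "b"),
}
print(run(table, 8))  # -> 0 1 0 1
```
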

“The automatic computing engine now being designed at N.P.L. [National Physical Laboratory] is a typical large scale electronic digital computing machine […] From the point of view of the mathematician, the property of being digital should be of greater interest than that of being electronic. That it is electronic is certainly important because these machines owe their high speed to this, and without the speed it is doubtful if financial support for their construction would be forthcoming. But this is virtually all that there is to be said on that subject. That the machine is digital however has more subtle significance. It means firstly that numbers are represented by sequences of digits which can be as long as one wishes. One can therefore work to any desired degree of accuracy […] A second advantage of digital computing machines is that they are not restricted in their applications to any particular type of problem.”

According to Copeland, who edited Turing’s main papers and lectures, “On computable numbers” “is regarded as the founding publication of the modern science of computing. It contributed vital ideas to the development, in the 1940s, of the electronic stored-program digital computer. [It] is the birthplace of the fundamental principle of the modern computer, the idea of controlling the machine’s operations by means of a program of coded instructions stored in the computer’s memory.” (see Copeland,

As Copeland pointed out, Turing was not only the theoretical mind that produced an idea basic to the development of computers: he was also involved in their construction. We do not know, and cannot know, whether he would have developed such interests had political events been different, but the fact is that what happened shortly after he returned to England pushed him in that direction.

It was in the summer of 1938 that Turing returned to Cambridge, to his fellowship at King’s College, although not for long: in September 1939, after the outbreak of the Second World War, he moved to Bletchley Park, the wartime home of the Government Code and Cypher School.

Before Turing left England for America, he had become involved at Bletchley Park in another problem: how to decipher the German messages sent by a machine called Enigma.

Peter Hilton (1923-2010), another of the mathematicians who worked in Bletchley Park, recalled Turing’s work there in the following terms (Hilton,

“It is a rare experience to meet an authentic genius. Those of us privileged to inhabit the world of scholarship are familiar with the intellectual stimulation furnished by talented colleagues. We can admire the ideas they share with us and are usually able to understand their source; we may even often believe that we ourselves could have created such concepts and originated such thoughts. However, the experience of sharing the intellectual life of a genius is entirely different; one realizes that one is in the presence of an intelligence, a sensibility of such profundity and originality that one is filled with wonder and excitement.

Alan Turing was such a genius, and those, like myself, who had the astonishing and unexpected opportunity, created by the strange exigencies of the Second World War, to be able to count Turing as colleague and friend will never forget that experience, nor can we ever lose its immense benefit to us.

Turing was a mathematician, a logician, a scientist, a philosopher – in short, a thinker […]

Much has been written in recent years of the astonishing success of ‘Britain’s secret weapon’ […] Others of us shared the excitement of successful achievement; some, like the mathematician Max Newman, deserved great credit for providing the organizational framework – not to be confused with its antithesis of bureaucratic structure – essential to the full exploitation of that success; but Turing stood alone in his total comprehension of the nature of the problem and in devising its solution – essentially by inventing the computer.”

Also worth quoting is what Max Newman, Turing’s mentor at Cambridge and colleague at Bletchley Park, said in the obituary he wrote for the Royal Society (Newman,

“In 1938 Turing returned to Cambridge; in

In any event, everything that was produced during the war convinced Turing of the possibilities and feasibility of computers. Thus, he declined the offer of a Cambridge University lectureship, accepting instead, in 1945, a position at the National Physical Laboratory, to form part of a group dedicated to the design, construction and use of a large automatic computing machine. He stayed there for three years, contributing the first design plan for that computer, the ACE. In 1948, he accepted a Readership at Manchester University, where he was appointed Assistant Director of

“We have to have some experience with the machine before we know its capabilities. It may take years before we settle down to the new possibilities but I do not see why it should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms. I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”

It is well known that Albert Einstein wrote a letter to President Roosevelt asking him to promote nuclear research in view of the danger posed by German scientists who, having discovered the fission of uranium in December 1938, might be able to produce an atomic bomb. On October 21, 1941, together with a few colleagues (W. G. Welchman, C. H. O’D. Alexander and P. S. Milner-Barry), Turing wrote a letter in a similar vein to Winston Churchill, asking him to help with a scientific and technological project. As one of my purposes in this paper is to compare Einstein and Turing, and to point out the special status of the former in the history of the XXth century, in which his preoccupations and interventions in social and political matters are invariably mentioned, I quote the opening paragraphs of that letter:

“Dear Prime Minister,

Some weeks ago you paid us the honour of a visit, and we believe that you regard our work as important. You will have seen that, thanks largely to the energy and foresight of Commander Travis, we have been well supplied with the ‘bombes’ for the breaking of the German Enigma codes. We think, however, that you ought to know that this work is being held up, and in some cases is not being done at all, principally because we cannot get sufficient staff to deal with it. Our reason for writing direct is that for months we have done everything that we possibly can through the normal channels, and that we despair of any early improvement without your intervention. No doubt in the long run these particular requirements will be met, but meanwhile still more precious months will have been wasted and as our needs are continually expanding we see little hope of ever being adequately staffed.”

Logicians such as Boole, Russell, Whitehead, Gödel and Church were impressive mathematicians and logicians who can certainly hold their own with Turing as regards the importance of their contributions to the foundations of mathematics, but none of them had what Turing had: the interest in and ability to contribute to technology as well. It is not only, nor mainly, that “On computable numbers” contained the essential ideas of the computer, but rather that Turing combined a wide range of mathematical advances with far-sighted applications. This aspect of his personality and work was recognized early on; what follows is taken from an obituary published in the

“In the death of Alan Turing, mathematics and science have lost a great original thinker. It is in connection with the big computing machines, which he helped to design and then to use, that he is best known to the general public, but it was, ironically enough, in the course of a ‘logical’ proof that not all mathematics can be mechanised that he was first led to give the specification of a ‘universal’ computing machine. To show that no machine can answer all mathematical questions, you have to say precisely what you mean by a machine.

Turing’s answer to this question was theoretical in the sense that considerations of ‘how fast?’ and ‘how large?’ (then irrelevant) were ignored. But it was a real machine, with a paper tape, and he was already interested at that time in the possibility of making it. Later he threw himself with enthusiasm into the work of designing a computing machine for practical use, making use of ideas which others had had independently in the meantime.

Turing took a particular delight in problems, large or small, that enabled him to combine mathematical theory with experiments he could carry out, in whole or in part, with his own hands. He was ready to tackle anything which combined these two interests.”

As far as I know, only John von Neumann (1903-1957), also an extraordinary mathematician, combined such abilities, both theoretical and applied.

Since the name of von Neumann has arisen, I would like to point out, following George Dyson, that he and Turing were quite different personalities (see Dyson,

“Turing and von Neumann were as far apart, in everything except their common interest in computers, as it was possible to get. Von Neumann rarely appeared in public without a business suit; Turing was usually unkempt. ‘He tended to be slovenly,’ even his mother admits. Von Neumann spoke freely and with great precision; Turing’s speech was hesitating, as if words could not keep up with his thoughts. Turing stayed in hostels and was a competitive long-distance runner; Von Neumann was resolutely nonathletic and stayed in first-class hotels. Von Neumann had an eye for women, while Turing preferred men.

When von Neumann spoke about computing, he never mentioned artificial intelligence. Turing spoke about little else.”

One of the characteristics that often goes with great innovators is that they imagine – perhaps it would be more appropriate to say, dream of – possibilities which do not materialize soon, if ever. Thus, Albert Einstein sought a unified theory of all the forces known at the time (electromagnetism and gravitation), which he never achieved, and which, when it was recreated by others, took a very different form, in the quantum realm. One of Turing’s “dreams” was Artificial Intelligence, a field – still not named in that manner – in which he was the first to carry out substantial research, at least in what we may call a “modern way.”

As a matter of fact, that idea came to him in a rather natural way, as an extension of his previous work on computers. Such a connection can readily be discerned in the already mentioned lecture that he delivered in 1947 at the London Mathematical Society (see Copeland,

“It has been said that computing machines can only carry out the processes that they are instructed to do. This is certainly true in the sense that if they do something other than what they were instructed then they have just made some mistake. It is also true that the intention in constructing these machines in the first instance is to treat them as slaves, giving them only jobs which have been thought out in detail, jobs such that the user of the machine fully understands what in principle is going on all the time. Up till the present machines have only been used in this way. But is it necessary that they should always be used in such a manner? Let us suppose we have set up a machine with certain initial instruction tables, so constructed that these tables might on occasion, if good reason arose, modify those tables. One can imagine that after the machine had been operating for some time, the instructions would have altered out of all recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations. Possibly it might still be getting results of the type desired when the machine was first set up, but in a much more efficient manner. In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. I feel that one is obliged to regard the machine as showing intelligence.”
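The idea in this passage, a machine whose instruction tables alter themselves in the light of experience, can be caricatured in a few lines. The sketch below is purely illustrative (the class and its trivial learning rule are my assumptions, not Turing’s design): one of the machine’s operations rewrites the very table that governs its answers.

```python
# A toy "self-modifying" machine: its behaviour is a lookup table, and
# the feedback operation rewrites that table, so the machine's later
# answers differ from anything foreseen when it was first set up.
class SelfModifyingMachine:
    def __init__(self):
        self.table = {}                        # the instruction table

    def answer(self, question):
        return self.table.get(question, 0)     # default guess when ignorant

    def feedback(self, question, correct):
        # the table rewrites itself in the light of experience
        self.table[question] = correct

m = SelfModifyingMachine()
assert m.answer("2+2") == 0    # the initial table knows nothing
m.feedback("2+2", 4)
assert m.answer("2+2") == 4    # the altered table now does better
```
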

Four years later, in another lecture, this time broadcast by BBC Radio on May 15th, 1951, he took the opportunity to insist on the possibility of Artificial Intelligence (see Copeland,

“Digital computers have often been described as mechanical brains. Most scientists probably regard this description as a mere newspaper stunt, but some do not [...] In this talk I shall […] give most attention to the view which I hold myself, that it is not altogether unreasonable to describe digital computers as brains […]

[The outlook of the majority of scientists] was well summed up by Lady Lovelace over a hundred years ago, speaking of Babbage’s Analytical Engine. She said […] ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’

There is however [another] point of view, which I hold myself. I agree with Lady Lovelace’s dictum as far as it goes, but I believe that its validity depends on considering how digital computers

Turing must also be credited with another idea, the so-called “Turing test,” which was followed, sometimes with opposition, by those working in the field of Artificial Intelligence. He put forward this idea in an article entitled “Computing machinery and intelligence,” published in Mind in 1950.

“I don’t want to give a definition of thinking, but if I had to I should probably be unable to say anything more about it than that it was a sort of buzzing that went on inside my head. But I don’t really see that we need to agree on a definition at all. The important thing is to try to draw a line between the properties of a brain, or of a man, that we want to discuss, and those that we don’t. To take an extreme case, we are not interested in the fact that the brain has the consistency of cold porridge […] I would like to suggest a particular kind of test that one might apply to a machine. You might call it a test to see whether the machine thinks, but it would be better to avoid begging the question, and say that the machines that pass are (let’s say) ‘Grade A’ machines. The idea is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable proportion of a jury, who should not be expert about machines, must be taken in by the pretence. They aren’t allowed to see the machine itself – that would make it too easy. So the machine is kept in a far away room and the jury are allowed to ask it questions, which are transmitted through to it: it sends back a typewritten answer.”

It is interesting to see what Gödel thought about Artificial Intelligence and, more specifically, about Turing’s ideas in this regard. In an alternative version of some remarks Gödel intended to publish in

“Turing, in

Though I cannot develop this question here, I should mention that the Turing test has not been accepted by everybody. Among its critics are figures such as John Searle, who put forward the so-called “Chinese room argument,” and Paul and Patricia Smith Churchland.

As to Turing’s expectations of when Artificial Intelligence would be achieved, he was cautious (Copeland, ): “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^{9}, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

More than fifty years have passed and Artificial Intelligence is yet to arrive. I am not sure, however, that the kind of AI Turing had in mind has not already been produced, as the case of chess shows, with the victory that the IBM “Deep Blue” computer achieved on May 11, 1997 in its match against world champion Garry Kasparov.

Indeed, to anybody interested in artificial intelligence, chess is a very good playground. It is therefore not surprising that Turing made some excursions into that topic, such as the essay entitled “Chess,” which appeared in a collection published in 1953 under the title Faster than Thought.

“When one is asked ‘Could one make a machine play chess?’, there are several possible meanings which might be given to the words. Here are a few:

i) Could one make a machine which would obey the rules of chess, i. e. one which would play random legal moves, or which could tell one whether a given move is a legal one?

ii) Could one make a machine which would solve chess problems, e.g. tell one whether, in a given position, white has a forced mate in three?

iii) Could one make a machine which would play a reasonably good game of chess, i.e. which confronted with an ordinary (that is, not particularly unusual) chess position, would after two or three minutes of calculation, indicate a passably good legal move?

iv) Could one make a machine play chess, and improve its play, game by game, profiting from its experience?”

Related to his interests in Artificial Intelligence, Turing also came up with what we might call “networks of artificial neurons,” sometimes called “unorganised machines”:

“So far we have been considering machines which are designed for a definite purpose (though the universal machines are in a sense an exception). We might instead consider what happens when we make up a machine in a comparatively unsystematic way from some kind of standard components. We could consider some particular machine of this nature and find out what sort of things it is likely to do. Machines which are largely random in their construction in this way will be called ‘unorganised machines’. This does not pretend to be an accurate term. It is conceivable that the same machine might be regarded by one man as organized and by another as unorganised.

A typical example of an unorganized machine would be as follows. The machine is made up from a rather large number N of similar units. Each unit has two input terminals, and has an output terminal which can be connected to the input terminals of (0 or more) other units.”

A possibility that occurred to Turing was that the human brain is in fact closer to an unorganized machine than to an organized one; an unorganized machine which nevertheless is able to produce organized thoughts.
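Turing’s description is concrete enough to simulate directly. In the sketch below (an illustration only; following Turing’s “A-type” unorganised machines, each unit is taken to compute NAND of its two inputs, a detail the quoted passage leaves open), a number of two-input units are wired together at random and all update simultaneously:

```python
import random

# An "unorganised machine": N similar units, each with two input
# terminals wired at random to the outputs of other units. Each unit
# is assumed to compute NAND (as in Turing's A-type machines), and
# every unit fires simultaneously at each time step.
def random_machine(n, seed=0):
    rng = random.Random(seed)
    # unit i takes its two inputs from arbitrary units (possibly itself)
    return [(rng.randrange(n), rng.randrange(n)) for _ in range(n)]

def step(wiring, state):
    # new output of unit i = NAND of the outputs of its two input units
    return [1 - (state[a] & state[b]) for a, b in wiring]

rng = random.Random(1)
wiring = random_machine(8)                    # 8 units, randomly wired
state = [rng.randrange(2) for _ in range(8)]  # arbitrary starting outputs
for _ in range(5):
    state = step(wiring, state)  # whatever behaviour emerges was not designed
```
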

Another step in Turing’s continuous intellectual development was the work he did on morphogenesis, the science that studies the biological processes through which an organism develops its shape. As we can easily imagine, he used his expertise in computers to construct computer simulations with which to investigate the organization and patterns of living beings. Soon after the world’s first manufactured general-purpose electronic digital computer, the Ferranti Mark I, was installed at Manchester University in 1951, Turing used it to run such simulations.
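The kind of simulation Turing pioneered can be suggested with a toy reaction-diffusion model. In the sketch below (an illustration only; the particular reaction terms and all constants are my assumptions, not Turing’s 1952 equations), an activator and an inhibitor react in every cell of a ring while diffusing to neighbouring cells at different rates, which is the mechanism by which near-uniform initial conditions can develop structure:

```python
import random

# A toy reaction-diffusion system on a ring of cells: activator a and
# inhibitor b react locally while diffusing at different rates (da, db).
# Explicit Euler stepping; constants are illustrative assumptions.
def simulate(cells=40, steps=200, da=0.02, db=0.5, dt=0.1, seed=1):
    rng = random.Random(seed)
    a = [1 + 0.05 * rng.uniform(-1, 1) for _ in range(cells)]  # activator
    b = [1.0] * cells                                          # inhibitor
    for _ in range(steps):
        def lap(u, i):  # discrete Laplacian on a ring
            return u[i - 1] + u[(i + 1) % cells] - 2 * u[i]
        na = [a[i] + dt * (a[i] ** 2 / b[i] - a[i] + da * lap(a, i))
              for i in range(cells)]
        nb = [b[i] + dt * (a[i] ** 2 - b[i] + db * lap(b, i))
              for i in range(cells)]
        a, b = na, nb
    return a

field = simulate()
# The activator field is typically no longer uniform: the initial random
# fluctuations have been amplified into spatial structure.
```
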

As we see, Turing’s contributions to science covered a wide range, from the foundations of mathematics to cryptanalysis, computers, artificial intelligence and morphogenesis, contributions that always had deep philosophical implications.

Interdisciplinarity, the collaboration between specialists in different fields with the aim of understanding nature better, is in my opinion an emergent practice which continues to grow during the XXIst century, to the extent that it constitutes one of its main characteristics.

Now, let us go back to the question of the “Person of the Century,” and to the two finalists who, for

Among the arguments used by

Unlike Gandhi with his movement of civil disobedience, Turing did not publicly oppose those who suppressed an individual right, the freedom of sexual expression. He did not openly fight those who supported the penalization of homosexuality, and who consequently penalized him; rather, he reacted in another, more dramatic way: by committing suicide. After having accepted treatment with female hormones (chemical castration) as an alternative to prison, he ate an apple injected with cyanide.

Summing up the merits that the

“As the century’s greatest thinker, as an immigrant who fled from oppression to freedom, as a political idealist, he best embodies what historians will regard as significant about the 20^{th} century. And as a philosopher with faith both in science and in the beauty of God’s handiwork, he personifies the legacy that has been bequeathed to the next century.

In a hundred years, as we turn to another century – nay, ten times a hundred years, when we turn to another new millennium – the name that will prove most enduring from our amazing era will be that of Albert Einstein: genius, political refugee, humanitarian, locksmith of the mysteries of the atom and the universe.”

With the exception, if taken literally, of the reference to his supposed “faith in the beauty of God’s handiwork,” such a summary is an accurate expression of Einstein’s achievements. However, it would also have been possible to write a moving and precise characterization of Turing’s achievements, as follows:

“As a humble and most creative man, who travelled through several of the most basic sciences created by humanity, adding new ideas and perspectives to them, without distinguishing between science on one side and technology on the other, thereby revealing a new way of perceiving Nature, Turing personifies the legacy that has been bequeathed to the next century.

As an individual who fought with the best of his abilities for the freedom of his country and the world at a time when freedom was in serious danger, although that freedom was not afforded him until later, the name of Alan Turing will be remembered when the memory of his time will perhaps be but an obscure shadow.”

This article is an expanded version of the lecture I delivered at the International Symposium

Hilbert’s lecture was of course delivered in German (“Mathematische Probleme”). This is how Hilbert enunciated problem number 2: “When we are engaged in investigating the foundations of a science, we must set up a system of axioms which contains an exact and complete description of the relations subsisting between the elementary ideas of that science. The axioms so set up are at the same time the definitions of those elementary ideas; and no statement within the realm of the science whose foundation we are testing is held to be correct unless it can be derived from those axioms by means of a finite number of logical steps. Upon closer consideration the question arises:

Turing attended Newman’s lectures on logic in Cambridge. For more information about Turing’s life and career, see Newman,

My presentation of this important chapter in the history of logic is necessarily too brief. More information, coming from another of the protagonists, is contained in (Kleene,

See Turing,

Both these lectures and their Postscriptum were published in Davis (

Reproduced in Copeland (

Turing’s thesis was published with the same title in Turing (

Solomon Feferman, “Turing’s thesis,” in

For more first-hand information about Bletchley Park, see Hilton (

Quoted in Turing (

Reproduced in Hilton (

It would be interesting, though rather long, to explore the similarities, analogies and relations between Turing’s and von Neumann’s work. As mentioned before, Turing met von Neumann in Princeton, and the Hungarian mathematician thought sufficiently highly of Turing’s talents to offer him a position as his assistant at the Institute for Advanced Study, which Turing refused, deciding to return to England. For von Neumann’s opinions of Turing’s 1937 work, see, for instance what he wrote about Turing’s theory of computing automata in von Neumann (

See Searle (

Actually, the title Turing gave to this essay in his typescript was “Digital computers applied to games.”

Reproduced in AlanTuring.net (“The Turing Archive for the History of Computing”).

Reproduced in AlanTuring.net (“The Turing Archive for the History of Computing”).

I have not dealt with another field which Turing influenced: linguistics. As a mere example of such influence, I will reproduce part of the abstract of an article by Noam Chomsky, the great linguist: “A grammar can be regarded as a device that enumerates the sentences of a language. We study a sequence of restrictions that limit grammars first to Turing machines, then to two types of system from which a phrase structure description of the generated language can be drawn, and finally to finite state Markov sources (finite automata).” See Chomsky (

I have developed this thesis in Sánchez Ron (

There are people who think that Turing did not commit suicide, but that he ate an apple which had been accidentally contaminated.