Copyright © 1995 Jeff Makey. All rights reserved.

Foreword by the Author

I originally wrote this paper in 1981 for a course in writing research papers at Rose-Hulman Institute of Technology. It was written on a DEC PDP-11/70 computer using the RUNOFF text formatting program, and having it on line from the beginning made it easy to save an electronic copy for future use. The instructor, Dr. Peter Parshall (of "Peter Parshall picked apart my perfect paper" fame), awarded the grade of A- to my work.

In 1995, with the World Wide Web available as a means of publication, I retrieved the original document from my archives and converted it to the HTML format seen here. Other than format conversions and the deletion of the bibliography (which the Notes section renders superfluous), the paper is exactly as I wrote it then. (Well, I also fixed a couple of spelling errors and added a missing word. These modifications are identified in the HTML source.) I am both gratified and disappointed that the conclusions I drew then are still valid.

Jeff Makey <jeff@sdsc.edu>
12 March 1995


Gödel's Incompleteness Theorem is Not an Obstacle to Artificial Intelligence

Artificial Intelligence. The idea of men building a machine which is capable of thinking, originating ideas, and responding to external stimuli in the same manner as a man might is fascinating to some people -- frightening to others. Whether or not artificial intelligence (or AI) is possible has been the subject of debate for quite some time now. As early as 1842, a memoir by Lady Ada Lovelace read: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."[1] This attitude -- that machines cannot "think" for themselves -- has been widespread since then, and many arguments to this effect have been presented. Two of the more common of these arguments run along the lines of: "thinking is a function of man's immortal soul;"[2] or that computers cannot really experience emotions and hence cannot be like people.[3] Both of these arguments are as unsubstantiated as they appear. What is, perhaps, the most convincing of any of the arguments against AI is based upon Kurt Gödel's Incompleteness Theorem, which says that a "sufficiently powerful" formal system cannot consistently produce certain theorems which are isomorphic to true statements of number theory.[4] The implication of this Theorem, according to J. R. Lucas, is "that minds cannot be explained as machines,"[5] that machines are inherently inferior. The purpose of this paper is to show that, in fact, Gödel's Theorem has little if any impact upon the prospect of producing artificial intelligence comparable to man's. In this paper the terms "machine", "artificial intelligence machine" (or "AI machine"), "computer", "computer program", and "electronic digital computer" will all be taken to mean the same thing. This is because the only means by which anyone today realistically expects to produce artificial intelligence is by programming a digital electronic computer. Some familiarity with modern computers is expected of the reader.

In 1931 an Austrian mathematician named Kurt Gödel published a paper which included a theorem that was to become known as his Incompleteness Theorem. This theorem states:

To every ω-consistent recursive class κ of formulae there correspond recursive class-signs r, such that neither v Gen r nor Neg (v Gen r) belongs to Flg(κ) (where v is the free variable of r).[6]
In more common mathematical terms, this means that "all consistent axiomatic formulations of number theory include undecidable propositions."[7] One more time: any consistent formal system which is capable of producing simple arithmetic is incomplete in that there are true statements of number theory which can be expressed in the notation of the system, but which are not theorems of the system. Gödel's Theorem uses several terms, the implications of which must be examined in determining its applicability to AI. These terms are: formal system, consistency, completeness, and theorem.
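
Before examining those terms it may help to restate the theorem symbolically. The following is a common modern paraphrase (it is not Gödel's own notation, and the precise side conditions vary among textbooks):

    If $T$ is a consistent, recursively axiomatizable formal system that
    is capable of expressing elementary arithmetic, then there is a
    sentence $G_T$ in the notation of $T$ such that
    $$T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T,$$
    i.e., neither $G_T$ nor its negation is a theorem of $T$.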

A formal system is defined by a finite set of meaningless symbols and a few very rigid rules of inference. A group of symbols arranged in a particular way is called a string, and there are a fixed number of strings which are called axioms (or postulates). The rules of inference provide a means of producing theorems from the axiom(s) by telling exactly how to manipulate symbols in a string to produce a new string. A theorem is any string which can be derived from the axiom(s) by applying zero or more of the rules of inference in succession to the axiom(s).

A formal system is a concept just like a number is. In the same manner that numerals are used to represent numbers, a theorem of a formal system is represented by a group of symbols called a string.

A very simple example of a formal system (called SFS, for Simple Formal System) uses only the symbol "X". Note that the quotes are not part of the symbol, but only delimit it. There is one axiom for SFS; it is the string "X". There are two rules of inference in SFS:

  1. Given an SFS string, any single instance of the substring "X" may be replaced by the substring "XX".

  2. Given an SFS string, any single instance of the substring "XX" may be replaced by the substring "X".

Thus, given the axiom "X" only the first rule may be applied: replacing the X in the string with XX gives the result "XX". Either of the two rules may be applied to this new theorem; rule 1 produces "XXX" and rule 2 leaves "X", which is already known to be a theorem. SFS is trivial enough that the following observations about it are quite obvious:

  1. Starting with the axiom, repeated application of rule 1 will result in new, unique theorems being formed while rule 2 can only produce old theorems.
  2. There are an infinite number of theorems in SFS.
  3. By repeated application of one of the two rules of inference, any theorem can be derived from any other theorem.
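
These rules are simple enough to mechanize. As a minimal sketch (the Python function names below are mine, purely for illustration, and form no part of the paper's argument), the derivations described above look like this:

    # Rule 1: replace the single "X" at position i with "XX".
    def apply_rule1(s, i):
        assert s[i] == "X"
        return s[:i] + "XX" + s[i+1:]

    # Rule 2: replace the "XX" that starts at position i with "X".
    def apply_rule2(s, i):
        assert s[i:i+2] == "XX"
        return s[:i] + "X" + s[i+2:]

    AXIOM = "X"

    t1 = apply_rule1(AXIOM, 0)   # "XX"  -- the first new theorem
    t2 = apply_rule1(t1, 0)      # "XXX" -- rule 1 keeps yielding new theorems
    t3 = apply_rule2(t1, 0)      # "X"   -- rule 2 only recovers an old theorem
    print(t1, t2, t3)            # prints: XX XXX X

Repeated calls to apply_rule1 generate "X", "XX", "XXX", and so on without end, which is exactly observations 1 and 2.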

So far, there has been no mention of any meaning for the theorems of SFS. This is because SFS theorems have no meaning; they are merely strings of X's, such as "XXXX" or "XXXXXXXXXXXXXXXX". Nevertheless we, as intelligent beings, could give meaning to SFS theorems. For example, if we count the number of X's in a theorem (the only distinguishing feature between them) we could say, for instance, that the theorem "XXXXX" represents (or means) the number five. If we wish to use this interpretation of SFS then it would be very reasonable to call the first rule of inference the Increment Rule and rule 2 the Decrement Rule.
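
Continuing the sketch above (and reusing its two rule functions), this interpretation amounts to counting the X's, and the two rules really do increment and decrement the number a theorem represents:

    # Under our interpretation, a theorem means the number of X's in it.
    def interpret(theorem):
        return theorem.count("X")

    assert interpret("XXXXX") == 5                  # "XXXXX" means five
    assert interpret(apply_rule1("XXXXX", 0)) == 6  # the Increment Rule
    assert interpret(apply_rule2("XXXXX", 0)) == 4  # the Decrement Rule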

SFS is so simple that it does not meet the "sufficiently powerful" clause of Gödel's Theorem. SFS only produces numbers -- it does not make any assertions, such as "3 + 4 = 7." However, it is possible to construct a formal system which has theorems which, for example, can be interpreted as "there are infinitely many prime numbers."[8]

Gödel proved his Incompleteness Theorem in a rather bizarre but effective manner. He said that, given a formal system which can produce statements of number theory, there is a string (written in the notation of the system in question) which is not a theorem of the formal system, but it can be seen that the interpretation of that string is a true statement. This string (called G[9]) is interpreted as "this string is not a theorem of formal system X," where X represents the formal system in question. If X is a consistent formal system then it cannot produce G as a theorem, for if it were to do so then X would contradict itself by saying that G is a theorem (i.e., true) while G, which has been asserted to be true, says that G is not a theorem (i.e., false). This is what is meant by consistency in a formal system: "every theorem [of the formal system], upon interpretation, comes out true (in some imaginable world)."[10]
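
The case analysis in this paragraph can be written out mechanically. The toy Python fragment below is merely an illustration of that reasoning, not a proof, and is no part of Gödel's actual construction:

    # G is interpreted as "G is not a theorem of formal system X".
    def examine(G_is_theorem):
        what_G_says = not G_is_theorem  # G asserts its own non-theoremhood
        if G_is_theorem and not what_G_says:
            return "contradiction: X derives G, yet G is then false"
        return "no contradiction: G is true but is not a theorem of X"

    print(examine(True))   # the case a consistent X cannot allow
    print(examine(False))  # the case that must therefore hold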

In his article "Minds, Machines and Gödel," J. R. Lucas explains quite thoroughly why he believes that Gödel's Theorem is evidence "that there is some elusive and ineffable quality to human intelligence" which no machine can duplicate.[11] He sums up his own arguments as follows:

However complicated a machine we construct, it will, if it is a machine, correspond to a formal system, which in turn will be liable to the Gödel procedure for finding a formula unprovable-in-that-system. This formula the machine will be unable to produce as true, although a mind can see that it is true. And so the machine will not be an adequate model of the mind.[12]

What he means is that Gödel's Theorem applies to any machine that can be built; humans, from the outside of the formal system of the computer, "can concoct a certain statement of number theory which is true, but the computer is blind to the statement's truth."[13] Hence, a man knows something (statement G) which the computer cannot know and the conclusion is that a computer cannot be as intelligent as a man, even in principle.

Lucas has erred in his ideas about the limits (or the lack thereof) of a mind and the implications of this toward AI. He infers from Gödel's Theorem that the statement "G is not a theorem of formal system X" is true and so there is something that a mind can prove which cannot be proved by formal system X. He then goes on to conclude that -- for this aforementioned reason -- minds and machines cannot be the same.[14] No objection to this will be made, but this conclusion of Lucas' comes as no surprise as not even two minds should be expected to be the same. Lucas oversteps his bounds when he goes on to defy Gödel's Theorem by saying that minds are complete[15] -- that there is nothing minds cannot (as opposed to do not) know. He argues that "it is inherent in our idea of a conscious mind that it can reflect upon itself and criticize its own performances, and no extra part is required to do this."[16] In contrast, "a machine [which] can be made in a manner of speaking to 'consider' its own performance . . . cannot take this 'into account' without thereby becoming a different machine, namely the old machine with a new part added."[17] Lucas does not say so outright, but the implication is strong from this that Gödel's Theorem does not apply to minds, that there is "some elusive and ineffable quality to human intelligence, which makes it unattainable by . . . [machines]."[18]

In a moment we shall see that Gödel's Theorem is, indeed, applicable to a mind, but before proceeding further the meanings of the terms "true" and "false" must be considered. Consistency demands that the concepts of "true" and "false" be mutually exclusive -- something cannot be both true and false at the same time. Now, a mind may, by itself, come to some particular conclusion by following any line of reasoning it chooses: formal or informal, consistent or inconsistent, logical or illogical. However, in order for a mind to convince a different mind to come to the same conclusion it must: 1) convince the target mind to follow this same line of reasoning; 2) convince the target mind that this line of reasoning makes sense and is "correct;" and 3) convince the target mind that this line of reasoning actually does lead to the desired conclusion. Fortunately there is a method of reasoning -- called logic -- which is almost universally accepted by humans as "the way to prove something." Logic is a highly formalized, necessarily consistent method of reasoning, so it is no surprise that it can be likened to a formal system. This formal system, whose rules effect the small "obvious" steps between ideas, will be referred to as the "Human Logic System" or HLS.[19] It so happens that anything which can be derived within HLS is called "true" by human minds. Thus, the negation of an HLS theorem is called "false." It is generally held, although Gödel's Second Theorem[20] says it cannot be proved, that HLS is consistent. To assume otherwise would lay open to question the truth of every idea that man has ever asserted.

When Lucas says, "'G is not a theorem of formal system X' is true," it is apparent that his statement is no more than a theorem of HLS. In the same light, Gödel's Incompleteness Theorem is a theorem of HLS, so it is called true. Let us apply Gödel's Theorem to HLS by considering the statement: "G is an undecidable proposition of HLS." This can be restated as: "G is neither a theorem nor the negation of a theorem of HLS." The meaning of G is "this statement is not a theorem of HLS," but the undecidability is driven home when G is expressed as:

"This statement is false."[21]
This version of G is, quite simply, undecidable in HLS (try it!). Thus it is seen that Gödel's Incompleteness Theorem applies directly to logical human thought processes as well as to abstract formal systems so that, in this sense, a mind cannot claim superiority over a machine.
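
The "try it!" can even be done mechanically. This little Python loop (again, merely an illustration) tries both possible truth values for the statement:

    # "This statement is false." -- try both truth values.
    for value in (True, False):
        claim = not value  # the statement asserts its own falsity
        print(value, "consistent" if claim == value else "inconsistent")
    # Both cases print "inconsistent": neither truth value fits,
    # which is the undecidability described above.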

Thus, if our minds act like consistent formal systems -- and we have seen evidence that this is true -- then minds cannot claim superiority (by virtue of Gödel's Theorem) over machines since the machine can use virtually the same argument to claim its superiority over the mind. What all this boils down to is that Gödel's Incompleteness Theorem applies equally to machines and minds. There is a statement G which, when suitably represented, cannot be produced as a theorem of any mechanical formal system, nor by a mind. Hence there is no valid reason to use Gödel's Theorem to discredit the possibility of creating artificial intelligence.

One way to circumvent Gödel's Theorem is by creating an inconsistent system. Lucas claims that an inconsistent mind would "remain content with . . . [its] inconsistencies and . . . happily affirm both halves of a contradiction."[22] Nevertheless, while a mind can be shown to exhibit behavior like that of a consistent formal system, it can also be shown to act inconsistently. Consider a variation on the statement G: "Lucas cannot consistently determine that this statement is true."[23] Any person other than Lucas can, without being inconsistent, determine that the statement is true, but Lucas cannot, for if he says that it is true then he does, indeed, contradict himself. What might seem amazing is that Lucas, after thinking for a moment or two, will announce that this version of G is true. Now, if Lucas could be represented by a consistent formal system, then Gödel's Theorem says that he would be unable to determine the truth of this G. Therefore we must conclude that Lucas cannot be represented by a consistent formal system. This statement can be generalized by noting that anyone could be substituted for Lucas in the above example and so we must conclude that human minds, in general, cannot be represented by consistent formal systems.

Just a moment ago we noted that minds act like consistent formal systems and now we are saying that minds are inconsistent (now that's inconsistent, isn't it?). Most people act rationally most of the time (for the purposes of this paper, the terms rational and consistent are synonymous) so it appears, in spite of Lucas,[24] that a mind is capable of maintaining a good deal of control over its inconsistencies.

The mind can be thought of as a consistent formal system which is capable of entertaining inconsistent ideas. This capability is absolutely necessary if the mind is to produce indirect proofs, which use a hypothesis (assumed to be false) to derive a statement that is inconsistent with known facts, thus proving that the opposite of the hypothesis is true. Such proofs rely on the mind's ability to create and then detect inconsistency.
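
For example, the classic indirect proof that there are infinitely many prime numbers (the very statement mentioned earlier as expressible in a formal system) has exactly this shape; in LaTeX notation:

    \textbf{Claim.} There are infinitely many primes.
    \textbf{Proof (indirect).} Suppose not, so that $p_1, p_2, \ldots, p_n$
    is a complete list of the primes, and let $N = p_1 p_2 \cdots p_n + 1$.
    Each $p_i$ leaves a remainder of $1$ when divided into $N$, so no $p_i$
    divides $N$; hence $N$ has a prime factor missing from the list. This
    is inconsistent with the hypothesis, so the hypothesis is false and
    the claim is true. $\blacksquare$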

We have seen that minds can behave both as consistent formal systems and as inconsistent systems. Therefore, if we are to build a machine which can be compared fairly with a human mind it is only logical to expect the machine to behave both consistently and inconsistently as well. Inconsistency is a sufficient attribute of a system to make Gödel's Theorem inapplicable and, contrary to Lucas,[25] an inconsistent system need not always behave inconsistently, as evidenced by the vast number of rational people in this world.

There are many hurdles to be overcome before Man creates anything worthy of the name "Artificial Intelligence." Pattern recognition is probably the most difficult problem to solve: when are things the same and when are they different?[26] Yet, progress in AI is hampered by a lack of understanding of how the human mind functions rather than by some theoretical brick wall. Artificial intelligence is not a reality . . . yet.

NOTES

[1] Ada Lovelace, Notes by the Translator of "Sketch of the Analytical Engine Invented by Charles Babbage," in Charles Babbage and His Calculating Engines, ed. Philip Morrison and Emily Morrison (New York: Dover Publications, 1961), p. 284.

[2] A. M. Turing, "Computing Machinery and Intelligence," in Minds and Machines, ed. Alan R. Anderson (Englewood Cliffs, NJ: Prentice Hall, 1964), p. 14.

[3] Turing, p. 17.

[4] Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Random House, 1980), p. 101.

[5] J. R. Lucas, "Minds, Machines and Gödel," in Minds and Machines, ed. Alan R. Anderson (Englewood Cliffs, NJ: Prentice Hall, 1964), p. 14.

[6] Hofstadter, p. 17.

[7] Hofstadter, p. 17.

[8] Hofstadter, p. 204.

[9] Hofstadter, p. 18.

[10] Hofstadter, p. 101.

[11] Hofstadter, p. 471.

[12] Lucas, p. 48.

[13] Hofstadter, p. 472.

[14] Lucas, p. 48.

[15] Lucas, p. 57.

[16] Lucas, p. 57.

[17] Lucas, p. 57.

[18] Hofstadter, p. 471.

[19] The idea of HLS was developed by the author and Daniel C. Day in private conversation.

[20] Hofstadter, p. 450.

[21] Hofstadter, p. 17.

[22] Lucas, p. 53.

[23] Hofstadter, p. 477.

[24] Lucas, p. 53.

[25] Lucas, p. 53.

[26] Hofstadter, p. 148.