June 28, 2024
What do we mean by “genius”? A commonsense definition might be, “mental ability beyond the ordinary, a higher mode of mental functioning.” But much confusion seems to result when we try to define what that higher functioning might be. The author of this discussion is not a genius, but I hope there will be some value in simply trying to clarify the issue. My mentor, Northrop Frye, was generally regarded as an authentic genius, and I have always thought that part—not all—of his genius was simple clear-sightedness. He was remarkably free of neuroses, ideological obsessions, egocentric ambitions, and specialist narrowness. Often, his writing produces clarity where there had been a muddle, like a telescope brought into sharp focus. That is, in itself, one kind of genius. All my life, I have thought that, although I am not capable of Frye’s originality, at least I might aspire to his sanity. It is his freedom from the usual baggage that makes him such a pleasure to read.
It is interesting, and a possible clue to what we talk about when we talk about genius, that the word originally meant not a personality trait but an indwelling spirit. The Romans took over Greek mythology, with its system of projected supernatural deities, but imposed it atop a native religion of immanent powers. These powers were inherent in everything: we still use the term genius loci, the spirit of a place. It seems to have been a form of pantheism in which everything had its genius, and was thus sacralized. St. Augustine mocks this form of pagan “idolatry” in The City of God: imagine a people ignorant enough to worship hundreds of gods! But that is just ideological polemics. Genius considered as an indwelling spirit is related to the Greek idea of a daimon, the most famous of which belonged to no less than Socrates, who said that an internal voice spoke to him, giving him advice. We still recognize a distinction between someone’s genius and their ordinary ego personality. Einstein’s genius boldly went where no one had gone before, but Einstein’s ordinary self was lucky to match his socks. A more extreme version of this split appears in savants, people who are mentally impaired but capable of startling feats of memory and calculation. They were called “idiot savants” back when “idiot” was actually a psychiatric term for low intelligence, but in fact they are more likely to be neurodivergent. The number of films about such people—Rain Man, Forrest Gump, A Beautiful Mind—indicates how fascinated we are by the conjunction of disability and genius.
I find myself wondering about a type of savant-style genius, or at least talent, other than the rabbit-out-of-the-hat type, the type that can reel off 100 prime numbers before breakfast. There is perhaps also the type that is 99% perspiration, so to speak. A trait of some savants is the ability to focus intensely, obsessively, incessantly on some mental task. I am told this is true of some forms of ADHD, and that makes me wonder a little about myself. Editing Northrop Frye’s notebooks, as Robert Denham and I did for 15 years—15 years!—involved sitting for hundreds of hours, first transcribing hard-to-read handwriting into a computer, then editing volumes that required an average of 1,000 endnotes per volume. What kind of maniac does this, and not even for money? You could say that this is the opposite of genius, just pure plodding. But in The Psychology of Science, Maslow concedes that such types are perhaps as necessary to the scientific enterprise as the self-actualizing geniuses with their epiphanies. Someone has to count the number of blood cells in the sample. Yes, computers, and now AI, can take some of these tasks over. I remember my awe, back in grad school in my research methods class, at the editors of Shakespeare who worked before computers, sometimes inventing their own mechanical devices, such as the Hinman collator, to compare dozens of textual versions, looking for variants. And behind them are all those monks who sat transcribing the Bible and other texts by hand. In physical anthropology and paleontology there are the people who reconstruct skeletons by gluing tiny slivers back together, piece by piece. The point is not whether such activities can be automated. The question is whether the capacity for intense focus is not at least a component of genius, or one kind of genius. Frye himself manifested this trait. He sometimes said that all his work depended on moments of insight that took up perhaps five minutes of his life—but the rest of his life consisted of sitting and writing: 25 volumes of published work, 4,000 pages of unpublished notebooks.
Modern notions of genius date from the 18th and early 19th centuries, from the Enlightenment and Romantic periods, the twin and contrasting movements that created the world we live in. We have not perhaps thought enough about the fact that our inheritance is twofold, from the Age of Reason and the age of imagination, riven by a certain tension. These eras gave us two quite different definitions of intelligence, which, oversimplifying for convenience, we may call the analytical and the associative. The first thing to say about them is that they really can’t be thought about apart from each other—each is “always already,” as the deconstructionists would say, latent in the other. As they would probably not say, that does not mean that the distinction is an illusion, merely that it is a paradoxical identity-in-difference, like the Taoist yin and yang. Still, the two types are contradictory, and attempts to define intelligence may run aground at this point.
Deeply afraid, and for good reason, of the irrationalism that ravaged the 17th century with unending religious wars, the Enlightenment defined intelligence as rational discourse. Descartes said that philosophy had to be founded upon “clear and distinct ideas.” In practice, “clear and distinct” means a generalization, abstracted from the particulars that would distract from its clarity. Locke’s philosophy pursues “primary qualities,” which are abstractions like size, shape, and motion, by subtracting the particular “secondary qualities” that make something the individual object that it is. Blake’s response to this epitomizes the difference between Enlightenment and Romantic modes of thought: “To generalize is to be an idiot.” But discursive thought moves towards generality and abstraction, from the thing itself to ideas about the thing, thus reversing Wallace Stevens’ famous phrase.
The quintessential form of this in literature was the heroic couplet of Alexander Pope, stating aphoristic generalities, “What oft was thought but ne’er so well expressed,” in balanced and symmetrical form. In painting, Joshua Reynolds advocated a “generalizing” style that attempted to idealize a painting’s subject. Reynolds was content with merely glamorizing his portraits, like someone doctoring their Facebook profile picture. But in the 20th and 21st centuries, abstraction has proceeded much further, in both popular and high-culture visual art. In Understanding Comics, Scott McCloud shows how, although some famous comics, such as Hal Foster’s Prince Valiant, are rendered in realistic detail, most comic art is stylized through abstraction. As an example, he shows half a dozen versions of a face, beginning with photographic realism and ending with a few iconographic lines—a smiley button of a circle, two dots, and a curve for a smile. Contemporary painting and sculpture have become for the most part completely abstract, and much modern art is conceptual, to be appreciated by the critical understanding more than by the senses. The viewer has to be, essentially, an art critic. Whether the rejection of representation is a good thing depends largely on ideology. Liking a picture of a cow because it looks like a cow is sometimes dismissed as Philistine vulgarity. I just read an attack on Donald Trump for saying that Renoir was his favorite painter. To be fair, I’m surprised he’s even heard of Renoir.
At the far extreme, rational thought’s tendency to abstraction verges upon the clear and distinct ideas of mathematics, which is regarded as the ultimate language of truth by philosophers as far back as Plato. The popular conception of science is the empirical gathering of facts, but Newtonian physics, while mechanistic and, to be sure, backed up by the empirical observations of astronomers such as Galileo, is dependent on the calculus invented independently by Newton and Leibniz. Yes, the new universe is one of material balls revolving in space, but what was really important for the Age of Reason was not the matter but the set of laws that governed its behavior. Genius—and Newton was the iconic genius of his age as Einstein is of ours—is the discovery of rational laws that order the cosmos, and the discovery of those laws is humanity’s godlike faculty: “God said, ‘Let Newton be!’ and all was light!” as Pope put it. The beginnings of what are now the social sciences lie in the Enlightenment attempt to discover rational laws governing human psychology and society. Despite the lies of Christian nationalism, which revives the theocratic delusions of the 17th century, the Founding Fathers were mostly Deists who believed in a Christianity purified to rational principles. When Thomas Jefferson says, “We hold these truths to be self-evident,” he is alluding to the belief that there are natural laws of divine origin that govern human nature, laws that are self-evident to reason. Civilization is “reasonable,” founded upon such rational principles, and the task of civilization is to spread the knowledge of such principles to benighted “savages” in other cultures, who are governed by “superstition,” meaning irrational religious beliefs. Church and state must be separated so that no group’s irrational beliefs may dominate society. This implies that superstitious irrationality is not just characteristic of foreign cultures. Latent within each of us is a superstitious savage, who has to be kept restrained. The Enlightenment feared the “enthusiasm” of irrational religion, but it also harbored a deep fear of the imagination as delusional and regressive. Samuel Johnson’s satire Rasselas, portraying imagination as the source of all the delusions that make the human race unhappy, was, as we know from Boswell’s biography of him, personally motivated by his own fear of madness.
The privileging of reason leads to a fascination with logic and logical relations, thence to symbolic logic and the idea of computer languages. Logic is grounded in the Aristotelian law of non-contradiction, the principle of difference: A is not B. It is thus based on difference, not the association or identity that are the principles governing intuition and imagination. A genius of the early 19th century, Charles Babbage (1791-1871), conceived of mechanical computation, first in a calculating machine he called the “difference engine” and then in his programmable Analytical Engine. Computer language is based on the differential language of 1s and 0s. Patterns of those differences are elaborated according to logical rules, programming logic. Computer programs have syntax, comparable to the syntax of language: a set of rules putting various particulars into patterns of logical relation. English syntax is based on a default pattern of SVO: a subject followed by a verb, leading towards an object or complement. More complicated sentences are created by attaching clauses and phrases, which are units of words that have relational patterns within themselves but also attach to the main sentence in a way that signals various relationships, such as dependency, qualification, negation, and the like. Syntactic structures are a set of rules for words; computer languages are a set of rules for numbers and symbols. Programming such a machine was partly the work of another genius, Ada Lovelace (1815-1852), who wrote what is often called the first computer program, for Babbage’s Analytical Engine, and who was, startlingly enough, the daughter of the poet Byron.
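For readers who like to see the analogy made concrete, here is a minimal sketch in Python, my own toy illustration rather than any real linguistic model or programming language: a miniature grammar in which a single rule, subject-verb-object, assembles particular words into a sentence, just as a program assembles symbols according to rules. The grammar, word lists, and function name are invented for the example.

```python
import random

# A toy grammar: each rule names either the pattern it builds or the
# particular words that may fill a slot. "S" (sentence) is the default
# English subject-verb-object pattern described above.
GRAMMAR = {
    "S":       ["SUBJECT", "VERB", "OBJECT"],
    "SUBJECT": ["the critic", "the engineer", "the poet"],
    "VERB":    ["reads", "questions", "revises"],
    "OBJECT":  ["the notebook", "the equation", "the sonnet"],
}

def expand(symbol):
    """Apply the rule for a symbol: expand sub-patterns, or pick a word."""
    parts = GRAMMAR[symbol]
    if all(part in GRAMMAR for part in parts):   # a pattern made of further rules
        return " ".join(expand(part) for part in parts)
    return random.choice(parts)                  # a particular word or phrase

print(expand("S"))  # e.g. "the poet questions the equation"
```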
Some intellectuals speak rather condescendingly of the Enlightenment, and it is true that the age grew overconfident of its own “reasonableness.” But in what may seem to be an age of unreason, we are still in the grip of the Enlightenment preoccupation with rules and laws that govern all things. Human intelligence, genius or otherwise, is more than ever understood according to a computer model. There is a strong tendency to define the human mind as a self-programming computer, which accounts for the popularity, reaching almost to minor cult status, of Douglas Hofstadter’s Gödel, Escher, Bach (1979). The book is an inquiry into the affinities between mathematics as a structure of rules (as in Kurt Gödel’s incompleteness theorems, and also the work of the logician Willard Van Orman Quine); the rules of computer languages; and the rules governing certain types of art that are dominated by “recursiveness,” a quality of self-referentiality, showing up as “strange loops” that turn back on themselves, analyze themselves, and revise themselves. A Bach fugue starts with an innocuous theme, which is then developed through various permutations, including canon (like “Row, row, row your boat”); crab canon, in which the theme occurs backward; and so on. If we are basically computational machines, as argued in the work of Daniel Dennett, the philosopher of consciousness who recently died, then we cannot be conscious in the old sense of possessing some intangible essence that thinks and therefore is. Human beings differ from animals in being not just conscious but self-conscious, and self-referential “strange loops” seem to resemble that self-consciousness, so much so that Hofstadter published a sequel titled I Am a Strange Loop (2007).
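Because “recursiveness” and “strange loops” carry so much weight in this argument, a small illustration may help. The sketch below, in Python, is my own, not anything from Hofstadter’s book: it shows the crab-canon device (a theme set against its own retrograde) and a function that, while running, retrieves its own source code, a self-reference in miniature.

```python
import inspect

def crab_canon(theme):
    """Pair each note of a theme with the corresponding note of the theme
    played backward -- the crab-canon device mentioned above."""
    return list(zip(theme, reversed(theme)))

def strange_loop():
    """Return the source code of this very function: a tiny loop in which
    the running program refers back to its own text."""
    return inspect.getsource(strange_loop)

print(crab_canon(["C", "D", "E", "F"]))
# [('C', 'F'), ('D', 'E'), ('E', 'D'), ('F', 'C')]
print(strange_loop())  # prints the definition of strange_loop itself
```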
That is to say, consciousness is just another word for a certain type of programming. We keep asking whether anyone has produced true artificial intelligence yet, but what we should do is look in the mirror. We have met AI, and it is us. That is, if you accept the computational premise, which, as we shall see, not everyone does. All machine intelligence can do, from what I can see as very much a layperson in this department, is exceed human capacity in three ways: (1) how much knowledge it is capable of bringing to bear on a problem, in other words the size of its database; (2) the speed with which it can analyze and process complex operations; (3) its designed-in capability for making autonomous conclusions and decisions, which might involve reprogramming itself or even building better versions of itself. Computers have been surpassing human ability in the first two areas for a long time, thereby winning chess games and the like. And the third is a recurrent nightmare in science fiction: Isaac Asimov’s Three Laws of Robotics were designed to limit the autonomy of machines, but the anxiety is evident in the malevolence of HAL in Kubrick’s 2001, not to mention the Terminator movies.
What, then, is genius? Within this perspective, it is the degree to which exceptional human beings approximate machine intelligence, in each of these three ways. First, we admire polymaths as geniuses, people with vast erudition and an enormous memory, perhaps a photographic one. As far back as the early 1940s, Robert Heinlein postulated the social position of “synthesist” in his utopia Beyond This Horizon. To deal with the complexity of modern society is beyond the capacity of mere politicians. A synthesist is an administrator who is a polymath with a photographic memory, capable of mastering modern complexity in its vast detail and synthesizing it into solutions. This is very much in the Enlightenment spirit: it was the 18th century that produced the Encyclopédie as a way of helping to govern a knowledge-based society. In his Foundation trilogy, Asimov imagines an Encyclopedia Galactica in the far future. Babbage was a polymath. Heinlein saw that scientific culture led to specialization, but that such fragmentation of knowledge led to a blind-men-and-the-elephant situation: what is needed is the Big Picture of the synthesist. Second, we admire the lightning calculators, the savants. Third, of course, we admire most of all those capable of making creative leaps beyond the mere repetition of common formulas. Such a capability is the most important criterion of genius, although one that begins to hint at a blind spot in the computer model of consciousness, for it implies a kind of spontaneity and ability to leap beyond the rule-governed that ought to be impossible if we are truly organic machines.
Be that as it may, our culture proliferates with images of genius as basically ratiocination: the genius is the brilliant calculator. Edgar Allan Poe invented detective fiction as a type of fiction whose plot was a puzzle to be solved. Most mystery stories are about a murder, yet the trauma and violence are usually kept offstage, and what predominates is a puzzle-solving game. It is no accident that in “The Philosophy of Composition,” Poe treats poetic composition as a kind of algebra, the opposite of some kind of intuitive inspiration. Poe’s detective, Dupin, is a calculating machine, as have been many of the great detectives, including the greatest, Sherlock Holmes, whose behavior, it is often observed, suggests neurodivergence. So does that of Agatha Christie’s Hercule Poirot. Kenneth Branagh’s recent trilogy of Poirot films emphasizes his obsessiveness about trivia such as the correct preparation of his egg, and attributes it to trauma suffered during World War I. In other words, Branagh’s Poirot reduces his world to one of logical deduction as a way of controlling it.
Modern times have dreamed of a new kind of aristocracy based on superior intelligence rather than birth and lineage. It has been a dangerous dream, especially when it has become involved with efforts at social engineering, as it has been from its inception. The first practical intelligence test, devised by Alfred Binet in 1905, was revised by Lewis Terman in 1916 to become the Stanford-Binet, the IQ test of my youth, and the one that gained me entrance into a “gifted and talented” program from the 4th through the 8th grade: the Canton High Ability Program, or CHAP, in Canton, Ohio. It was pretty simple and provincial, I’m sure, compared to more prestigious elite programs, but it has provided me with a means of understanding the complexity of the moral questions inherent in gifted and talented programs. Entrance into the CHAP program was contingent upon an IQ score of 125 or above, which, according to the scale, falls in the middle of a band labeled Superior, above High Average but below Very Superior; Very Superior begins at 130, the minimum cutoff for membership in Mensa. Our exact scores were withheld from us, to prevent rivalry. By the way, the Stanford-Binet scale stops at 160, which is apparently how that number has gotten the popular reputation of genius level. You may find, all over the Internet, sites confidently speaking of people with IQs over 200, some of them people who lived centuries ago. I think it is a mark of superior intelligence to dismiss these as nonsense.
Though the IQ tests claim to measure half a dozen kinds of ability, those are blended into one final score that seems to me to privilege what we could call Rubik’s cube intelligence, the logical and puzzle-working kind of ability that in fact I lack almost entirely. Nor do I care, because most games bore me. I am not even good at Scrabble, though I might be expected to be. Scrabble is based not on verbal but on strategic intelligence. It is more like chess (at which I am hopeless) than like writing a sonnet (which I can do). I am dreadful at math, and could never have majored in philosophy, because the abstract language of most philosophical works seems either beyond my grasp or irritating in its willful obscurity. Computer programming was never going to be my field, nor could I have become a lawyer or accountant. I must have some kind of left brain (to switch paradigms for a moment), since within criticism I was drawn to literary theory—and yet the kind of philosophy-based theory of the post-structuralist variety usually seems to me a Bermuda Triangle of abstraction. I am a bit puzzled how I got into the program, because I honestly lack the kind of discursive intelligence that our society most values. That is not an expression of regret: I am quite happy with who I am, though it has limited me economically.
My case is only interesting as an example of something about gifted and talented programs in particular and, more generally, about society’s attitude towards intelligence. The CHAP program as I experienced it did not attempt to produce the kind of calculating and logical intelligence needed for STEM subjects. It was not a hothouse program for growing little engineers. It did not coach students to do well on logic tests. It simply provided an enriched environment, promoting creativity more than superior reasoning. And the enriched, creative atmosphere was less the result of pedagogy than of simply gathering together a group of bright and naturally creative children into a single program in which they could stimulate one another for several years. It is group psychology: an environment of lively kindred spirits is going to fertilize any child’s general intelligence and creativity. The second way in which the program benefitted us was by shielding us from the anti-intellectual peer group pressure of an ordinary classroom, especially during that time, when “egghead” was a term of abuse. Years later, a lot of programs like ours were abolished in favor of “mainstreaming,” which meant integrating bright kids into ordinary classrooms so that the average and below average students could benefit from contact with their intelligence. Since we have all spent time in classrooms, we know that the exact opposite really occurs: the bright students are forced to dumb themselves down to the lowest common denominator, or else be punished for standing out. Mainstreaming was simply one more kind of American anti-intellectualism. So I remember my program fondly, and argue for the value of any program set up on a similar basis.
That anti-intellectualism has, if anything, intensified. Donald Trump commonly claims he is a genius, and is presently boasting that he will not bother to prepare for the imminent presidential debate. This is doubtless a lie, but, as Amanda Marcotte notes, it appeals to the MAGA resentment of “elites.” As she says in her Salon.com newsletter, “They’re rarely speaking of the actual elite, the one-percenters who are funding Trump’s campaign. The vitriol is aimed at the educated middle and upper middle class: Lawyers, teachers, scientists, college professors, journalists, and doctors.” The ignoramus they worship has told them he loves the poorly educated, and they have faith that his “genius” will defeat all those educated types they feel look down on them. Among the MAGA leadership, there is a good deal of hypocrisy about this educated “elitism.” While the “send in the clowns” contingent may not be highly educated—Lauren Boebert dropped out of high school and has a GED certificate—the real power players—DeSantis (Yale, Harvard), Cruz (Princeton, Harvard), Abbott (Vanderbilt), Vance (Yale)—have degrees from elite institutions. They are not salt of the earth: they are well-educated foxes trying to get hold of the key to the henhouse.
There is, unfortunately, more to be said about high-ability programs, because in the early 20th century the project of measuring intelligence got mixed up with the eugenics movement. There was a desire to produce a superior breed, Homo superior, in order to govern society as elitist authoritarians since Plato have felt it should be governed: by the best and the brightest. Plato advocated philosopher-kings. Before World War II, even decent people like George Bernard Shaw were drawn to advocate programs to breed the Superman. Shaw’s version was relatively harmless, partly because he could see the ironies. According to a story that is probably apocryphal but still very funny, a famous actress approached Shaw proposing a private breeding program of their own, so that the offspring might possess her beauty and his brains. Shaw is said to have replied, “Alas, my dear, what if it turns out to have my beauty and your brains?” Other versions were not so funny, especially those that led to “scientific racism,” an oxymoron if there ever was one, in which pseudo-science was used to “prove” the inferior intelligence of the “lesser races,” sometimes by devices as crude as a museum display in which the skulls of an ape, an African, and a European were set in a row, supposedly to demonstrate how much more apelike the African skull was. The biologist Stephen Jay Gould documented these practices in The Mismeasure of Man (1981). His target was not just the benighted past but the conservative present: the book’s revised edition takes on Richard J. Herrnstein and Charles Murray, who argued for the genetically inferior intelligence of African Americans in The Bell Curve (1994).
Science fiction became obsessed with mutants, exceptional flukes who had not just superior intellect but various super powers. In Slan (1940) by A.E. Van Vogt (a dreadful yet influential writer), mutants are persecuted by the non-mutant population in a way more familiar to a younger audience through the X-Men comics, where Prof. Xavier keeps a School for Gifted Youngsters that is really a haven for young, ostracized mutants. But evil mutants with a God complex, such as the title character in Olaf Stapledon’s Odd John (1935), are the other side of the story. Another way of producing the genius or superior being takes us back to where we began, where a large part of genius is simply mental clarity. A Polish count named Alfred Korzybski (1879-1950), in a book called Science and Sanity (1933) that was hugely popular for decades, argued that, because language determines how we think, reforming our language into something more logical will help make us all into something at least closer to geniuses. A.E. Van Vogt dramatized this in his Null-A novels, “null-A” standing for “non-Aristotelian.” At the same time, L. Ron Hubbard created two cults, first Dianetics and then Scientology, based on the premise that a clear brain is a superior brain. All this is depressing enough to turn one off the idea of genius altogether.
But there was also a different way of regarding human intelligence, deriving from Romanticism, which said that there was a second form of mental functioning that grasped things according to identity rather than difference. The verbal unit of identity is the metaphor, which says that A is B. Just as analytical thinking has to admit some principle of unity, an element of syntactic relationship, so the Romantic theory of the imagination harbors within itself a principle of difference. In fact, Romanticism produced a new kind of Creation myth that was really a myth of the birth and development of consciousness. A prominent later example of this (despite being flawed by some unfortunate sexism) was Erich Neumann’s The Origins and History of Consciousness (1949), a quasi-Jungian version of a mythical pattern with Romantic origins. In the beginning is the unconscious, in which the opposites are unified, but only in an inferior way, by being undifferentiated, a condition comparable to mythical Chaos. Consciousness arises out of this unconscious primal unity as an awakening sense of differentiated order. The increasing sense of difference and otherness can be seen in child development. An infant is hardly aware at all of a difference between itself and its environment, which is largely the mother. Children are capable of all sorts of magical thinking because they are still partly caught in the imaginary, which is not at all the same as the imaginative. But the unconscious sense of association and identification is not something to be simply outgrown. It remains as a latent faculty in the deeper levels of the mind, but may unpredictably leap into ordinary consciousness as a hunch, intuition, or inspiration. Occasionally, it may erupt more powerfully as what Maslow called a peak experience. As Maslow came to recognize, the natural vocabulary to describe such epiphanies is that of religion, not necessarily in the supernatural sense but in the sense of an immanent power that sacralizes ordinary reality. We are back to the idea of genius as indwelling spirit again.
Nor is the faculty of identification merely what Samuel Johnson and Freud both thought it was, a regressive and infantile tendency that tempts us away from objective reality. That is only true if it remains unintegrated into consciousness. The idea that all resemblance, association, and identity are dangerous illusions is an ideological obsession. Reality is in fact an interplay of identity and difference, and the perception of what unites is as genuine a faculty as that which perceives differences. It may never manifest itself explicitly, yet it informs consciousness from within. We now speak of emotional and social intelligence, which people with highly developed analytical intelligence may not possess. What we may call imaginative intelligence is the basis of emotional and social intelligence because it sees another human being in terms of empathy based on deep identification. Without this sense of identification, other people are just “other,” and the ground is laid for some kind of paranoid conspiracy theory based on perceived threat. The love lyrics of Donne provide an exhaustive treatment of the intuition of the love relationship as a two-in-one, and it is no accident that it is the same poet who says that no man is an island. Beyond falling in love, the way to develop emotional and social intelligence is through liberal education, which is an education of the imagination.
We could say that imaginative intelligence is measurable by the same three criteria as analytical intelligence on a computer model: size of database, speed and complexity of operation, and autonomy or freedom from the predestination of programming. The database of the imagination is what last week we called the Story of All Things, an encyclopedic erudition that builds up a vision of order that encompasses God, the cosmos, and the human mind, what Northrop Frye calls the order of words. The operation of the imagination is the perception of similarity in difference, which is the capacity for metaphor that Aristotle called the basis of genius in a poet, the “multeity-in-unity” that was Coleridge’s definition of beauty. And the autonomy of the imagination is quite simply creativity. Coleridge distinguished Fancy from Imagination in that Fancy just plays with “fixities and definites,” in other words just manipulates formulas, producing novelty but not true creativity. Just when we are totally sick of the clichés of the Petrarchan sonnet and are ready to declare that it is exhausted and no longer a permissible vehicle, Shakespeare opens a sonnet with “The expense of spirit in a waste of shame / Is lust in action,” and, startled, we wake up: wake up to a stunningly unorthodox sequence in which there are two objects of desire, neither of them conventional—a beautiful male youth and a Dark Lady—and we have no idea where all this is going to end up. That is genius, by quite another definition than the computational. Yet it is by no means confined to the arts. The most famous example of creativity in science is Kekulé’s dream, which gave him the secret on which organic chemistry is founded by means of a vision of carbon atoms dancing, not in a line but in a ring. What Kekulé dreamed was a scientific version of a traditional metaphor, that of the cosmic dance, the vision of order visible at night in the revolving stars and heavenly bodies, the basis of Creation. It did not come to him by induction or deduction: it came in a dream, and it came in the form of that kind of unpredictable blessing traditionally called “grace.”
The positivism of the 19th century asserted that, since everything operates according to cause and effect, the universe is deterministic, and there is no free will. In the 20th and 21st centuries, various theories also deny the possibility of freedom. We are determined by ideology, by power systems, by the unconscious, by the prisonhouse of language. Yet this does not square with the facts. Every line of modern thought, if followed far enough, ends in a mysterious gap that abrogates the possibility of a completely closed system. Gödel is germane to Hofstadter’s argument because he showed that Russell and Whitehead’s attempt to prove that mathematics was a complete, closed system of rules was a magnificent failure: any such system, if consistent, contains true statements that cannot be proved within the system itself, and the same would hold for any comparable system whatsoever. Likewise, Heisenberg demonstrated that the system of rules that governs matter comes up against a final uncertainty in quantum mechanics. Where the positivists thought that the universe was a clockwork mechanism whose operation could be predicted to the end of time if one knew the initial position of every molecule, chaos theory shows that, at some point, what is predictable is the unpredictable. Freud said that there was a point at which, no matter how acute the analysis, the dream possessed a kind of umbilical cord that disappeared into mystery. One reaction to all this unpredictability is a philosophy of absurdism born of a kind of despair. But there is a third way, between determinism and absurdism. What is genius? The sudden flash of intellectual illumination, of creative power, of compassionate love, out of the mysterious darkness of our being. The machine that could experience that would truly deserve to be called intelligent, and more. It would deserve to be called kindred.