The pollsters who failed to predict the presidential election results failed because their polls were founded on the premise that people are rational. But the human race is not rational. This is hardly a new observation. In Gulliver’s Travels, published in 1726 after psychotic religious warfare had ravaged Europe for a century, Jonathan Swift portrayed the human race satirically as Yahoos, savage, superstitious, cruel beings whose intelligence only amplifies the effect of their viciousness. They are driven by herd mentality and incapable of civilization, having to be kept under control by the race of sane and rational horses known as the Houyhnhnms. Swift was an Anglican clergyman, Dean of St. Patrick’s in Dublin, and was drawing upon the Christian idea of the “natural man.” Humanity in the state of nature, without nurture, is a savage, a “primitive.”
Shakespeare portrays the natural man as the half-naked, half-crazed Poor Tom of King Lear and as the native islander Caliban in The Tempest. But Shakespeare makes a distinction that Swift does not. Poor Tom and Caliban are not truly evil. The truly evil people—Edmund, Goneril, and Regan in King Lear and Antonio and Sebastian in The Tempest—are well-brought-up aristocrats. Their viciousness is far worse than the savagery of the natural man, all the more so since they lack the liberal excuse that “They came from a deprived background.” As usual, life imitates art. The January 6 insurrectionists and the howling crowds of Trump’s MAGA rallies are the natural man, shouting, as Caliban does when leading an insurrection against Prospero, “Freedom! Heyday!” But the kind of people Trump is putting in power are simply malicious. Trump himself is both at once, which is the source of his unique influence.
All of the crises of modern times, the two world wars and the Cold War, but also the economic crises of the Great Depression and the 2008 meltdown, have been upsurges of irrationality out of the unconscious. Allan Lichtman, whose method of predicting presidential election results had functioned successfully for decades, was utterly wrong this time. Lichtman himself has explained that it is because his method is predicated upon rational voters making rational choices more often than not. This election was different. He was subjected to death threats and other forms of harassment in a way he had never experienced before. We have been inundated by a wave of mass psychosis. The threats against the harmless Lichtman show why I do not excuse Trump voters—no, not even nice old grandma—as poor deluded souls led astray by right-wing media. Those who would not themselves get their hands dirty by making death threats and the like have no excuse for not knowing that their fearless leader encourages such behavior. And most of them do know that hate and violence are in the offing. They say it is necessary to get stuff done. To make an omelette, you have to break eggs, and we are the eggs. That is the deal with the devil, and the devil is already in the process of taking their souls. This is what happened in 1789. The French Revolution began as an uprising catalyzed by terrible social injustice against an out-of-touch ruling class that told them to eat cake. The parallel is exact—it was the price of groceries that prompted Marie Antoinette’s remark. It quickly spun into a nihilistic chaos that shocked Europe.
I want this week to explore the rise and development of science in relation to the irrationality of modern history, inspired, if that is the word, by the appointment of Robert F. Kennedy, Jr. as the new head of Health and Human Services. After Europe had expended itself in a century of religious and civil conflicts, the 18th century saw a backlash in the form of the Enlightenment or Age of Reason. Science grew up amidst the conflicts of the 17th century, and it may be that one of the catalysts of its growth was a recoil from the irrationalism that was causing so much grief. Scientific method was invented by thinkers like Descartes as a way of distinguishing the truth from various forms of delusion and magical thinking. The persecution of Galileo and the burning of Giordano Bruno at the stake for championing the Copernican heliocentric cosmology were really attempts by the Church to assert the supremacy of truth as supernatural revelation against the claims of a new kind of truth, whose authority lay in logical and empirical proof rather than the arbitrary assertions of doctrine. In the 18th century, the power of both religion and the monarchy that backed up religion with temporal force was in the process of being curtailed, and science found a natural ally in the ascendant system of democracy. The foundational idea of the Age of Reason was that humanity began in a state of superstitious savagery but was capable of developing beyond it, that the human race could aspire to a rationality that was in effect utopian—it could remake the world. Science was to become a primary instrument of that aspiration, and in England, in 1660, the Royal Society was founded to promote the advance of science.
Most people tend to think of science as identical to technology, and this is perhaps especially true in America, with its cult of the inventor. The public is in fact normally suspicious of pure rather than applied science, as a project of seeking knowledge for its own sake rather than for what it can do for us. Pure scientific inquiry is easily regarded as a waste of taxpayer dollars. But science is not identical with what it can do. It begins as a mode of inquiry: it is a method based on certain epistemological assumptions. Scientific method accepts the model, first postulated by Descartes, of the subject-object division. Reality is what is perceived by a conscious subject (I think, therefore I am) looking outward upon a world objective to itself. Scientific method means to sharpen that distinction as much as possible, because it is vigilant about “superstition,” which means projection of biases onto the object, distorting the true perception of the object with subjective wishes and fears. The first attempts at comparative mythology date from the 18th century, and most of the theories about mythology assumed that it is a system of superstitious projections. Such a view is still going strong in Sir James Frazer’s The Golden Bough in 1913, which views cultural history as an evolution from magical thinking to religious thinking to a Promised Land of Victorian rationalism.
The outbreak of the First World War the next year, a war so irrational that no one can explain why it was even fought, tended to undermine the faith that every day in every way the human race is growing more rational. Yet even if the triumph of sweetness and light turned out to be not so inevitable as some had optimistically hoped, “higher civilization” was still superior to the state of indigenous cultures that world exploration was encountering all over the globe as imperialism advanced. The natives of non-modernized cultures were often treated as if they were basically children, because children have not yet learned to make a sharp distinction between subject and object, and so typically animate the world by infusing it with their feelings. The child’s world is the world of cartoons, a world in which animals and even inanimate objects may talk and act like fellow beings. According to a prominent theory, “animism” was the most basic form of religion, and “primitive” people are natural animists, living in an anthropomorphic world. It is not that such a theory is entirely wrong. What is wrong is the complacent distinction between naïve native animists and wise European “adults,” the great white fathers who have to rule their children for their own good. By the time Frazer published the definitive edition of The Golden Bough, Freud was hard at work showing that the unconscious of sophisticated, educated Europeans was just as primitive and childish as any natives’. How many of us are incapable of parting with our Barbies, or, in my case, the 50 or so Wild Republic stuffed monkeys collected by my former wife, which have been everywhere (and I mean everywhere) around me for 20 years?
Not all superstitions are warm and fuzzy, of course. The sleep of reason produces monsters, a process Goya shows occurring in a famous print from the end of the 18th century. We abhor the sacrifice of children to Moloch, the burning of innocent women as witches. Human sacrifice is the epitome of human irrationality, the symbol of the deathwish that lies at the heart of it, which is why the central symbol of Christianity is the human sacrifice of the Crucifixion. The accusation, of course, is that it is those others who practice human sacrifice. From crude cartoons and movies about cannibals in “darkest Africa,” with their recipes for missionary stew, to the sophistication of T.S. Eliot’s play The Cocktail Party, whose central character achieves martyrdom by being crucified by natives on, bizarrely, an anthill, the practitioners of human sacrifice are condemned as abhorrent, the ultimate example being the Holocaust. But the attempt to project the guilt onto others is a blind. One of Blake’s profoundest insights was that Deism, the attempt to found religion on the supposed reasonableness of the natural man, will always fail. When the natural man drops the pretense of reasonableness, Deism turns into what Blake called Druidism, in which sacrifice becomes an intoxicating festival of power-lust, violence, and hate. Blake saw this in the European wars of his time, as the Modernists saw it in the trench warfare of World War I, whose soldiers were human sacrifices. Trump’s demonic crew can hardly wait to start sacrificing immigrants—you know, those scary others who eat your pets. The next four years are going to be an orgy of Druidism.
Many of the Founding Fathers were Deists, and the principles upon which American democracy is founded presume a citizenry capable of governing themselves through reason. But there have always been skeptics who point to the darker half of America’s imaginative heritage, that of the Puritans, who are praised for having the “vision of evil” where rational types want to talk about “joy.” Nevertheless, the Puritans exemplify the Nietzschean aphorism that if you stare long enough into the abyss, it will stare back at you, and also Blake’s chilling phrase “They became what they beheld.” Hawthorne’s Puritans are horrified by the Maypole of Merrymount, that symbol of fertility and sexuality, the energies of the devil; they fear the “natural man” in the form of the Indigenous people in the forests, more worshippers of Satan. But they became what they beheld: in executing women as witches, they revealed that they had become the true devil worshippers. In the same way, the Christian right, by exalting a sex criminal who is presently busy appointing other sex criminals to high offices, who preaches righteous hate instead of love, has revealed what it really worships.
Through institutional conditioning, science attempts to produce a type of consciousness detached from desires and fears, that looks dispassionately upon what it observes, that has the courage to confront the real world as it is, without distorting it through self-centered wish fulfilment and the resentments of a power complex. The model of the truly rational scientist is in many ways a very appealing one. Among other ways of thought, it has resemblances to Stoicism, perhaps even to the detachment from desire and fear that is the goal of Buddhism: in his later years, Arthur C. Clarke, one of the truly rational science fiction writers, described himself as a Buddhist of sorts, and I see traces of something similar in one of Clarke’s most gifted descendants, Kim Stanley Robinson. Ursula K. Le Guin identified with a similar detachment in Taoism.
We could call this detached consciousness the Vulcan ideal, out of Star Trek, but in doing so we point to a certain ambivalence. True Vulcan culture is alien to us: we approach it only via Spock, whom we like because he is only half Vulcan. The other half is human, swept by the same emotional storms as the rest of us. Gulliver admires the truly rational Houyhnhnms, but knows he cannot really emulate them. Moreover, the scientific gaze, being impersonal, can easily become inhuman. Its temptation is to become a machine intelligence, coldly condemning emotion as a disease. On the corny old TV show Lost in Space, whenever anyone did or said something that was less than logical, the robot would say, “That does not compute,” which turned into what we would now call a meme. There are people who gravitate to scientific research because they are uncomfortable not just with emotion but with the personal altogether. Dr. Strangelove in Stanley Kubrick’s famous movie has an artificial arm to symbolize that he has essentially turned himself into a machine. Such machinelike coldness may be a parody of true scientific detachment, but that does not mean there are not people who turn themselves into parodies.
A totally detached attitude turns everything and, worse, everyone, into “the other.” It objectifies, turns people into objects, and becomes incapable of empathy. Living in an abstract intellectual world cut off from much contact with other human beings can lead to other people becoming unreal, just ideas. There is a reason that this kind of cut-off-at-the-neck intellectual often ends up in the employ of dictators or ruthless capitalists: they are capable of formulating plans that are useful or profitable without flinching at the suffering those plans would cause, since the suffering is unreal to them. The dictator himself may become such an abstract reasoner, who becomes a control freak through the desire for unity and efficiency. Blake satirizes this kind of dictator in his figure of Urizen, whose name puns on “reason” and “horizon.” Elon Musk is such a reasoner. He plans to raze the entire government to the ground so that he can rebuild it on more efficient principles. He is quite all right with the untold human suffering this is going to cause. Such people become monsters, no different from H.G. Wells’s Martians, giant hypertrophied brains whose intellects are, in a line that has become famous, “vast and cool and unsympathetic.” Sometimes they blind themselves with a kind of idealism. Mary Shelley provided an unforgettable portrait of such a type in Victor Frankenstein, to whom nothing matters but success in the attempt to rival the gods by creating life, never for a moment admitting that the life he creates might turn out to have feelings and needs, for which he as creator is responsible.
The more purely contemplative type of abstract reasoner is capable of doing damage by constructing a reductive, mechanical model of the universe. The first science to achieve maturity was Newtonian physics, which told us that the entire universe was a mechanism, like the works of a clock. Deism tried to reconcile religion with a universe conceived as an unconscious, automatic mechanism by postulating a God who made the universe as a watchmaker makes a watch. Having created it, however, he sits back and contemplates it dispassionately, for it is an incompetent watchmaker who has to be constantly intervening to make his watch operate properly. The Romantics did not hate science: rather, they were fascinated by it, and Goethe actually wrote whole books about the theory of colors, the metamorphoses of plants, and other scientific topics. What they hated was mechanistic scientism, which they saw as not only wrong but dangerous because of its dehumanizing tendencies. That is why Blake condemned the science of “Bacon, Newton, and Locke,” his unholy trinity of abstract reasoners. Their science created the universe of “Starry Wheels” that grind all life within their unfeeling gears. It was not an exaggerated fear. A century later came Darwinism, which reduced life to an organic version of blind mechanism. Neo-Darwinist Richard Dawkins’ book The Blind Watchmaker contains its thesis in its title. No need to postulate an artificer: nature is its own watchmaker, but blindly, unconsciously. In the Afterword to another book, The Selfish Gene, he frankly admits that the upshot of his theory that we are controlled by our genes is that we are automatons, robots whose “consciousness” is just software programming. “What else should we be?” he responds.
Except that it does not work that way at all. The ideal of detached, objective knowledge is based on the subject-object model of reality that Blake called a “cloven fiction,” cloven like the devil’s hoof. Consciousness that withdraws from the object diminishes itself into what Blake called a “Spectre,” an attenuated ghost lost in a kind of underworld of abstraction called Ulro. The language that expresses this Spectral consciousness is the kind of needlessly formal prose that anyone with academic training is all too familiar with. Good formal prose is precise and graceful, a pleasure to read, although it may be a demanding pleasure, like that of listening to a Bach fugue. Bad critical prose is a train wreck, boxcars of abstract nouns strewn chaotically over the landscape, derailed by a tortuously complicated syntax. I used to be baffled and disheartened by such prose. My prose models were such writers as Milton, William James, George Bernard Shaw, Jacques Barzun, and Northrop Frye, and I could not understand why anyone would prefer to sound, as Frye himself once said, like a horse drinking water. There is rarely any gain in profundity: almost always, such writing could be revised into something that is, in Barzun’s phrase, “simple and direct.” And yet it is a preference, not mere ineptitude. Some students come to university already addicted to it, and are not always happy when I demand that they write in plain English. Nietzsche mocked German philosophical prose as polysyllabic stupidity, the antithesis of his own sharp, satiric style. However, such writing derives not from dullness but from withdrawal into a world of abstraction and impersonal intellect. While it pretends to express an austere Olympian detachment, it is really a way of hiding out, of shrinking from the world of direct experience.
The true act of knowing moves in the other direction: the subject moves towards a direct encounter with the object that, as it becomes more intense, may verge upon actual fusion or identification. Abraham Maslow spoke of “love knowledge,” the knowledge we acquire only by loving the object of desire. The subtitle of his book The Psychology of Science is A Reconnaissance, which literally means a re-knowing. In his usual simple and direct, disarmingly conversational style, Maslow proposes founding scientific method on a new paradigm that is the opposite of objective detachment. One chapter is titled “Interpersonal (I-Thou) Knowledge,” the parenthesis being an allusion to Jewish theologian Martin Buber’s famous book I and Thou. In Buber’s terms, detachment produces an “I-It” relationship to whatever is perceived. It reduces everything to the status of an object, even other human beings, even God. But love of what is perceived transforms the relationship into “I-Thou”—“Thou” rather than “You” indicating the intimate form of second-person address, obsolete in English but maintained in other languages, such as the difference between “vous” and “tu” in French.
We say that love blinds people, but in fact I know the one I love more deeply than others do because I love her. This seemingly simple insight has profound implications for those of us who are scholars—and by “scholars” I am by no means necessarily referring to academics but to those who are passionately in love with learning, for whom learning is an act of love. Maslow says something about knowing that has important implications for teaching, because it explains why so much teaching simply fails:
At the least it must mean “interest in” the object of study. It is difficult to see or hear that which is totally uninteresting or boring. It is also difficult to think about it, to remember it, to keep oneself at the job, to stick to it. All the defensive and restive powers of the person can be mobilized into action when one is forced by some external pressure to study something totally uninteresting. One forgets, one thinks of other things, the mind wanders, fatigue sets in, intelligence seems to diminish…. At least a little passion (or libidinizing) seems to be needed. (109)
What follows is what is to me a wonderful passage describing the opposite of such disengaged boredom:
So far as the scientist is concerned, he knows that this is true for him if only because scientific study especially needs patience, stubbornness, stick-to-it-iveness, unswerving concentration on the task, the fortitude to overcome inevitable disappointments, etc. This is a minimal statement. What is really needed for long-time scientific success is passion, fascination, obsession. The fruitful scientist is the one who talks about his “problem” in about the same spirit as he does about the woman he loves, as an end rather than as a means to other ends. Rising above all distractions and becoming lost in his work means that he is not divided. All his intelligence is available for the one purpose that he is entirely given to. He gives it everything he’s got.

This can meaningfully be called an act of love… (110)
Maslow is a psychologist trying to understand human beings, but he says, “We must entertain the possibility that even the astronomer or geologist or chemist might be able to perceive more wholly even that which is least personal” (109).
My friend Robert Klips, of The Ohio State University, Marion Campus, is an inspiring example of the kind of scholarly passion that Maslow is talking about. Bob has recently published a book, Common Mosses, Liverworts, and Lichens of Ohio, that is the result of thousands of hours of observations in the field, as well as photography. The various entries are written in good scientific prose, employing many technical terms, but moving in the opposite direction from woolly generality, trying to see each plant more specifically, in greater individualizing detail. When we first bought this 3.8-acre property, Bob and I took a walking tour around its perimeter. He could name every tree and every plant. Where I just saw “woods,” he saw far more: individual species whose representatives had unique characteristics. And what he saw engaged and animated him. The opening of Bob’s book explains its purpose exactly in terms of coming to know through an intensified relationship that is, as the language makes clear, interpersonal:
Nature lovers are captured by diversity. When they are exploring a natural area, every plant and critter seems to be asking if anyone knows who they are. When bryophyte plants (mosses, liverworts, hornworts) and lichens speak up, their quiet voices are too often missed by otherwise astute botanically oriented naturalists. This is unfortunate because these small, beautiful, fascinating organisms make up a surprisingly large proportion of the vegetative diversity in midwestern ecosystems. They should not be distant and mysterious. Let’s make some new friends—the mosses, liverworts, and lichens. (1)
To speak of being friends with plants is not just sentimental anthropomorphism. In a remarkable essay called “Sight into Insight,” Annie Dillard is frustrated with herself because “I see what I expect” (744). Much of the time we don’t see what is really before us: we don’t see the apple before us but only our habitual idea of an apple. She wants to cleanse the doors of perception and see more, but in fact, once we throw out the habitual abstractions, wanting to see, as Wallace Stevens said, “not ideas about the thing but the thing itself,” we discover that we have to learn to see, because sight is in fact never divorced from insight. Everything we see is an interpretation of the data of the senses. Maslow explains something that we have known since Kant: “In a word, the observer partly creates the reality, i.e., the truth. Reality seems to be an alloy of the perceiver and the perceived, a sort of mutual product, a transaction” (111). Dillard cites an account of how, when people who had been blind from birth with cataracts were able to see for the first time after an operation, they had no idea what they were seeing. We all have to learn to see, but how do we do that? Dillard rejects, or at least sees the limitations of, a verbalizing approach: “When I see this way, I analyze and pry” (750). But, she says, “there is another kind of seeing that involves a letting go” (750). “When I see this way, I see truly,” she declares. “As Thoreau says, I return to my senses” (750).
The particulars of vision are not random. The sensory data, the individual facts, whether of the cosmos above or this middle earth below, form patterns of interconnection and interaction. Scientists speak of an order of nature, governed by laws. In The Productions of Time, I showed how imagination, as it intensifies, opens into a double vision—a vision of order and a vision of love. These eventually unite into the twin aspects of a total vision, as truth and beauty become one. The truly great scientists, as opposed to the mechanistic materialists, by no means dismiss such a vision as mystical nonsense. Einstein’s “religion” was a vision of the cosmos as a mathematical order that was at once beautiful and somehow sacred. At its most intense, such a vision becomes the “beatific vision” of Catholicism, in which someone may be granted, through grace, a direct experience of God, an I-Thou encounter in which God, humanity, and the cosmos are united in an experience of God as “all in all,” a phrase from I Corinthians 15:28. Dante describes such an experience at the end of the Paradiso, but Milton also regarded the “all in all” phrase as highly significant.
The opposite of such a vision of plenitude, as consciousness withdraws from its environment, is a demonic vision of the universe not just as meaningless and random but as terrifyingly alien and incomprehensible. In this view, the mind constructs the illusion of an ordered reality as a defense system, for if the doors of perception were cleansed, what we would see would drive us mad, would be madness itself. This is the “cosmic horror” of H.P. Lovecraft, what sets him apart from other horror writers who just scare people with monsters or violence. In science fiction, an early dramatization of the idea of reality as incomprehensibly alien is Algis Budrys’s Rogue Moon, in which volunteers try to wend their way through the labyrinth inside an alien artifact on the moon. But there turns out to be no way to comprehend the minds that fabricated the labyrinth. M. John Harrison has a trilogy of novels (Light, Nova Swing, and Empty Space) about something called the Kefahuchi Tract, described as a singularity without an event horizon, which wreaks havoc with time, space, and causality and destroys entire civilizations that have tried to penetrate it. An earthbound version of such a region of objectified schizophrenia is Area X in the Southern Reach series of four novels by Jeff VanderMeer, the first of which, Annihilation, was made into a surrealistic movie with Natalie Portman. Like Harrison’s Kefahuchi Tract, Area X has destroyed successive expeditions of those who would penetrate its secrets. A realistically displaced version of this kind of anti-heroic descent quest is Conrad’s Heart of Darkness, at whose center Marlow finds Kurtz, whose famous dying words are “The horror! The horror!”
Thus, science in the largest view is a quest for the true vision of reality. However, the public knows almost nothing of pure science: it knows only technology. But even the most low-information citizen can hardly be unaware of how technology creates our life, and may tear the social structure down and create it again on a new basis every time there is a radical advance in technological capacity. As far back as the 17th century, Bacon in The New Atlantis (1626) foresaw the revolutionary remaking of society by technology. I grew up during the era of hope for an America transformed into a utopia by science. American science fiction nurtured the dream of redemptive science from its inauguration in Hugo Gernsback’s Amazing Stories in 1926. Much if not most of what Amazing and the other pulp magazines published before World War II was adventure stories, and there was a certain amount of generic finagling in which the scientist-inventor was fused with the action-adventure hero in order to please a popular audience. But much of the audience was young and did not care much about plausibility. I still have 35 volumes of the adventures of Tom Swift, Jr., 18-year-old genius inventor. The plot of each volume turned around a new invention.
Thus the spirit of utopian science found a vehicle in the ancient form of the romance, the tale of wonder. There was a sense of a real possibility for transformation, for real social progress that would grow out of the betterment of human life through applied science. The theme of the New York World’s Fair of 1964, which I visited with my parents at the age of 13, was the World of Tomorrow. The dream was of far more than a future full of new labor-saving devices and entertaining gimmicks. It was a new phase of the Enlightenment dream that humanity was capable, through reason, of lifting itself up by its bootstraps. If science and technology can deliver material prosperity, make possible the fulfillment of Maslow’s hierarchy of needs, then widespread self-actualization may blossom, resulting in what Maslow called Eupsychia, the psychologically good society. This was Gene Roddenberry’s vision in Star Trek. Without preaching, we were simply shown a world in which science had improved the human condition enough that a truly rational form of government, the Federation, could emerge for the first time. With basic needs fulfilled, the pathologies of sexism, racism, and xenophobia, which are in fact mental deficiency diseases, have clearly faded away. The diverse yet harmonious crew on the bridge of the Enterprise was a rainbow promise of what might be possible. That vision formed me, and I think its hopeful progressivism prepared me for the Romantic idea of the creative imagination, of which scientific utopianism is an adaptation.
British science fiction remained within the circumference of the pessimism of H.G. Wells. Wells wrote lifeless utopias when his socialist ideological side was in control, but when his imagination was let loose, its vision was dark and skeptical. Olaf Stapledon expanded Wells’s pessimism from romance to undisplaced myth in Last and First Men and Star Maker, and the Wells-Stapledon perspective was a powerful counterbalance to American rational optimism. Arthur C. Clarke exemplifies the tension between the lighter and darker modes of science fiction. Clarke published a nonfiction book called Profiles of the Future in 1962 that made a great impact on me while I was still in junior high school, a vision of scientific far horizons by the man who invented the concept of the communications satellite. Yet his greatest fictional work, Childhood’s End, is a melancholy prediction that humanity must evolve beyond the human because the universe is too vast, unknowable, and ultimately unliveable for us barely-evolved primates.
It was the Cold War that put an end to American scientific optimism. Robert Heinlein, the greatest scientific utopian writer of the so-called Golden Age of science fiction, succumbed to the neurotic conspiracy paranoia produced by the Cold War. The militarism of his Starship Troopers (1959) is a sad contrast to Gene Roddenberry’s peaceful and egalitarian Federation, and Farnham’s Freehold is a grim depiction of post-World War III survivalism. Scientific optimism is in short supply these days. Matt Ridley attempted it in a nonfiction book, The Rational Optimist, in 2010, but the results were mixed, to put it mildly. The good part of the book is an energetic catalogue of the countless ways in which science has transformed human life for the better in less than a century, starting with dramatically lengthened lifespans and ways to treat many diseases. Nor is it just the privileged developed countries: scientifically improved conditions, including better nutrition, have lifted people out of poverty all over the world without any meddling attempts at “foreign aid.” But Ridley is only able to cling to an unqualified optimism by omitting the way that progress may be undermined by human irrationality. He may be a rational optimist, but most of the world is not rational at all. The book was published two years after the 2008 meltdown, and in fact a British bank he helped oversee had to be bailed out by the British government. And unfortunately the author of many rewarding books of popular science became a climate change denialist.
Robert F. Kennedy, Jr. has achieved political power because the public is not only disenchanted with but downright suspicious of science. Some of the widespread paranoia about vaccines goes back to the tragedy of the pandemic, which showed definitively that doctors are only human and are sometimes wrong, contradictory, or confused. Science is a consensus, but that consensus is the result of sometimes fierce debate and disagreement. When the public found that the scientists did not agree among themselves, it began to distrust them. I have some sympathy with its frustration. Just this week it was announced that 47% of the American population is overweight or obese. Yet a layperson trying to research how to cope personally with the weight issue encounters rather a mess. Medical institutions rely on the BMI to judge healthy or unhealthy amounts of weight while at the same time admitting that it is a very imperfect instrument. When casting about for an alternative, one finds the theory of “ideal body weight,” which can be calculated in at least six different ways. All these ways suggest that I am about 5 pounds overweight, even though my BMI is 23.6, under the overweight threshold of 25. At the same time, I read articles that say new studies show that people over 70 may actually live longer with a higher BMI than is currently allowed, as much as 27. So “ideal body weight” says I should weigh about 141, but the new study recommends something like 167, which is heavier than I have ever been in my life. Moreover, older people are supposed to get many more grams of protein than younger people because of the loss of muscle mass. Yet to get that much protein is almost impossible while staying as thin as “ideal body weight.” So, I want to “follow the science.” But the science is presently incoherent in an area of vital concern for ordinary people. Since public health is the interface between science and the common world, our imperfection in this area has negative repercussions.
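The incoherence is easy to see once the arithmetic is laid out. The following is only a minimal sketch of the competing yardsticks mentioned above, not a recommendation: the height of 5 feet 6 inches is my own illustrative assumption (chosen because it roughly reproduces the figures in the paragraph), and the Devine formula is just one of the half-dozen "ideal body weight" formulas in circulation.

```python
# Illustrative comparison of BMI-based weight targets and the Devine
# "ideal body weight" formula. The assumed height is hypothetical.

KG_PER_LB = 0.453592
M_PER_IN = 0.0254

def weight_at_bmi(target_bmi, height_in):
    """Weight in pounds that produces a given BMI (kg per square meter) at a given height."""
    return target_bmi * (height_in * M_PER_IN) ** 2 / KG_PER_LB

def devine_ibw_lb(height_in, male=True):
    """Devine formula: 50 kg (45.5 kg for women) plus 2.3 kg per inch over 5 feet."""
    base_kg = 50.0 if male else 45.5
    return (base_kg + 2.3 * (height_in - 60)) / KG_PER_LB

height = 66  # assumed: 5 ft 6 in

print(f"Devine 'ideal' weight: {devine_ibw_lb(height):.0f} lb")       # about 141
print(f"Weight at BMI 27:      {weight_at_bmi(27, height):.0f} lb")   # about 167
print(f"Weight at BMI 23.6:    {weight_at_bmi(23.6, height):.0f} lb") # about 146
```

Run as written, the sketch prints roughly 141, 167, and 146 pounds: two reasonable-sounding standards that differ by some 25 pounds for the same hypothetical person, which is exactly the kind of gap a layperson trying to "follow the science" runs into.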
But I have not lost all my rational optimism, merely tried to explore, through thinkers like Frye and Jung, how it can be supplemented by an analysis of the irrational, and therefore of the human predilection for evil. Technological enhancement of human life is not the only hope offered by science, and in the end it may not even be its most important gift. The smartphone is a technological miracle, and yet what do people use it for? Hours of mindless scrolling of nonsense. Why? Because their lives, although filled with much activity, are empty. Scrolling is anxious distraction from the sense of emptiness lying just beneath the surface for so many people. What if Maslow’s description of the dedicated scholar whose “problem” is their passion could be expanded beyond the special vocation of a few rather rare and curious individuals to become a universal criterion of a self-actualized personality? In fact, Maslow did say that some kind of vocation or beloved task was a criterion for self-actualization. Such a vocation turns life into a quest, and that quest is endless, unfolding in new developmental phases the older one gets. Moreover, the study of nature is only one kind of “scientific” scholarship. Frye was lambasted for suggesting in the opening of Anatomy of Criticism that literary study could be a kind of human science. But he meant “science” in the widened sense of a meditation upon what he called the order of words, which can also be studied just as passionately and inexhaustibly as the order of nature.
There are powerful ideological forces trying to discourage people from grounding themselves upon such a lifelong meditation, but one of the things a teacher does is try to model a life dedicated to the passion of learning and knowing. I have lived such a life, and I know how lucky I am. This kind of meditation demands a certain kind of solitude, which collides with the American lifestyle of compulsive extraversion. It may, in honesty, make relationships such as marriage more difficult by creating divided loyalties. If you have such a vocation, you must find a partner who understands the difference between a vocation and a job, between someone who is enraptured by a vision and a mere workaholic. You may in fact end up alone. But an increasing number of Americans are already alone, and a new inwardness may be a remedy for what the Surgeon General is calling an epidemic of loneliness. I live alone, yet I am not alone. I am surrounded by my books, music, and musical instruments, which are not mere collections but extensions of what goes on within. To read a book actively and not just for distraction is to join a conversation, not just with the author but with all the others with whom the author is in dialogue. Eventually you find that to read and think and imagine is to join a spiritual company that survives even when outwardly we have been driven by a psychotic society into a retreat that is figuratively underground. That company transcends space and time, and may be our initiation into the vision of plenitude that is within us, in the midst of us, now, even as the darkness falls.
Corrections: I need a copy editor but don’t have one, so inevitably I make occasional dumb mistakes. My best friend Dennis has kindly pointed out a couple for me from recent newsletters. The musical Into the Woods is by Stephen Sondheim, not Neil Simon, and the absurdist play Rhinoceros is by Ionesco, not Genet. These are “I blipped out” rather than “I didn’t know” errors, but my apologies nevertheless, and my thanks to Dennis.
References
Dillard, Annie. “Sight into Insight.” In The Norton Reader, 16th edition. Melissa Goldthwaite, Joseph Bizup, and Anne Fernald, editors. Norton, 2024. Originally published in Harper’s Magazine, 1974. Incorporated into chapter 2 of Pilgrim at Tinker Creek, 1974.
Klips, Robert. Common Mosses, Liverworts, and Lichens of Ohio: A Visual Guide. Ohio University Press, 2022.
Maslow, Abraham. The Psychology of Science: A Reconnaissance. Harper & Row, 1966.
Thanks for your responses, Doug and Tom. It's always gratifying when people engage with the newsletters. I think we should declare this National Errata Week, as you'll notice my own list of corrections after the newsletter. Editors in the Collected Works project learned to be careful about Frye's quotations. He often quoted from memory, and apparently a photographic memory is not the same as a perfect one, since he occasionally misquoted. In editing Words with Power for the CW project, I found another type of error in Frye's endnotes--there was a profusion of errors involving page numbers of sources, dates, and the like. I asked Jane Widdicombe why Norrie didn't ask me to check those notes, since I was still acting as long-distance research assistant, and she replied that Norrie didn't want to burden me since I'd started at BW by that time, an answer that still chagrins me. At least I got a chance to correct those notes by accidentally becoming the CW editor of the volume.
A century and five years after Swift finished "Gulliver's Travels," the young Alexis de Tocqueville was visiting the new republic in North America. He wrote "Democracy in America" -- in French, in 1835 and 1840, but the English translation appeared soon after (part 1 in 1838 and part 2 in 1840). De Tocqueville hated the new monarchy in France (1830–1848) and much preferred democracy as he saw it in the U.S.A. But he listed the dangers that democracy faced, including lack of education in newly established areas and lack of a free and independent press in areas where a few people controlled the flow of information. Without these, he saw the possibilities of demagoguery and what he called the tyranny of the majority, when minority opinions get suppressed. In other words, he foresaw what our Founding Fathers most feared. He warned us.