
Of Myths and Miracles
Harvey A. Smith, PhD

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."
—Mark Twain

As an undergraduate, I had the privilege of attending the lectures of the eminent philosopher Adolf Grünbaum, who propounded what we students dubbed Grünbaum's theory of miracles: "If one asserts that an event is a miracle, he means that it is contrary to the laws of nature, which makes the unwarranted assumption that he knows all the laws of nature." According to Grünbaum, an event that appears to be a miracle is really an opportunity to discover new laws of nature. History is rife with instances in which precisely that has happened. Often the new laws that were discovered have violated the commonly held opinions about how the world works and have been opposed by those who believed they already knew all the laws. Fortunately, the culture of science holds everything subject to the test of experiment, so new laws have generally been accepted once enough experimental evidence accrued to support them.

I like to call commonly and firmly held ideas about the way the world works "myths", whether they are the animistic beliefs of primitive peoples or the views held by sophisticates. Similarly, I refer to an event that forces the elaboration or abandonment of a myth as a "miracle". Thus Aristotle's commonly accepted dictum that nature abhors a vacuum was a myth; the failure of lift pumps to raise water above a certain height—miraculous if one believed Aristotle—led Galileo, blind and in his last year, to amend the myth by asserting that nature does not abhor a vacuum above that height; and led Galileo's student, Torricelli, to invent the barometer and the idea of air pressure. (Torricelli's belief that there was a vacuum above the mercury in his barometer's tube became a new myth; it was later discovered that there was always some mercury vapor in that space, the amount depending on the temperature.)

The ancient Greeks were not the only ones to embrace myths about physics. In 1894, the great experimental physicist A. A. Michelson was convinced that all the laws were known and announced that in physics there were "no more fundamental discoveries to be made". Within two years, Röntgen and Becquerel had discovered the related miracles of x-rays and radioactivity. Ironically, Michelson had encountered—and ignored—an important miracle seven years earlier, when he found that he could not detect the motion of the earth through the "luminiferous aether". According to the accepted physics of the time, this meant that the earth had to be standing still. Rather than abandon Copernicus and Galileo, Michelson decided that the miracle he had discovered was merely "a failed experiment". Reconsideration of its consequences by Poincaré and Einstein led to the myth Michelson had embraced being replaced by the theory of relativity.

In his 1788 treatise on analytical mechanics, Lagrange presented a vision of the universe in which the future (and the past) would be completely determinable, if only one knew the laws of nature and the precise conditions at some fixed time. (The latter are usually called initial conditions, since greater interest attaches to determining the future, starting from what one knows now.) Lagrange's vision of a fully predictable universe fired the imagination and inspired intellectuals throughout the nineteenth and much of the twentieth century; termed "scientism" by some, it became a predominant myth of that era. Auguste Comte was moved to proclaim a grandiose program of "social physics" (which he later renamed "sociology"), while Karl Marx produced a theory—termed "scientific socialism" by Engels—that claimed to predict the future development of society.

In the 1950s I worked with physicist John Mauchly, the co-inventor of the early electronic digital computers ENIAC and UNIVAC, who was trying to apply computers to predicting the weather. Believing in the Lagrangian myth, he felt sure that, with enough data about initial conditions and enough computing capacity, future weather must be predictable.

In 1961, Edward Lorenz, working with a mathematical computer model of the weather, discovered a miracle that has come to be called "the butterfly effect". Lorenz found that the tiniest change in the initial conditions, such as the flapping of a butterfly's wing, could result in a drastic change in the weather predicted for the near future. Lagrange had been mathematically correct, but people had simply ignored the fact that one sometimes needed to know the initial conditions with infinite precision, which is physically impossible. Once attention was drawn to this problem, many analogous situations were recognized and a new branch of mathematics, sometimes called "chaos theory", developed. Scientists realized that they had been concentrating on situations where "chaos" did not occur and ignoring those in which it did. The Lagrangian myth was no longer credible. Weather forecasts, which previously had reflected the belief in that myth—"Rain is predicted for tomorrow"—are now phrased in statistical terms—"There is a 60% chance of rain tomorrow."
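This sensitivity can be demonstrated in a few lines of code. The sketch below uses the logistic map in its chaotic regime as an illustrative stand-in for Lorenz's weather equations; the map, the starting value, and the size of the perturbation are all choices of convenience, not Lorenz's actual model.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# An illustrative stand-in, not Lorenz's actual weather equations.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)           # original initial condition
b = logistic_trajectory(0.2 + 1e-10)   # perturbed by one part in ten billion

early_gap = abs(a[5] - b[5])    # still tiny: the two runs agree at first
late_gap = abs(a[50] - b[50])   # by step 50 the runs bear no relation
print(f"gap after 5 steps:  {early_gap:.2e}")
print(f"gap after 50 steps: {late_gap:.2e}")
```

A difference in the tenth decimal place of the starting value grows, roughly doubling each step, until the two "forecasts" are completely unrelated; measuring the initial state more precisely only postpones the divergence.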

In the non-chaotic situations it traditionally addresses, physics deals with simple, isolated systems. Crucial experiments are relatively easy to devise and carry out—though they may now require a more elaborate and costly apparatus than in Galileo's time. Because the conditions under which an experiment has been conducted can be reproduced with great precision, it can be closely repeated. Chemistry is a bit more complicated; usually more conditions need to be controlled, but it is still manageable.

Repeating biological experiments is vastly more complicated. Individual organisms vary, so there is an industry devoted to supplying laboratory strains of organisms (mice, for instance) that are as identical as possible in genetics and nutrition. Even so, the results of experiments often vary widely, and, like weather reports, usually must be presented statistically. John Crabbe, a neuroscientist at the Oregon Health and Science University, performed a series of experiments on mouse behavior in laboratories in Albany, New York, Edmonton, Alberta, and Portland, Oregon. The same strains of mice, shipped on the same day from the same supplier, were used in each lab. The animals were raised in the same kind of enclosure, with the same brand of bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed exactly the same food. They were handled with the same kind of surgical glove, and they were tested on the same equipment, at the same time of day.

Despite this uniformity, when a particular strain of mouse was injected with cocaine, the average response of the mice in Edmonton was more than eight times as great as the average of those in Portland. This, and similar inconsistencies in other tests, did not follow any detectable pattern. Either the variables involved were too many and too subtle, or, possibly, this was another chaotic situation in which the conditions must be known with impossible precision.

I do not mean to imply that biology cannot be a science, but only that it is much more difficult than it is made to appear in grade-school presentations of "the scientific method", and that experiments may need to examine an unexpectedly large number of individual cases to produce meaningful statistics. The study of genomes offers some hope for more exact prediction of biological phenomena, but it is, as yet, a largely unrealized promise.

In medicine and human psychology, which can be considered highly specialized sub-disciplines of biology, experimentation is constrained by ethics and must deal with a wild population, which makes the scientific study of those subjects even more difficult. Even laymen are aware of the frequent changes in the established myths of these fields. Epidemiologist John P.A. Ioannidis, of the Stanford Prevention Research Center, has addressed the difficulties in a PLoS Medicine paper with the provocative title "Why Most Published Research Findings Are False".

The psychologist Daniel Kahneman has described what he calls "the illusion of understanding" and "the illusion of validity". The illusion of understanding is, in my terms, the creation of a myth. It is the human tendency to believe we understand an event when we have come up with a plausible and intuitively persuasive explanation, based on a few easily grasped factors, while ignoring its underlying complexity and the limitations of our knowledge. Kahneman's illusion of validity is the tendency to maintain belief in an intuitively appealing myth, despite its demonstrated failure. These illusions are particularly predominant in the social sciences, but they also occur in the physical sciences, as Michelson's example shows.

In some disciplines, including meteorology and many of the social sciences, controlled experimentation is not possible. Instead, spontaneously occurring historical events are examined. Statistics can be gathered about what the outcomes were when closely similar initial conditions occurred in the past, and probabilities are then estimated for the recurrence of the various outcomes. That is how meteorologists estimate "a 60% chance of rain tomorrow"; as in the biological sciences, it is difficult or impossible to control the variables. In many fields, however, causal explanatory myths are conjectured, based on intuition and on what appear to be common features of the historical events. This gives free rein to those who espouse a particular myth, pretending to know all the laws of nature. It also prevents recognition of miracles, since anomalies always can be explained away by citing some of the many ways in which historical events must inevitably differ. Even a long history of failed predictions may not persuade some to abandon an intuitively appealing myth to which they are emotionally committed.

Predictions of future developments based on the oxymoronic "non-experimental sciences" must be suspect, especially when they are dire. Predictions of calamity excite popular anxiety and tend to receive a lot of publicity. If they have intuitive appeal, they easily become myths; the miraculous non-occurrence of the calamity on the predicted schedule tends to go unnoticed. If the end of the world doesn't come as predicted, the prophet simply changes the date.

A decade after Lagrange, the Anglican clergyman Thomas Malthus attempted to apply similar mathematical reasoning to predict the future of human welfare. Arguing that, in the absence of constraints, population would tend to grow exponentially, while the production of food could only grow linearly, Malthus foresaw a future of unending "misery and vice" for mankind. He opposed poor relief (the Poor Laws) and advocated taxing the importation of wheat (the Corn Laws), presumably on the grounds that welfare and surplus food would relieve the misery God intended and allow the population to breed beyond the number England could feed. Malthus seemed to make sense, and his myth, although not universally accepted, had many influential adherents. Fortunately for his reputation, Malthus was unspecific about dates and just predicted a general continuation of famine and disease. When the next two centuries miraculously developed quite opposite to his prediction, his true believers maintained that he had just got the time wrong.

The decade of the 1950s saw a more dogmatic revival of the Malthusian myth, with predictions of the depletion of other resources as well as food. The most successful advocate was the entomologist Paul Ehrlich, whose best-selling 1968 book, The Population Bomb, began, "The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate."

Outdoing Malthus, he went on to advocate forced sterilization and banning the shipment of food to countries that wouldn't control their populations. Undeterred by the lack of any evidence that his prophecies were coming true, in 1970 he declared, "[i]n ten years all important animal life in the sea will be extinct. Large areas of coastline will have to be evacuated because of the stench of dead fish."

Faced with a half-century in which the available food per capita has increased throughout the world and the world-wide death rate has decreased, and despite his confident but incorrect predictions of rising commodity prices, Ehrlich continues to insist that the end is near—his timing was just off.

In 1971, Jay Forrester, an electrical engineer and inventor of computer hardware, published World Dynamics, an attempt to model the development of the world in Lagrangian style by means of a computer simulation. His calculations indicated that the world's economies would collapse sometime around the year 2000, and advanced civilization would come to an end. This was highly publicized and aroused great concern. Many universities introduced courses or seminars based on Forrester's model, and its results came close to being enshrined as myth. The credibility of his model lessened considerably, however, when researchers at The University of Stirling, in Scotland, ran it backward to "postdict" the past, rather than to predict the future. They found that, according to Forrester's model, civilization should also have collapsed around 1900. (Stirling has gone on to explore the use of Forrester's underlying "systems dynamics" approach to modeling, and now gives degrees in the subject.) Running causal predictive models backward and comparing the results with actual history is a good way to check their accuracy.
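The backward-run check can be sketched in miniature. The toy model below is a hypothetical stand-in, not Forrester's world model: a simple growth rule is inverted algebraically, a back-cast is run from the "present", and the result is compared year by year with the recorded past.

```python
# A toy "postdiction" check: invert a simple causal model's update rule,
# run it backward from the present, and compare the back-cast with the
# recorded past. Hypothetical model and numbers, not Forrester's.

def step_forward(x, growth=0.03):
    """One period of growth in a toy exponential model."""
    return x * (1.0 + growth)

def step_backward(x, growth=0.03):
    """Exact algebraic inverse of step_forward."""
    return x / (1.0 + growth)

# "Recorded history": ten periods generated by the forward rule.
history = [100.0]
for _ in range(10):
    history.append(step_forward(history[-1]))

# Back-cast from the present (the last recorded value).
backcast = [history[-1]]
for _ in range(10):
    backcast.append(step_backward(backcast[-1]))
backcast.reverse()

# A model that cannot reproduce the recorded past deserves little
# confidence in its predictions of the distant future.
worst_error = max(abs(h - b) for h, b in zip(history, backcast))
print(f"worst postdiction error: {worst_error:.2e}")
```

Here the back-cast matches history almost exactly because the data were generated by the very rule being inverted; a real model run against real records, as at Stirling, is where discrepancies like the spurious 1900 collapse emerge.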

I have practiced mathematical and computer modeling, so I am keenly aware of their limitations. When I worked for the Executive Office of the President (1967-1974), we investigated the best models for predicting the overall performance of the national economy. We found that they were pretty good at predicting the next three months; at six months, the predictions were poor, but not completely useless; at nine months they were useless. If someone speculates about how the economy is likely to perform more than six months ahead, I assume it is just a wild guess! The current myth regarding the hazards of global warming many decades in the future is largely based on a causal computer model, the assumptions of which have been vigorously debated. I have no expertise in climatology—for all I know, the model may be unusually accurate—but I believe that the model should be run backward to see how it "postdicts" the recorded climate history before too much credence is given to its predictions for the distant future.

As in ancient times, it is possible for a myth to get established with no solid empirical evidence because its propounder has a persuasive literary style and makes arguments that may seem, on intuitive grounds, to be not unlikely. Among many examples are Marx's economic myth, Freud's psychoanalytic myth, and Piaget's educational myth.

Although Marx gathered very substantial data about the English economic conditions of his day, his intuitive and philosophy-based economic theory had little relation to those data. It was primitive compared to the more careful thinking that evolved over the next century; to some extent, statistical econometric methods now allow a theory and its predictions to be checked. Economists from the former Soviet Union have expressed to me the discomfort they felt paying lip service to Marxist theory when they knew better. Despite this, there remains an emotional commitment to Marx's theories in some academic and literary circles.

Freud's theories have been discredited—notably by my old professor, Adolf Grünbaum—and Freud's published case studies and claims of psychoanalytic cures have been shown to have had, at best, a tenuous connection to the facts. Although Freud insisted that psychoanalysis was a science and that it should stand or fall by scientific criteria, some practitioners now claim that this was a mistake on Freud's part, and that his work should be judged, rather, as hermeneutics, i.e., on literary grounds. Bruno Bettelheim, who had become familiar with Freud's theories while writing a doctoral dissertation on art history, claimed to have been analyzed by a Viennese psychoanalyst. After fleeing his native Austria, Bettelheim set up as a child psychologist in America, had a distinguished career, and wrote many influential and prize-winning books in the field. His Freudian theories about autism in children are now discredited, and are seen, in retrospect, as having caused needless suffering and guilt by blaming the problems of autistic children on the behavior of their parents. According to the currently accepted myth, autism is largely a genetic problem. Freud's theories remain popular in some artistic and literary circles, but very few psychiatrists practice Freudian psychoanalysis.

Among those who rejected psychoanalysis as "insufficiently empirical" was the Swiss natural historian and philosopher Jean Piaget. Piaget became intrigued with child development when he taught at a Paris school for boys founded by Alfred Binet, where he helped standardize Binet's intelligence tests. He later proposed a theory of cognitive development stages, which he claimed to have verified by observing his own three children. Piaget eventually studied other children, wrote voluminously, received many honors, and became the director of various international institutes. His theories were adopted by many educators during the 1960s, and influenced curriculum development. Piaget's ideas were particularly congenial to mathematicians, for we tend to see our discipline structured in the way he proposed that children naturally develop. I was dismayed when critics who disputed his experimental methodology and his claimed evidence also revealed that, at age twenty, he had published a novel in which the conclusions he claimed to have reached by observing his own children were set forth three years before he taught at Binet's school, and at least seven years before he had any of the three children he claimed to have observed.

Marx, Freud, and Piaget were historically important thinkers who developed some interesting ideas; each created a widely accepted myth and claimed great generality. However, the acceptance of their myths depended more on their literary and propagandistic skills than on empirical observations. Belief in each myth was gradually eroded by the accumulation of evidence without encountering any one spectacular miracle that forced its reconsideration. Myths grounded in frequently repeated experimentation or experience are more likely to remain valid over a range of conditions, and only miraculously fail in unusual circumstances. Aristotle's myth about nature abhorring a vacuum would still suffice for designing pumps to lift water from wells that are not too deep; Newtonian mechanics does well enough for many situations not involving extreme conditions. Among the social sciences, some—such as criminology and economics—are like meteorology in that they have accumulated enough data through repetition of similar incidents to provide a modest basis for very limited statistical predictions. Others depend on intuitive elaboration of conclusions based on a limited number of essentially diverse incidents.

Unfortunately, in situations in which experimentation is difficult or impossible, Kahneman's illusions of understanding and of validity tend to hold sway. Something that works in limited circumstances often generates an expectation of universality, and the illusion of validity makes people loath to abandon the idea. The effect of changing a low tax rate, for instance, may be quite different from the effect of the same change in a rate that is already high. Changing funding for education when only half the population graduates from high school and only one in twenty graduates from college could have a very different result from making the same change when 85% graduate from high school and a quarter graduate from college. The result of a massive program of public works in a nation that holds a large portion of the world's gold may be quite different from the effect of a similar program instituted by a debtor nation that must borrow to pay for it. Speculations as to what will happen in such cases often become a source of political contention, with each side citing historical instances in which the policy it wants to follow produced a favorable result. Because there are only a few historical events to cite, the values of the parameters at which a change in the result of a policy might occur—or even what parameters might be relevant—are unknown, and the support of one policy or another becomes a matter of prejudice, supported by illusions of understanding and validity.

It has been over a century since the miracles that gave rise to "the new physics" and displaced the myth in which Michelson believed so strongly. Some physicists again believe that they are close to a "theory of everything", but there has been a recent, very careful observation of neutrinos traveling miraculously faster than light, which would be forbidden by the current myth. If the results hold up under repeated experimentation and careful checking, we may be poised for the creation of a new myth. If not, we must continue to await the next miracle.

© Harvey Smith

Harvey A. Smith, PhD

Harvey A. Smith, Professor Emeritus of Mathematics at ASU, holds degrees in engineering, physics, and mathematics from Lehigh University and University of Pennsylvania. He has published technical papers ranging from pure mathematics through economics and strategic policy to criminology and terrorism. Aside from his academic career, he has served as a staff member or consultant to many industrial concerns and government or quasi-government agencies. Since retiring from teaching, he has regularly audited courses at ASU in an attempt to polish his German, gain some fluency in Italian, and increase his knowledge of history and the arts.