The Real Meaning of Falsifiability

Karl Popper’s “criterion of demarcation” classifies a theory as scientific only if that theory could conceivably be contradicted (“falsified”) by a logically possible observation. Intellectuals nowadays commonly deride theories as “unfalsifiable”, brandishing the term as a pejorative. But falsifiability was never intended as a criterion of value or meaning. It was merely a technical distinction between different types of theories that address different kinds of questions and that, as a result, are vulnerable to different modes of criticism. In particular, it distinguishes one class of theories from all others: universal laws of physics. A theory in this class has a special attribute: it can be logically contradicted by the observation of a single event that it says could never occur. Most good theories—including the theory of falsifiability itself—do not fit into this class. But we can still assess unfalsifiable theories rationally, using modes of criticism besides observation and experiment, and many such modes are available: explanatory power, logical coherence, and consistency with other theories, for example. But to assess conjectured universal laws of physics, we can also attempt to devise a situation (i.e., an experiment) in which we might observe a physical event that the theory forbids. If we do observe that event, then we will have discovered that the world does not conform to the theory. So, scientific theories are “demarcated” from other kinds of theories in one respect, which falsifiability is just a way of expressing: namely, they are subject to every mode of criticism that other theories are subject to, plus observation and experiment. And this added mode of criticism can accelerate the growth of knowledge in science as compared to other fields, because more criticism promotes more error correction, which is what fuels the growth of all knowledge, scientific or not.

“Woke” and the Moral Abyss

On June 22, 1941, Nazi Wehrmacht units penetrated Soviet territory. Einsatzgruppen, or killing squads, followed in their wake to massacre “subhumans”: Jews and others deemed racially inferior by the German “Übermenschen”. One Einsatzgruppe commander, Friedrich Jeckeln, seeking to save labor and ammunition, devised an efficient method for mass killing. It entailed forcing victims to strip naked and lie facedown next to each other in a pit, so that the backs of their heads could each absorb a single bullet, creating a fresh bed of corpses upon which the next layer of victims could lie. Jeckeln’s men repeated this process, layer by layer, until each pit was filled with bodies. Jeckeln’s innovation (“sardine packing”) evinces the fanatical Nazi belief in “Untermenschen”. Similarly, when, beginning in 1928, Stalin forcibly collectivized Soviet agriculture to accelerate the arrival of the Communist utopia, he and his collaborators, confident in their cause, starved and murdered millions of people—categorized as “kulaks”, or prosperous peasants—who were portrayed by Communist theory as scoundrels: enemies of the morally superior working class. Nazism and Communism were in many ways diametrically opposed worldviews. But they shared a key ideological attribute: they divided human beings into categories, and they assigned to those categories different degrees of intrinsic moral worth. It was this specific ideological attribute that ultimately caused the mass killings perpetrated by both the Nazi and Communist movements, and by other historical movements besides. “Woke” ideology also shares this attribute, which is why, on moral grounds alone, it should be vigorously resisted.

The Political Traditions of Persia

The Shia theocracy of Iran seems to contrast sharply with the secularist Pahlavi dynasty that preceded it. The Pahlavis prohibited the veiling of women, quashed the political influence of Shi’ism, and romanticized their connection to the culture of pre-Islamic Persia. The Ayatollahs, conversely, mandate the veiling of women, meld politics with Shi’ism, and deride the heresies of ancient Persia. But these two regimes, despite their apparent contrasts, both belong to a political tradition that stretches back twenty-five centuries to Cyrus the Great: Persian Kingship. The Ayatollahs do not look like kings, and even Mohammad Reza Pahlavi, when he famously bedecked himself in jewels to pose for photos at Golestan Palace, did not look especially kingly. Still, both the Ayatollahs and the Pahlavis, very much like the Qajars, the Afsharids, the Safavids, the Sassanians, the Parthians, and the Achaemenids before them, have held the power of kings. Persian absolute monarchy is as old as Persia. And by the time the medieval concept of the Divine Right of Kings was invented in the West, that idea had already existed in Persia for hundreds of years. Persians have immense reverence for their history, and it would be no surprise if they generally expect to be ruled over by a king-like authority (although they disagree over who that authority should be). If indeed this ancient expectation endures in the general Persian mind, then surely it poses the most fundamental barrier to freedom and democracy in Iran today.

Evolutionary Psychology’s Flaw

Evolutionary psychologists attempt to explain human behavior in terms of Homo sapiens’ evolutionary history. They study the selective pressures that guided the evolution of our ancestors’ genes, and they draw connections between those pressures and general patterns in human behavior that we observe today. The problem with this approach, however, is that humans can choose to act against their genetic coding—to resist their inborn impulses. All other animals are incapable of making such choices, which is why we can explain their behaviors purely in terms of their genetic evolution. But people are different: we can criticize and defy our inborn impulses—for any of an infinity of reasons—and this simple fact spoils any explanation of the form, “He did it because his genes programmed him to do it”. Even when an inborn impulse does happen to govern a person’s behavior in a particular instance, that behavior can’t be satisfactorily explained in terms of the person’s genes. For that person, unlike any other kind of creature, could have chosen to do otherwise.

Combining Ambition with Gratitude

A grateful person is a person who, at least in a sense, is satisfied with their situation in life. An ambitious person is a person who, at least in a sense, is dissatisfied with their situation in life. So ambition and gratitude are—in some sense—in tension with each other. Yet ironically, feeling either of these emotions without the other leads to unhappiness, whereas feeling them together leads to happiness. Ambition without gratitude manifests as an unquenchable thirst, whereas gratitude without ambition produces stagnation, which is inherently depressing for a creative entity such as a human being. But if a man manages to feel both gratitude and ambition—being appreciative of the good things in life while also aware of how life could be better—then he will be well poised for happiness. Like everyone else, he will encounter problems in life, but he will determinedly seek solutions to his problems. If he solves one of them, he will rejoice in having discovered a solution. But he will not allow this feeling of joy to cloud his vision and prevent him from seeing new problems. His ambitious mind will seek out further problems, which he will be prepared to try to solve, meaning he might actually solve some of them. And with each new solution will come yet another reason to be grateful.

Why Scientific Reproducibility Matters

Scientists often repeat experiments to determine whether an initial result is reproducible. Reproducibility is crucial to science, but not for the reason most people think it is. The mistaken idea that scientific experiments can “verify” theories leads people to think that successive instances of an identical result can, in effect, reverify theories (this idea is often expressed in terms of increasing a theory’s “Bayesian credence”). But nothing, not even empirical data, can verify our theories. Empirical data serves only to criticize our theories. In other words, if a scientific theory predicts that a particular observation will occur under the conditions of a specified experiment, then scientists can perform the experiment and see what they see. If the experiment, or any number of successive repetitions of it, reveals any mismatch between the predicted observation and the actual observation, then the scientists will have detected a potential error in the theory, which they can then work to correct. And crucially, detection and correction of errors in our scientific theories constitutes scientific progress. So, reproducibility matters not because repeatable results serve to verify (and reverify) whatever theory predicted those results. It matters because unrepeatable results alert us to errors in our theories, which we can attempt to correct in order to grow our scientific knowledge.
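To make the “credence-raising” picture concrete: in its usual Bayesian formulation, the probability assigned to a theory $T$ is nudged upward each time evidence $E$ that the theory predicts is observed, via Bayes’ theorem,

$$ P(T \mid E) \;=\; \frac{P(E \mid T)\,P(T)}{P(E)}, $$

where $P(E \mid T)$ is high precisely because $T$ predicts $E$. However rigorous this bookkeeping looks, it remains an attempt at verification, and no amount of it can establish that $T$ is true.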

The Limits of Animal Models

We humans share with other mammals much of our evolutionary history and physiological makeup. Because of these shared physiological characteristics, scientists can use nonhuman mammals to test experimental therapies for various human disorders and diseases. These tests, which are called animal model studies, provide insights into whether a therapy might work in humans without putting humans at risk. The problem with animal model studies, however, is that in many cases the human disorder or disease of interest involves not only physiological dysfunction but also psychological attributes that nonhuman animals may not experience (e.g., pleasure, pain, motivation, and addiction). Animal model researchers specify animal behaviors that they assume can serve as proxies for these psychological attributes. Yet these assumptions are substantive theories in their own right, which may be false, but which are taken for granted by the study designs. And if such an assumption is false, then the study results don’t mean what we think they mean and can’t be legitimately extended to humans.

The Irrationality of Authoritarianism

“Authoritarian” political systems concentrate power in a lone authority and lack countervailing institutions that could meaningfully challenge that authority. All authoritarian political theories and systems derive from what we might call authoritarian theories of knowledge, which designate an authority as the ultimate source of truth. Extended to politics, they designate an authority as the ultimate source of political wisdom: the sovereign who alone possesses the knowledge required to rule. Although different authority-based political theories identify different authorities as the rightful ruler (e.g., The King or perhaps The People), such theories all assume that the task of politics is to designate an ultimate authority to rule over society. This authoritarian assumption, the core attribute of all such theories, is more fundamental than comparatively superficial disputes over which authority is legitimate. And it is why all authoritarian political theories, and the systems they give rise to, are irrational: all people are fallible, so no single person, and no group of people, is fit to serve as an ultimate authority.

The Problem with Education

Children enrolled in traditional educational systems spend most of their time striving to meet other people’s criteria for success, or suffering because they don’t meet those criteria, instead of pursuing their own interests and solving their own problems. They endure a staggering imposition on their time and attention, which distorts their priorities. It diverts them from creating, criticizing, improving, and striving to meet their own criteria for success; it steals their attention away from their own interests and problems, sabotaging their ability to solve those problems and defeating the ostensible purpose of their education: to prepare them for life as autonomous individuals. But it does reveal that traditional educational systems are optimized for another purpose, one that diminishes the joy of childhood, conflicts with the liberal values of our society, and undermines the long-term dynamism of our economy: instilling obedience.