A gene acquires a meaning only via its history of dynamic interaction with its niche. That niche consists not only of the physical environment outside the organism’s body, but also the body itself, any other genes in that body, and all of the other genes within the population of conspecifics—including any variants of the gene in question. All the interactions between the gene and its niche are physical events, yet they can be expressed purely in terms of information acting on other information—all the properties of a niche can always be rendered abstractly as a dynamic set of constraints that guide the evolution of the genes that it affects. Hence, the meaning of a specific gene (e.g., a gene encoding the instruction to “dilate both pupils as the sun sets”) depends intrinsically on that gene’s history of interaction with its niche, but not on the niche’s physical character. Indeed, recall that many components of the gene’s niche are abstractions: namely, all the other genes in the host organism and in the conspecific population. So, critical rationalism explains in exactly the same terms the meanings of genes and of human ideas. Both genes and ideas are abstract entities that acquire their distinctive meanings via their histories of dynamic interaction with their respective niches, which are corresponding sets of abstract constraints that affect the evolution of all the genes or ideas they interact with. What physical system instantiates the abstract constraints that constitute a niche, be it a regional biome on Earth or a skull-encased network of neurons, is immaterial for critical rationalism vis-à-vis how the information adapted to that system acquired its meaning.
If you were stranded in an ocean, the only way to survive would be to float. You would struggle to float—and if you lost that struggle you would sink to your death. And if you refused to float, for fear of sinking, you would surely sink and die. You must float to survive. This logic applies also to the analysis of human knowledge—for we float unsupported in an ocean of ignorance. We seek foundations for our knowledge, solid ground, to escape the terror of floating. But in regard to knowledge, there are no foundations. The only solid ground is the ocean floor. To live is to float. To refuse to float is to die.
To write well a writer must possess skills in logical analysis. Perplexing or ambiguous writing frequently results from a writer’s failure to notice that the most literal meanings or most plausible interpretations of his phrases or statements involve subtle contradictions. Although they may be implicit, these contradictions can easily confuse readers, especially—and ironically—those readers who are most discerning. Such contradictions may not be intrinsic to the message being conveyed; they may arise solely from the way that the message is written. So, good writing requires two levels of logical analysis: (1) the search for contradictions in the writing’s underlying message, and (2) the search for contradictions in the written words themselves.
From any two contradictory premises p and non-p, valid rules of deductive inference can be used to derive any conclusion one likes. Karl Popper, in his essay What is Dialectic? (1940), presented the logical proof of this fact in the course of criticizing Hegel’s “dialectical” account of historical progress. That proof is summarized below, wherein the symbol ∨ is to be read as “and/or”:
Rule of Inference 1: If the premise p is true then the statement p ∨ q must also be true
Rule of Inference 2: If the two premises non-p and p ∨ q are true then the conclusion q must also be true
Contradictory Assumption: There exist two contradictory premises which are both true, such as the following:
(a) Humans did evolve on Earth
(b) Humans did not evolve on Earth
From the two contradictory premises (a) and (b) one can use Rules 1 and 2 above to infer any statement. For example, from (a) and (b) one can validly infer the statement, Genghis Khan fathered one million children.
From the premise (a), one can use Rule 1 to infer the following statement:
(c) Humans did evolve on Earth ∨ Genghis Khan fathered one million children
Now, from the joint premises (b) and (c), one can use Rule 2 to infer the following statement:
(d) Genghis Khan fathered one million children
This proof shows that any theory involving a contradiction entails the truth of every statement, and thus contains no substantive content. Alongside this proof, Popper noted that, contra dialectic, contradictions fuel progress not when they are accepted, but rather if, and only if, they are avoided. For the law of noncontradiction underlies all criticism; and criticism, which is to say the highlighting of contradictions and the revision of theories that contain them, underlies all progress.
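The two inference rules, and the explosion they jointly license, can be checked mechanically with truth tables. Here is a minimal sketch in Python (the helper `entails` and the lambda encodings of the premises are illustrative conveniences, not part of Popper’s proof):

```python
from itertools import product

def entails(premises, conclusion):
    """Premises entail a conclusion iff every truth assignment that
    makes all the premises true also makes the conclusion true."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Rule 1 (addition): from p, infer p ∨ q
assert entails([lambda p, q: p], lambda p, q: p or q)

# Rule 2 (disjunctive syllogism): from non-p and p ∨ q, infer q
assert entails([lambda p, q: not p, lambda p, q: p or q], lambda p, q: q)

# Explosion: from the contradictory premises p and non-p, any q follows
# (vacuously, since no assignment makes both premises true)
assert entails([lambda p, q: p, lambda p, q: not p], lambda p, q: q)

print("all entailments hold")
```

Note that the explosion case holds vacuously: no row of the truth table satisfies both p and non-p, so the entailment is never falsified, which is precisely the logical sense in which a contradictory theory rules nothing out.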
Apologists for Islam sometimes respond to criticisms of regressive Islamic doctrines by arguing that such doctrines emerged in a context far removed from the modern world. They argue that these doctrines remain valid even if they are inapplicable to the modern world, because the very meanings of these doctrines are inextricably linked to the specific cultural and historical contexts from which they emerged and for which they were morally suited. If for example a critic of Islam emphasizes that An-Nisa, the fourth Quranic Surah, advocates for daughters to inherit half of what their brothers inherit—that is, if a man has one daughter and one son, then his daughter should inherit one-third of his holdings, whereas his son should inherit two-thirds—then an apologist for Islam may hypothetically argue along the following lines: “In seventh century Arabia, young daughters married older men, who supported them financially. Sons, conversely, supported themselves before marriage, and then had to support their younger wives, and eventually also their children. Thus when An-Nisa is understood properly in its cultural and historical context, its seemingly misogynistic prescriptions are shown to be not sexist—but fair. And indeed, these practices were much fairer than the practices they replaced, meaning that they represented progress, attesting to the veracity of Muhammad’s revelation.” But such arguments—which we might call contextual arguments—cannot defend Islam against its critics, for they contradict a core tenet of Islam: namely, that the Prophet Muhammad received from Allah a final, perfect, and timeless revelation, and hence that the norms stipulated by the Quran apply to human life in all contexts. Thus the moral import of the Quran, according to Islam, is not that it revealed to the Arabs of the seventh century how to live better lives, but instead that it revealed to all people for all time that they should live as Muhammad lived in seventh century Arabia.
To assess the moral status of the United States—as a culture, as a society, and as a set of institutions—one must compare the United States, using all moral criteria deemed relevant to the assessment, with the other cultures, societies, and sets of institutions that have existed across human history. It is therefore invalid to assess the moral status of the United States by study of American history alone. Yet the American subcultures that condemn the United States as morally corrupt often base their condemnation on narrow and tendentious studies of American history that neglect the history of the rest of the world. Ironically, this narrow concentration on the United States resembles the American Exceptionalism that those exact same subcultures often claim lies at the root of moral corruption in America.
The maxim “to err is human” expresses pithily the philosophical position known as fallibilism. For the everyday human being, however, to recognize that people are fallible does not provide him with any guidance on what to do when he himself makes a mistake. Indeed, fallibilism may even seem to engender nihilism, because if humans are doomed to err, then any efforts to perfect life or to achieve enlightenment must be doomed to fail. But the impossibility of perfecting life or of achieving enlightenment is not grounds for nihilism. Such aims seem attractive in the abstract because life is full of problems and attaining “perfection” or “enlightenment” implies attainment of a state in which all problems are solved. Yet such a state would be a hell for humans if humans could exist in it. (Arguably, humans could not exist in it, because the concept of error is implicit in certain attributes regarded as quintessentially human, which means that imagining humans existing in an error-free state is paradoxical.) Error, the principal concept of fallibilism, may have negative connotations, but it has distinctly positive philosophical implications. For it is our mistakes, our shortcomings, that confer on us the opportunity to create meaningful lives: to solve problems and to transcend our errors. A world without error is a world without joy. And to designate a joyless state “perfect” or “enlightened” is straightforwardly perverse. So, for a maxim that serves, not only as a reassurance in the face of error, but also as a guide toward a rational and joyful life in an error-filled world, we might augment the old maxim as follows: “to err, and to transcend errors, is human”.
People engaged in critical discussion often interpret a lack of reservations as a feeling of certainty. A man who expresses his theories and arguments clearly, without performative reservation, may be viewed as arrogant, or as believing that he “has it all figured out” and could not possibly be mistaken. But performative reservations fulfill no substantive function within a critical discussion of theories and arguments, though they may well obfuscate the discussion by shifting its focus away from the theories and arguments and onto the reservations themselves. So discussants, in order to improve their understanding of the theories and arguments being discussed, should express those theories and arguments as clearly and concisely as they can, without performative reservations that may obfuscate the discussion. The mistake of interpreting a lack of reservations as a feeling of certainty flows from the even deeper mistake of identifying one’s “sense of self” with one’s current theories—a notion which contradicts the familiar psychological fact that in general one’s sense of self persists, even during periods throughout which one’s explicit theories are frequently, and indeed radically, changing.
Any attempt to measure or quantify the human tendency to err must fail, because fallibility cannot exist in degrees. Fallibility is a binary concept: a fallible entity is limitlessly error prone, whereas an infallible entity can never err. Every attempt to measure the verisimilitude or credence or confidence of a theory is revealed, by this binary logic of fallibility, to be fundamentally misconceived. For such an attempt to succeed, it would need to establish a method not only for measuring error proneness in the sciences—the usual context for such attempts—but also for measuring the verisimilitude or the credence or the confidence of the method itself, which introduces an infinite regress. Ironically, these vain attempts merely testify—in both their explicit aims and their inability to achieve those aims—to the human fallibility that they implicitly deny.
The central purpose of therapy is to help patients create solutions to their problems in thinking. In certain cases, the therapist might achieve this aim by validating some of his patients’ feelings, for example to eliminate some hindrances to creative thought that prevent those patients from inventing viable solutions. Yet the therapist should not strive to validate his patients’ feelings as an aim in itself, because such validation will entrench emotional hindrances to creativity afflicting his patients. Entrenching such hindrances contravenes the central purpose of therapy and represents a fundamental failure of the therapist, who, through this blind offer of validation, will have violated the responsibility his patients vested in him.
If the world exists objectively then so do objective truths about it. And if objective truths about the world exist, then statements about the world can be objectively false, in which case the set of false statements includes any statement that denies either the existence of objective truth or the possibility of objectively false statements. “Postmodernist” philosophical theories therefore declare themselves to be false, either by implying that the world exists objectively while denying that statements about it can be objectively false—which is a contradiction—or by denying the existence of the objective world, which contradicts the existence of any objective things, including postmodernist philosophical theories themselves. Unfortunately, some postmodernist philosophical theories take one terminal step toward self-annihilation to endure these contradictions: they embrace contradiction.
People argue bitterly about transgenderism, yet no one truly understands the phenomenon and the prevailing theories about it are filled with errors. One of these errors may be called gender essentialism. Gender essentialism presumes that a person’s biological sex, which is to say the set of objective physical characteristics that determine whether that person’s body is male or female, corresponds to a complementary set of objective psychological characteristics that determine whether that person’s mind is that of a man or a woman. According to a gender essentialist, a trans person is a person whose physical sex and psychological gender conflict. Trans activists reject gender essentialism by arguing that “man” and “woman” are collections of stereotypes, not objective psychological categories. But then, immediately after explicitly rejecting gender essentialism, trans activists often implicitly embrace its spurious link between “sex” and “gender” by concluding that if psychological gender is a collection of stereotypes, then biological sex also must be a collection of stereotypes. So, when trans activists reject gender essentialism they reject the validity of, say, “woman” as a gender category; but in some cases this rejection leads them also to reject the concept of a “female”, which in turn leads them—as a way of avoiding that concept—to use manipulative phrases such as “people who menstruate”. In reality, our best theories of human biology imply that sex is an objective, physical fact about a human that can be explained only in terms of that human’s genes, whereas our best theories of the human mind imply that transgenderism is a psychological phenomenon that cannot be explained in genetic terms. So although biological sex categories are real, and are genetically determined, they have no psychological analogs to which they correspond or with which they can conflict. 
Given the infinite diversity and profound complexity of the human mind, none of its significant features, transgenderism included, can be understood in terms of fixed sex categories, never mind in terms of the psychological constructs known as “genders”, which are imagined either to correspond to those categories, or to transcend them merely by increasing their number.
Debate has intensified about how SARS-CoV-2 emerged, and some experts have cited the fact that most past pandemics emerged as natural zoonoses to argue that SARS-CoV-2 “most likely” emerged in that same way. This kind of argument replaces the search for a causal theory explaining a specific event with an invalid assertion about the event’s likelihood. It assumes that grouping the unexplained event into a category with past events that have been explained may shortcut the task of explaining the unexplained event. Specifically, it ignores the fact that details, such as the unique attributes of each specific pandemic pathogen, impose unique constraints on any viable explanation of each distinct origin event. So, the best existing explanations of how past pandemic pathogens originated in no way constitute evidence of how SARS-CoV-2 “most likely” emerged. Arguments of that form distract from and misrepresent the effort to explain the true origins of SARS-CoV-2.
The Internet’s initial impacts on civilization are still unfolding and remain poorly understood. Yet its two most immediate impacts are indisputable. Namely, it has dramatically increased both the connectivity of people and the ability of people to create and share information. The increase in new information has meant that “the public sphere” has both grown larger and become more filled with erroneous ideas than it ever was before, such that people now encounter many more erroneous ideas per day than they ever have in history. And yet people’s enhanced connectivity has simultaneously enabled much more criticism and error correction of the information in the public sphere than was previously possible, which explains why knowledge continues to grow in our society despite the increased rate at which we are producing erroneous ideas. This dynamic’s long-term cultural effects are of course unpredictable because they involve the growth of knowledge, which itself cannot be predicted. Still, one can reasonably speculate that, because this dynamic’s internal logic resembles the logic of knowledge creation (i.e., it consists of iterative commission and correction of errors), the Internet, in the long run, will accelerate rather than impede the progress of civilization. Specifically, it may one day become clear, perhaps only with hindsight, that the Internet’s main political consequence has been a global liberalization. In other words, people may become so psychologically accustomed to having their ideas criticized and refuted, and to criticizing and refuting the ideas of others, that they consistently oppose, at the level of their intuitions, all forms of authoritarian power.
Empiricism holds that scientific knowledge is based on data or evidence; justificationism claims that all knowledge consists of justified true beliefs. Together, these ideas lead to anthropocentric and reductionist portrayals of science that say we use sense data to justify our beliefs about the world. But scientific knowledge does not consist of evidence-based true beliefs. It consists instead of conjectural explanations, created by people attempting to understand the world. Scientific knowledge therefore explains our sense data, but our sense data never justify our scientific knowledge, regardless of whether, or with how much certitude, those explanations are “believed”.
If we take seriously the idea of human fallibility, then we must agree that even our best knowledge, about even the simplest of things, might in some senses be false, misleading, or inadequate. We should expect that, at any one time, our picture of objective reality misses far more than it captures, and that what it does capture it may capture in a deceptive or incomplete way. For this reason, truth, to the extent that it can be captured by human knowledge, can be captured only in the elimination of errors, and never in the establishment of absolute truths themselves. So, I presume that none of the theories or arguments posted on my blog ever capture final truths. I presume only that some of them may succeed in criticizing and improving upon existing theories about reality. I expect these criticisms themselves to be full of errors, which will require further criticism to detect and correct. With luck, I may even detect and correct some of these errors myself.
The assumption that all decent people agree about some issue, regardless of what that issue is, may very well lead to more indecency than any other idea. For it implies that we can distinguish decent from indecent people by checking their opinions on the issue, and that decent people can reference this distinction to justify acting indecently toward indecent people. Decent people will presumably acknowledge, however, that their opinions, just like everyone else’s, have changed over time: including their opinions regarding issues about which “all decent people agree”. So whenever a decent person uses other people’s opinions to judge them as either decent or indecent, as if decency and indecency are fixed personal attributes, he either condemns as “indecent” all people who have ever held a different view on the issue of concern—condemning himself—or else he contradicts the idea that decent people can acknowledge changes in their own views on that issue. To avoid self-condemnation while also resolving this contradiction, he should regard decency, to the extent that it is an objective moral concept, not as an attribute of people (who change constantly in self-contradictory ways, and who thus cannot meaningfully be described as “decent” or “indecent”), but instead as an attribute of certain kinds of ideas: namely, the meanings behind people’s behaviors. Such meanings, if they are interpreted in their original contexts, are timeless and objective, and can therefore be meaningfully described in terms of objective moral truth: a concept implicit in the judgments “decent” and “indecent”. This proposed usage avoids contradictions and suggests no right to condemn a person for holding a different opinion. It also preserves common speech about decent and indecent actions. Most importantly, this usage encourages us all to engage seriously with criticism of our own ideas no matter who it comes from.
If we think that some people are indecent and can therefore be ignored, then we may automatically refuse to engage with their criticisms of our ideas. Such refusal prevents corrections of our moral mistakes, which, given our fallibility, may be the most basic moral evil of all.
The effects of trauma may propagate via interpersonal psychology across several generations (e.g., when victims of childhood sexual abuse become the sexual abusers of the next generation’s children). Some experts claim, however, that the effects of trauma may propagate across generations via genetic inheritance. This claim revives a pre-Darwinian evolutionary theory, known as “Lamarckism” and oft-summarized as the inheritance of acquired characteristics, which was straightforwardly superseded by Darwinism and then neo-Darwinism. This biological theory of intergenerational trauma is, however, considerably weaker than Lamarckism. Lamarck’s original theory endorsed only the inheritance of acquired physical characteristics, whereas the biological theory of intergenerational trauma goes further in endorsing the inheritance of acquired psychological characteristics—a position that is susceptible to all the same critiques that undermine the beleaguered field of Darwinian evolutionary psychology: such as the critique that the human mind, in its creation of knowledge, routinely overrides those psychological predispositions that are encoded in the human genome. So, the biological theory of intergenerational trauma asserts the existence of two untenable kinds of genetic inheritance, both of which contradict good explanations in other fields (i.e., evolutionary biology and epistemology, respectively), and neither of which could function by any known biological mechanism. Thus, we should regard the biological theory of intergenerational trauma as false.
Everettian quantum theory describes physical reality as a vast multiverse, which is almost entirely imperceptible to us (“almost” because we perceive a famous but infinitesimal slice called the universe). Many people reject this theory, either because it asserts the existence of imperceptible entities, or because it seems to violate Occam’s razor, which advises us not to multiply beyond necessity the entities referred to by our theories. The first criticism is based on a false philosophy of science known as empiricism, which holds that observations are irreducible primitives in science, and that all scientific knowledge derives from experience. If empiricism were true, then humans would possess no knowledge, for example, of events that occurred before humans existed to make observations; thus, the fact that we possess such knowledge (e.g., of long-extinct species and of the formation of the moon) straightforwardly refutes empiricism. The second criticism, by invoking the principle of Occam’s razor, suffers from two major flaws that characterize the principle itself: first, Occam’s razor suggests the need for a criterion of “necessity”, yet Occam’s razor specifies no such criterion; second, Occam’s razor defines the parsimony of a theory in terms of how many entities the theory refers to, but parsimony is about the structure of the theory, not the structure of the world that the theory describes. A single rationalist maxim undermines both of these criticisms of Everettian quantum theory: namely, that all theories should be judged by their explanatory power. Compared to all rival explanations of quantum phenomena such as interference, Everettian quantum theory stands as the only realistic and parsimonious explanation. For as long as we lack any better explanation of quantum phenomena, the explainer must conclude—despite all protestations from common sense—that we live inside of a dizzyingly vast, mostly hidden, quantum multiverse.
The choice to marry can seem like an inherently irrational commitment, for to commit with real confidence might seem to require advance knowledge of the relationship’s future, which is unattainable. But the rational choice to marry, with full commitment and equanimity, requires no such knowledge. On the contrary, it requires only that the culture of the relationship fosters open-ended problem solving, such that the participants view each other as worthy partners in the tasks of living and derive joy from solving problems together. For no way of life can avoid problems, but certain ways of life can solve whatever problems arise. So both good and bad marriages are full of problems, but in a good marriage problems are constantly being solved.
Philosophy must distinguish sharply between absolute truth and absolute knowledge of that truth. Whereas the former concept serves as the crucial regulative principle for any rational worldview, the latter concept negates rationality itself. Any philosophy that denies the existence of absolute truth also denies the existence of error, and thereby repudiates any possibility of error correction: that is, of rational criticism and progress. And yet a similar problem arises for any philosophy that, in addition to guarding the concept of absolute truth, endorses the concept of absolute knowledge of absolute truth. For such a philosophy denies human fallibility, thereby denying again the possibility of human error and hence of error correction. So, it is critical that we regard absolute truth and absolute knowledge of absolute truth as fundamentally distinct concepts. Rationality can survive only if we accept one while rejecting the other, for to either accept or reject both concepts together would negate the concept of error correction, without which the higher-level concepts of reason and progress would become meaningless.
Claims and their logical implications exist objectively. So if a man asserts a claim, then that claim may have implications that he has not recognized. If another man recognizes and explicitly criticizes one such implication, then the asserting man might accuse the critic of attacking a straw man, noting that the critic has attacked a claim that the asserting man does not believe. But this accusation would itself constitute a straw man argument. For the critic’s interrogation focuses not on the mental state of the asserting man, but rather on the truth or falsity of the asserting man’s claim. The asserting man, in failing to recognize one of his claim’s logical implications, has in no way diminished the critic’s ability to refute the claim via a refutation of that implication. If the critic has derived from the asserting man’s claim a logical implication—which the asserting man has not recognized and so could not possibly believe—and shown that implication to be false, then the critic will have refuted the asserting man’s claim without ever mentioning it. The fact that the critic has applied these rules of logic in engaging with the asserting man’s claim, whereas the asserting man himself has not, indicates that the critic, far from constructing a straw man of the asserting man’s claim, has in fact taken the claim more seriously than the asserting man has himself. The critic, in other words, has done the exact opposite of attacking a straw man. And ironically, the accusation that the critic attacked a straw man, by suddenly shifting the conversation away from regulative principles of truth and falsity, toward the question of who believes what, constitutes the only straw man argument in the entire exchange.
Empiricism, the idea that human knowledge derives from human sensory experiences, cannot be true. Whatever you are observing in the world, that thing could have been perceived in an infinite number of ways. Your actual perception of that thing, however, is unique and particular. Furthermore, everything about your perception that makes it unique and particular (i.e., everything that distinguishes your perspective from any other perspective that could be taken toward that thing) constitutes an assumption, a theory, about how that thing is to be perceived. All observation is therefore, as Karl Popper put it, theory laden. Thus we must acknowledge, contra empiricism, that ideas precede experience, and that, as David Deutsch has remarked in building on Popper’s insight, all we have is interpreted experience. This fact refutes the empiricist concept of “pure” or “direct” experiences, as well as the corollary doctrine that our ideas—which include our knowledge—can in any epistemologically relevant sense be derived from those experiences.
Americans generally denounce the institutionalized racial discrimination of American history. Yet many of these same Americans also argue that racial discrimination should be re-institutionalized against the racial group of the past discriminators. This argument denies the objective character of moral truth, either by simultaneously denouncing and defending institutionalized racial discrimination, or by holding that an immoral action may be morally justified if it is committed in the service of a future moral ideal. To remain self-consistent, a moral theory that criticizes the institutionalized racial discrimination of American history must also oppose the re-institutionalization of racial discrimination in modern America. If moral truths are indeed objective, then an evil action committed in the past remains evil if committed today, and similarly an evil action remains evil if he who commits it intends, albeit under the influence of a false moral theory, to create a better society. If, however, moral truths are not objective, then in what sense are Americans right to denounce the institutionalized racial discrimination of American history?
People who conflate words with violence deny this crucial fact: the ability of words to cause harm depends on the target’s psychological reaction. If you are verbally attacked, or if your feelings are hurt by a wanton remark, then your psychological resilience may mitigate the harm done to you. By contrast, if a careening fist, a thrusting knife, or a ripping bullet hits you, it harms you, no matter how you feel about being punched, stabbed, or shot.
Human knowledge has been accumulating for hundreds of millennia and has radically transformed our physical environment. And yet the growth of knowledge is itself a purely abstract phenomenon, which consists entirely of ideas interacting with other ideas in a mysterious process that we know involves some form of evolution. A thinking mind is a dynamic environment in which the units of evolutionary selection and the selecting entities are all ideas. Ideas mutate and then differentially “survive” based on the selection pressures that other ideas in that same environment impose. How well any particular mind accumulates objective knowledge depends on how well the ideas in that mind impose upon each other the rational selection pressure of truth-seeking criticism. That is, if mutating ideas in a mind are differentially selected as a function of how well they survive truth-seeking criticism, then they can evolve to become more true. The need for mutation arises when ideas in the mind conflict, because such a conflict can be resolved only by creatively mutating the ideas and then applying truth-seeking criticism to the variants. If the criticism is truth-seeking then it will at each stage select the variants that most successfully resolve the existing conflict. Continuous repetition of this process constitutes a form of evolution in which all of the ideas involved—the conflicting ideas, the mutating ideas, and the criticizing ideas—can improve, in the sense that their objective contents and structure can become truer representations of objective reality. The philosopher Karl Popper called this process problem solving, and characterized “science” as a special case of this process, in which percepts give rise to new conflicts and criticisms, enabling us to accumulate knowledge about the physical world.
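The mutate-criticize-select loop described above can be caricatured in code. In this deliberately crude sketch (a toy analogy of my own, not the essay’s claim: “ideas” are numeric guesses at a hidden truth, “criticism” scores each guess by its error, and the names `criticize`, `evolve`, and the constant `TRUTH` are all illustrative assumptions), each round generates variants of the current idea and keeps whichever variant best withstands criticism:

```python
import random

# Toy stand-in for an objective truth that the "ideas" approximate
TRUTH = 137.0

def criticize(idea):
    """Truth-seeking criticism: how badly the idea misses the truth."""
    return abs(idea - TRUTH)

def evolve(idea, rounds=1000, seed=0):
    """Iterated variation and selection: mutate the current idea,
    then keep the variant that best survives criticism."""
    rng = random.Random(seed)
    for _ in range(rounds):
        variants = [idea + rng.gauss(0, 1.0) for _ in range(10)]
        idea = min(variants + [idea], key=criticize)
    return idea

improved = evolve(0.0)
print(criticize(improved) < criticize(0.0))  # the error has shrunk
```

The sketch captures only the bare logic of the process, iterative commission and correction of errors, and none of its substance: real ideas are explanatory structures, not numbers, and real criticism is itself conjectural rather than a fixed scoring function.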
Medical science pours its energy into developing biomarkers and drugs that enable it to predict and control patient outcomes. It spends relatively little energy, however, on inventing theories that can explain the biological processes that underlie human health and disease. The current field relies heavily on data collection and on the use of statistical modeling to identify correlations in data, which can be used to predict the likelihoods of known disease trajectories. A more rational science of human health and disease, however, would rely entirely on its explanatory theories to make exact (i.e., non-probabilistic) predictions. It would spend most of its energy dreaming up imaginative theories, and then ruthlessly applying rational criticism to those theories so that it could identify flaws in them and develop them into better theories. That kind of medical science, although it would give rise to many false theories, would also give rise to astounding progress—the kind of progress that today’s mainstream medical scientists can scarcely dream of achieving. The invention of efficacious drugs would proceed, not as the dominant activity of medical science per se, but as an ancillary engineering task to be undertaken rapidly in the light of rich, deep explanatory theories of how the body works or fails to work. Discovering rich, deep explanations of human health and disease is of course difficult: but it’s possible. And yet established medical science, by spending its resources on generating data, observing correlations, and (probabilistically) predicting patient outcomes, has largely failed to achieve the kinds of explanatory breakthroughs that it needs to make awesome and dramatic progress.
If the field shifts its focus, however, away from prediction toward explanation (e.g., by paying medical scientists to sit around and develop speculative theories, which will in most cases be false), then it could make that kind of progress, and not only in its immediate goal of understanding human health and disease, but also, as a consequence of that immediate goal, in its instrumental tasks of predicting and controlling patient outcomes. Indeed, by adopting this rational attitude of seeking good explanations to better understand the problems of human disease, medical science could expect, in the long run, to solve death itself.
Misconceptions about epistemology hamper progress in Western culture. Subjectivism replaces the idea of objective truth with ideas like “personal truth”, regards “lived experience” as superior to objective knowledge, and tends to frame disagreements as power struggles among identity groups. Justificationism, by contrast, acknowledges the existence of objective truth, but frames disagreements as clashes among sources of knowledge, which differ in their reliability. Purportedly “reliable sources” of knowledge might include experts (e.g., “trust the science”), women (e.g., “believe all women”), politicians (e.g., “only I can fix it”), or Twitter executives (e.g., “some or all of the information shared in this Tweet is disputed and may be misleading”). Bayesianism analyzes theories in terms of their “probabilities of being true” given some “evidence base”, which misconstrues the relationship between theories and evidence while substituting probability for truth and falsity. Subjectivism, justificationism, and Bayesianism appear in many contexts, including politics, media, business, academia, science, and medicine—in each context, these misconceptions hamper our ability to make progress. Progress entails that we resolve disagreements by inventing explanations, and by then criticizing those explanations to eliminate errors within them. Our culture can overturn subjectivism, justificationism, and Bayesianism by recognizing the following three insights: (1) we can make genuine progress in pursuing objective truth; (2) we can “dispute” any theory on the grounds that it “may be misleading”; (3) we cannot meaningfully describe a theory as “probably true”. With these insights in mind, we should stop framing disagreements as power struggles among identity groups, competitions among sources of knowledge, or attributions of probability based on data.
We should frame disagreements instead as competitions among theories about objective reality, where the purpose of the competition is to correct misconceptions in our existing theories. In this framing, disagreements are opportunities for rational thought—for problem solving and progress.
Programs to achieve equality of outcome tacitly deny the uniqueness of every human being. This denial often appears nowadays in the form of spurious moral arguments that our society must achieve “equity”. These arguments amount in practice to justifications for the various kinds of unjust and suppressive measures that would be necessary to enforce strict uniformity among unfathomably diverse human beings. The people who make these arguments, even those who do not recognize themselves as authoritarians, advocate for top-down authoritarian policies that target certain groups for the express purpose of advantaging other groups. These fighters for equity systematically overlook the crucial irony that their coercive, discriminatory approach to social policy resembles those old discriminatory practices that they claim to be at war against. If institutionalized discrimination is ever to be permanently abolished, we must face a somewhat uncomfortable fact: to accept human uniqueness is to accept unequal outcomes in life, and to reject the latter is to deny the former. People who appreciate the radical diversity of human beings are compelled to become moral individualists and are called on to criticize the spurious moral arguments that assert the need for “equity” in society. In countering equity, though, one should never assume that human diversity justifies all inequalities of outcome in society, for certain inequalities of outcome may indeed result from institutionalized discrimination. One should also never lose sight of the fact that those who fight misguidedly for equity are themselves profoundly unique individuals who are worthy of sincere moral concern.
Reading line after line of words on pages is monotonous. But a simple psychological shift, which admittedly requires an initial effort, can relieve the monotony and transform reading into a gripping experience. The trick is to wire an automatic relay from the written words to the imagination, so that the imagination immediately renders imagery evoked by the words. The reader then, to draw an analogy, experiences a written work as the TV watcher experiences a TV. The TV watcher needs a TV to watch a series. Yet the act of watching the series makes the TV seem to disappear. Which is to say that the TV watcher experiences the series without experiencing the TV. Similarly, the reader—if his mind is constantly rendering imagery evoked by the words being read—will experience the written work without experiencing the words, at which point his reading can become amazingly engrossing and entertaining.
The philosophical doctrine known as physicalism attempts to explain the world without any reference to abstract entities such as minds, thoughts, or choices. Physicalist explanations can refer only to physical phenomena, so physicalists deny the existence of “downward causation”, or the ability of thoughts to exert causal influence on the brain or body. Yet we have all experienced the tensions in our bodies that we call the stress response, and it is a familiar psychological fact that these sensations often co-occur with spirals of negative thought patterns. The physicalist could argue that this co-occurrence is a mere correlation, which is susceptible to a purely physical explanation. He might argue, for instance, that all our negative thoughts are mere epiphenomena. Or he might argue that our negative thoughts actually result from—rather than cause—the physical stress response. And although it may appear that such physicalist accounts explain the correlation between tensions in our bodies and negative thought patterns, no physicalist account can possibly explain why the contents of particular thought patterns have a negative valence for the thinker. And because one cannot describe the correlation itself without referring to the negativity of the thought patterns that co-occur with the stress response, a failure to explain the negativity of the thought pattern cannot satisfactorily explain the correlation. That is, it is only by admitting downward causation that we can explain both the negative valence of the thought patterns and the co-occurrence of these thought patterns with the bodily sensations that characterize the stress response.
Society should abolish age-based voter eligibility criteria, which are arbitrarily discriminatory. Voting rights comprise the formal mechanism by which citizens freely criticize, and remove from power, their governments. To prevent a subpopulation of citizens from exercising those rights, based on a morally irrelevant age-based criterion, is straightforwardly to oppress that subpopulation. All that should be required for a person to vote is for that person, regardless of his age, to willfully write or dictate a statement before the law proclaiming his desire to do so. This measure would open the vote to all desiring citizens while oppressing no one. It would increase the diversity and the creativity of the criticisms behind the votes cast within our society, and it would make society more free, both of which could help to make our society more rational.
The brain’s left and right hemispheres differ dramatically in functionality. All these differences could have evolved from a single genetic mutation that affected the functioning of only one hemisphere. Evolutionary adaptations occur at the level of individual genes, and thus the environment in which a gene variant “lives or dies” includes the host genome in which that gene sits, as well as the physical “body” constructed by that host genome. And because the survival of every gene within a host genome depends on the survival of its one host organism, all the genes in that genome live or die together (ignoring random gene transmission during sex). If, in the distant past, a genetic mutation occurred that affected the functioning of only one hemisphere, then it might have caused a spiral of symbiotic co-evolution to emerge between hemisphere-specific gene variants. That is, the genes that coded for left-hemisphere attributes might have exerted selection pressures on genes that coded for right-hemisphere attributes, and vice versa, eventually producing two highly divergent structures, each of which was adapted to function in the environment created by the other.
A rationalist judges theories solely by their contents, not by their sources. But because contents are inseparable from meaning, he immediately faces the question of how to determine what a particular theory means. A theory can have an objective meaning only in a specific context. This dependence on context might seem to undermine the idea that a theory’s meaning can be objective. But the alternative ideas, that a theory either cannot have an objective meaning, or can preserve the same objective meaning across all contexts, are untenable. A theory exists only to solve a problem. Hence, a theory without its problem loses its meaning. Yet, considered together, a theory and its problem (i.e., its contents, given its context) are meaningful in their objective logical and explanatory relations to each other. And only creative and critical investigation of these objective relations—which is to say, only rational thought—can determine whether a theory solves its problem.
A servant of an institution may in certain situations be compelled, by his loyalty, to resign. If, for example, he feels pressure to publicly lie on behalf of the institution, then his only way to safeguard the institution’s legitimacy might be to resign and explain that decision publicly. Lying on behalf of the institution would damage its legitimacy. But seeing that an insider is willing to resign, and publicly explain why, might reinforce the public’s impression that some elements within the institution maintain their capacity for self-criticism, which is crucial if “the people” are to simultaneously accept the institution’s legitimacy and acknowledge its most calamitous mistakes.
Galileo, in a 1613 letter to his protégé Benedetto Castelli, which in 1615 he expanded into a letter addressed to Grand Duchess Christina of Tuscany, advocated for Copernicanism, refuted theological arguments against it, and defended himself against charges of heresy. In the expanded letter, Galileo asserted outright his view that “the sun [is] situated motionless in the center of the revolution of the celestial orbs while the earth revolves about the sun”. Himself a Catholic, Galileo argued that his critics in the clergy had misconstrued the relationship between holy scripture and the physical world. He agreed that “the holy Bible can never speak untruth—whenever its true meaning is understood”. But he argued that its meaning “is often very abstruse, and may say things which are quite different from what its bare words signify”. Galileo, in making this argument, sought to establish that it was his critics’ interpretation of the Bible that contradicted Copernicanism, not the Bible itself. Galileo held that the Bible was written in metaphors, to teach humans how to live. He even pointed out that literal interpretations of certain Biblical passages (e.g., those attributing to God emotions such as jealousy, anger, or hatred) would themselves constitute heresies. Galileo noted that, although “the holy Bible and the phenomena of nature” both “proceed alike from the divine Word” and thus “cannot contradict each other”, the physical world, unlike the holy Bible, “never transgresses the laws imposed upon her, or cares a whit whether her abstruse…methods of operation are understandable to men”. He argued that humanity should not use its interpretations of scripture to criticize its knowledge of physics. On the contrary, humanity should use its knowledge of physics “as the most appropriate aid in the true exposition of the Bible”. 
Galileo thus defended his piety while simultaneously offering an argument that not only refuted the basis for his critics’ attacks, but also proposed a powerful framework for his critics to use in correcting what were, from his perspective, their false interpretations of “the holy Bible”.
It is a fallacy to assume that, if a specious moral argument is advanced to justify some practice, then any other moral argument that would justify that same practice must also be specious. Consider, for example, the following argument: Social Darwinism has been used to justify war; Social Darwinism is specious, and thus so is its justification for war; therefore, war cannot be morally justified. The final statement betrays the fallacy, because the fact that war is not morally justified by Social Darwinism does not imply that it cannot be morally justified by some other argument. This fallacy can impede moral progress, because if we assume that a certain practice must be immoral because a specious moral argument was previously used to justify it, then we will automatically reject new arguments that seem to justify that same practice, without ever taking seriously the contents of those arguments. Yet it is only by taking seriously and critically assessing the contents of such arguments that we can rationally assess their validity, and recognize whether they pave the way toward moral progress.
Physicalists realized decades ago that even a complete understanding of our physical brains could not by itself explain how subjective experience arises—a conundrum called “the hard problem of consciousness”. The hard problem arises because physicalists assume that good explanations are always reductionist in form—that they account for a phenomenon in terms of its subcomponents. And because they are physicalists, they also assume that these subcomponents must themselves be physical entities (e.g., neurons or neurotransmitters). If, however, we transcend these two physicalist assumptions (i.e., by acknowledging the possibility of explaining consciousness purely in terms of abstract entities, such as the information being processed by the brain, rather than in terms of the brain’s physical subcomponents) then the hard problem of an explanatory gap between physical brains and abstract consciousnesses disappears.
Marxism asserts that human history unfolds according to iron laws, which alone determine the future and which human ideas or actions cannot alter. Ironically, however, Marxism, a human idea, has profoundly influenced the unfolding of history, and in ways that Marx’s historical laws did not predict. Marxism thereby, as a historical phenomenon, refutes Marxism as a theory of deterministic historical laws that cannot be altered by human ideas.
Genes compete with variants of themselves to dominate specific locations in the genome of their species. An organism’s overall physical structure, its “body”, provides a local environment to promote the successive replication of the gene variants that code for the attributes of that very structure. The variants that outcompete their rivals do so precisely because they code for attributes of their host structures that promote the replication of those very variants better than the attributes that other variants code for promote the replication of those other variants.
Glass fulfills an essential function in good architecture: the creation of windows. It should not, however, be expected to function as a primary building material. Glass buildings add little more to the visual appeal of an urban landscape than would a large, dully mirrored or half-translucent three-dimensional surface. Architects who design glass buildings inhabit a tiny world of aesthetic choices compared to the vast universe of aesthetic choices explored by architects who design buildings of other materials that can be transformed into a prodigious range of pleasing shapes, colors, and textures, and that can be adorned with stucco for detail or decoration.
A typical person, from his own first-person perspective, regards his own sense of “identity” as self-evident. He fails to recognize that underneath his immediate sense of identity lurks a complex slew of ideas or theories. In taking his own sense of identity for granted as a unified, intrinsic aspect of his conscious experience, he fails to comprehend anomalous psychological phenomena such as Dissociative Identity Disorder (DID), in which a person experiences life through varied perspectives and “switches” among distinct identities. DID is no great mystery, however, to a person who recognizes that his own seemingly immediate sense of identity—far from being self-evident or intrinsic to his conscious experience—is in reality an emergent consequence of diverse, malleable, self-focused theories operating within his mind. Recognizing that experiences of identity are theoretical in nature, we can understand that a man experiencing DID is like any other person, except that his theories of self are varied and nonintegrated. Indeed, the vast majority of theories operating within a “psychologically normal” person’s mind are also varied and nonintegrated, indicating, ironically, that it is the widely held sense that our identities are self-evident, unified, and intrinsic to our conscious experience that is the psychological anomaly.
As Karl Popper argued in his book, The Open Society and Its Enemies (1945), we should reserve the right to defend, with violence, those institutions that promote peaceful error correction against assailants who would use force or violence to destroy those very institutions. Consequently, those people who, at this very moment (at approximately 3:30pm Eastern Time, January 6, 2021), are mobbing the U.S. Capitol building, have made themselves the legitimate targets of justified state violence.
Why is Seinfeld immeasurably funnier than Scrubs? Because Seinfeld writers cared about funniness and funniness alone. They never sacrificed or even deemphasized funniness to make a point about life, as Scrubs writers did routinely. Scrubs writers attempted to be funny and to deliver sentimental messages about life, touching on themes such as love, death, and regret. Although nobody can yet explain precisely what “makes” something funny, inexplicit knowledge of humor is widespread—it is amply created and expressed in the form of comedy. We should appreciate that humor and therefore also comedy, like other forms of knowledge, have an intrinsic value, and that those who creatively generate such value are engaged in genuinely aesthetic pursuits—which can result, objectively, in success or failure. But if a humorist inserts a “moral” (i.e., a lesson or message about life) into a comedic work, then his creative choices have strayed away from humor-related criteria, and he has conditioned the work, in part, by criteria that serve no comedic or aesthetic function. This act—which is akin to a scientist knowingly proposing a partially untrue theory that compensates for its lack of truthfulness with a corresponding amount of usefulness—degrades the comedic work and diminishes its aesthetic qualities. To that extent, Scrubs is not a good comedy. Its writers honed their work in part by criteria that were unrelated to humor, thus creating a not-very-funny show.
Free will and determinism seem to conflict with each other. But the apparent conflict disappears when we understand that determinism and free will simply describe the world from radically different perspectives and at fundamentally different levels. Free will makes sense only within the context of the physical world, whereas determinism makes sense only from a perspective that is outside the physical world. Consider the determinist statement, “The future exists and has always existed”. It seems like a contradiction in terms, but only because our language forces us to express the idea misleadingly in terms of the past and future. If we assign special meanings to the temporal words in the statement—namely, if by the future we mean “objectively real events that from the perspective of our present have not yet happened”; and if by always we mean “transcending time itself” rather than the usual “existing across all time”—then the contradiction resolves. Assigning these special meanings allows us to express determinism as atemporal and objective: as a description of a physical reality of which time is an attribute. Conversely, free will, which is by far the more intuitive concept, is needed to explain certain kinds of events (i.e., choices) that occur within time, and thus within the physical world that determinism describes from the outside. Determinism and free will are compatible. We really do make choices. It’s just that, from an atemporal determinist perspective, these choices have “always” existed.
Every so often, jihadists attack a target in the West; and each time, westerners debate the issue. Sadly, these debates seem to repeat as mindlessly as the attacks they are about. In debating whether jihadist violence is connected to Islamic teachings, people argue about what Islamic sources “really” say—with each side battling to legitimize its own interpretation of Islamic doctrine as the “authentic” one. But this exercise entirely misses the crux of the issue, which is about how jihadists interpret Islamic doctrine. The debate about jihadism, in essence, is about how to explain acts of jihadist violence. And because all acts of jihadist violence result from choices made by jihadists, asking what causes this violence is equivalent to asking what causes those choices. In other words, it is primarily a psychological question, not a theological question. This psychological question is not particularly complicated to answer, and the answer—which, of course, implicates various aspects of Islamic theology, such as the key tenet that Muhammad’s life represents a perfect model of human conduct—straightforwardly explains acts of jihadist violence. But as long as we insist on debating the empty question of what Islamic doctrine “really” says, this understanding will continue to elude us.
A writer should cultivate a clear and simple style. Writers who attempt to dazzle with impressive-sounding words or complicated sentence structures disrespect their readers and obfuscate the contents of their writing. They make people wonder, justifiably, whether their affected style might serve to conceal the poverty of their ideas and the invalidity of their arguments. Writers who are worth reading can attract your attention and hold your interest by the sheer force of their insight: expressed powerfully in clear and simple language.
A recent philosophical discovery has invalidated all theories that invoke the supernatural. People for millennia have proposed supernatural theories to explain myriad aspects of the world around them, but then David Deutsch discovered the philosophical principle of good explanation. This principle asserts that a good explanation is “hard to vary while still accounting for what it purports to account for”. That is, it explains something in such a way that each of its details plays a functional role in the explanation, and consequently changing any of its details would spoil its explanatory power. If an explanation can be varied without diminishing its explanatory power, then it is a “bad” explanation—because it is not in fact a single explanation, but is instead a set of mutually exclusive variant explanations, which cannot be distinguished from each other using rational criteria. Deutsch’s principle of good explanation invalidates all theories that invoke the supernatural, because the details of such explanations are unconstrained by nature, and so are infinitely variable.
In America, certain political pundits spend much of their time waging “culture wars” while grumbling that they must do so. The real substance of politics, they suggest, is policy—and fights over cultural issues distract from these more substantive policy debates. But regrettably, they imply, this distraction is necessary because “the other side’s” cultural pathologies are so potentially dangerous, and need so urgently to be neutralized, that policy debates (i.e., “normal” discussions) must be temporarily sidelined to overcome the immediate cultural threat. But this argument undermines its own premise. For if the real substance of politics were policy, then policy debates could proceed autonomously, regardless of any pugilistic cultural disputes that may occur in parallel. The fact that commentators sideline policy debates to settle “culture wars” implies that culture has profound political significance: more significance, in fact, than any particular policy debate. And it does. Policy debates deal with specific problems and ideas proposed to solve those problems, whereas the outcome of a culture war can affect how our political system handles ideas in general. To that extent, culture wars affect American politics more profoundly than any particular policy debate.
Karl Popper’s “criterion of demarcation” classifies a theory as scientific only if that theory could conceivably be contradicted (“falsified”) by a logically possible observation. Intellectuals nowadays commonly deride theories as “unfalsifiable”, brandishing the term as a pejorative. But falsifiability was never intended as a criterion of value or meaning. It was merely a technical distinction between different types of theories that address different kinds of questions and that, as a result, are vulnerable to different modes of criticism. In particular, it distinguishes one class of theories from all others: universal laws of physics. A theory in this class has a special attribute: it can be logically contradicted by the observation of a single event that it says could never occur. Most good theories—including the theory of falsifiability itself—do not fit into this class. But we can still assess unfalsifiable theories rationally, using modes of criticism besides observation and experiment. And many such modes are available. For example, we can assess such theories using criteria of explanatory power, logical coherence, and consistency with other theories. But to assess conjectured universal laws of physics, we can also attempt to devise a situation (i.e., an experiment) in which we observe a physical event that the theory forbids. If we observe that event, then we will have discovered that the world does not conform to the theory. So, scientific theories are “demarcated” from other kinds of theories in one respect, which falsifiability is just a way of expressing: namely, they are subject to every mode of criticism that other theories are subject to plus observation and experiment. And this added mode of criticism can accelerate the growth of knowledge in science as compared to other fields because more criticism promotes more error correction, which is what fuels the growth of all knowledge, scientific or not.