Wednesday, May 30, 2018

The role of intuitions in conspiracy theorizing


In developing a conspiracy theory, a common method is to find apparent inconsistencies between the “official story” and how the world works. Take the Kennedy assassination. A wide range of evidence (e.g. autopsy photos, forensic recreations, expert testimony) indicates that a single bullet, passing through the bodies of both JFK and Governor Connally, caused seven wounds (Bugliosi 2007; McAdams 2011). In reviewing the evidence, conspiracy theorists conclude that the events involving this “magic bullet” couldn’t have happened. While there are typically arguments and “evidence” offered (e.g. the long-debunked misrepresentations of the bullet’s trajectory), the origin of their skepticism likely lies in their initial beliefs or intuitions about ballistics and human anatomy. Intuitively, it may seem unlikely that one bullet could cause so much damage. Likewise, Kennedy’s head movement after the third shot (back and to the left) seems inconsistent with a shot from behind, where Lee Harvey Oswald was stationed. But JFK conspiracy theorists take their intuitions a few steps further by concluding that the facts about the gun wounds undermine the single shooter theory and strongly support the multiple gunmen theory.

In the face of contradictory physical evidence and expert testimony, conspiracy theorists tend to stick to their intuitions and infer that all of the evidence supporting the “official story” must be fabricated or mistaken. The conclusions of expert panels, forensic recreations, sophisticated computer simulations, and peer-reviewed scientific articles are often discounted out of hand. Intuitions about how the world works are often given more weight than the science.

Experiments by Anatomical Surrogates Technology provide support for the single bullet theory. (Watch the video to hear analysis from the ballistics experts consulted.) (1)

For a more recent example, consider the Las Vegas mass shooting. Is it possible that the mass murderer, Stephen Paddock, broke through the windows using a small sledgehammer, as reported by the police? Conspiracy theorists say “No”. Once again, the reasoning goes something like this: it seems unlikely or impossible that a hammer could break out the windows of the hotel room; therefore, Paddock couldn’t have done so.

In the case of the Vegas mass shooting, there is much more speculation than science. What kind of windows does the Mandalay Bay have? Can a small sledgehammer, by itself, smash through the windows that were installed? Online, there are lots of assertions made in answer to these questions, with little to no evidence offered. But by looking at the photographic evidence and considering the eyewitness testimony of glass shattering, it is reasonable to infer, as the LVMPD did, that the glass was shattered by Paddock using the hammer found in the room and/or rifle fire. Additionally, the photographic evidence and eyewitness testimony appear to undermine the internet rumors that hurricane-resistant or shatterproof windows were installed (2).

Image source: Gregory Bull/Associated Press


What the JFK conspiracy theorist and the Vegas shooting conspiracy theorist have in common is that they rely upon an argument from intuition. Their beliefs about how bullets or hammers work determine the conclusions they draw and the hypotheses they take seriously. The argument is not unique to JFK or the Vegas shooting; it is used as a basis for most conspiracy theories. The argument can be stated much more generally.

The general argument from intuition
It seems as if E is unlikely or impossible.
Therefore, E probably didn’t happen.
Application 1: JFK multiple gunmen theories
It seems unlikely that one bullet can cause seven wounds.
Therefore, the single bullet theory is probably false.
Application 2: Vegas shooting conspiracy theories
It seems unlikely that Paddock broke out the windows with a hammer.
Therefore, Paddock probably didn’t carry out the shootings (alone).
Application 3: 9/11 controlled demolition theories
It seems unlikely that a building can collapse from fire.
Therefore, WTC 7 probably didn’t collapse from fire.
Application 4: Moon landing hoax conspiracy theories
It seems unlikely that we had the technological capabilities to go to the moon.
Therefore, we didn’t go to the moon.

Given how often the argument is used to support belief in conspiracy theories, a lot hangs on whether this form of argument is any good. Unfortunately for the conspiracy theorists, the argument is demonstrably unsound. As it turns out, it is a variation of a textbook logical fallacy: the argument from personal incredulity. Just because you cannot imagine how something happened doesn’t mean that it didn’t happen.

Why is the argument unsound? First, one can be mistaken about the likelihood or possibility of a given event, especially in the domain of physics. The intuitions of experts carry much more weight, as they possess the relevant background knowledge to judge whether an event is likely or possible. Laypeople often do not have the relevant background knowledge, relying mostly upon internet rumors and their own relatively uninformed speculation. When it comes to assessing the likelihood of an event, the right questions to ask would be:

- What do most of the relevant experts think?
- Are there any experimental data or quantitative analyses that inform us about the event’s likelihood?
- Have similar events happened in the past?

Second, the unlikeliness of an event is not, in itself, a good reason to doubt that the event occurred. After all, unlikely events happen all of the time. To form reliable judgments about the likelihood of an event, one must also consider the totality of the evidence and the plausibility of the alternative hypotheses. One ought to prefer the explanation that accounts for all of the facts, rather than just some of them. If the totality of evidence suggests that an unlikely event occurred, then an unlikely event probably occurred. In forming likelihood judgments, conspiracy theorists often fail to realize that their alternative explanations for what happened rely on a number of highly questionable (if not demonstrably false) assumptions, and that their hypotheses (which typically require hundreds of people to lie and fabricate evidence) are much less likely than the widely accepted view.
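To make this concrete, here is a minimal Bayesian sketch (in Python, with made-up numbers purely for illustration): even if intuition assigns an event a low prior probability, a few independent lines of supporting evidence can make that event highly probable.

# Toy Bayesian update: an event E with a low prior can still be highly
# probable once independent lines of evidence are taken into account.
# All numbers below are illustrative assumptions, not real estimates.

prior = 0.05  # intuition says E ("one bullet, seven wounds") is unlikely

# Suppose each of three independent lines of evidence (autopsy photos,
# forensic recreation, expert testimony) is 10x more probable if E
# happened than if it didn't (a likelihood ratio of 10).
likelihood_ratios = [10.0, 10.0, 10.0]

odds = prior / (1 - prior)  # convert prior probability to odds
for lr in likelihood_ratios:
    odds *= lr              # Bayes' rule in odds form

posterior = odds / (1 + odds)
print(f"posterior probability of E: {posterior:.3f}")  # ~0.981

The lesson of the sketch is that the intuitive prior is only one input; the posterior is dominated by the evidence, which is precisely what the argument from intuition ignores.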

The main problem with relying upon the argument from intuition is that you might begin theorizing with false assumptions. Instead of revising their hypotheses in light of new evidence, conspiracy theorists will likely cling to their original intuitions and the factoids (3) that they have found to support them. For example, in response to up-close photos of the broken windows in Paddock’s hotel room, some conspiracy theorists now claim that the photos of the window have been altered or fabricated (part of the coverup). Likewise, in response to the newly released footage of Paddock transporting his luggage to his hotel room, some conspiracy theorists--who previously claimed that it was impossible to transport so many guns into the hotel room--assert that the Mandalay Bay security footage provided to the New York Times and other media outlets is all fake. 

Conspiracy theorists have an easy way to dismiss criticism and evidence that contradicts their strongly held beliefs: assert, without evidence or argument, that it’s all rubbish. The psychological appeal of this tactic is easy to understand. To engage in conspiracy theorizing, you don’t need to have any qualifications or do much research (outside of watching YouTube videos). In responding to critics, conspiracy theorists can always say that the evidence for their theory has been successfully covered up (an unfalsifiable claim), that all the evidence that conflicts with their theory is fake, or that everyone is lying. You can be “in the know” simply by relying upon your own intuitive judgments and following others who are like-minded, without the need to reflect upon whether those judgments are correct.

Like hardcore religious believers, conspiracy theorists have a set of core beliefs that they treat as immune to refutation. Their core beliefs consist of intuitions about what is and isn't physically possible, and those who do not share their intuitions are labeled morons or shills. Of course, not all conspiracy theorists engage in this kind of rhetoric, but I've encountered quite a lot of it in my conversations over the years. More objective researchers will present expert testimony (though usually irrelevant and/or biased) and evidence that they believe supports their theory, but much of what is presented serves only to support their initial judgments. So even the more sophisticated theorists still treat certain claims as gospel.

Understanding how the world works requires much more than relying upon intuitions. The truths revealed by the scientific method can be, and often are, counterintuitive. Proper skepticism and good scientific reasoning require that we carefully reflect not only upon the assumptions made by others, but upon the assumptions that we ourselves make, especially if our assumptions are supported by little more than our gut. Sometimes, crazy shit just happens. And if you look hard enough, you’ll be able to find something surprising or hard to believe about virtually any event. Instead of falling down an endless rabbit hole, one should be open to considering alternative hypotheses, read and engage with criticisms of one’s favored hypotheses, look at the totality of the evidence, and evaluate the strength of one's arguments.



(1) Their experiment recreated six of the seven wounds and demonstrated that the trajectory of the bullet is consistent with that of a bullet fired from the sixth floor of the book depository (where Oswald's rifle was found). While some conspiracy theorists interpret the result as undermining the single bullet theory, Alexander R. Krstic, a ballistics expert who was involved with the experiment, strongly believes that they would have replicated the event fully if the test bullet hadn't struck a second rib bone, which slowed the bullet considerably and caused deformation (the "magic bullet" struck only one bone and was relatively undamaged).

(2) Close-up pictures reveal that the breakage does not appear to be consistent with the breakage pattern of tempered glass or of hurricane-resistant windows. The glass appears to have shattered, as in other instances where high-rise hotel windows have been broken. Several eyewitnesses have provided testimony regarding the sound of glass shattering and glass raining down from the window during the shooting. Given that Paddock's room contained the means to shatter the windows (and Paddock), the best explanation is that Paddock broke the windows from the inside before firing into the crowds.

(3) By factoid, I mean an erroneous claim that is presented as a fact. While the vast majority of claims and assertions made by conspiracy theorists have been thoroughly debunked, the myths continue to spread and are presented as factual information on conspiracy websites and YouTube. To a naive observer, a long list of factoids can appear to be compelling evidence. To a more skeptical observer, a long list of claims, especially if the conclusions are controversial or not widely accepted, calls for fact checking and careful analysis.



Works cited

Bugliosi, V. (2007). Reclaiming History: The Assassination of President John F. Kennedy. W. W. Norton & Company.

McAdams, J. (2011). JFK Assassination Logic: How to Think about Claims of Conspiracy. Potomac Books.




Saturday, March 10, 2018

Why the deprivation argument fails




In his 1989 paper, “Why Abortion Is Immoral”, philosopher Don Marquis argues that abortion is generally wrong because killing fetuses deprives them of a future with enjoyments, relationships, and projects. As Marquis puts it, abortion deprives fetuses of a “future-like-ours.” I will argue that Marquis's argument is unsound: having a future-like-ours is not sufficient for having full moral worth (i.e. the moral worth of a person).


Marquis’s explanation for the wrongness of abortion is initially quite plausible and potentially has a wide scope. It can also be used to explain common moral intuitions about the permissibility of euthanasia (under certain circumstances) and the wrongness of killing. When an innocent person is killed, there are typically several sources of harm: the harm done to the person’s family and friends, the psychological harm done to the killer (e.g. making them even more vicious), and the physical harm done to the person killed. But, as Marquis argues, the worst part of the wrongdoing does not come from the psychological and physical harms. Killing ends personal and romantic relationships, terminates any long-term projects or life goals, and denies the person the chance of ever experiencing pleasure again. It is the deprivation of a future that makes killing seriously wrong.

Marquis’s account of the wrongness of killing explains not only why killing innocent human adults is wrong, but also why infanticide is seriously wrong. And since there aren’t any morally relevant differences between babies and (late-term) fetuses, Marquis concludes that abortion is typically “in the same moral category as killing an innocent adult” (Marquis 1989, 183). After all, if you kill the fetus, you deprive it of a future-like-ours, just as in the case of infanticide or the murder of an adult.


I accept that depriving persons of the goods that life brings is seriously immoral. I also accept that late-term abortion deprives a potential person of a future-like-ours. But I reject the conclusion that abortion is equivalent to murdering a person. I hold this position because there is a morally relevant difference between the killing of persons and potential persons that Marquis does not consider. Actual persons also have a past-like-ours, whereas potential persons do not.


By past-like-ours, I mean a series of connected psychological states that involve episodic memories, experiences of happiness and pleasure, and the actual formation of long-term life-goals and close relationships. To understand why having a past-like-ours is morally relevant, consider the following case.


Imagine that in the near future, scientists are able to create sentient AI. Once they are fully developed, the robots are able to have conscious experiences just like humans. They can experience the same range of complex emotions, fluently speak and understand human languages, enjoy fine-dining and music, and can even contemplate their own existence. These robots are full-blown persons with a future-like-ours.*


In order for the robots to have these psychological capacities, there is a developmental period, much like the one fetuses go through. You can think of this as a “buffering period” for personhood. In a world like this, it seems reasonable to suppose that humans who decide to turn on one of these robots might end up changing their minds. So, on occasion, humans decide to press a “cancellation button” found inside the robot’s computerized brain before it reaches full-blown personhood. If pressed, everything in the robot’s computerized brain is wiped clean, which results in the death of the potential person. If humans decide to restart the process, an entirely new potential person is generated (it wouldn’t be a clone or a recreation of the one originally terminated). Would it be seriously wrong for someone to press the cancellation button during the buffering period?

According to Marquis's view, it would be seriously wrong to do so, given that it would deprive a potential person of a future. Moreover, it would be just as wrong to push the button as it would be to kill an innocent adult human. But according to my own intuitions--which may be widely shared--this result is extremely counterintuitive. Thus, there is a conflict between Marquis's account and common-sense morality. I think a plausible explanation for why pressing the button is morally permissible is that the robot did not reach the point where it was fully up and running. The robot did not have a past-like-ours. If this is right, Marquis's account is, at best, incomplete.

The personhood robot thought experiment is analogous to late-term abortion in that you are depriving potential persons of a future-like-ours while they are still in the "buffering" stage of development. What is the morally relevant difference between pressing the cancellation button and abortion? One difference is that one is a manmade machine and the other a biological organism. But that doesn’t seem to be a difference relevant to morality. The following claim seems plausible: all potential persons intrinsically have equal moral status, regardless of their species or physical constitution. Not only does it seem plausible, I am aware of no reasonable grounds for doubting that it is true.

One might think that the consequences of pushing the button and of abortion are very different, in that abortion may bring about certain harms to others that will not result from pushing the button. But any appeal to harms done to family members or to human society at large is no longer about intrinsic moral worth. We want to know what is seriously wrong, in itself, about pushing the button.


If you think that abortion is seriously wrong just because it deprives someone of a future-like-ours, then pressing the cancellation button is also wrong, and equally so. Marquis seems to accept the antecedent. He states that “having that future by itself is sufficient to create the strong presumption that the killing is seriously wrong [emphasis mine]” (Marquis 1989, 195).
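Stated schematically (the labels and notation are mine, not Marquis's), the objection takes the form of a simple modus tollens against Marquis's sufficiency claim:

Marquis's claim: for any x, if x has a future-like-ours (FLO), then killing x is seriously wrong.
P1. The buffering-period robot has a FLO. (by stipulation of the thought experiment)
P2. Pressing the cancellation button is not seriously wrong. (the widely shared intuition)
C. Therefore, Marquis's claim is false: having a FLO is not sufficient for it being seriously wrong to kill. (from P1 and P2)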


Logical consistency requires us either to hold that pressing the cancellation button is as seriously wrong as killing an innocent adult human, or to hold that neither abortion nor pressing the button is seriously wrong. I think the latter option is much more reasonable. However, accepting that abortion is not seriously wrong has a disturbing implication: it implies that infanticide is not seriously immoral either. Here, the moral intuitions are much stronger, and it may just seem obviously wrong to kill newborns. Although these strong moral intuitions may be widely shared, it matters whether they have a rational basis.


I suspect that resistance to this conclusion—that abortion and infanticide are not seriously wrong—is largely due to emotional reasons and cultural biases. For example, late-term fetuses and newborn babies are cute. We experience negative emotions when we think of cute babies dying, but don’t feel much of anything when a robot or something ugly dies. There are evolutionary reasons that explain why we feel this way, and why it leads us to take good care of cute creatures. But this evolutionary explanation, by itself, does not provide a sound moral basis for our intuition.


What else might explain our moral intuitions about infanticide? A plausible answer can be found by looking at historical and anthropological work on infanticide. As philosopher Michael Tooley observes, our attitudes about infanticide have not always been widely shared. In fact, these attitudes have only been around for a few hundred years (Tooley 1984). Throughout most of human history, infanticide was considered morally permissible. Why might this be? It is unlikely that the shift in attitudes was informed by science or the discovery of some new facts about infants. Tooley argues that a much more plausible answer is that our attitudes about infanticide were inherited from a religious tradition (e.g. Christianity) that values all human life, or that attributes an immortal soul to all humans. Christians will likely argue that this is a point in favor of their worldview, and that on atheism you are left with the disturbing conclusion that babies have no moral worth. But if we reject the legitimacy of the religious tradition, or the existence of the soul, there are seemingly no other legitimate grounds to appeal to. In that case, we should not be misled by our intuitions. The presence of cultural biases and negative emotions is not a good reason for holding moral beliefs.


In conclusion, having a future-like-ours is not sufficient for full moral worth. Persons not only have a future-like-ours, they also have a past-like-ours. It is the combination of these two features that makes it seriously wrong to kill someone. Potential persons, such as fetuses and infants, do not have a past-like-ours, in that they have not experienced pleasure or established any life projects, goals, or relationships. Thus, the killing of a potential person is not morally equivalent to the killing of an actual person.


* My thought experiment is essentially a variation of Michael Tooley's thought experiment involving cat people (Tooley 1972), an argument I have written about elsewhere.


Works cited
Marquis, D. (1989). Why abortion is immoral. The Journal of Philosophy, 86(4), 183-202.
Tooley, M. (1972). Abortion and infanticide. Philosophy & Public Affairs, 37-65.
Tooley, M. (1984). Abortion and Infanticide. Blackwell Publishing Ltd.

Saturday, July 8, 2017

The argument from marginal cases



Vegans defend a number of arguments for why we shouldn’t raise and kill animals for food. One of the most popular and longstanding arguments for veganism is the argument from marginal cases (AMC). It goes like this:

Humans are thought to have more moral value than other animals because humans typically possess certain traits that we take to be morally relevant (e.g. self-awareness, high intelligence, rationality, moral understanding, future-oriented desires, and life goals). But there are some humans that lack all of these traits, and some animals that seem to possess some of them (though to a lesser degree). Humans that lack all of the morally relevant characteristics are referred to as ‘marginal cases’. Marginal cases include: the severely retarded, infants, anencephalics (babies born without brains), the senile, and permanently comatose patients.

If one grants that these marginal cases lack all of the traits required for having moral worth, then they ought to be considered equals to certain non-human animals (e.g. cows) and inferiors to others (e.g. chimpanzees and dolphins). But we do not treat infants or the severely retarded the way we treat non-human animals. That is to say, we don’t kill and eat babies, we don’t torture them, and we don’t experiment on them. What kind of moral theory or principle could explain this differential treatment? Without a morally relevant difference between the marginal cases and non-human animals, we are not justified in our differential treatment of the two groups. Our treatment of animals is inconsistent with how we treat the marginal cases.

In this post, I will first refute two objections to the AMC. I will then provide a radical way to remedy the inconsistency between our beliefs about marginal cases and animals. I will argue that neither the non-human animals nor the marginal cases should be subjected to unnecessary pain and suffering, but that we may allow for certain kinds of experimentation and animal farming (across species), provided there are no rights violations or extrinsic harms that result from such practices.

Part 1: Objections to the AMC

Objection 1: Species, though

One way to resist the argument would be to argue that there are in fact morally relevant traits present in humans that are lacking in non-human animals. A typical example is species membership: simply being human has moral worth. All humans are members of the species Homo sapiens, and no non-human animals are members of our species. Therefore, we have moral justification to treat non-human animals the way we do, because they are of a different species. The problem with this objection is that it seems implausible that being a member of a particular species is morally relevant at all.

Imagine we were to encounter extraterrestrial lifeforms or some other hominin species (e.g. Neanderthals) that were like us with respect to our cognitive capacities. They can talk, are capable of moral understanding, are highly intelligent, and are self-conscious. It seems obvious that we would consider these beings to have moral worth, but they are not members of our species. It is the morally relevant characteristics that they share with us that gives them moral worth. Therefore, species membership is not a morally relevant characteristic.

Objection 2: Souls, though

A second objection may come from those from a religious background: humans have moral value because they have souls; animals don’t have souls; therefore, animals don’t have moral value. The problem with this objection is twofold. First, positing the existence of a soul to explain human cognition is superfluous in light of modern cognitive science and psychology, and fraught with metaphysical baggage. I don’t have time to get into the details, but for the sake of argument, let’s suppose that humans do have souls. Would that be enough for the argument to work? No. There are many religious traditions (e.g. Hinduism, Sikhism) that recognize the existence of animal souls, including at least five Christian denominations (e.g. Seventh-day Adventists and Episcopalians).

But even if we grant that animals don’t have souls, there is still work to be done on explaining why possessing a soul is necessary for having moral worth. If a creature can experience pain and suffering, it seems wrong to harm such a creature for trivial reasons (e.g. personal pleasure). The intuition that causing gratuitous suffering to conscious beings is immoral is shared by believers and nonbelievers alike. Without any good reason to doubt the force of this intuition, we are justified in believing that causing gratuitous suffering is just wrong, full stop. The absence of a soul does nothing to change this intuition.

There are plenty more objections one could discuss, but I have been unable to come across any that are compelling. Consequently, I believe that our differential treatment of the marginal cases and non-human animals is inconsistent and morally unjustifiable. For those who find the argument sound, the AMC forces a reconsideration of our treatment of animals and/or marginal cases.

Part 2: Take-aways from the AMC

At first glance, there seem to be two ways to resolve the inconsistency:

(1) we ought to treat non-human animals much better (e.g. by banning factory farms), or
(2) we ought to allow marginal cases to be experimented on, tortured, farmed, and killed.*

Of these two options, the first seems to be the more intuitive take-away. While it is more intuitive, it still has pretty radical implications. It would entail that we should all become vegans and perhaps ban the practice of animal farming altogether.

While I think it would be better for the animals, the planet, and for ourselves to cut out most or all meat from our diets, I think there is a compromise that could be made that ethically permits certain kinds of animal farming:

(3) neither the non-human animals nor the marginal cases should be subjected to unnecessary pain and suffering, but we may allow for certain kinds of experimentation and livestock farming (of certain non-human animals and human marginal cases), provided there are no rights violations or extrinsic harms that result from such practices.

If we accept option 3, animal farming, where the animals are treated and slaughtered humanely, could be morally justified. It really hinges on whether animals and/or marginal cases have certain rights.

For instance, do animals or marginal cases have a right to life? It may not be immediately obvious what to say about this. However, the implications are clear enough. If animals lack a right to life, then so do the marginal cases (via AMC). If, on the other hand, animals and marginal cases do have a right to life, then experimentation and livestock farming (across species) would be immoral.

Adopting a basic theory of rights similar to the philosopher Michael Tooley’s (1984), I will assume that a creature has a right to X if it has a desire or preference for X, provided that such desires and preferences do not infringe upon the rights of others. This assumption accounts for the view that animals have a right not to be subjected to unnecessary pain and suffering, in light of the fact that such animals have a desire or preference to avoid pain and suffering. But having a desire to go on living arguably requires the possession of self-consciousness (cf. Singer 2011, Tooley 1984). That is to say, in order for a creature to have desires about its own life, it must have a conception and memory of itself existing over time (past, present, and future).** Thus, creatures that are not self-conscious do not have a right to life.
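Stated schematically (this is my paraphrase of the assumption, not Tooley's own formulation), the reasoning runs as follows. Note that the final step requires reading the desire condition as a biconditional: a creature has a right to X only if it desires X.

A creature S has a right to X if and only if S desires X (and that desire infringes no one else's rights).
Desiring one's own continued existence requires self-consciousness.
Therefore, if S is not self-conscious, S does not desire its continued existence, and so S has no right to life.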

Even if one accepts my assumptions about rights possession, I take it that many would still have a strong intuition that it’s wrong to raise and kill mentally disabled babies for food. The philosopher Daniel Dombrowski asserts that it is “one of our strongest moral intuitions”, and argues that no one has provided any good reason to think that this intuition is mistaken (Dombrowski 1997, p. 180). I agree with Dombrowski that we have this strong intuition, and that, absent any defeaters, we have good reason to trust our intuitions (cf. Huemer 2007 for a strong defense of ‘ethical intuitionism’). However, I think a compelling debunking argument has been developed. In his 1984 book ‘Abortion and Infanticide’, Michael Tooley argues that the intuition—that it’s wrong to kill the mentally disabled—is recent in human history, and stems from Western religious views about souls and human dignity. Tooley’s theory may be controversial. But if the intuition really comes from religion, and we have reason to doubt that religion is authoritative, then we have reason to doubt the intuition.

There are additional reasons to question our intuitions about killing marginal cases. For one, most marginal cases have families who would be harmed by their deaths. We think it’s wrong to kill and eat babies because other people would be harmed (e.g. their families and society at large). If we assume that other people would be harmed in the process of baby farming, then we would be right to think it immoral. But I have only made the claim that, insofar as there are no extrinsic harms that result from these practices, there is nothing morally objectionable about them. Given that both of these kinds of harm would qualify as extrinsic, my conditional claim remains standing. What I’m saying is that our moral intuitions may not be factoring in all of the stipulations laid out. Instead of just sticking to our pretheoretical moral intuitions, we ought to reevaluate the moral status of the action in light of the particular context.

In closing, the AMC forces us to reconsider our treatment of animals and marginal cases. If one accepts that the argument is sound, then whatever treatment is called for in the marginal cases, the same must follow for the non-human animals possessing the same morally relevant traits. The conclusion I have arrived at may be disturbing to some. But if you find it disturbing, then you should give up meat eating altogether and adopt (1), veganism, since the only other option, (2), is even more disturbing.



Footnotes

*provided that we have the consent of family members and that either it is done in secret or with the approval of the general public.

**Now some animals that are raised and killed for food may have this capacity (e.g. pigs and dogs). However, it is doubtful that chickens, fish, and (especially) shrimp have such a capacity.



Works cited

Dombrowski, D. A. (1997). Babies and beasts: The argument from marginal cases. University of Illinois Press.
Huemer, M. (2007). Ethical intuitionism. Springer.
Singer, P. (2011). Practical Ethics. Cambridge University Press.
Tooley, M. (1984). Abortion and Infanticide. Blackwell Publishing Ltd.

Saturday, May 27, 2017

Thinking for yourself


We live in an age where we are overloaded with information. To know what is going on in the world, what has happened in the past, and what may happen in the future, we often have to rely upon the testimony of journalists, government officials, civilians, military personnel, and experts. Without a foolproof way to determine who is telling the truth, some advocate a rather extreme form of skepticism. It is not that they think we cannot know anything, but that our sources of knowledge are very limited. They argue that certain kinds of testimony are either unreliable or impossible to verify. This skepticism is generally directed at journalists (the “mainstream media”) and experts, while testimony from groups seen as removed from the establishment (e.g. certain government officials, civilians) is at times deemed reliable. Let’s call this view establishment skepticism (ES). Without journalists and experts to rely on, E-skeptics recommend the following two strategies for gaining knowledge.

1) Think for yourself
2) Rely solely upon personal experience and things you have seen firsthand

In this post, I will demonstrate why these strategies are prone to error and why dismissing certain kinds of testimony is not only misguided, but dangerous.

It’s generally a good idea to think for yourself. Provided that one knows how to employ valid reasoning and is well-informed about a given topic, independent thought can be useful in developing novel arguments and insights. But notice the potential pitfalls.

Suppose there is an individual who not only lacks (implicit or explicit) knowledge of basic logic, but who vehemently believes that fallacies (invalid arguments) are good arguments. It seems safe to say that it would be a bad idea for this person to think for themselves.

Suppose there is an individual who is capable of independent thought but has only encountered misleading evidence or false information. In this case, thinking for oneself will likely lead to many false conclusions given that the premises one has to work with are false.

In avoiding the pitfall of the second individual, how does one acquire good information? One might argue that a reliable way to get good information is through firsthand experience. If you are able to see with your own eyes that something is the case, how can you go wrong? Here are two ways:

(1) Your sample size is too small (see the sketch after this list).
(2) Your recollection of what you have seen is selective. We all have certain biases and tend to see what we want to see. [We tend to remember the hits and forget the misses.]
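To see the first pitfall in action, here is a minimal simulation (in Python; the event and its true frequency are made up for illustration). Small personal samples produce wildly varying estimates of how often something happens:

# Toy simulation of the sample-size pitfall: estimating how often an
# event occurs from "firsthand experience" of varying size.
# The true rate below is an illustrative assumption.
import random

random.seed(42)
true_rate = 0.30  # assumed true frequency of the event

def estimate(sample_size):
    """Fraction of observed events in one random personal sample."""
    hits = sum(random.random() < true_rate for _ in range(sample_size))
    return hits / sample_size

for n in (5, 20, 1000):
    estimates = [round(estimate(n), 2) for _ in range(5)]
    print(f"sample size {n}: {estimates}")

# Typical result: estimates from samples of 5 swing wildly around 0.30,
# while samples of 1000 cluster tightly around it.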

Experts are in the business of correcting for all of the pitfalls previously discussed. To take two quick examples, they take into account the possibility of bias on the part of other researchers and have a solution for it (i.e. peer review), and they ensure that their sample sizes are large enough to make accurate generalizations. Nonetheless, experts sometimes get it wrong.

The most recent case of expert failure is the 2016 US presidential election. An oft-made argument by E-skeptics goes as follows: the (polling) experts were wrong about Trump losing; therefore, experts in general are (probably) wrong about everything. This is a terrible argument and is patently fallacious. Consider the following parallel line of reasoning, which no reasonable person would accept.

Speedometers sometimes misrepresent the speed of a vehicle. Therefore, they always do (or get it wrong most of the time).

But the E-skeptic argument is even worse than this. The argument implicitly generalizes from polling experts to all experts. It would be like concluding, because speedometers sometimes misrepresent the speed of a vehicle, all measuring instruments are unreliable.

Not all domains of expertise are of equal epistemic authority. Polling experts have to work with data that are sometimes unreliable and with outcomes that are genuinely hard to predict. So pollsters will probably get it wrong a lot more often than experts in other fields (e.g. engineering, physics).

The relevant question to ask is, for a particular domain of expertise, “how often do the experts get it right?”
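To put the question in concrete terms, here is a toy sketch (in Python, with entirely hypothetical track-record data). The point is simply that reliability is a domain-specific, measurable quantity, not something settled by one famous miss:

# Toy track-record comparison across domains (all data hypothetical).
# 1 = a correct call, 0 = an incorrect call.
records = {
    "election polling": [1, 1, 0, 1, 1, 1, 0, 1],
    "structural engineering": [1] * 99 + [0],
}

for domain, calls in records.items():
    hit_rate = sum(calls) / len(calls)
    print(f"{domain}: {hit_rate:.0%} correct over {len(calls)} predictions")

# A single miss tells you little; the base rate over many predictions
# is what carries epistemic weight, and it varies by domain.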

In the case of pollsters, some actually have a pretty good track record (e.g. 538). Even in the case of the recent US election, the state polls were off by amounts within a normal margin of error (1-3 percentage points), and pollsters had warned about this possibility even before the results came in. The national polls weren’t far off at all: pollsters predicted that Clinton would win the popular vote by 3 percentage points, and she won it by 2. More recent elections, such as the presidential election in France, have reminded us of the general reliability of election polling.

The reality is, we need to rely upon the testimony of experts and journalists in order to know what’s going on in the world. Thinking for yourself has its limitations, some of which I have already discussed, and we should be well aware of them. We do not have God-like powers to see everything in the world firsthand, so, we need to rely upon other people who have seen things firsthand, as well as those who have observed more indirect forms of evidence (e.g. archaeologists, geologists, astronomers). 

Now, we should not assume that experts are infallible. It’s possible that they have employed bad reasoning to reach their conclusions, that they are unaware of evidence that undermines their position, and so on. Nonetheless, we are warranted in accepting expert testimony, so long as it is in general agreement with the views of most of their peers and there is no strong evidence that negates what they say.

Regularly watching the news, reading some articles, or watching YouTube documentaries does not make you an expert. Most of us cannot dedicate the time and energy to become well-informed about complex issues, so we have to rely upon the testimony of those who do. There’s a reason why we have graduate schools and advanced degrees. [This isn’t to say that one cannot become an expert after years of extensive study on one’s own. Only that it takes a lot of time to become an expert, and a graduate education is the most common, and perhaps most reliable, way of gaining expertise.]

What’s the harm in considering journalists and experts to be generally unreliable sources? One harm is that someone might end up putting all of their trust in a dangerous and unreliable source (e.g. a corrupt politician). Tyranny usually begins with government leaders attacking the press while seeking public support for their policies through propaganda and lies. By selectively pointing out things that journalists or experts have gotten wrong, and by selectively pointing out the things they themselves have gotten right, authoritarian politicians try to mislead the public into thinking that they are the only reliable source of information. Note how the same bad argument mentioned earlier gets transformed into an argument for listening to certain politicians over everyone else.

Politician A is sometimes right about what he says. Therefore, he is probably right about most things.  

The relevant question to ask is: who has the better track record of getting things right, the experts or politician A? But those who have already been won over by clever politicians will likely conclude that the politician has the better track record. After all, they believe that the politician is the one stating the facts. If it gets to the point where the only justification for believing what the politician says is that he or she said it, we have a serious problem. There would seemingly be no line of argument that could be used to change their closed minds. That’s why we need to ask ourselves and each other to provide some kind of non-circular justification for the beliefs we hold. I conclude with a few suggestions for preventing the kind of dangerous closed-mindedness just discussed.

(1) Read widely. Don’t get all of your information from a small set of sources. Read essays and articles written by those you disagree with (if liberal, read e.g. WSJ, the Daily Wire, or Fox from time to time; if conservative, read e.g. NYT, CNN, or the Guardian from time to time).
(2) Make sure your arguments are logically valid (ask: would I accept the same argument form if applied to other contexts, or stated by other individuals?).
(3) Communicate with people you disagree with. Try to understand why they believe what they believe, understand their arguments and reasons, and articulate why you hold your own views.
(4) Have some humility. There are issues where even the experts reasonably disagree with one another. If it’s a controversial subject, don’t rest much weight on your conclusions and be open to entertaining alternative views.







Sunday, May 21, 2017

Ontology


Ontology is a sub-branch of metaphysics that deals with what kinds of things exist. At first pass, it might seem like scientists would be better authorities to consult than philosophers on what exists, but that would be to misunderstand the project of metaphysicians.  Unlike science, which refers to specific classes of physical objects (e.g. electrons, hydrogen, enzymes), metaphysics refers to the most general or abstract categories of things that exist: substances, properties, and kinds.*

To paraphrase the philosopher Wilfrid Sellars, metaphysics deals with how things, in the broadest possible sense of the word ‘things’, hang together, in the broadest possible sense of ‘hang together’. Metaphysicians who specialize in ontology work on determining what there is. Put more simply, they ask 'what kinds of things are there?' Taking a step back from the level of analysis provided by physics, metaphysicians try to determine what kinds of substances, properties, and kinds (if any) exist. To make things more concrete, let’s look closely at these concepts and see how views about them lead to interesting philosophical conclusions.

Substances

Substances are understood as the entities in which properties inhere. While controversial, there are those who believe there are two types of substances: physical and mental. Mental substances would include things like immaterial human minds and God. Physical substances would include all of the objects of scientific study (e.g. electrons, bears, helium), as well as all of the other physical objects we encounter in everyday life (e.g. tables, chairs, iPads). The view that there are two types of substance is called substance dualism. An alternative view is that there is only one type of substance (substance monism). On physicalism, the view that the one type of substance is physical, minds are considered to have a physical basis in the brain, and the existence of immaterial substances, like God, is denied.

Properties

Properties are understood as the entities that inhere in things (e.g. redness, circularity, positive charge). So all of the red objects are thought to share the property of ‘redness’ in common, all of the circular objects have the property of ‘circularity’ in common, and so on. Just as with substances, there are property dualists who think there are two types of properties (i.e. mental and physical properties). Property dualists argue that mental properties cannot be fully explained by the physical properties of the brain. Mental properties are sometimes characterized as (strongly) emergent properties of the brain: they are thought to be dependent upon the brain, but something over and above brain processes. If we were to know everything about how the brain works, property dualists think there would still be the question of ‘why is conscious experience paired with brain activity of a certain kind?’ This is known as the hard problem of consciousness (Chalmers 1996).

Physicalists see consciousness and other mental properties as analogous to other biological properties, like digestion or photosynthesis. Photosynthesis may be an emergent phenomenon of biochemistry, but it is nothing over and above its underlying chemical properties. Some physicalists simply deny that there is a hard problem: if one accepts that minds are identical to brains (cf. mind-brain identity theory), then the question ‘why is A paired with B?’ turns into the incoherent question ‘why is A paired with itself?’

Kinds

Kinds are groupings of objects (or substances) that share essential properties. For instance, gold is a chemical kind: all instances of gold share certain chemical properties that are not shared by other chemical kinds (e.g. silver, potassium). Most of the kinds discussed by scientists are considered to be natural kinds, in that the categories are taken to be individuated on an objective basis. It was not up to scientists whether the whale was a mammal; the whale simply is a mammal, regardless of what humans believe. To better understand what a natural kind is, let’s contrast natural kinds with a clear case of an artificial kind: ‘pets’. The kind ‘pets’ includes cats, dogs, parrots, and any of the other animals humans have adopted for companionship. Human interests determine which animals get selected as pets and which do not, and those interests vary depending on time and place. While there is some usefulness, for us, in the category of ‘pets’, there is no property or cluster of properties that all of the pet animals have in common (other than certain groups of humans liking them). Thus, the kind ‘pets’ is not a natural kind.

There are also instances of categories in science where it looks as if scientists have to rely upon rather arbitrary categorization methods. Take the counting of planets in our solar system: it used to be that there were 9, but after the discovery of additional Pluto-sized objects, Pluto was demoted to a dwarf planet. Scientists could instead have kept Pluto as a planet and simply increased the number of planets from 9 to 13.

To move to a particularly controversial example, take ‘race’. Should ‘race’ be considered a natural kind, or a category more like ‘planets’ or even ‘pets’? One consideration is that we know humans are grouped differently around the world. The way we individuate races in the United States is different from how races are distinguished in other countries (e.g. Brazil). In the US, there are, roughly, five recognized races (Black, white, American Indian/Alaskan Native, Asian, and mixed); in Brazil, there are nine. Anthropologists, who are the experts on human biodiversity, widely disagree about how to group humans, and come to widely different answers depending on their criteria of race individuation. Should we conclude that any single one of these groupings is objectively correct? Or is it more plausible to assume that human interests primarily shape how we categorize human beings?


Real world applicability

While scientists are typically unaware of philosophical jargon, they draw conclusions and make statements that can be accurately characterized as accepting or rejecting controversial categories, like race, as instances of natural kinds. Among laypeople, too, certain assumptions are made about controversial categories. For example, take the kind ‘gender’. A widespread view is that ‘gender’ and ‘sex’ are closely connected and that both are grounded in biology. One interpretation of this "common sense" view is that ‘sex’ and ‘gender’ are natural kinds in biology. An alternative view, held by scientists and laypeople alike, is that ‘sex’ is a natural kind whereas ‘gender’ is an artificial social kind, one that varies according to human interests relative to time and place. Many proponents of this alternative view consider gender to be what's referred to as a social construction.

People have pretty strong views about race and gender, views which are amenable to philosophical analysis. There are also those who are skeptical about the prospects of ever settling these areas of controversy. But if there is a fact of the matter with respect to the ontological status of these categories, one is going to have to do the philosophical work to sort out the answers. Empirical evidence alone will not tell you what ‘race’ or ‘gender’ is. After all, there are experts who are aware of all of the relevant evidence yet disagree about the more philosophical issues. Even if a consensus among scientists were to form, one may still reasonably ask whether the scientists drew the right conclusions.




*Metaphysicians disagree about just about everything. My breakdown of the world's ontology is hardly original, and is just one way to do it among many. 

Works cited

Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.