Thursday, December 27, 2018

Naming the trait: Part 1


I take it as a given that it is wrong to cause unnecessary suffering to sentient beings, regardless of species. It is wrong to stab puppies in the eyes, wrong to yank off a cat’s tail, and wrong to slice off a chicken’s beak. And it’s wrong to do these things all for the same reason: they cause animals to undergo immense suffering. I don’t think it takes an argument to understand why this is true; rather, I believe the wrongness of harming sentient beings for trivial reasons is self-evident. A non-obvious ethical question is whether it is morally permissible to end a sentient creature’s life for human consumption, provided the creature did not suffer or feel pain in the process of dying.

Some philosophers take issue with factory farms but not with the act of humanely killing animals. Philosopher Peter Singer has stated that if an animal has lived a good natural life and does not undergo any substantial suffering, it is morally permissible to kill the animal for food. One could imagine a farm where the animals live good lives. Suppose that the animals get to engage in natural behaviors, keep their offspring, are not systematically mutilated, and are slaughtered on site through a process that results in the instantaneous destruction of their brain. Let’s also stipulate that the animals wouldn’t know that their death was near, nor would they miss or worry about their slaughtered relatives. Given that there is no substantial suffering for the animals, Singer would say that we have an instance of ethical animal farming.
  
While the hypothetical case for ethical animal farming may sound very plausible, there is a powerful objection that may cause you to reconsider your views. Put simply, what if we changed the species of the animal being farmed from, say, cows to humans? If the humans get to live good lives, get to engage in natural behaviors, and are painlessly killed without any foreknowledge, would it be okay to kill humans for food? And if not, what is the morally relevant difference between humans and cows that renders the action wrong in one case and permissible in the other? 

Vegan youtuber Ask Yourself (Isaac) poses the question as a challenge to “name the trait”. If there is a morally relevant difference, then there is some trait (or set of traits) that explains why it’s wrong to kill humans but not cows. While there are many attempts to name the trait (e.g. species, intelligence, rationality, reciprocation), I will focus on the answer that seems most plausible. In short, I don’t think it is a single trait that makes the moral difference, but rather a set of traits.

The personhood response

Humans are morally superior to farm animals because they are persons. That is to say, humans are self-aware, have a strong desire to go on living, and have long-term life projects (e.g. raising a family, saving the rainforests). They are also involved in complex social relationships, which means that their deaths can affect and harm lots of other persons. Cows do not have these psychological traits. They may have short-term desires to eat and procreate, but it is unlikely that they have the cognitive capacities to understand their own existence or the nature of death. Therefore, because humans are persons, they have a higher moral status than cows, which in turn makes it wrong to kill humans, but permissible to kill cows.

The personhood reductio 

While I do think personhood is the strongest response to the name-the-trait challenge, it has some (potentially) disturbing implications. Not all humans possess the psychological traits required for personhood (e.g. infants and humans with severe cognitive disabilities). Thus, the explanation I’m offering would not work in the case of painlessly killing marginal cases for food. So, if it’s morally permissible to kill creatures lacking personhood for food, then it would be morally permissible to kill babies or the cognitively disabled for food.  

Isaac observes that many of the responses to name-the-trait have this implication (e.g. intelligence, rationality), and he believes that this renders all such responses absurd or unacceptable. If we are to deny any human the right to life, we have rejected a widely held moral principle: all humans have an equal right to life. Isaac implies that since the personhood response is inconsistent with widely held moral intuitions, we should reject or dismiss it. Put another way, if one concedes that some humans don’t have a right to life, one has lost the moral debate. I don’t find Isaac’s response compelling for several reasons. Assuming for the sake of argument that Isaac’s empirical claim is true, the popularity of a moral view is not a deciding factor in resolving difficult questions in ethics. If it were, then we would already have strong reasons to reject ethical veganism.

There seems to be an inconsistency in Isaac’s approach to ethics. To defend ethical veganism, Isaac appeals to rational arguments that explain why eating meat is immoral. But in responding to critics, he appeals to irrelevant considerations, like popularity. It could be that Isaac is just a pragmatist, using reason when it’s useful for moral persuasion. But given his strong emphasis on being logically consistent, I will continue to interpret his objections as substantive philosophical claims. In the next post of this series, I will further analyze the personhood reductio and the implications for ethical veganism. Specifically, I will address the following questions: 

Firstly, if one concedes that it is morally permissible to breed and kill babies for food, does one really lose the debate? Secondly, if one cannot name the trait, is veganism the only rationally defensible position?

Thursday, December 20, 2018

Anti-fatalism



In the free will debate, there is a distinction to be made between the metaphysical views of determinism and fatalism. Determinism is a view about the nature of causation: every event is necessarily caused by prior events evolving in accord with the laws of nature. This thesis has direct implications for human agency, in that if we are determined to act by the past, we could not have done otherwise. Fatalism states that the unfolding of all events happens necessarily, while remaining neutral on questions of causation. On fatalism, the world, at any point in time, could not have been otherwise.

Most philosophers probably reject fatalism, but, as far as I know, there isn’t a name for the position. So, from here on out, I will refer to the negation of fatalism as anti-fatalism. To establish anti-fatalism, one would just have to demonstrate a possible difference in the evolution of the universe. I will argue that the only way to refute fatalism would be to demonstrate that the universe had an absolute beginning that was indeterministic. If the eternal universe model, or any model on which the universe lacks such a beginning, is correct, then the world is, was, and always will be necessarily the way that things are.

We can imagine that the past could have been different. Someone other than Benjamin Franklin could have invented bifocals, Hillary Clinton could have won the 2016 election, and I could have majored in neuroscience rather than philosophy. There is nothing (logically) impossible about the past being different. However, when we ask whether these things are metaphysically possible, the answer is going to depend upon whether fatalism is correct. If fatalism is true, then all of these imagined events would be metaphysically impossible (though still logically possible). Determinists regularly claim that it is possible that the past could have been different, and that in those hypothetical alternative worlds, we could have done otherwise. What sense of possibility does the determinist have in mind: logical or metaphysical? If logical possibility, then the claim is uncontroversial. There are no contradictions involved in supposing that Benjamin Franklin’s cousin could have invented bifocals or that Hillary Clinton could have won the 2016 election. If metaphysical possibility, the truth of the claim is not as obvious.

For the sake of argument, suppose that both 1) determinism is true, and 2) it was metaphysically possible for Hillary Clinton to have won the 2016 election (HC). For HC to be true, there must be a metaphysically possible world where the past (e.g. no Russians) and/or the laws of nature were different. But how could we explain the possibility of a difference if our universe is deterministic? On determinism, the possibility of a different present requires the possibility of a different past. But any change in the past requires a change in a still earlier past, ad infinitum. Given that changing the past seems hopeless, one might be tempted to go back to the Big Bang to posit a change in the laws of nature. If the initial conditions of the universe were different, then Hillary could have won. But notice that we wind up in the exact same position as before. How could we explain the possibility of a difference in the initial state, if the universe is deterministic?

To make room for alternative possibilities, one has to introduce some randomness or indeterminacy into the laws of nature or the initial conditions of the universe.* Assuming that the laws of nature arose at the moment of the Big Bang, and that there was no such thing as a past that preceded the singularity, one could hold that there was indeterminism at the very beginning but that everything afterwards was determined. Here, we have a possible world where determinism is true, but fatalism is not. However, one must assume that the universe had an absolute beginning and that it could have been otherwise. In making these assumptions, one must also rule out two other theses: 1) that the universe is eternal, and 2) that the initial state of the universe was necessary (fatalism). Given that the denial of 2 is the very thesis in question, it would be question-begging for the anti-fatalist to assume, without argument, that 2 is false.

Thus, for both anti-fatalism and determinism to be true, the universe must have had a beginning. On an eternal universe model, there is no beginning at which indeterministic elements could enter, and introducing indeterminism at any later point would falsify determinism.

Wednesday, May 30, 2018

The role of intuitions in conspiracy theorizing


In developing a conspiracy theory, a common method is to find apparent inconsistencies between the “official story” and how the world works. Take the Kennedy assassination. A wide range of evidence (e.g. autopsy photos, forensic recreations, expert testimony) indicates that a single bullet, passing through the bodies of both JFK and Governor Connally, caused seven wounds (Bugliosi, 2007; McAdams, 2011). In reviewing the evidence, conspiracy theorists conclude that the events involving this “magic bullet” couldn’t have happened. While there are typically arguments and “evidence” offered (e.g. the long-debunked misrepresentations of the bullet’s trajectory), the origins of their skepticism likely come from their initial beliefs or intuitions about ballistics and human anatomy. Intuitively, it may seem unlikely that one bullet could cause so much damage. Likewise, the head movement of Kennedy after the third shot (back and to the left) seems to be inconsistent with a shot from behind, where Lee Harvey Oswald was stationed. But JFK conspiracy theorists take their intuitions a few steps further by concluding that the facts about the gun wounds undermine the single shooter theory and strongly support the multiple gunmen theory. In the face of contradictory physical evidence and expert testimony, conspiracy theorists tend to stick to their intuitions and infer that all of the evidence supporting the “official story” must be fabricated or mistaken. The conclusions of expert panels, forensic recreations, sophisticated computer simulations, and peer-reviewed scientific articles are often discounted out of hand. Their intuitions about how the world works are often given more weight than the science.

Experiments by Anatomical Surrogates Technology provide support for the single bullet theory. (Watch the video to hear analysis from the ballistics experts consulted.) (1)

To use a more recent example, consider the Las Vegas mass shooting. Is it possible that the mass murderer, Stephen Paddock, broke through the windows using a small sledgehammer, as reported by the police? Conspiracy theorists say “No”. Once again, the reasoning goes something like this: it seems unlikely or impossible that a hammer could break out the windows of the hotel room; therefore, Paddock couldn’t have done so.

In the case of the Vegas mass shooting, there is much more speculation than science. What kind of windows does the Mandalay Bay have? Can a small sledgehammer, by itself, smash through the windows that were installed? Online, there are lots of assertions made in answering these questions, with little to no evidence offered. But by looking at the photographic evidence and considering the eyewitness testimony of glass shattering, it is reasonable to infer, as the LVPD did, that the glass was shattered by Paddock using the hammer found in the room and/or rifle fire. Additionally, the photographic evidence and eyewitness testimony appear to undermine the internet rumors that hurricane resistant or shatterproof windows were installed (2).

Image source: Gregory Bull/Associated Press


What the JFK conspiracy theorist and the Vegas shooting conspiracy theorist have in common is that they rely upon an argument from intuition. Their beliefs about how bullets or hammers work determine the conclusions they draw and the hypotheses they take seriously. The argument is not unique to JFK or the Vegas shooting; it is used as a basis for most conspiracy theories. The argument can be stated much more generally.

The general argument from intuition
- It seems as if E is unlikely or impossible.
- Therefore, E probably didn’t happen.

Application 1: JFK multiple gunmen theories
- It seems unlikely that one bullet can cause seven wounds.
- Therefore, the single bullet theory is probably false.

Application 2: Vegas shooting conspiracy theories
- It seems unlikely that Paddock broke out the windows with a hammer.
- Therefore, Paddock probably didn’t carry out the shootings (alone).

Application 3: 9/11 controlled demolition theories
- It seems unlikely that a building can collapse from fire.
- Therefore, WTC 7 probably didn’t collapse from fire.

Application 4: Moon landing hoax conspiracy theories
- It seems unlikely that we had the technological capabilities to go to the moon.
- Therefore, we didn’t go to the moon.

Given how often the argument is used to support belief in conspiracy theories, a lot hangs on whether this form of argument is any good. Unfortunately for the conspiracy theorists, the argument is demonstrably unsound. As it turns out, the argument is a variation of a textbook logical fallacy: the argument from personal incredulity. Just because you cannot imagine how something happened doesn’t mean that it didn’t happen.

Why is the argument unsound? First, one can be mistaken about the likelihood or possibility of a given event, especially when it comes to the domain of physics. The intuitions of experts carry much more weight, as they possess the relevant background knowledge to judge whether an event is likely or possible. Laypeople often do not have the relevant background knowledge, relying mostly upon internet rumors and their own relatively uninformed speculation. When it comes to assessing the likelihood of an event, the right questions to ask would be:

-What do most of the relevant experts think?
-Are there any experimental data or quantitative analyses that inform us about the event’s likelihood?
-Have similar events happened in the past?

Second, the unlikeliness of an event is not, in itself, a good reason to doubt that the event occurred. After all, unlikely events happen all of the time. To form reliable judgments about the likelihood of an event, one would also have to consider the totality of the evidence and the plausibility of the alternative hypotheses. One ought to prefer the explanation that accounts for all of the facts, rather than just some of them. If the totality of evidence suggests an unlikely event occurred, then an unlikely event probably occurred. In forming likelihood judgments, conspiracy theorists often fail to realize that their alternative explanations for what happened rely on a number of highly questionable (if not demonstrably false) assumptions, and that their hypotheses (which typically require hundreds of people to lie and fabricate evidence) are much less likely than the widely accepted view.
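To make the point concrete, here is a minimal Bayesian sketch in Python. The numbers are purely illustrative and are not estimates for any real case; the sketch only shows that an event with a low prior probability can still end up as the best-supported hypothesis once strong evidence is factored in.

# Illustrative Bayes' theorem calculation; all numbers are made up for the example.
prior = 0.05                 # P(E): the event seems unlikely before looking at any evidence
p_data_if_e = 0.90           # P(D | E): how expected the observed evidence is if E happened
p_data_if_not_e = 0.01       # P(D | not E): how expected that same evidence is if E did not happen

p_data = p_data_if_e * prior + p_data_if_not_e * (1 - prior)   # total probability of the evidence
posterior = p_data_if_e * prior / p_data                       # P(E | D) by Bayes' theorem

print(round(posterior, 2))   # prints 0.83: the "unlikely" event is now far more probable than not

The point of the sketch is that the prior alone settles nothing; what matters is how probable the observed evidence would be under each competing hypothesis.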

The main problem with relying upon the argument from intuition is that you might begin theorizing with false assumptions. Instead of revising their hypotheses in light of new evidence, conspiracy theorists will likely cling to their original intuitions and the factoids (3) that they have found to support them. For example, in response to up-close photos of the broken windows in Paddock’s hotel room, some conspiracy theorists now claim that the photos of the window have been altered or fabricated (part of the coverup). Likewise, in response to the newly released footage of Paddock transporting his luggage to his hotel room, some conspiracy theorists--who previously claimed that it was impossible to transport so many guns into the hotel room--assert that the Mandalay Bay security footage provided to the New York Times and other media outlets is all fake. 

Conspiracy theorists have an easy way to dismiss criticism and evidence that contradicts their strongly held beliefs: assert, without evidence or argument, that it’s all rubbish. The psychological appeal of this tactic is easy to understand. To engage in conspiracy theorizing, you don’t need to have any qualifications or do much research (outside of watching youtube videos). In responding to critics, conspiracy theorists can always say that the evidence for their theory has been successfully covered up (an unfalsifiable claim), that all the evidence that conflicts with their theory is fake, or that everyone is lying. You can be “in the know” by simply relying upon your own intuitive judgments and following others who are likeminded, without the need to reflect upon whether those judgments are correct.

Like hardcore religious believers, conspiracy theorists have a set of core beliefs that they treat as immune to refutation. Their core beliefs consist of intuitions about what is and isn't physically possible, and those who do not share their intuitions are labeled morons or shills. Of course, not all conspiracy theorists engage in this kind of rhetoric, but I've encountered quite a lot of it in my conversations over the years. More objective researchers will present expert testimony (though usually irrelevant and/or biased) and evidence that they believe supports their theory, but much of what is presented serves only to support their initial judgments. So even the more sophisticated theorists still treat certain claims as gospel.

Understanding how the world works requires much more than relying upon intuitions. The truth revealed by the scientific method can be, and often is, counterintuitive. Proper skepticism and good scientific reasoning require that we carefully reflect not only upon the assumptions made by others, but on the assumptions that we ourselves make, especially if our assumptions are supported by little more than our gut. Sometimes, crazy shit just happens. And if you look hard enough, you’ll be able to find something surprising or hard to believe about virtually any event. Instead of falling down an endless rabbit hole, one should be open to considering alternative hypotheses, read and engage with criticisms of one's favored hypotheses, look at the totality of the evidence, and evaluate the strength of one's arguments.



(1) Their experiment recreated six of the seven wounds and demonstrated that the trajectory of the bullet is consistent with that of a bullet fired from the sixth floor of the book depository (where Oswald's rifle was found). While some conspiracy theorists interpret the result as undermining the single bullet theory, Alexander R. Krstic, a ballistics expert who was involved with the experiment, strongly believes that they would have replicated the event if the bullet hadn't struck a second rib bone, which slowed down the bullet considerably, and caused deformation (the "magic bullet" only struck one bone and was relatively undamaged).

(2) Close-up pictures reveal that the breakage does not appear to be consistent with a tempered-glass breakage pattern or with hurricane-resistant windows. The glass appears to have shattered, as in other instances where high-rise hotel windows have been broken. Several eyewitnesses have provided testimony regarding the sound of glass shattering and glass raining down from the window during the shooting. Given that Paddock's room contained the means to shatter the windows (and Paddock), the best explanation is that Paddock broke the windows from the inside before firing into the crowds.

(3) By factoid, I mean an erroneous claim that is presented as a fact. While the vast majority of claims and assertions made by conspiracy theorists have been thoroughly debunked, the myths continue to spread and are presented as factual information on conspiracy websites and youtube. To a naive observer, a long list of factoids can appear to be compelling evidence. To a more skeptical observer, a long list of claims, especially if the conclusions aren't widely accepted or are controversial, calls for fact checking and careful analysis.



Works cited

Bugliosi, V. (2007). Reclaiming History: The Assassination of President John F. Kennedy. WW Norton & Company.

McAdams, J. (2011). JFK Assassination Logic: How to Think about Claims of Conspiracy. Potomac Books, Inc.




Saturday, March 10, 2018

Why the deprivation argument fails




In his 1989 paper, “Why Abortion Is Immoral”, philosopher Don Marquis argues that abortion is generally wrong because killing fetuses deprives them of a future with enjoyments, relationships, and projects. As Marquis puts it, abortion deprives fetuses of a “future-like-ours.” I will argue that Marquis's argument is unsound, in that having a future-like-ours is not sufficient for having full moral worth (i.e. the moral worth of a person).


Marquis’s explanation for the wrongness of abortion is initially quite plausible and potentially has a wide scope. It can also be used to explain common moral intuitions about the permissibility of euthanasia (under certain circumstances) and the wrongness of killing. When an innocent person is killed, there are typically several sources of harm: the harm done to the person’s family and friends, the psychological harm done to the killer (e.g. making them even more vicious), and the physical harm done to the person killed. But, as Marquis argues, the worst part of the wrongdoing does not come from the psychological and physical harms. Killing ends personal and romantic relationships, terminates any long-term projects or life goals, and deprives the person of ever experiencing pleasure again. It is the deprivation of a future that makes killing seriously wrong.

Marquis’s account of the wrongness of killing explains not only why killing innocent human adults is wrong, but also why infanticide is seriously wrong. And since there aren’t any morally relevant differences between babies and (late-term) fetuses, Marquis concludes that abortion is typically “in the same moral category as killing an innocent adult” (Marquis 183). After all, if you kill the fetus, you deprive it of a future-like-ours, just as in the case of infanticide or the murder of an adult.  


I accept that depriving persons of the goods that life brings is seriously immoral. I also accept that late-term abortion deprives a potential person of a future-like-ours. But I reject the conclusion that abortion is equivalent to murdering a person. I hold this position because there is a morally relevant difference between the killing of persons and potential persons that Marquis does not consider. Actual persons also have a past-like-ours, whereas potential persons do not.


By past-like-ours, I mean a series of connected psychological states that involve episodic memories, experiences of happiness and pleasure, and the actual formation of long-term life-goals and close relationships. To understand why having a past-like-ours is morally relevant, consider the following case.


Imagine that in the near future, scientists are able to create sentient AI. Once they are fully developed, the robots are able to have conscious experiences just like humans. They can experience the same range of complex emotions, fluently speak and understand human languages, enjoy fine-dining and music, and can even contemplate their own existence. These robots are full-blown persons with a future-like-ours.*


In order for the robots to have these psychological capacities, there is a developmental period, much like the one fetuses go through. You can think of this as a “buffering period” for personhood. In a world like this, it seems reasonable to suppose that humans who decide to turn on one of these robots might end up changing their minds. So, on occasion, humans decide to press a “cancellation button” found inside the robot’s computerized brain before it reaches full-blown personhood. If pressed, everything in the robot’s computerized brain is wiped clean, which results in the death of the potential person. If humans decide to restart the process, an entirely new potential person would be generated (it wouldn’t be a clone or a recreation of the one originally terminated). Would it be seriously wrong for someone to press the cancellation button during the buffering period?

According to Marquis's view, it would be seriously wrong to do so, given that it would deprive a potential person of a future. Moreover, it would be just as wrong to push the button as it would be to kill an innocent adult human. But according to my own intuitions--which may be widely shared--this result is extremely counterintuitive. Thus, there is a conflict between Marquis's account and common-sense morality. I think a plausible explanation for why pressing the button is morally permissible is that the robot had not yet reached the point where it was fully up-and-running. The robot did not have a past-like-ours. If this is right, Marquis's account is, at best, incomplete.

The personhood robot thought experiment is analogous to late-term abortion in that you are depriving potential persons of a future-like-ours while they are still in the "buffering" stage of development. What is the morally relevant difference between pressing the cancellation button and abortion? One difference is that one is a manmade machine and the other is a biological organism. But that doesn’t seem to be a difference relevant to morality. The following claim seems plausible: all potential persons intrinsically have equal moral status, regardless of their species or physical constitution. Not only does it seem plausible, I am aware of no reasonable grounds for doubting that it is true.

One might think that the consequences of pushing the button and of abortion are very different, in that abortion may bring about certain harms to others that will not result from pushing the button. But any appeal to harms done to family members or to human society at large is no longer about intrinsic moral worth. We want to know what is seriously wrong about pushing the button, in itself.


If you think that abortion is seriously wrong just because it deprives someone of a future-like-ours, then pressing the cancellation button is also wrong, and equally so. Marquis seems to accept the antecedent. He states that “having that future by itself is sufficient to create the strong presumption that the killing is seriously wrong [emphasis mine]” (Marquis, pg. 195).


Logical consistency requires us either to hold that pressing the cancellation button is as seriously wrong as killing an innocent adult human, or to hold that neither abortion nor pressing the button is seriously wrong. I think the latter option is much more reasonable. However, it might be argued that accepting that abortion is not seriously wrong has a disturbing consequence: it implies that infanticide is not seriously immoral either. Here, the moral intuitions against infanticide are much stronger, and it may just seem obviously wrong to kill newborns. Although these strong moral intuitions may be widely shared, it matters whether they have a rational basis.


I suspect that resistance to this conclusion—that abortion and infanticide are not seriously wrong—is largely due to emotional reasons and cultural biases. For example, late-term fetuses and newborn babies are cute. We experience negative emotions when we think of cute babies dying, but don’t feel much of anything when a robot or something ugly dies. There are evolutionary reasons that explain why we feel this way, and why it leads us to take good care of cute creatures. But this evolutionary explanation, by itself, does not provide a sound moral basis for our intuition.


What else might explain our moral intuitions about infanticide? A plausible answer can be found by looking at historical and anthropological work on infanticide. As philosopher Michael Tooley observes, attitudes about infanticide have not always been widely shared. In fact, these attitudes have only been around for a few hundred years (Tooley 1984). Throughout most of human history, infanticide was considered morally permissible. Why might this be? It is unlikely that the shift in attitudes was informed by science or the discovery of some new facts about infants. Tooley argues that a much more plausible answer is that our attitudes about infanticide were inherited from a religious tradition (e.g. Christianity) that values all human life, or that attributes an immortal soul to all humans. Christians will likely argue that this is a point in favor of their worldview, and that on atheism, you are left with the disturbing conclusion that babies have no moral worth. But if we reject the legitimacy of the religious tradition, or the existence of the soul, there are seemingly no other legitimate grounds to appeal to. In that case, we should not be misled by our intuitions. The presence of cultural biases and negative emotions is not a good reason for holding moral beliefs.


In conclusion, having a future-like-ours is not sufficient for full moral worth. Persons not only have a future-like-ours, they also have a past-like-ours. It is the combination of these two features that makes it seriously wrong to kill someone. Potential persons, such as fetuses and infants, do not have a past-like-ours, in that they have not experienced pleasure or established any life projects, goals, or relationships. Thus, the killing of a potential person is not morally equivalent to the killing of an actual person.


* My thought experiment is essentially a variation of Michael Tooley's thought experiment involving cat people (Tooley 1972), an argument I have written about elsewhere.


Works cited
Marquis, D. (1989). Why abortion is immoral. The Journal of Philosophy, 86(4), 183-202.
Tooley, M. (1972). Abortion and infanticide. Philosophy & Public Affairs, 37-65.
Tooley, M. (1984). Abortion and Infanticide. Blackwell Publishing Ltd.