Showing posts with label Epistemology. Show all posts

Wednesday, May 30, 2018

The role of intuitions in conspiracy theorizing


In developing a conspiracy theory, a common method is to find apparent inconsistencies between the “official story” and how the world works. Take the Kennedy assassination. A wide range of evidence (e.g. autopsy photos, forensic recreations, expert testimony) indicates that a single bullet, passing through the bodies of both JFK and Governor Connally, caused seven wounds (Bugliosi, 2007; McAdams, 2011). In reviewing the evidence, conspiracy theorists conclude that the events involving this “magic bullet” couldn’t have happened. While there are typically arguments and “evidence” offered (e.g. the long-debunked misrepresentations of the bullet’s trajectory), the origins of their skepticism likely lie in their initial beliefs or intuitions about ballistics and human anatomy. Intuitively, it may seem unlikely that one bullet could cause so much damage. Likewise, the movement of Kennedy’s head after the third shot (back and to the left) seems inconsistent with a shot from behind, where Lee Harvey Oswald was stationed. But JFK conspiracy theorists take their intuitions a few steps further by concluding that the facts about the gunshot wounds undermine the single-shooter theory and strongly support the multiple-gunmen theory. In the face of contradictory physical evidence and expert testimony, conspiracy theorists tend to stick to their intuitions and infer that all of the evidence supporting the “official story” must be fabricated or mistaken. The conclusions of expert panels, forensic recreations, sophisticated computer simulations, and peer-reviewed scientific articles are often discounted out of hand. Intuitions about how the world works are given more weight than the science.

Experiments by Anatomical Surrogates Technology provide support for the single bullet theory. (Watch the video to hear analysis from the ballistics experts consulted.) (1)

For a more recent example, consider the Vegas mass shooting. Is it possible that the mass murderer, Stephen Paddock, broke through the windows using a small sledgehammer, as reported by the police? Conspiracy theorists say “No”. Once again, the reasoning goes something like this: it seems unlikely or impossible that a hammer could break out the windows of the hotel room; therefore, Paddock couldn’t have done so.

In the case of the Vegas mass shooting, there is much more speculation than science. What kind of windows does the Mandalay Bay have? Can a small sledgehammer, by itself, smash through the windows that were installed? Online, there are lots of assertions made in answering these questions, with little to no evidence offered. But by looking at the photographic evidence and considering the eyewitness testimony of glass shattering, it is reasonable to infer, as the LVMPD did, that the glass was shattered by Paddock using the hammer found in the room and/or rifle fire. Additionally, the photographic evidence and eyewitness testimony appear to undermine the internet rumors that hurricane-resistant or shatterproof windows were installed (2).

Image source: Gregory Bull/Associated Press


What the JFK conspiracy theorist and the Vegas shooting conspiracy theorist have in common is that they rely upon an argument from intuition. Their beliefs about how bullets or hammers work determine the conclusions they draw and the hypotheses they take seriously. The argument is not unique to JFK or the Vegas shooting; it serves as a basis for most conspiracy theories. The argument can be stated much more generally.

The general argument from intuition
It seems as if E is unlikely or impossible.
Therefore, E probably didn’t happen.
Application 1: JFK multiple-gunmen theories
It seems unlikely that one bullet can cause seven wounds.
Therefore, the single bullet theory is probably false.
Application 2: Vegas shooting conspiracy theories
It seems unlikely that Paddock broke out the windows with a hammer.
Therefore, Paddock probably didn’t carry out the shootings (alone).
Application 3: 9/11 controlled demolition theories
It seems unlikely that a building can collapse from fire.
Therefore, WTC 7 probably didn’t collapse from fire.
Application 4: Moon landing hoax conspiracy theories
It seems unlikely that we had the technological capabilities to go to the moon.
Therefore, we didn’t go to the moon.

Given how often the argument is used to support belief in conspiracy theories, a lot hangs on whether this form of argument is any good. Unfortunately for the conspiracy theorists, the argument is demonstrably unsound. As it turns out, it is a variation of a textbook logical fallacy: the argument from personal incredulity. Just because you cannot imagine how something happened doesn’t mean that it didn’t happen.

Why is the argument unsound? First, one can be mistaken about the likelihood or possibility of a given event, especially in the domain of physics. The intuitions of experts carry much more weight, as experts possess the relevant background knowledge to judge whether an event is likely or possible. Laypeople often do not have the relevant background knowledge, relying mostly upon internet rumors and their own relatively uninformed speculation. When it comes to assessing the likelihood of an event, the right questions to ask would be:

-What do most of the relevant experts think?
-Are there any experimental data or quantitative analyses that inform us about the event’s likelihood?
-Have similar events happened in the past?

Second, the unlikeliness of an event is not, in itself, a good reason to doubt that the event occurred. After all, unlikely events happen all of the time. To form reliable judgments about the likelihood of an event, one would also have to consider the totality of the evidence and the plausibility of the alternative hypotheses. One ought to prefer the explanation that accounts for all of the facts, rather than just some of them. If the totality of evidence suggests an unlikely event occurred, then an unlikely event probably occurred. In forming likelihood judgments, conspiracy theorists often fail to realize that their alternative explanations for what happened rely on a number of highly questionable (if not demonstrably false) assumptions, and that their hypotheses (which typically require hundreds of people to lie and fabricate evidence) are much less likely than the widely accepted view.
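The point about weighing an event’s seeming unlikeliness against the totality of the evidence is, at bottom, Bayesian: a low prior can be overwhelmed by evidence that is much more probable if the event occurred than if it didn’t. A minimal sketch, with made-up numbers purely for illustration:

```python
# Bayes' rule: how a "prior" intuition that an event is unlikely should be
# updated by evidence. All probabilities below are hypothetical.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Suppose intuition assigns the single-bullet account a prior of only 5%.
# But the forensic recreations, autopsy photos, and expert panels are far
# more probable if the account is true (say 0.9) than if it is false and
# everything was fabricated or mistaken (say 0.02).
p = posterior(0.05, 0.9, 0.02)
print(round(p, 3))  # 0.703
```

With these (hypothetical) numbers, the intuitively “unlikely” event becomes the probable one, which is exactly what going wrong looks like when intuition alone drives the verdict.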

The main problem with relying upon the argument from intuition is that you might begin theorizing with false assumptions. Instead of revising their hypotheses in light of new evidence, conspiracy theorists will likely cling to their original intuitions and the factoids (3) that they have found to support them. For example, in response to up-close photos of the broken windows in Paddock’s hotel room, some conspiracy theorists now claim that the photos of the window have been altered or fabricated (part of the coverup). Likewise, in response to the newly released footage of Paddock transporting his luggage to his hotel room, some conspiracy theorists--who previously claimed that it was impossible to transport so many guns into the hotel room--assert that the Mandalay Bay security footage provided to the New York Times and other media outlets is all fake. 

Conspiracy theorists have an easy way to dismiss criticism and evidence that contradicts their strongly held beliefs: assert, without evidence or argument, that it’s all rubbish. The psychological appeal of this tactic is easy to understand. To engage in conspiracy theorizing, you don’t need to have any qualifications or do much research (outside of watching youtube videos). In responding to critics, conspiracy theorists can always say that the evidence for their theory has been successfully covered up (an unfalsifiable claim), that all the evidence that conflicts with their theory is fake, or that everyone is lying. You can be “in the know” simply by relying upon your own intuitive judgments and following others who are likeminded, without the need to reflect upon whether those judgments are correct.

Like hardcore religious believers, they have a set of core beliefs that they treat as immune to refutation. Their core beliefs consist of intuitions about what is and isn't physically possible, and those who do not share their intuitions are labeled morons or shills. Of course, not all conspiracy theorists engage in this kind of rhetoric, but I've encountered quite a lot of it in my conversations over the years. More objective researchers will present expert testimony (though usually irrelevant and/or biased) and evidence that they believe supports their theory, but much of what is presented serves only to support their initial judgments. So, even the more sophisticated theorists still treat certain claims as gospel.

Understanding how the world works requires much more than relying upon intuitions. The truths revealed by the scientific method can be, and often are, counterintuitive. Proper skepticism and good scientific reasoning require that we carefully reflect not only upon the assumptions made by others, but on the assumptions that we ourselves make, especially if our assumptions are supported by little more than our gut. Sometimes, crazy shit just happens. And if you look hard enough, you’ll be able to find something surprising or hard to believe about virtually any event. Instead of falling down an endless rabbit hole, one should be open to considering alternative hypotheses, read and engage with criticisms of one's favored hypotheses, look at the totality of the evidence, and evaluate the strength of one's arguments.



(1) Their experiment recreated six of the seven wounds and demonstrated that the trajectory of the bullet is consistent with that of a bullet fired from the sixth floor of the book depository (where Oswald's rifle was found). While some conspiracy theorists interpret the result as undermining the single bullet theory, Alexander R. Krstic, a ballistics expert who was involved with the experiment, strongly believes that they would have replicated the event had the bullet not struck a second rib bone, which slowed the bullet considerably and caused deformation (the "magic bullet" struck only one bone and was relatively undamaged).

(2) Close-up pictures reveal that the breakage does not appear to be consistent with that of a tempered glass breakage pattern or hurricane-resistant windows. The glass appears to have shattered, like in other instances of high-rise hotel windows that have been broken. Several eyewitnesses have provided testimony regarding the sound of glass shattering, and glass raining down from the window during the shooting. Given that Paddock's room contained the means to shatter the windows (and Paddock), the best explanation is that Paddock broke the windows from the inside before firing into the crowds. 

(3) By factoid, I mean an erroneous claim that is presented as a fact. While the vast majority of claims and assertions made by conspiracy theorists have been thoroughly debunked, the myths continue to spread and are presented as factual information on conspiracy websites and youtube. To a naive observer, a long list of factoids can appear to be compelling evidence. To a more skeptical observer, a long list of claims, especially if the conclusions are controversial or not widely accepted, calls for fact-checking and careful analysis.



Works cited

Bugliosi, V. (2007). Reclaiming History: The Assassination of President John F. Kennedy. WW Norton & Company.

McAdams, J. (2011). JFK Assassination Logic: How to Think about Claims of Conspiracy. Potomac Books, Inc.




Saturday, May 27, 2017

Thinking for yourself


We live in an age where we are overloaded with information. To know what is going on in the world, what has happened in the past, and what may happen in the future, we often have to rely upon the testimony of journalists, government officials, civilians, military personnel, and experts. Without a foolproof way to determine who is telling the truth, some advocate a rather extreme form of skepticism. It is not that they think we cannot know anything, but that our sources of knowledge are very limited. They argue that certain kinds of testimony are either unreliable or impossible to verify. Specifically, their skepticism is generally directed at journalists (the “mainstream media”) and experts, while testimony from groups removed from the establishment (e.g. certain government officials, civilians) is at times deemed reliable. Let’s call this view establishment skepticism (ES). Without journalists and experts, E-skeptics recommend the following two strategies for gaining knowledge.

      1) Think for yourself
      2) Rely solely upon personal experience and things you have seen firsthand

In this post, I will demonstrate why these strategies are prone to error and why dismissing certain kinds of testimony is not only misguided, but dangerous.

It’s generally a good idea to think for yourself. Provided that one knows how to employ valid reasoning and is well-informed about a given topic, independent thought can be useful in developing novel arguments and insights. But notice the potential pitfalls.

Suppose there is an individual who not only lacks (implicit or explicit) knowledge of basic logic, but who vehemently believes that fallacies (invalid arguments) are good arguments. It seems safe to say that it would be a bad idea for this person to think for themselves.

Suppose there is an individual who is capable of independent thought but has only encountered misleading evidence or false information. In this case, thinking for oneself will likely lead to many false conclusions given that the premises one has to work with are false.

In avoiding the pitfall of the second individual, how does one acquire good information? One might argue that a reliable way to get good information is through firsthand experience. If you are able to see with your own eyes that something is the case, how can you go wrong? Here are two ways:

(1) Your sample size is too small.
(2) Your recollection of what you have seen is selective. We all have certain biases and tend to see what we want to see. [We tend to remember the hits and forget the misses.]

Experts are in the business of correcting for all of the pitfalls previously discussed. To take two quick examples, they take into account the possibility of bias on the part of other researchers and have a solution for it (i.e. peer review), and they ensure that their sample sizes are large enough to make accurate generalizations. Nonetheless, experts sometimes get it wrong.

The most recent case of expert failure is the 2016 US presidential election. An argument often made by E-skeptics goes as follows: the (polling) experts were wrong about Trump losing; therefore, experts in general are (probably) wrong about everything. This is a terrible argument and is patently fallacious. Consider the following parallel line of reasoning, which no reasonable person would accept.

Speedometers sometimes misrepresent the speed of a vehicle. Therefore, they always do (or get it wrong most of the time).

But the E-skeptic argument is even worse than this. The argument implicitly generalizes from polling experts to all experts. It would be like concluding, because speedometers sometimes misrepresent the speed of a vehicle, all measuring instruments are unreliable.

Not all domains of expertise are of equal epistemic authority. Polling experts have to work with data that are sometimes unreliable, and with outcomes that are inherently hard to predict. So, pollsters will probably get it wrong a lot more often than experts in other fields (e.g. engineering, physics).

The relevant question to ask is, for a particular domain of expertise, “how often do the experts get it right?”

In the case of pollsters, some actually have a pretty good track record (e.g. 538). Even in the case of the recent US election, the state polls were off within a normal margin of error (1-3 percentage points), and even before the results came in, pollsters had warned about this possibility. The national polls weren’t that far off at all. Pollsters predicted that Clinton would win the popular vote by 3 percentage points. She won the popular vote by 2. More recent elections, such as the presidential election in France, have reminded us of the general reliability of election polling.
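The “normal margin of error” point can be made concrete with the standard formula for a poll’s sampling error; the sample size and split below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical state poll of 800 likely voters in a 50/50 race:
moe = margin_of_error(0.5, 800)
print(f"+/- {moe * 100:.1f} points")  # +/- 3.5 points
```

On these assumptions, a final result two or three points away from the poll’s topline is consistent with the poll working as advertised, not evidence that polling is broken.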

The reality is, we need to rely upon the testimony of experts and journalists in order to know what’s going on in the world. Thinking for yourself has its limitations, some of which I have already discussed, and we should be well aware of them. We do not have God-like powers to see everything in the world firsthand, so, we need to rely upon other people who have seen things firsthand, as well as those who have observed more indirect forms of evidence (e.g. archaeologists, geologists, astronomers). 

Now, we should not assume that experts are infallible. It’s possible that they have employed bad reasoning to reach their conclusions, or that they are unaware of evidence that undermines their position, etc. Nonetheless, we are warranted in accepting expert testimony, as long as it is in general agreement with that of most of their peers and there is no strong evidence that negates what they say.

Regularly watching the news, reading some articles, or watching youtube documentaries does not make you an expert. Most of us cannot dedicate the time and energy to become well-informed about complex issues, so we have to rely upon the testimony of those who do. There’s a reason why we have graduate schools and advanced degrees. [This isn’t to say that one cannot become an expert after years of extensive study on one’s own. Only that it takes a lot of time to become an expert, and a graduate education is the most common and, perhaps, most reliable way of gaining expertise.]

What’s the harm in considering journalists and experts to be generally unreliable sources? One harm is that someone might end up putting all of their trust into a dangerous and unreliable source (e.g. a corrupt politician). Tyranny usually begins with government leaders attacking the press while seeking public support for their policies through propaganda and lies. By selectively pointing out things that journalists or experts have gotten wrong, and by selectively pointing out the things they themselves have gotten right, authoritarian politicians try to mislead the public into thinking that they are the only reliable source of information. Note how the same bad argument mentioned earlier gets transformed into an argument for listening to certain politicians over everyone else.

Politician A is sometimes right about what he says. Therefore, he is probably right about most things.  

The relevant question to ask is “who has the better track record of getting things right? The experts or politician A?” But those who have already been won over by clever politicians will likely conclude that the politician has the better track record. After all, they believe that the politician is the one stating the facts. If it gets to the point where the only justification for believing what the politician says is that he or she said it, we have a serious problem. There would seemingly be no line of argument that could be used to get them to change their closed minds. That’s why we need to ask ourselves and each other to provide some kind of non-circular justification for the beliefs we hold. I conclude with a few suggestions for preventing the kind of dangerous closed-mindedness just discussed.

(1) Read widely. Don’t get all of your information from a small set of sources. Read essays and articles written by those you disagree with. (If liberal, read e.g. WSJ, the Daily Wire, or Fox from time to time. If conservative, read e.g. NYT, CNN, or the Guardian from time to time.)
(2) Make sure your arguments are logically valid. (Ask: Would I accept the same argument form if applied to other contexts, or stated by other individuals?)
(3) Communicate with people you disagree with. Try to understand why they believe what they believe, understand their arguments and reasons, and articulate why you hold your own views.
(4) Have some humility. There are issues where even the experts reasonably disagree with one another. If it’s a controversial subject, don’t rest much weight on your conclusions and be open to entertaining alternative views.







Tuesday, January 31, 2017

What are the facts?



In today’s world, articles expressing opposing viewpoints are labeled as “fake news” and falsehoods have been rebranded as “alternative facts”. We all have beliefs about the world. In many cases, we disagree with one another about what the facts are. But we can’t all be right. The sub-branch of philosophy known as epistemology offers some useful concepts for discussing matters of truth, facts, and belief. In this post, I will use some of these concepts to clear up some of the conceptual confusion surrounding recent events involving President Trump and his spokespeople.

Belief
Philosophers define belief as a state of mind in which a subject accepts that a given proposition is true. For instance, “Joe believes that he has work in the morning” translates to “Joe accepts that it is true that he has work in the morning”. Beliefs can be true or false, and they can be about anything, even things that are obviously true (e.g. the United States is in North America).

Truth
A belief is true if it corresponds to how the world is. A belief is false if it does not. The world is the way it is independent of our beliefs. This conception of truth goes back as far as Aristotle and is the dominant view among academic philosophers. There are epistemic relativists and coherentists who hold alternative views about truth, but in our everyday conversation, I assume that we all share a common vocabulary and are making claims about the world under that shared framework. A shared conceptual framework is what makes disagreement possible in the first place.

Beliefs can be true or false even if we cannot know what the truth is. For instance, there is a determinate number of grains of sand at the beach. Likewise, there is a fact of the matter whether or not advanced lifeforms exist elsewhere in the cosmos.

Facts and justification
There’s a fact of the matter as to how many people showed up to Trump’s inauguration. We can safely rule out that only five people attended or that five billion attended. Estimates by crowd scientists, who carefully studied aerial photographs of the event, put the attendance at around 200,000. Trump, relying upon how things looked from where he was standing, thought the number was over a million. We have two competing claims about how many people showed up to the inauguration. Which number is probably closer to the truth?

We can ask about the kinds of justification used by Trump and the crowd scientists. Which used the more reliable method for counting large crowds?

Crowd scientists look at objective measures, like aerial photographs and the number of metro tickets purchased. Aerial views enable one to see the crowd in its entirety. Looking out from ground level at the front of the crowd leaves out of sight all of those standing (or not standing) in the back. Furthermore, the front of the crowd is exactly where you would expect to see a higher density of people, thus providing a (potentially) misleading impression of how many people were there in total. We can see that these two methods of establishing crowd size are not equally reliable, and that Trump’s method is especially prone to error.
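The aerial method crowd scientists use is, at its core, area-times-density estimation: divide the photographed space into zones and multiply each zone’s area by its observed density. A rough sketch, with entirely hypothetical zone figures:

```python
# Area-times-density crowd estimate from aerial imagery.
# The zone areas and densities below are hypothetical, for illustration only.

def crowd_estimate(zones):
    """zones: list of (area_m2, people_per_m2) pairs read off aerial photos."""
    return sum(area * density for area, density in zones)

# Hypothetical zones: a dense front section, a sparser middle, a mostly
# empty back. A ground-level viewer sees only the dense front.
zones = [(20_000, 2.5), (40_000, 1.0), (60_000, 0.1)]
print(int(crowd_estimate(zones)))  # 96000
```

Note how the front zone alone would suggest a packed venue; the aerial total is dominated by the sparse zones a ground-level observer never sees, which is why the two methods can disagree so wildly.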

In response to crowd estimates conducted by experts, Trump’s counselor, Kellyanne Conway, referred to Trump’s belief as an “alternative fact”. Here, she is either misusing the word “fact” for political ends, or she is confused about the concept. Sean Spicer, Trump’s main spokesman, has made similar claims: “Sometimes we can disagree with the facts.”

There aren’t the facts and alternative facts. Facts are facts. What we have in the current situation is a disagreement between two parties about what the facts are. I take it that Conway is not just saying that Trump disagrees with the crowd experts. She is also trying to say that Trump’s belief about the crowd size is a legitimate view to hold.

Given the unreliability of the methods Trump used to form his belief, there are strong reasons to discount his testimony and to side with the experts. Hence, if Conway is insisting that Trump’s view is a legitimate alternative, then she is simply wrong. Having an alternative view does not mean that you deserve to be listened to or respected. There are plenty of possible views one might hold, but many are nonsensical or can be rejected after carefully looking at all the evidence. Holocaust denial is an alternative view. Would Kellyanne Conway be prepared to say that Holocaust deniers are presenting “alternative facts”? I highly doubt it. Conway and Spicer are probably just using this rhetoric to try and stay on the President’s good side.

In cases like these, why would some people side with the president over experts? There are several explanations one could offer.

1) Authoritarianism: Shut up and agree with what our president says!
2) Conspiracy theorizing: The crowd scientists have doctored the photos. Trump is telling the truth.
3) Anti-elitism: The academic elites and scientists act like know-it-alls, call those who disagree with them ignorant, but they are often wrong. Therefore, we shouldn’t trust experts.

I take all three of these explanations to be plausible when it comes to Trump’s most ardent supporters. It is hard to see how a rational discussion could take place with such individuals. However, there are plenty of reasonable people who voted for Trump—for instrumental reasons (e.g. Republican control of government) or because they believe his policies will lead to better consequences (e.g. making us safe)—that are amenable to reason and evidence.

Conclusion

We should all care about the truth, even if it is ugly or in conflict with our political views. Instead of demonizing those who disagree with you, hear out their arguments, first, to understand what their position actually is, and second, to see if their position has any merit. Before we begin to have a rational discussion about our disagreements, we need to share some common ground. One source of common ground is a shared understanding of the nature of truth. 

Take-home messages:
1) There are alternative views, but not alternative facts.
2) Truth is independent of our beliefs.
3) Not all beliefs are equally justified.

Monday, December 12, 2016

Science is not the only way to know things: a rebuttal to Lawrence Krauss



Epistemological naturalism, otherwise known as scientism, is the view that science is the source of all knowledge. To assess the merits of the view, one must get clear on what one means by science as well as what one means by knowledge. More than just an exercise in nitpicking, the task of sorting out adequate definitions is important for making advances in philosophical debates. In this essay, I will target some recent philosophical claims advanced by the physicist Lawrence Krauss and argue that he defends a view which understands science too broadly and knowledge too restrictively. Krauss’s claims are not merely semantic; they are controversial claims about the nature of science and knowledge, claims that have been widely discussed and examined by both philosophers of science and epistemologists.

Krauss is a strong proponent of two closely related theories of knowledge: empiricism and scientism. Empiricism states that sense experience is the source of all knowledge, whereas scientism states that science is the sole source of knowing. While these two views are closely related, one could be an empiricist without endorsing scientism and vice versa. In a series of recent debates and discussions, Krauss has made a number of philosophical claims with little to no argumentative support. Among others, Krauss has stated that “There are no such thing as non-empirical facts” and that “science is the sole source of knowing”. One might argue that Krauss's statements on these issues should not be taken as representative of his actual views. It's possible that Krauss might come to recognize that many of his statements were sloppy or mistaken, and that he actually holds views that are much more plausible. I don't buy this. Krauss has been very consistent in how he answers questions about knowledge and science over the years. His recent statements should, therefore, be taken to express views he sincerely believes. Furthermore, his statements about the nature of science and philosophy have a wide audience. He has written several popular books and has appeared in dozens of public debates and discussions with scientists and philosophers alike. As a theoretical physicist, Krauss rightly recognizes the importance and usefulness of doing science, but I think he gets several things seriously wrong when he starts talking about knowledge and how the practice of science should be defined.

For instance, take Krauss’s definition of science: “rational thought applied to empirical evidence”. Empirical evidence is clearly important for doing science, and so is rational thought, but science is not the only discipline that meets this description. For instance, most contemporary philosophers would count as scientists under Krauss’s definition. The obvious objection is that his definition is too broad, as it captures many other academic disciplines thought to be distinct from science. But the vagueness of the definition also captures countless other activities and practices. If we accept his definition, plumbers are regularly doing science because they need to make observations about pipes and faucets and make rational decisions while troubleshooting problems. Artists would also be doing a lot of science because they need to rely upon sense experience and rational thought to determine how they are going to create their artwork. In Krauss's own words, "we all do science every single day". Such consequences demonstrate the inadequacy of his definition. After all, a good definition should not be open to an endless number of counterexamples. An easy way to remedy this problem would be to give a much more specific and accurate set of conditions for what 'science' is.

If one understands Krauss to be stipulating a definition, then his claim is trivial. One could just as easily stipulate that philosophy should be defined as the exercise of critical thinking or careful reflection. Since we need to think and reason about what we know, philosophy is indispensable for knowing things about the world. Therefore, philosophy is the source of all knowledge. But I take it that these disagreements are not merely semantic; they are disagreements about what ‘science’ actually is. As in his recent book on how the universe came from “nothing”, Krauss illegitimately defines his terms and then (ironically) accuses his critics of just playing with semantics. Playing with semantics to reach a philosophical conclusion happens to be a common trend amongst scientists writing popular books lately (Harris 2011, Krauss 2012, Wilson 2014).

With the difficulties facing Krauss’s definition of science, it is unsurprising to find that his claims about knowledge are subject to many of the same criticisms. Krauss seems to think of the activity of doing science as the source of knowledge. But science, understood as a complex intellectual activity, can be broken down into more basic components. When epistemologists talk about sources of knowledge, they usually have in mind things like introspection, memory, perception, reasoning, and testimony. Scientific knowledge is comprised of a complex combination of all of these. 

Scientists rely upon their perceptual faculties when they make observations of the world. They use reason to make inferences and deductions about what they've observed. And they frequently rely upon the testimony of their colleagues and of scientists of previous generations. Once one realizes that knowledge can be derived from sources more basic than scientific investigation, it is easy to see why there are countless instances of knowledge that are by no means scientific. Here are five examples:

1. I know whether I am experiencing pain, hunger, or thirst (via introspection).
2. I know that I turned off my television set before I headed out to the store (via memory).
3. I know that my good friend recently got engaged (via testimony).
4. I know that there are infinitely many prime numbers (via mathematical reasoning).
5. I know that it is wrong to torture children for fun (via moral reasoning).

In considering such examples, Krauss would likely point out that they all involve sense experience to some degree, taking this to support his empiricism. The problem is that Krauss misunderstands the debate between empiricism and its rival, rationalism. Rationalism holds that you can come to know that certain things are true through reasoning. Rationalists do not deny that reasoning requires having experiences (e.g. introspection); that would be absurd. Their claim is that the warrant or justification for believing certain propositions comes from the chains of reasoning themselves, rather than from something you have to experience or observe. Take logic and math. If you know that Tom is a bachelor, you can come to know that Tom is an unmarried man. To know this, you don't need to go out and investigate Tom or look at statistics on bachelors; it's a conclusion you can reach just by thinking about the meaning of the concept 'bachelor'. Tellingly, Krauss disagrees (timestamp: 20:50). He thinks one needs to look out at the world to see whether all bachelors are in fact unmarried. On his view, we can observe that all bachelors are unmarried men, and therefore it is empirical evidence that grounds that fact. I will say more later on why this is deeply confused.

Similarly, with mathematics, you just have to sit back and reflect upon the nature of 'prime numbers' and 'infinity' in order to work out why it is true that there is no largest prime. To put the point more sharply: if experience were what really gave you the truth of mathematical propositions, then you could encounter things in the world that might falsify basic mathematical truths like "2+2=4". Strangely, Krauss, in misunderstanding empiricism, bites an unnecessary bullet on this point (1). In conversation with Peter Singer, Krauss has stated that if you were to encounter a situation where, say, two pairs of apples put into an empty box yielded five apples, you would have grounds for thinking that 2+2=5. Singer (rightly!) replies that the rational response in such a situation would be to assume that some kind of magic trick was performed, rather than to revise your mathematical beliefs. Your experience of the world has nothing to say about the answers to mathematical equations or basic logic questions.
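The "no largest prime" example is worth pausing on, because it shows exactly what armchair reasoning can deliver. Euclid's classic argument, sketched here in standard notation, requires no observation of the world at all:

```latex
\textbf{Claim.} There is no largest prime.

\textbf{Proof sketch.} Suppose, for contradiction, that $p_1, p_2, \dots, p_n$
were all of the primes. Consider the number
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But every integer greater than $1$ has at least one prime factor, so $N$ must
have a prime factor that is not among $p_1, \dots, p_n$, contradicting the
assumption that our list contained every prime. $\blacksquare$
```

Nothing in this derivation invites empirical check; its warrant comes entirely from the chain of reasoning, which is just the rationalist's point.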

The irrelevance of empirical evidence to fields that deal largely in the abstract explains why mathematicians and logicians have made great progress with just pencils and paper. (Side note: I tend to think that philosophers occupy a kind of middle ground between math/logic and science. Some are interested in questions where empirical evidence is mostly irrelevant (e.g. metaethics) while others, like myself, are interested in issues (e.g. how the mind works) that are intimately connected with scientific investigations.)

One might argue that since facts about mathematics and logic aren't about the physical world, they aren't bona fide facts; they're just relations of ideas or conceptual truths. This is what some empiricists, such as David Hume, argued. But why believe that facts have to be about the physical world? What makes propositions about the physical world privileged? Is there a sense of objectivity that comes with science that is not realized in mathematics or logic? If so, what exactly is the relevant difference between claims about the world and claims about formal systems? A possible answer appeals to yet another philosophical view, scientific realism: unlike the objects of mathematics or logic, the entities and posits described by scientific theories are objectively real, independent of human thought, while mathematics and logic are just useful human inventions with no bearing on claims of truth. Judging by what Krauss has said about the nature of science, he strikes me as an ardent scientific realist. For instance, Krauss has stated, "It is nature that determines what facts are, not people" (my emphasis).

While I do not have the space to defend such claims here, I want to say that any proposition—a statement that is either true or false—is a candidate for knowledge, and that a fact is to be understood as a proposition that is true (e.g. 2+2=4). Therefore, there can be facts and knowledge about mathematics, logic, and perhaps even morality and metaphysics. In a future post, I will try to grapple with some of the deeper questions raised in these closing sections. Specifically, I will discuss scientific realism and determine whether the view is strong enough to justify treating science as a privileged epistemic standpoint.


Perhaps the takeaway message from Krauss's philosophical ventures is that he nicely illustrates how not to do philosophy, and demonstrates the need for philosophical rigor and understanding. Philosophy has applications not only within domains of study that are mostly nonempirical (e.g. ethics, aesthetics, metaphysics), but also in both doing and reflecting upon science. As the American philosopher Daniel Dennett has aptly put it,


“There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.” (Dennett 1996)


Works cited:


  • Dennett, D. C. (1996). Darwin's dangerous idea: Evolution and the meanings of life. New York: Simon & Schuster.
  • Harris, S. (2011). The moral landscape: How science can determine human values. Simon and Schuster.
  • Krauss, L. M. (2012). A universe from nothing. Simon and Schuster.
  • Wilson, E. O. (2014). The meaning of human existence. W. W. Norton & Company.

Thursday, December 1, 2016

Applied critical thinking: Expert testimony


We often take the testimony of experts for granted. When reading the newspaper or watching a documentary program, it seems reasonable to accept certain claims, provided that they are made by individuals with the right sort of credentials. But for areas of genuine controversy, it would be unwise to accept expert testimony at face value. If we are to take sides in a controversy, we ought to be able to explain why a given expert is right and why others who disagree are wrong. There are plenty of instances where experts strongly disagree. For instance, some doctors believe acupuncture is an effective treatment for musculoskeletal pain, while others believe it doesn't work and is nothing more than an elaborate placebo. In such cases, it might not be obvious whose testimony we ought to trust, especially if one knows little to nothing about medicine. Figuring out whom to trust can be complicated and will likely take some time. In my personal experience, many friends and family are disposed to throw their hands up in the air whenever areas of controversy are brought up. How can we—as nonexperts—ever decide who is telling the truth? How can we know? Yet instead of adopting agnosticism about all areas of controversy, most of us, in practice, listen to some experts and ignore others. For instance, we are more likely to accept the testimony and advice of experts who share our own views. Rather than trying to confirm the beliefs we already hold, or engaging in wishful thinking, we ought to critically evaluate competing expert testimony to the best of our ability. In what follows, I will expand upon a proposal developed by the philosopher Alvin Goldman, aimed at helping one decide which experts to trust.

First step: Sift out the pseudoexperts
Before discussing the problem of how to choose between experts, we need to determine who has expertise in the first place. The thought is that once we eliminate the phony experts, we can move on to the harder question of how to decide between genuine experts. Perhaps a sufficient condition for being an expert in some domain of study would be the possession of an advanced degree—in the relevant field—awarded by a recognized academic institution. For instance, an expert in physics would be expected to have a PhD in physics. However, the knowledge of a typical PhD is likely to be highly specialized: a scientist with a PhD in physics may be an expert on particle physics but know very little about astrophysics or applied physics. In some cases, it may be unclear which area of study is most relevant to the issue at hand. Here's one example. For dietary advice, one might consider a nutritionist to be the most relevant authority to consult. While nutritionists might know a fair amount about dieting and nutrition, a better source would be a registered dietician. Dieticians tend to have much more training in science and medicine than nutritionists, and they must pass a comprehensive exam to become certified. Overall, they are more qualified to make judgments about dietary claims. Therefore, with regard to claims about dieting, the relevant experts are dieticians, not nutritionists. Individuals who purport to be experts on certain matters but who lack the relevant qualifications and/or training should raise red flags. In summary, before assessing expert testimony, one must try to answer the following questions:

What does X’s expertise consist in?
Is X’s expertise in any way relevant to the issue at hand?

Once one has found a genuine expert, and more importantly one whose expertise is relevant to the question at hand, one must determine whether this expert is trustworthy. Are there reasons to doubt his or her testimony?

Steps for analyzing the testimony of true experts

It can be unhelpful to look at the testimony of one expert in isolation. From the point of view of a layperson (nonexpert), most experts tend to be highly persuasive. To get a better sense of how reliable an expert's testimony is, try to find an expert who disagrees, preferably one with comparable training and experience. After finding two experts who disagree, it's time to compare what each has to say. The philosopher Alvin Goldman proposes five ways to determine which expert is more trustworthy. I will deal with each in turn and list some of the problems these guidelines face.

“(1) Read or listen to arguments and counter-arguments offered by the two experts, whether in a published exchange of views, an oral debate, or separate defenses of their respective positions.” (1)
Difficulty 1: The evidence and/or arguments discussed may involve esoteric terminology. One may try to listen to the arguments and counter-arguments but fail to understand, or even misunderstand, them. Goldman draws a distinction between esoteric and exoteric terminology. Esoteric terms are not only unfamiliar to non-experts; they are inaccessible to them, perhaps because they involve unfamiliar concepts and theories. Exoteric terms are unfamiliar to non-experts but can be learned and understood by novices without any specialized training. Grasping exoteric terms may require some extra reading on the subject, whereas understanding esoteric terms may require one to become an expert in that field.

Difficulty 2: Superficially convincing arguments could be made to support one side of the debate, but these arguments may turn out to be invalid or to contain false premises. If one tries to assess the arguments and counter-arguments of two disagreeing experts, one had better have a decent working knowledge of informal logic. The expert who commits more logical fallacies possesses fewer solid reasons for their belief. But a valid argument is not necessarily a sound one. Returning to the first difficulty, non-experts might not be able to tell whether a premise is true. To an expert, a given premise may be obviously false and contradicted by plenty of evidence they are aware of; to a layperson, it may seem plausible.

“(2) Find out what the opinions of other (putative) experts on the topic in question.    If most of them agree with expert A, then identify A as your best guide.  If most choose expert B, identify B as the more trustworthy one.  In short, go with the numbers to guide your choice of favored expert.”
Caveat: When it comes to issues where the vast majority of relevant experts agree, going with the numbers is a good rule of thumb. For instance, the vast majority of climatologists accept that the planet is currently going through a warming trend and that this is primarily due to recent human activity. Should we accept the testimony of climatologists solely on the basis of consensus? In short, no. There are possible scenarios in which it would be rational to doubt a consensus opinion (e.g. that of Nazi scientists during WW2). But as long as the consensus position is backed by valid arguments and independent sources of evidence, it is rational to side with the consensus.

“(3) Consult "meta-experts" about experts A and B.  Try to find out which of them is the superior expert by asking people in a position to compare and contrast them.  Or people who trained them or have worked with them.”

The idea here is to look for additional experts beyond the two you initially found. They might be in a good position to tell whether expert A or B has compelling arguments, especially if they have nothing to gain or lose in the debate. Where can one find meta-experts? You're likely to find plenty if you look through the peer-reviewed literature, reputable periodicals, or even personal blogs; today, many scientists blog to educate the general public about their work. One could also contact meta-experts directly at a local university, or reach out through email or specialized web forums. And if you're lucky, you may personally know some meta-experts who could weigh in on the debate.

 “(4) Obtain evidence about the experts' biases and interests, which might lead them to self-serving answers of dubious veracity (whatever their underlying competence).”

In some cases, there are obvious biases and conflicts of interest at play with a given expert. Whether the agenda is political or religious, these factors need to be taken into account. But one should be cautious when discrediting experts on these grounds. Some of their testimony may be perfectly sound, or their bias or conflict of interest may have played no role in the formation of their beliefs about the issue at hand. Furthermore, an expert may be extremely biased and yet turn out to be right. Therefore, before one discredits an expert on grounds of bias or a conflict of interest, one must have some independent reason to think that their claims are wrong. Conspiracy theorists frequently misuse this guideline when they discredit all experts who testify against their favored theory. Finding a potential source of bias (e.g. government funding) for some expert, and then completely disregarding everything they have to say, is intellectually lazy and dishonest. The implication is that all such experts are lying or saying misleading things; if either could be shown, that would be a reason to seriously doubt the expert's testimony, not the mere possibility of a conflict of interest.

Another source of bias can be uncovered by carefully studying the behavior of a given expert. Experts who dismiss alternative positions out of hand should raise red flags. I say this because most experts tend to exhibit a certain psychological profile: they tend to be fairly humble, well integrated into their epistemic communities, and genuinely interested in the truth. They tend to be cautious when making controversial claims and to admit that their own favored hypotheses could be mistaken. They actively engage their peers in academic journals and at conferences, test their hypotheses, and compare the success or failure of their own predictions against rival theories. Commenting on the small group of scientists who endorse 9/11 conspiracy theories about the tower collapses, Noam Chomsky has noted that "they are not doing what scientists and engineers do when they think they've discovered something" (2). Having studied the phenomenon of conspiracy theories quite extensively, I can vouch for Chomsky. Many of the "experts" who promote conspiracy theories lack the psychological profile found amongst genuine experts and seem to share a number of opposing traits: many manifest an excessive degree of pride or intellectual superiority, keep within a closed circle of peers who share their views, and do not even attempt to convince the general scientific community of their "findings". These individuals might possess the relevant knowledge to assess the claims at issue, but their psychological profile and behavior cast serious doubt upon their capacity to seriously engage criticism and to critically evaluate their own positions. It is for this reason that their testimony should be taken with a grain of salt.

“(5)  Gather evidence of their past track-records and apportion trust as a function of these track records.”

Gathering evidence of an expert's track record may prove a challenge. Good track records may come in the form of accurate and specific predictions. One might also look through their publication history or get a sense of their reputation amongst peers. Another way to assess an expert's track record is to see whether they have subscribed to controversial or fringe views in the past. Some experts are simply contrarians, while others seem to suffer from crank magnetism. For instance, James Fetzer, a well-respected philosopher of science who taught critical thinking for most of his career, believes just about every conspiracy theory. He not only believes that 9/11 was an inside job, but also that no children were killed at Sandy Hook, that we didn't land on the moon, and that Paul McCartney died in the 60s and was replaced by someone with the same physical appearance, personality, and musical talents. In Fetzer's case, there seems to be some systematic misapplication of critical thinking going on (at least when he is theorizing about certain historical events). Those like Fetzer not only routinely appeal to pseudoexperts; they accept many demonstrably false or highly questionable claims, make a number of unwarranted assumptions about human nature, and are apparently unskilled at making inferences to the best explanation. Since critical thinking is really a set of skills, knowing certain concepts and strategies related to the subject is not sufficient for knowing how to apply them.
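Purely as an illustration of how the pieces fit together (and emphatically not as anything Goldman himself proposes), the credential check plus the five guidelines can be caricatured as a toy scoring function. Every field name and weight below is my own invention; the point is only that the procedure filters first and then weighs several independent signals:

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    relevant_credentials: bool     # preliminary step: sift out pseudoexperts
    argument_quality: int          # (1) arguments and counter-arguments, 0-5
    peer_agreement: float          # (2) fraction of other experts who agree, 0-1
    meta_expert_endorsements: int  # (3) endorsements from meta-experts
    conflicts_of_interest: int     # (4) known biases / conflicts of interest
    track_record: int              # (5) past predictive accuracy, 0-5

def trust_score(e: Expert) -> float:
    """Toy aggregation of the guidelines; weights are arbitrary."""
    if not e.relevant_credentials:
        return 0.0  # pseudoexperts never make it to the weighing stage
    score = (e.argument_quality
             + 5 * e.peer_agreement          # "go with the numbers"
             + e.meta_expert_endorsements
             + e.track_record
             - 2 * e.conflicts_of_interest)  # bias counts against, not as proof of error
    return max(score, 0.0)

# Hypothetical experts disagreeing about, say, acupuncture:
a = Expert("Dr. A", True, 4, 0.9, 2, 0, 4)
b = Expert("Dr. B", True, 3, 0.1, 0, 2, 1)
c = Expert("Mr. C", False, 5, 1.0, 5, 0, 5)  # no relevant credentials
```

Of course no real inquiry reduces to arithmetic like this; the sketch only makes vivid that the guidelines are meant to be combined, and that a conflict of interest lowers trust without zeroing it out the way a lack of relevant credentials does.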

Conclusion
I believe that if one closely follows all of the advice outlined above, one will be more likely to acquire true beliefs about the world. But it is one thing to propose a strategy that makes sense in theory and quite another to have the strategy actually work for most people. Certain facts about our psychological limitations (e.g. confirmation bias, cognitive dissonance) may prevent us from being objective enough to really follow through with such advice. Nonetheless, we ought to at least try to be as objective as we can. Given the complexity of the world around us, we all need to appeal to experts at some time or another, whether it's to find out about our personal health, or how the world works. If we are going to appeal to experts, and are genuinely interested in discovering the truth, then it's a good idea to try and track down the right ones.

Bibliography
 1. Goldman, A. I. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85-110.


2. Tuskin, B. (2013). Noam Chomsky has no opinion on building 7. Retrieved December 05, 2016, from https://www.youtube.com/watch?v=3i9ra-i6Knc