I said this book is about argument. But why argue about philosophical beliefs? It’s a free country; everyone’s entitled to their own opinion, right?
Well, that depends.
First of all, we should distinguish between two sorts of opinion. One sort has nothing to do with truth. Examples:
- chocolate ice cream is much better than vanilla
- classical music sucks
- the Toronto Maple Leafs are the only hockey team to root for
- Three Stooges movies are hilarious
- Justin Bieber is just the cutest thing there is
I don’t mean that these are false. I mean that they’re neither true nor false. It’s stupid to argue about any of these opinions (although, for sure, people disagree about them). You are, of course, entitled to these opinions, and you shouldn’t really care if somebody feels differently. So stop getting mad at those Montréal Canadiens fans.
But other opinions are either true or false. These are opinions that are called beliefs. If you believe that the Franco-Prussian War began in 1875, then you’re right or you’re wrong. (You’re wrong. It was 1870.) If someone has a different opinion, then at least one of you is wrong. If it makes any difference to either of you, then you’ll want to make some efforts to find out what the year really was. All this is entirely out of place, of course, when it’s a matter of ice cream preference.
But does it matter to you whether your belief about that date is true or not? Should it matter? What’s so good about having true beliefs?
One reason is that, by and large, having a true belief about something is better for you—has better effects—than having a false belief, or no belief at all, about it. This is especially clear when it comes to everyday matters. If you believe your plane leaves at 4 p.m. and arrive at the airport in plenty of time for that, but in fact your plane was scheduled for 10 a.m., you’re in trouble. And you’ll also be in trouble if you have no beliefs at all about when your plane leaves (and don’t try to find out).
Sometimes, however, having a true belief is more harmful. For example, imagine that the plane you missed crashed after take-off; your life was saved by your mistaken belief. But the consequences of true beliefs are usually better than those of false ones.
BULLSHIT
“Bullshit” is now an acceptable technical term in philosophy. I’m not kidding. It has been ever since the publication, in 2005, of On Bullshit by the distinguished Princeton philosopher Harry Frankfurt. The book is not a joke either. It’s a serious treatment of what Frankfurt takes to be a growing and dangerous trend: talking and writing with no concern about truth. Bullshitters, then, are different from liars. Liars recognize the difference between truth and falsity, and try to get others to believe what’s false. Bullshitters don’t care what’s true or not, and pay no attention to truth or falsity in what they say. They just make up things, and say them to suit their purposes.
Frankfurt thinks that bullshitting is on the increase. He thinks that part of the blame for this is the increasing number of situations in which people who know nothing at all about a subject are encouraged to give their opinions about it. Several situations like this come to mind. School children, in order not to injure their delicate self-esteem, are nowadays encouraged to express any unfounded opinion they like; they’re praised by the teacher, and their opinions go uncorrected. The Internet provides endless venues for ignorant opinionation. There are loads of “call-in” programs on the radio, where people’s baseless but nevertheless strong views can be broadcast to a large audience.
Frankfurt points at postmodern antirealism, a family of views recently infecting the academic humanities, the central idea of which is that there is no such thing as the facts: there’s only what appears to you, coherent with—in fact, generated by—your particular ideological/social/economic/gender position. (I doubt, however, that this movement has had much effect outside the Groves of Academe.)
Several political commentators have remarked on the growing prevalence of bullshit in political campaigning, frequently mentioning Donald Trump’s tendency to say, for effect, whatever he wants, with no concern about whether what he says is true. Compare this with a slightly different phenomenon in political talk: truthiness. This word, coined by American TV satirist Stephen Colbert, refers to an assertion a person makes because it feels right, not because it’s backed by evidence or logic—the truth he wants to exist. Colbert attributed this characteristic to assertions made by George W. Bush. And, of course, both of these are different from Nixon’s flat-out lying.
Frankfurt is upset about the increase in bullshit. He thinks that getting used to saying and hearing bullshit undermines our capacity to care about the truth of what we believe—so it’s more dangerous than lying.
Frankfurt is right to be upset about political bullshit. There are consequences, often very important ones, of what politicians say.
But wait a minute. Let’s be careful here. There are two possibilities for interpreting what the Donald says. One is that he’ll just say anything, whether he believes it or not, as long as it has the effect he wants. This is something like lying but not exactly. Another is that he actually believes what he says, but he has failed utterly to take proper care—any care at all, apparently—to assure himself that his beliefs are really true. Both might be counted as bullshit. When we (finally!) get around to thinking about talk and writing in philosophy, we’ll consider both kinds of philosophy bullshit.
BELIEFS OF NO CONSEQUENCE
All right. Let’s concentrate on the second kind of bullshit: failure to take appropriate care to make sure that what you believe is true. The question we’ve been looking at—maybe you’ve forgotten, and no wonder—is why we should worry about whether our beliefs are really true. When there are good consequences for true beliefs, or bad consequences for false, then that’s the answer. But there are lots of cases in which what you believe has no practical import—that belief about the Franco-Prussian War, for example. Or the belief that the planet Neptune has the third-largest diameter of any planet in the Solar System, or that the smallest prime number larger than 829 is 833. These beliefs are false (Jupiter, Saturn, and Uranus are larger in diameter; and the next prime number is 839—Are you glad I told you?), but it’s hard to imagine how having these beliefs, or no beliefs at all about these matters, might do you any harm, or how you’d benefit from a true belief about the war, the planet, or the number. What good will it do you to have the (true) belief that in 1938 the state of Wyoming produced one-third of a pound of dry edible beans for every man, woman, and child in the nation?
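(For readers who like to take that sort of care about even inconsequential beliefs: the prime-number claim is easy to verify mechanically. Here is a minimal sketch in Python—my own illustration, not part of Frankfurt’s or anyone else’s argument—using simple trial division.)

```python
def is_prime(n):
    """Trial division: True if n is prime."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime(829))  # prints 839: 833 = 7 * 7 * 17, so it isn't prime
```

Whether running this little check does you any good is, of course, exactly the question at issue.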
This bean factoid actually had considerable practical application. The single sentence reporting it was used as a filler (a tiny item inserted at the bottom of a newspaper column to take up empty space) in a scrappy little New York newspaper, the Village Voice, that used it constantly during the 1960s, sometimes three or four times in a single issue.
But it seems that even when there’s no prospect of benefit resulting from a true belief, or harm resulting from a false one, some people want to have true beliefs anyway. Why? Well, they just value truth, period. True beliefs sometimes have, for some people, what philosophers call intrinsic value; that is, they are just good in themselves, not good for anything else. (The contrast here is with instrumental value. Scratching where it itches, for example, has purely intrinsic value.)
From a biological/evolutionary point of view, it makes sense that we see intrinsic value in having true beliefs. The whole biological
point of our belief-forming mechanisms is to get beliefs right—any sorts of beliefs. Truth is where the survival advantage usually lies, and that explains why these mechanisms evolved.
This is somewhat oversimplified. We can distinguish between mechanisms that are designed primarily to maximize the organism’s true beliefs, and those designed primarily to minimize the organism’s false beliefs. Which is more advantageous depends on circumstances (the value of a true belief, the harm of a false belief). Further, when it would take too much time or effort to do either, something less than maximizing/minimizing would be more advantageous.
These belief-forming mechanisms are general ones that don’t pick and choose among different contexts. So it’s normal to have some curiosity—some desire to have true beliefs and avoid false ones—even in areas where there seems to be no advantage or disadvantage in believing anything or nothing.
The fact that evolution has built us (some of us) to care about the truth of our beliefs (some of them) does not justify that care. It may explain why we care, but it doesn’t show why somebody who doesn’t care should. This is a general problem with justification. When (it seems) we’ve reached ground floor—when something is desired just intrinsically—how can we prove to people who don’t desire it that they should find it desirable? Maybe we can’t. Suppose you had a friend who thought that the Franco-Prussian War began in 1860 because she just sort of remembered having learned that in high school a decade ago, and you wanted to convince her to take more care to make sure that that belief was true. Historians are interested in getting these things right, but why should your friend be? If your friend (like many others) isn’t really worried about the possibility she’s gotten the date wrong, then maybe that’s the end of the story.
OKAY BUT REMEMBER WE’RE TALKING ABOUT PHILOSOPHY
So now there are three possible categories for philosophical opinions:
1. There isn’t any question of truth or falsity here at all.
2. Truth and falsity do apply, and there are consequences which make it better to have true opinions. Philosophical truths can have instrumental value.
3. Truth and falsity do apply, but having true opinions just has intrinsic value for some people.
It’s a popular view that philosophical opinions fall into category (1). This categorization is expressed by the saying, “Everybody’s got their own philosophy.” The implication here is that there’s no arguing about philosophical opinions, because they’re something like individual matters of taste. Compare “Everybody’s got their own taste in ice cream.”
But what is this “philosophy” that everyone’s supposed to have? Sometimes people mean by a “philosophy” a guiding principle for your life, or a short and pithy description of your general outlook.
A friend of mine reported that the driver of a taxi he was in had asked him what he did for a living. When he said that he was a philosopher, the taxi-driver replied, “Really! Well, what are some of your sayings?”
It’s possible (but unlikely) that you have one of those philosophies (or should). Most people don’t have one of them. They never think about an overriding general principle for their lives, and just live them one day after another. Is there something wrong with this? Socrates (who you’re supposed to think is the greatest philosopher ever) allegedly said: “The unexamined life is not worth living.” Is that supposed to mean that you’re better off dead if you run your life without thought to your overriding general principles? Well, that’s crazy. That sort of self-examination stands a good chance of rendering you incapable of doing anything, paralysed with self-consciousness, or at least of making you the kind of person others want to avoid.
Anyway, if you care to have one of these, you can make one up any way you like. Go right ahead. Most of the books in the “philosophy” section of your local bookstore are designed to help with this. But here’s an unexpected source of help with finding your “philosophy”: a cosmetics company, which owns the Philosophy.com domain, offers six different perfumes in their My Philosophy series, each having a name which is a full sentence expressing a different philosophy. One flavour of perfume is called
I see the world with love and compassion (the corresponding smell is “creamy vanill...