I once almost gave a talk at a philosophy conference at Eastern Michigan University!1 This is a massively-expanded version of the three-ish paragraph proposal I wrote up, mixed up with a comment I left on a post last fall.
1. You Know Nothing
Well, no, you probably know a few things. Like that monogamy is morally permissible, that the phones are making us dumb, and that penises are bad for society, if you’re a regular around here.
But what does it mean for you to know these things?
Well, the classic definition of knowledge is justified true belief (JTB). You believe monogamy is morally permissible, you justify that belief with all the wonderful arguments I’ve made for it, and your belief is true. So you know it!
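(For anyone who likes seeing the conditions laid out explicitly, here’s a minimal spelling-out of the tripartite analysis. This is just my own gloss rather than a quotation from any particular epistemologist, with S standing for the believer and p for the proposition in question:)

```latex
\documentclass{article}
\usepackage{amsmath} % for \text and the cases environment
\begin{document}
% The classic "tripartite" (JTB) analysis of knowledge: three jointly
% necessary -- and, pre-Gettier, supposedly sufficient -- conditions.
\[
  S \text{ knows that } p \iff
  \begin{cases}
    \text{(1) } p \text{ is true,} \\
    \text{(2) } S \text{ believes that } p\text{, and} \\
    \text{(3) } S \text{ is justified in believing that } p.
  \end{cases}
\]
\end{document}
```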
Unfortunately, this definition has a bit of a problem. It implies there are some things you think you know, but actually don’t.
For instance, suppose you and I make a bet over whether there’s a squirrel outside your window right now. If there is one, you’ll have to give me $20 (and your undying love, attention, appreciation, and engagement, forever and ever), and if there isn’t, I’ll have to send you a blog post tomorrow.
You look outside your window, and no squirrel!
So you form a belief—that, tomorrow, a new blog post will appear in your inbox. And the belief is justified: you presume that I’ll send the post because you won our bet.
But our bet was pretty poorly defined! There’s no way for me to verify its outcome, so I decide that we should call it off, and send you a message saying so. Unfortunately, you’re out partying too hard in anticipation of tomorrow’s post and you don’t see the message.
Later in the day, I have a great philosophical epiphany and decide to write a blog post regardless. It arrives in your inbox the next morning.
Did you know that my blog post would arrive?
Well, you believed that it would, your belief was true, and you justified your belief! But your justification was wrong—you thought I’d send you a post because of our bet, not because the muses spoke to me while I apostolically wandered the streets of Ann Arbor preaching the gospel of shrimp welfare.
You had a JTB, but your justification was illegitimate—so it doesn’t quite seem appropriate to call it knowledge, does it? Clearly you were severely misinformed about the situation! If another subscriber happened to see me become inspired and begin fervently typing into the Substack editor, they would then form a legitimately justified true belief about receiving a post tomorrow—they would know about it.
This sort of situation is called a Gettier Case, named after an epistemology-wrecking schmuck called Edmund Gettier.
Your belief only happened to be right—you got lucky! But you didn’t really know anything.
So Gettier concluded that the JTB definition of knowledge was insufficient.
Aagh! What even is knowledge then? Are we all justifying everything wrong all the time? Is all that I think I know only accidentally true? That seems bad!
2. Maybe It’s Not Bad
About six months ago, Generously-Tipping Substacker Amos argued that Knowledge Is Overrated. He wrote that legitimate justification isn’t all it’s cracked up to be. Even in a Gettier Case, when you happen to be right by accident, there’s no need to stress:
Suppose you care—as I do—that (a) your beliefs are true, and (b) you didn’t arrive at them in a blameworthy or criticisable way. In this case, both values are attained: your belief is true, and you didn’t fail in your epistemic obligations. Given this, what’s there left to care about?
Amos grants that luckily-true beliefs aren’t ideal—we should also be seeking out knowledge, the real kind with legitimate justification—but the occasional Gettier JTB is nothing to get worked up about.
Then again, surely some Gettier JTBs must be problematic! I could really be harmed by certain kinds of illegitimate justification, couldn’t I?
Well, Amos argues that, even in the most devastatingly harmful Gettier Cases, you should still mostly just chillax:
[S]uppose, [when I try to subscribe], a malicious Substack genie hacks the website to make it so that when I press the subscribe button … it subscribes me to [the wrong newsletter] (*shudders, dies*) instead of [the one I meant to]. In that case, you’d form the justified true belief that your inbox will never be barren, but be wrong about whose newsletters will be gracing it.
Could you imagine being put in such a situation? How terribly angry you’d be… But Amos argues that you wouldn’t be angry over your lack of knowledge—rather, you’d be upset with two slightly different features of your situation:
[W]hat you care about is that (a) you have to delete a bunch of typo-ridden emails from some Dunning-Kruger patient from Michigan, and (b) that you had a false belief about a closely related proposition: namely, about who you’d be getting emails from.
It’s not your lack of knowledge that’s to blame! Just the emotionally draining task you’ve created for yourself, and the false belief that led to it—your assuming you’d subscribed to the British pasty young philosophy man.
Boom, done, right? “Knowledge is #overrated, send tweet.”
3. Knowledge Is #SometimesImportant
For a while, everyone thought space was filled with some magical mysterious element called aether.
Just like sound needs air to propagate, and water waves need… water… to propagate, we assumed that things traveling through space, like light and gravity, needed aether to propagate. Waves traveling through a vacuum? Unthinkable! Absurd!
When Isaac Newton wrote the Principia, he discovered lots of interesting things and produced a lot of knowledge. He noticed that objects had inertia, that force was proportional to mass and acceleration, and that apples tended to bonk heads. And he justified all these beliefs too—you have inertia because you don’t ever feel like you’re moving,2 F = ma because all the experiments show that F = ma, and apples bonk heads because gravity pulls them out of trees.
Newton also figured out some laws of planetary motion based on gravity. He realized that the moon was looping around us, and explained that gravity was causing that too.
But how, exactly, was gravity reaching out and grabbing at the moon? For that matter, how was it grabbing at apples?
Aether, Newton said.
Aether, the magical invisible medium, was constantly flowing toward the earth, pulling everything along with it. Air resistance slowed a pendulum, and the flow of aether accelerated it.
In other words, Newton’s theory of apple-bonking was a Gettier Case! His belief was true—apples really do bonk heads—but it was justified illegitimately—in fact, aether’s not pulling the apples down, something something spacetime curvature is!
This seems like a significant problem—having the wrong model for what’s causing physical phenomena makes engineering difficult! It’s hard to get a satellite into orbit if you think a bunch of aether’s in the way.
We can even put Amos’ pet example into a similar situation—if you accidentally subscribed to Bentham's Newsletter, you might go back and try to subscribe to Amos again, once more creating the JTB that your inbox will be filled.
But, if the Substack genie continued Substack genie-ing, you would find yourself accidentally subscribed to Bentham again and again and again...
A realization that your justification was illegitimate—discovering the intervening genie—would clearly be necessary to plan all your future actions properly.
So Gettier Cases are, in themselves, imprudent when we’re thinking about what to do in the future. Justification of gravity via aether makes space travel harder, and justification of inbox-filling that doesn’t take Substack genies into account can only cause disappointment.
4. This Is #MaybeMorallyDisastrous
Consequentialists are forming lots of beliefs about the results of their actions all the time. When I save a drowning child from a pond, I do so because I think it’ll bring about a better world.
Why do I think so?
Because I presume she’ll live a good life and be glad I saved her! If, after I’ve saved her from the pond, she thanks me and goes off and is happy, then my belief will be confirmed: the world will get better, I knew it.
But what about certain cases of moral luck?
Imagine that a well-respected utilitarian demagogue has somehow convinced me that I should try to execute the owner of a chicken farm. I buy a rifle and drive out to his farm, hide behind my car, and wait for him to appear.
The farmer walks out of his barn, I fire, he drops, and I drive away. I believe that I’ve brought about a better world, and I think it’s because all of his chickens will live much better lives under the care of his daughter, who’ll inherit the farm.
Unfortunately, I’m totally wrong!
I accidentally shot the farmer’s twin brother. Now, this brother was a bad dude himself—in fact, he was planning to go to lots of restaurants that night, leaving no tips and drinking lots of Diet Coke. (And then he was also gonna go kill a few people afterward.)
So I actually did bring about a better world! But I’d justified that belief thinking it would result from killing a chicken farmer, so I decide to become a serial killer of chicken farmers.
Of course, killing chicken farmers is actually a really awful way to improve the world! My antics bring a lot of negative attention to utilitarianism, kill a ton of innocent people, and end up saving very few chickens.
This is bad!
It’s clearly very important that our beliefs about the morality of our actions be legitimately justified. Put another way: correlational studies won’t do—we need to be proving causation at every turn.
It’s not entirely clear to me how much we’re already doing this, or how widespread the Gettier Problem is for consequentialists. But I’d wager it’s significantly more common than most of us like to think—the replication crisis was a big shock for the sciences, and seems analogous. Somewhere between 5% and 40% of medical studies are bunk—philosophers and effective altruists are usually working with much smaller budgets.
We try to be careful—EAs especially really do try—but causation is a high bar. Even if Gettier Cases aren’t disastrous for morality, they’re certainly demanding.
Fundamentally, justified true belief can’t quite cut it—if we want our moral heuristics to be worth anything, if we want to plan for the future, we need knowledge.
Pulled out because of a scheduling conflict, FYI! Was not rejected! Thank you very much!
This explanation might feel a little strange, so I’m gonna spend a paragraph on it:
Newton’s First Law—objects keep moving the same way they have been unless an external force interferes—is best understood via the principle that there’s no way to adjudicate between inertial observers. If you and I float past each other on the ISS, I’ll feel like I’m totally stationary, and you’ll feel like you’re totally stationary, but we’ll each see the other passing by. So we can say that every object feels to itself like it’s totally stationary. Of course, a stationary object would never start moving without an external force causing it to, so we conclude that no object will ever change its motion without such an external force.
I take it that Amos' point was simply that we shouldn't care about justification in itself, and that a true belief isn't itself inferior to a justified true belief. It seems like most of your arguments are to the effect that justification is instrumentally valuable because it leads to less error outside of the single belief we have stipulated to be true. But I don't think Amos' position is in conflict with that. Sure, justification is valuable because it makes us better at getting true beliefs, but the justification part is completely irrelevant to the value of any particular true belief.
I’m not sure your counterexamples actually disprove what Amos argued. Indeed, in all your examples, the actual problem seems to be that you have false beliefs about topics on which true information is valuable. For example, in the Newton example the actual problem is that you have false beliefs about how gravity works, which makes space travel harder; if your belief is in fact true, you won’t have this problem. Similarly, in the chicken farmer example your actual problem is that you have false beliefs about the effects of killing chicken farmers, which is the real reason why you are behaving in ways that are predictably bad for your goals. Unless I’m misunderstanding some of your examples, this seems to be the case for pretty much all of them.
Also, just thinking about it, it seems to me that justified true beliefs are what we actually want. We want true beliefs because accurate information is often valuable for achieving our goals, like following policies that actually produce the effects we want or making technological advances like vaccines. We want to reason in a justified way because reasoning in unjustified ways will predictably lead us to less accurate beliefs, unless we get lucky, which by definition is not something we can control. So reasoning in justified ways is, in expectation, the best strategy for obtaining accurate, valuable information, which is instrumentally useful for achieving almost any goal. This also justifies the common intuition that knowledge is something even bad actors rationally want, since it appeals to purely instrumental reasons rather than virtue. That said, even if you disagree with my argument regarding why justified true beliefs are quite satisfactory for our purposes, my criticism of your examples still stands.