What's the Point of Being Righteous if There's No One Being Wrongteous to Yell At
Violence, effective altruism, discourse
This is Option 2 for my senior quote. See also Option 1. This one involves a coinage! How exciting! I feel like I don’t coin nearly often enough.
Yesterday I went to a friend’s speed cubing competition. (Check out his personal site.)1 He’s very very good at it. Not world class—nowhere near it—but fast enough that nobody can really believe what just happened when he solves in front of them. Takes him nine seconds if he’s unlucky, as little as six on occasion.
Sometimes, at these competitions, I act as a judge. Sit on the edge of the table and time the cuber’s inspection period. Call out “eight seconds” and then “twelve” and penalize them if they wait too long or fail to make the final turn of their solve. I’m pretty bad at it, and even got minorly chewed out by the event organizer at one point. So it goes. There was a man with a sword in the hall, so I got off pretty easy.
1.
There’s a version of polite liberalism that seems intuitive: we really shouldn’t push our views on anyone else. We all have different lived experiences, the truth is so fuzzy anyway, and kumbaya and so on.
This is nice. And having humility in our beliefs is certainly important. Things are often complicated, and most crusades against Bad Things tend to deny that, which leads to Even More Bad Things like the actual Crusades.
But relativism is not the answer. I’ve written as much before, and stand by most of what I said then:
Relativism is a moronic, hypocritical, and intolerant moral system masquerading as just the opposite. It’s entirely self-defeating as a philosophy, and an affront to the most basic of rational thought.
…
I agree, generally speaking, that it’s pretty crappy to barge in with, “you’re wrong and my morals are better and you should share all my morals right now,” but only if those morals are actually wrong, or some violent threat is added at the end.
On that whole “violent threat” thing. I’m a libertarian, ish. When the government takes any stance on anything, there’s always a threat of violence. In fact, that’s basically what it means to be a government—to be universally able to exert your will through force. The government, then, should only be imposing morals against violent acts by citizens. Its job is to be a violence-minimizer.
But you don’t have to worry so much about implying a violent threat when you make judgments.2 Your job, mostly, is to be a utility maximizer. Part of that work might be effective donation, another part might be effective employment, and a (probably small) part should be effective proselytization.
2.
Freddie deBoer thinks that a period of unprecedented social violence is near at hand. Recently, he linked a post (by an author I don’t know, to be honest) which laced a lot of red yarn between a few admittedly concerning rationalist/EA tacks. Conventional Wisdom around here is that AI is very scary and poses a high risk of existential catastrophe. So, goes the argument, it shouldn’t be so surprising that some of us would commit massive fraud or seemingly do a few murders.
But also, every remaining Effective Altruist has repeatedly and unequivocally denounced SBF, criminality, and breaches of social norms like “don’t kill people.” Both because it’s obviously very bad press, and because you can’t take back a murder! Any normal amount of humility should stop you well before then.
Zvi Mowshowitz compares SBF to a misaligned AI that’s become utterly and irrevocably convinced of its worldview:
Of course from your perspective you must in important senses care about yourself more than other people. You must care about those around you, close to you, in a different way than others. Without this both your life and also society fall apart, the engine of creation stops, the defectors extract everything, and so on. The consequences, the utilitarian calculus, is self-refuting.
Even more than that, if you take such abstractions too seriously, if you follow the math wherever it goes without pausing to check whether wrong conclusions are wrong? If you turn yourself into a system that optimizes for a maximalist goal like ‘save the most lives’ or ‘do the most good’ along a simple metric? What do you get?
You get misaligned, divorced from human values, aiming for a proxy metric that will often break even on the margin due to missing considerations, and break rather severely at scale if you gain too many affordances and push on it too hard, which is (in part, from one perspective) the SBF story.
Beyond humility in the belief that utilitarianism is true, we need to be much more humble in assuming that our understanding of utilitarianism is true. SBF said to Tyler Cowen that “he would take a 51% coinflip to double or destroy the Earth, and then keep taking the flips until everyone was dead.” A good utilitarian wouldn’t take that coinflip once—a properly constructed functional decision theory won’t recommend a strategy that certainly destroys the world.
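The arithmetic behind why that flip is ruinous is simple enough to sketch. Here’s a quick illustrative script (my own toy model, not anything SBF actually computed): the naive expected value of one flip is 0.51 × 2 = 1.02, so it compounds upward forever on paper, but the probability of surviving n flips is 0.51^n, which collapses to zero.

```python
def naive_expected_value(n_flips: int) -> float:
    """Expected value after n flips: (0.51 * 2)^n, which grows without bound."""
    return (0.51 * 2) ** n_flips

def survival_probability(n_flips: int) -> float:
    """Probability the world still exists after n flips: 0.51^n, which vanishes."""
    return 0.51 ** n_flips

if __name__ == "__main__":
    for n in (1, 10, 100):
        print(f"after {n:>3} flips: EV grows to {naive_expected_value(n):.4g}, "
              f"but survival probability is {survival_probability(n):.4g}")
```

The expected-value column keeps climbing while the survival column heads for zero; a strategy of flipping forever maximizes the naive sum and destroys the world with probability one.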
Luckily, Effective Altruists already know all this! The primary directive for EA communicators remains “bring people’s attention to high-priority causes and encourage them to take the Giving What We Can pledge.” In other words, attempting to push our views on other people, humbly, without threatening violence.
3.
Boy, I’ve strayed from the topic at hand.
What's the Point of Being Righteous if There's No One Being Wrongteous to Yell At
I wrote once that if you’re willing to die for a cause, you should consider killing for it instead. (With, of course, the Very Important Disclaimer that no one should be dying or killing at all, if possible.)
This quote flows from the same instinct: that believing your beliefs gives you good reason to take some action. In the case of committing murder instead of suicide, action that preserves your ability to take more aligned actions in the future. And in the case of yelling at wrongteous people, action that propagates your beliefs and aligns other agents to your cause.
In fact, it goes a little further. “What’s the point,” it starts with. The implication is that holding a righteous belief isn’t really worth anything intrinsically. If you’re not willing to go out and argue for it, there’s a sense in which you don’t really hold it. Or, at the very least, deserve no credit for it.
I’ll concede that there’s some hyperbole going on by the end. To “yell at” someone being wrongteous is probably not a good idea. It’s hard for arguments to change minds in general, and needless antagonism is a step in the wrong direction. You don’t win allies by yelling at them, so maybe the more accurate version is: “What’s the point of being righteous if there’s no one being wrongteous with whom to engage on compassionate and intellectually rigorous terms.” Less zippy, though.
4.
I wrote a while ago that it’s a good idea to just shut down and run away from certain unproductive arguments. This one is a take I don’t stand by so much. Not because it’s fundamentally untrue—there really are arguments not worth having—but because most of us are already doing way too much argument screening.
This is an inverted-U situation—too little discretion and the signal of worthwhile argument is buried in the noise of the worthless; too much and you get echo chambers and information bubbles and polarization. I think writing a post that amounts to suggesting “tell people you disagree with to shut up more often” could only have done more harm than good.
And there’s another harm caused by limiting argument, but only to a very particular kind of diseased person.
In many cases, I really like to argue just for the sake of arguing. And if the other person is being particularly irrational or wrongteous, it becomes even more fun. And I find myself beginning to live up to the quote’s hyperbole, beginning to yell (or worse).
Is this ideal? No.
But, fuck, it’s cathartic. Model UN, for example, is stressful and annoying and unfair most of the time. And there’s little better feeling than unloading all that shit on your opposition in an unmoderated caucus.
I think this tendency really does cause me more harm than good. But its one saving grace might be this: I’m also very afraid of being wrong. To uncork my righteous fury would be deeply embarrassing if I turned out to be the wrongteous one after all.
So, hopefully, it makes for an extra incentive to be sure I’m in the right.
Though it probably doesn’t hurt to give it some thought. This is the version of “microaggressions” I can most support: accidentally implying a violent threat behind an innocuously-intended judgment. The speech itself isn’t violent—but sometimes it’s good to be a little extra thoughtful and not say things like “YOU’RE WRONG ABOUT GUN CONTROL” while waving a revolver in your niece’s face. Or “YOUR EXPERIENCE OF DISCRIMINATION DOESN’T ALIGN TO MY WORLDVIEW” to someone who feels like they’ve been discriminated against by people who say things like that. It’s just a common decency thing. Different worlds.