While I know this post is supposed to be about deception as an inherent negative in deontology, I think what lies under the surface is something different - the idea, as discussed in the other comment chain, that certain acts (cheating on someone, making Muslims eat non-halal food) are fine when the 'impacted' party doesn't know, but bad when the impacted party does know. And the question of whether the 'impacted' party is impacted at all if they don't know.
I think what Amos is reacting so strongly against isn't deception. He's reacting strongly against the fact that Muslims are being forced to violate their religious principles. The question for utilitarians is then, *why does the badness of this scenario change (get less bad) when deception is added*?
From this point of view, it seems like utilitarians actually are the ones with a weird caveat for deception, whereas deontologists treat it just like they would everything else.
That's why I think Amos is probably fine with "your hair looks beautiful today". It's not about the deception - it's about what's behind it.
On a separate note, I think the reason people lie to the murderous axe-wielder is simply because your brother being murdered is wayyy farther down the utility scale than lying about whether he's home is down the deontological moral scale. I don't think people are "at heart" utilitarians. You can't use an extreme example that's only extreme in one dimension - if you used an example extreme in the other dimension (like, your Muslim friend surprised you at work and is about to eat some non-halal food, do you tell him the truth even if there's slight negative utility from your boss possibly finding out and firing you?) I think you would get the opposite judgement from most people. Somehow you have to find something extreme in BOTH dimensions (although I tried for two minutes and couldn't come up with a great example 💀).
Overall a very interesting question and one that I've personally struggled with over the years. I definitely don't assign a deontological moral value to deception; however, I think there's something to the idea that Amos's example is important for deontology in other ways. Even if you could create a scenario where there's effectively zero chance of someone finding out, would you still cheat on a significant other/lie to Muslims about the halal-ness of their food, assuming it gives you slightly positive utility for some other reason? I'm personally not 100% satisfied with "it weighs down the conscience of the cheater" and "there's a chance the cheatee finds out" - I feel like there must be other utilitarian explanations.
Also, I think your point about sources of morality being "my mom said" and "duh" is great. (However, I think feeding Muslims non-halal food and probably also cheating on someone fall into the "duh" category which is why they're interesting!)
-written by a utilitarian
Wow, thanks! Lots of interesting reframings and thoughts...
> He's reacting strongly against the fact that Muslims are being forced to violate their religious principles. The question for utilitarians is then, *why does the badness of this scenario change (get less bad) when deception is added*?
I think there's still a trick being played on our intuition here. Usually, forcing someone to violate their principles will make them feel bad! That's most of why it's wrong. (Probably there's also room here to say, "I'm humble enough to assign some probability to their principles actually being right, which would mean that their violation is extremely bad.")
In Amos' case, though, that heuristic misleads us! There's no reason that a Muslim customer would be harmed emotionally (they're not gonna find out) and dietary prohibitions are probably the silliest part of religion. If a God exists that will send you to hell for eating non-halal meat, there are bigger issues with the moral structure of the universe...
So the scenario's badness changes because the scenario is fundamentally different! In the case without deception, Muslim customers will be upset; with deception, they won't be.
> You can't use an extreme example that's only extreme in one dimension - if you used an example extreme in the other dimension (like, your Muslim friend surprised you at work and is about to eat some non-halal food, do you tell him the truth even if there's slight negative utility from your boss possibly finding out and firing you?) I think you would get the opposite judgement from most people.
You may be right! Then again, the whole point of deontology is the categorical imperative. So following a deontological intuition is extremely demanding! If you accept that telling the truth is obligatory in your Muslim-friend example, you sort of also have to accept it in situations where more will be lost. Like, maybe you signed some kind of weird super-binding NDA, and would have to spend a few nights in jail if you don't lie... does the deontologist maintain the obligation? How about if it's a few weeks? And so on until you get to the brother-murdering. Without utilitarianism, how do you stop anywhere non-arbitrary?
I guess I don't really get what you mean by "deontological moral scale"—that sounds like implied consequentialist reasoning to me, which would explain why it helps escape the reductio.
> Even if you could create a scenario where there's effectively zero chance of someone finding out, would you still cheat on a significant other/lie to Muslims about the halal-ness of their food, assuming it gives you slightly positive utility for some other reason?
I hate sounding anything like a modal skeptic, but I think it's really hard for our intuitions to guide us in a situation like this. You're right that it still feels icky—but in the thought experiment, that ickiness isn't allowed! It seems like it mostly comes from the expectation of cheating/lying weighing on your conscience, which we've ruled out.
This is where the Oedipus complex comes from—when we find ourselves in a world where back-of-the-head ickiness is all that's left, we instinctively project that ickiness into morality-space. "Duh" intuitions like these are probably better thought of as "ew, gross" intuitions, which strike me as less reliable since they're perfectly explainable by pattern-matching to non-thought-experiment situations where real harm happens.
I think the point about the categorical imperative is interesting! I feel like Amos isn't actually a full deontologist (having not read his post; maybe I'm being too charitable here) and is more like somewhere in between that and a utilitarian (which ig if deontology is rly strict, this could just be defined as a utilitarian that has a complex utility function which includes some discrete ruleset of deontological rules?). Which is why the weights matter and why there's some arbitrary, hand-wavy, probably moving line. So ig yeah in that sense he's not a deontologist, but idk that seems like a rly strict definition of deontology to me? I'm not really sure how most ppl use the term tho - never taken a philosophy class.
And about the ickiness... you're probably right. My own Oedipus complex just keeps making me wish for another, more victim-based reason. :(
Also yeah I think the broader point about how we project morality onto our media is really, really interesting, and I'm really interested to see where else it might appear!
I’m not sure I understand the argument here — it seemed a bit like this:
Ari: utilitarianism has a lot going for it, in that it says the right supervenes on the good, and the good is definitely important!
Amos: ok, here is a case where — assuming we understand the good in terms of felt experiences — the right seems not to supervene on the good.
Ari: it’s a little puzzling why the right wouldn’t supervene on the good — the good matters! Let’s consider a bad objection to utilitarianism which you reject to see whether deontologists have any good story about why, in this case, the right wouldn’t supervene on the good… oop, look, that story doesn’t work!
Amos: 😦
Ari: sometimes, people intuit that someone is being wronged/doing wrong/being harmed because — even though the person doing it feels happy doing it — there are facts that I know which he doesn’t that affect how the case seems to me morally (for example, unwitting incest.)
Amos: right, these are the anti-hedonist intuitions (or anti-consequentialist intuitions, if we’re set on being hedonists about the good), that I take to be prima facie counter-examples to classical utilitarianism.
Ari: ok, but let’s analyse *why* you have those intuitions. You have those intuitions because of social norms that aren’t reliably tethered to the moral facts.
Amos: ok, why think I have these intuitions because of social norms that aren’t reliably tethered to the moral facts? Why not think the same about the intuition that I shouldn’t randomly yell at small children and make them cry?
Ari: you have an Oedipus Complex. You are revising utilitarianism to fit your moral intuitions about a case.
Amos: why not think the utilitarian is revising deontology, or whatever the correct moral theory is?
Ari: you have an Oedipus complex!
Heh, you might be a little too close to right...
But I think your summary goes importantly wrong here:
> Ari: ok, but let’s analyse *why* you have those intuitions. You have those intuitions because of social norms that aren’t reliably tethered to the moral facts.
I do make a point about social norms in the post, but the more fundamental issue is that good hedonistic *heuristics* will lead to the outcomes that you call "anti-hedonist intuitions." For instance, I'm very likely to map cases of "unwitting incest that doesn't cause problems" to "all the other cases of incest" which I generally do expect to cause problems (birth defects, awkward reunions, confusing family trees, etc.).
It's a little evo-psych-y, and maybe you could make a symmetric case in the other direction, but probably my stance is best expressed as:
==
Act utilitarianism is really computationally expensive, so we all create heuristics of varying detail in an attempt to approximate it. Often, these heuristics assume that other people know what we know, given that information-sharing and gossip are probably extremely fundamental aspects of human socialization. This means that cases where consequentially-beneficial deception is going on are extremely difficult to differentiate from the isomorphic cases which lack that deception. That confusion leads us to adopt ad-hoc explanations for why cases with beneficial deception actually are still bad. I call this the Oedipus complex.
==
This seems way more plausible to me than trying to work in the other direction: saying that generic opposition to lying is more fundamental, and that only in extreme cases do hedonistic concerns predominate. I can leave some room for boring relativism here—I think it's possible that you just feel "don't lie" a lot more strongly than I do. But for a bit more defense of my position being the right one, see the thread with David, particularly this bit:
"following a deontological intuition is extremely demanding! If you accept that telling the truth is obligated in your Muslim-friend example, you sort of also have to accept it in situations where more will be lost. Like, maybe you signed some kind of weird super-binding NDA, and would have to spend a few nights in jail if you don't lie... does the deontologist maintain the obligation? How about if it's a few weeks? And so on until you get to the brother-murdering. Without utilitarianism, how do you stop anywhere non-arbitrary?"
This was an enjoyable read and a clear defense of utilitarianism; however, it seems to suffer from a lack of knowledge about the evolution of moral intuitions and sentiments. Contrary to the writer's assumption, people's intuition about the moral wrongness of lying does not come from hearing it from their mothers or from social norms. It comes from the specifics of our species' social evolution. Unlike ants and bees, whose sociality relies on kin selection, human social evolution also depends on a secondary path of reciprocal altruism. However, reciprocal altruism can easily become disadvantageous to evolutionary fitness if organisms are not both skilled at identifying lies and motivated to disincentivize them. The ability to manage the free-rider problem has a bigger role in our evolution than the writer of this piece seems to acknowledge. And moral intuitions and sentiments against lying, which are at the root of the ickiness that Amos felt, are theorized to be part of our species' evolutionary solution to several problems, including the free-rider problem.
Thanks for reading!
Your explanation of where anti-deception intuitions come from makes sense, but I think it strengthens the point I'm making—namely, that those anti-deception intuitions are poorly suited for Oedipus-y situations.
There's no free riding being done! Oedipus is a good king and there's no reason to think that life would improve in Thebes if his past were discovered—so Sophocles makes a wild insertion to tip the incentives back to what we expect them to be.
And I think Amos is doing the same thing—no real harm is being caused, so he invents a new incentive to be honest in order to satisfy the base evolutionary craving to eliminate free riders. But there isn't really a free rider! So it seems like the addition of deontic law is misguided, driven by brute heuristic instead of true reflection.
There might be some other interpretations of the Oedipus Rex / Oedipal Complex idea…
For example, the idea of fate vs. self-determination: a parent’s act of naming their kid is the second installment of imposing their will on the child that most children submit to and carry for the rest of their lives. The first imposition is genetic code and in utero epigenetics that might determine much of their personality, happiness, etc. There’s also another imposition, which is timing of conception — Love in the Time of Cholera?.. — that often broadly determines the scope of possibilities for kids… Like being in school during a pandemic, graduating from college in a recession, being a dinosaur when the meteorite hits, etc. The Bible has this “original sin” / “children pay for the sins of their parents” idea, too, that’s related, I think — less applicable to the meteorite case, but is relevant to adults causing wars and recessions their kids have to live through.
There’s also the part of growing up when children start hating their families / homes (or at least wanting to explore and mate with other tribes’ children to diversify their gene pool). It’s arguably purely a biological process. It enables them to move away / go on adventures, and then return and seek forgiveness / acceptance from parents with whom they would have had arguments prior to striking out on their own. The relevant biblical tale here is that of the prodigal son, who returns and is accepted and embraced by his father. (One political commentator I listen to has a theory that Putin is suffering from an Oedipal complex, and is perpetually driven to seek a father figure to grant him acceptance and protection. His Judo instructor was the first such figure, then his government / mafia boss in St. Petersburg, then Yeltsin, then a frustrated search for acceptance by American presidents, until now…)
I assume you've taken the obvious utilitarian objection to lying about halal meat into account here: namely, that if you start lying because it's convenient, you'll get into the habit of it, and sooner or later you'll lie even when it's not justified on utilitarian terms, because that's how human psychologies work.
But I do also want to point out that there are some accounts of welfare that say, very roughly, that someone can have their welfare diminished without even knowing about it. The most famous example is, of course, the man who's secretly mocked behind his back by everyone who knows him and treated with silent contempt by his associates, even if he never realizes it, and dies happily with nobody attending his funeral. It seems natural to say that this kind of person has suffered some kind of harm to their welfare, even if they never realize it. And so the people who are lied to about what kind of meat they're eating are also harmed in this way. (Even if you don't care that much about violating Islamic religious rules, they're at least harmed epistemically, insofar as their beliefs about the way-the-world-is are nonconsensually sabotaged and they'll never get them back.)
1. Yeah, I think lying for convenience’s sake is probably a bad idea some of the time. But I’m also somewhat skeptical that there’s a lot of unjustifiable lying out there. Most of the lies we tell are probably things like “your hair looks nice,” and I don’t see that naturally leading to “yes I definitely did put batteries in the smoke detector … oh no, the house burned down.”
2. “Seems natural” is doing a lot of work here. It also seems natural that there’s something wrong with Oedipus ruling over Thebes, but it’s hard to prove it’s bad without sending an inexplicable plague.
Our intuitions probably mislead us in a case like you’ve presented—we all have worries and insecurities that people are spreading nasty rumors about us, and those worries and insecurities themselves reduce our wellbeing. If this man really knew and suspected nothing, and he died happy, that’s just fine!
No one has a *right* to know the truth—if nonconsensually sabotaging someone’s beliefs saves a drowning child, we should do it. If it saves $20 (and I really really know that no one will find out and it won’t weigh significantly on my conscience and so on), we should do it.
Yeah, I guess if you don't think the secretly mocked man is suffering a harm, you probably won't be convinced that anyone who's lied to and doesn't find out is suffering a harm.
I do think that there's a lot of unjustifiable lying out there. I don't mean "your hair looks nice", I mean like... people openly lie about things for personal gain all the time. "Of course I didn't use ChatGPT to write that essay." "Sorry, I'm sick today - I can't come in." "Don't be silly - you don't have to worry about any other girls." These are lies that people as a whole tell in large quantities, on a daily basis. And the more they do things like this, the more it becomes psychologically easier to lie about everything, until one day someone finds out and their welfare is unjustly reduced.
(By the way, I used infidelity as an example: do you also think that it's fine to cheat on someone who never finds out?)
Yeah, I think those sorts of lies are probably wrong for psychological slippery-slope reasons. Infidelity fits mostly in that category to my mind, though I can also think of a few aggravating factors:
a. It won't have zero effect on the quality of the relationship for the cheated-on.
b. It won't have zero effect on the mental well-being of the cheater, usually.
c. Social things are soooo complicated... "never finds out" is probably a near-impossible criterion in the real world.
I think utilitarianism can account for the utility of integrity, which lies in preventing coordination problems. When people become more distrustful because of the lack of integrity they encounter, it becomes more difficult to convince everyone to work together to solve common problems. In the restaurant example, if Muslims know that their religious values are not respected by other groups and that people constantly lie to them, it will create fractures in the societal fabric. More broadly, under capitalism, market participants should not lie about the services/goods they are providing, because then the overall system would be worse off at serving everyone's needs and wants.