I take it that Amos' point was simply that we shouldn't care about justification in itself, and that a true belief isn't inherently inferior to a justified true belief. It seems like most of your arguments are to the effect that justification is instrumentally valuable because it leads to less error outside of the single belief we have stipulated to be true. But I don't think Amos' position is in conflict with that. Sure, justification is valuable because it makes us better at getting true beliefs, but the justification part is completely irrelevant to the value of any particular true belief.
That's definitely what I'm saying, and it seems like Amos agrees with what you think he's saying, so maybe my writing this is more a result of interpretive error on my part... But let me try to clarify regardless:
I suppose it's true that, taken in isolation, justification is mostly unimportant to the worth of any given belief—but when we look at it in relation to the rest of human knowledge, actual and potential, it becomes extremely important! Maybe my point is best summarized as: "yes, true belief is nominally enough, but, for various pragmatic reasons, you should also want a true belief about *why* you believe something. I call that secondary belief 'justification,' and say that without it you don't actually *know* the first thing."
Like, why did we ever think justification was important in the first place? Because it's necessary to form good heuristics, and the good heuristics are necessary for forming good beliefs about other related and future things. I, personally, in a sense pertaining to myself and no one else, like it when my beliefs lead me to form lots of other true beliefs about related and future things! That feature of knowing seems extremely important, and the "knowledge is overrated" line—while, yeah, not conflicting with my stance—doesn't really address it adequately.
I’m not sure your counterexamples actually disprove what Amos argued. Indeed, in all your examples, the actual problem seems to be that you have false beliefs about topics on which true information is valuable. For example, in the Newton example the actual problem is that you have false beliefs about how gravity works, which makes space travel harder; if your belief is in fact true, you won’t have this problem. Similarly, in the chicken farmer example your actual problem is that you have false beliefs about the effects of killing chicken farmers, which is the real reason why you are behaving in ways that are predictably bad for your goals. Unless I’m misunderstanding some of your examples, this seems to be the case for pretty much all of them.
Also, just thinking about it, it seems to me that justified true beliefs are what we actually want. We want true beliefs because accurate information is often valuable for achieving our goals, like following policies that actually give us the effects we want or making technological advances like vaccines. We want to reason in a justified way because reasoning in unjustified ways will predictably lead us to form less accurate beliefs, unless we get lucky, which by definition is not something we can control. So reasoning in justified ways is, in expectation, the best strategy for obtaining accurate, valuable information, which is instrumentally useful for achieving almost any goal. This also vindicates the common intuition that knowledge is something even bad actors rationally want, since the argument appeals to purely instrumental reasons rather than virtue. That said, even if you disagree with my argument for why justified true beliefs are quite satisfactory for our purposes, my criticism of your examples still stands.
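To make the “in expectation” point concrete, here’s a toy simulation (entirely my own illustration; the 90% evidence reliability is an assumed number, not anything from the discussion). An agent who simply believes what the evidence supports is more accurate on average than a lucky guesser, even though the guesser wins on plenty of individual questions:

```python
import random

def accuracy(n_questions=100_000, p_evidence_right=0.9, seed=0):
    """Toy comparison: evidence-following vs. guessing.

    Each question has a true answer of 0 or 1. The 'justified' agent
    believes whatever the evidence says, and the evidence is right 90%
    of the time (a made-up reliability, purely for illustration). The
    'lucky' agent just guesses.
    """
    rng = random.Random(seed)
    justified_hits = lucky_hits = 0
    for _ in range(n_questions):
        truth = rng.randint(0, 1)
        evidence = truth if rng.random() < p_evidence_right else 1 - truth
        justified_hits += (evidence == truth)       # follow the evidence
        lucky_hits += (rng.randint(0, 1) == truth)  # guess at random
    return justified_hits / n_questions, lucky_hits / n_questions

print(accuracy())  # roughly (0.9, 0.5): guessing wins on some questions, never on average
```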
Many single-cell organisms exhibit chemotaxis, phototaxis, etc. — moving toward energy sources, away from poison, etc. Many do it according to a neat mathematical model (look up “fold-change detection”), and one could say that it’s a primitive version of a JTB (and also a primitive version of hedonic adaptation). But few of them can modify that model itself, unless there’s some mutation. The salient point here is that this “belief” is baked into the organism at a very foundational level and is crucial for survival. Species that can modify their “beliefs” in as few generations as possible, in response to changes in environmental patterns, tend to have greater chances of survival. (Many) humans possess mechanisms for belief modification within a single generation (or even within a fraction of a single generation) — that mechanism is “beliefs about beliefs,” and it does elevate the import of the J in JTB. Quite a good survival mechanism.
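For the curious, here’s a minimal sketch of what fold-change detection looks like in practice. This is my own toy illustration of the general idea (a slow internal variable normalizes the input), not code from any particular paper; the time constants and step time are arbitrary. The point is that the response depends only on the ratio of new input to old, not on absolute levels:

```python
import numpy as np

def simulate_fcd(u0, fold, t_end=20.0, dt=0.01):
    """Toy model of fold-change detection.

    The internal variable x slowly tracks the input u, and the output
    y = u / x responds to how much u has changed *relative* to its
    recent history. All names and numbers here are illustrative.
    """
    x = u0                                     # internal memory, starts at baseline
    ys = []
    for i in range(int(t_end / dt)):
        u = u0 if i * dt < 5.0 else fold * u0  # step the input at t = 5
        x += dt * (u - x)                      # x relaxes toward the current input
        ys.append(u / x)                       # output depends only on the ratio
    return np.array(ys)

# The same 3x fold change at wildly different absolute levels
# produces an identical response curve:
print(np.allclose(simulate_fcd(1.0, 3.0), simulate_fcd(100.0, 3.0)))  # True
```

In other words, the organism’s hard-wired “belief” is about relative change, which is exactly the kind of belief that keeps working across environments with very different baseline levels.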