I have no issue with the first two sentences in the statement of Harris’s “central argument” in his “public challenge” announcement, so I will not be disputing his premises; the key indefensible bit of his “argument” is (as so often happens in such cases) the “Therefore”. I have gone after Harris several times in the past, but now the challenge is to bring all that down to 1000 words.
Fortunately, in order to defeat Harris’s argument this time I do not have to prove that his conclusion is false (though I strongly suspect it is), but only to show that it is not necessarily true.
This does not mean that there are not some grains of truth in his conclusion (which I might identify more fully if given more than 1000 words).
But there is also much nonsense there, and even what might be true has not actually been established anywhere beyond reasonable doubt – and most importantly, it is not established by Sam’s particular argument. I shall establish both of these claims a fortiori by showing that the conclusion itself, as stated, is false.
I need only show that the negation of his claim that “questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice)” is consistent with everything we know at this time (therefore the claim itself cannot yet have been proved at all, and so cannot have been proved by Sam). That is, I will argue that “In principle (and probably also in practice), questions of morality and values may conceivably fail to have right and wrong answers that fall within the purview of science.”
But first, with regard to the specific sub-theses that Sam would like to see demolished let me make the following comments:
(1) A state of “worst possible misery” is not actually something that has been shown to exist (in fact it is exactly as problematic as the state of maximal total well-being discussed below), and in any case a state of arbitrarily severe misery is plausibly better, in some consistent moral view, than one of semi-conscious happiness in which all of the individual’s personal dignity is lost.
(2)&(4) A science of morality can be a perfectly valid branch of psychology in which the objective is to predict what humans in any particular context will judge to be morally correct, but it will not itself determine that moral correctness, and it may well lead to the prediction that people with different histories will come to different conclusions as to the rightness or wrongness of the same particular act in the same immediate circumstances. Science can also help people make moral decisions by predicting outcomes, just as in medicine it helps people achieve specific objectives. But those objectives often involve trade-offs, and the choices made by different people in essentially the same medical context may well be completely different. What science does not do is tell them which choice to make, and there is indeed no universal measure of “health”, any more than there is of overall “well-being”. Which brings us to the last issue.
(3) Finally, and most importantly, the “landscape” model is wrong: first, because it posits a single real-valued objective function for which there is neither any reason in principle to exist nor any convincing evidence of even an approximate version in practice; and second, because even if there were one, there is no way to exclude equal maxima of whatever it is at widely different states, or to evaluate the relative merit of approaching a nearby local (but not global) maximum as opposed to descending to great depths on the way to a slightly higher one.
With regard to the first point, Harris’s idea of “well being” probably does not correspond to any one scalar parameter in the state description of even a single human brain. (And it is important that it be a scalar, since something with many components cannot necessarily be ordered!) There are various chemical concentrations, neuronal excitation levels, and connectivity parameters, or whatever, that might be associated with different kinds of reported well-being, but many of these are in conflict with one another, and there is neither any known way of “best” weighting them to provide a single ordered measure of “well being” nor any reason for such a “best” weighting to actually exist. We often feel “torn” when faced with moral choices, and even a circumstance that gives us a perceived sense of well-being at one instant may fail to do so in the next.

But even if there were some single measure of the well-being of an individual, the issue of how to weight the relative importance of different people and potential people over all of time and space would also defeat the project of maximizing “total” well-being. (eg Does one ounce of gain for every one of all future humans outweigh a tonne of suffering for all present now, or is a world of twenty billion each with a micro-unit of well-being preferable to one of two billion each with a kilo-unit?) Amazingly, Harris doesn’t seem to understand the difference between questions like these, which really don’t necessarily have an answer, and ones such as “How many birds are in flight over the earth right now?”, which do have an answer, but one that is essentially impossible for us to determine.
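The ordering point above can be made concrete with a toy sketch. The component names, numbers, and weights here are all invented for illustration; the only point is that a vector-valued “well-being” admits only a partial order, and different (equally arbitrary) weightings rank the same two states oppositely:

```python
# Toy illustration (not Harris's model): treat "well-being" as a vector of
# hypothetical components rather than a single scalar.

def pareto_compare(a, b):
    """Return 'a', 'b', or None (incomparable) under the Pareto partial order."""
    if all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b)):
        return 'a'
    if all(y >= x for x, y in zip(a, b)) and any(y > x for x, y in zip(a, b)):
        return 'b'
    return None

# Two states: high contentment but low dignity, versus the reverse.
state_a = (9.0, 2.0)   # (contentment, dignity) -- invented components
state_b = (3.0, 8.0)

print(pareto_compare(state_a, state_b))  # None: neither state dominates

# Any scalar measure requires a weighting, and different weightings
# impose different total orders on the very same states:
score = lambda s, w: sum(x * c for x, c in zip(s, w))
w1, w2 = (0.9, 0.1), (0.1, 0.9)
print(score(state_a, w1) > score(state_b, w1))  # True  (a "better" under w1)
print(score(state_a, w2) > score(state_b, w2))  # False (b "better" under w2)
```

Nothing in the science picks out w1 over w2; that choice is exactly the “best weighting” whose existence is in question.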
(And it is also amazing that, as noted above, he also doesn’t seem to understand that the role of science in medicine actually works against his claim for its application to morality.)
On the second point, in the unlikely event that there really were a real-valued (ie ordered) measure of total well-being for us to take as the “Objective Function” to be maximized, then the mental model of a “landscape” (over a multi-dimensional state space) could be realistic, but even then the question of what to do in any given context would not necessarily ever be answerable. This is not just a matter of technical detail or computational difficulty but is inherent in the general structure of the problem. There may well be states of equal total well-being achieved in completely different ways, or we may be near a lesser “peak” with no route to the higher one other than by descending through a land of great pain and peril.
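Both failure modes can be seen in a deliberately tiny sketch. The landscape values below are invented; the point is structural: a greedy climber from one side strands itself on a lesser peak (reaching the higher ground would require first descending through the zero-valued “valley”), and the global maximum is attained at two distinct states:

```python
# A toy "moral landscape": well-being assigned to each of twelve states.
# All numbers are invented for illustration.
landscape = [0, 2, 5, 3, 1, 0, 1, 4, 7, 7, 4, 1]

def hill_climb(start):
    """Greedy ascent: move to a strictly better neighbor until none exists."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbors, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos  # local maximum: no neighbor is strictly better
        pos = best

print(hill_climb(1))  # -> 2: stuck on the lesser peak (height 5)
print(hill_climb(6))  # -> 8: this start happens to reach the higher ground

# The global maximum is realized at two completely different states:
print([i for i, v in enumerate(landscape) if v == max(landscape)])  # -> [8, 9]
```

Pure improvement-seeking cannot tell the climber at state 2 whether the distant higher peak justifies the descent through the valley, and nothing in the function itself ranks the two equal maxima.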
None of this is counter to a sensibly modest understanding of utilitarianism or consequentialism as a guiding principle for our moral decision making.
Nor do I deny the utility of looking for neurological correlates of moral attitudes (which may or may not be related to perceptions of ambient well-being among our peers) and of the various kinds and components of what we identify as such well being.
But, getting back to my main objective, dealing with those four canards was not a complete waste of words, because if you followed what I was saying, then from where we stand now it is quite conceivable that, no matter how much we learn, we will never be able to decide moral questions by appeal to science (or to anything else, for that matter!).
Let us imagine two people coming from different tribes, one of which has succeeded over many generations as cultivators by encouraging its members to value bling as the primary virtue, while the other has succeeded as herders by fostering the values of blaah. Both evaluations have served their respective cultures well, but they conflict on the issue of bong. Let us even imagine that someone has defined a parameter that is believed to truly represent the aggregate success and flourishing of the species (not an implausible scenario if one does not require that belief to be well founded, but probably a total crock otherwise), but let us also imagine (as may well happen) that that parameter will not be predictably different under either regime. Of course we can imagine such a situation, so it is clear that we can envisage circumstances in which a moral question has a “right” answer to one person which is “wrong” to another, with science unable to adjudicate. This does not mean that science will be unable to “explain” the conflict, eg by identifying each of the participants as responding to a different moral principle and predicting what each will choose, but it cannot decide the moral question. I do not need to find such an example in the annals of anthropology in order to establish that it is conceivable, so I am certain that I have demolished Sam’s claim to have shown that such answers must exist, and with it any reason to believe that science must ever, even in principle, provide determinative answers to moral questions.
I also believe that I have shown the error in each of the specific sub-arguments that Harris cites in his elaboration of the challenge.
I do not expect to be alone in this and would be happy to see the prize go to someone else. But go it must!
(And I wouldn’t mind seeing my name somewhere among the long list of those who have effectively refuted all of Sam’s nonsense)
Notes: (not to be included in word count)
1: In fact I do strongly suspect that there are some questions of morality and values for which scientific knowledge may help the participants to achieve more morally satisfying outcomes. There may even be some relatively trivial moral questions which do have objectively “right” and “wrong” answers – at least according to any plausible human sense of morality – but for most interesting moral questions (ie the ones where everyone doesn’t already agree on the answer) there is much less reason to believe in the existence of incontrovertibly right and wrong answers. The conclusion which I will show cannot be proved is that “there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science”.