Archive for March, 2013

An Eye for an Eye – Wrong in Both Directions

Thursday, March 28th, 2013

Wow! A high school essay by a law professor (with a book published by UChicPress).

The reason “rule of law” works is, of course, that the harm done by a crime is vastly greater than the reward to the criminal, so the punishment needed to deter it need not match the harm of the crime itself (even when conviction and punishment are not certain outcomes).
A vengeance-based system of self-protection, on the other hand, requires being known for the capacity and willingness to deliver a disproportionate response, so as to deter through the expectation of harm even though the actual infliction of deterrent harm can never be certain. And because an individual does not have the profile of a state, it is important to make extreme examples in order to gain a reputation as one who can and will respond forcefully.
We probably evolved in a context where self-protection was more important than it is now, so the pain I instinctively desire to inflict on those who offend me may well exceed that which I receive, and there is no rational reason for it to be restricted to being equal or less. But in the higher interest of minimizing overall unnecessary suffering, I accept that my immediate desires shall not be met and that the pain inflicted on offenders shall not exceed that necessary to deter the offense.
On the other hand, in some cases the probability of meeting justice is so low and the rewards of crime so high that there may be good reason for the penalty to be especially severe. So I have no objection on purely “justice” grounds to the death penalty (or even death by torture) for economic crimes committed in cold calculation of the expected outcome. (I do suspect, though, that there is a point at which imposing such harsh penalties may be undesirable because of the “heart-hardening” effect they would have on the community as a whole.)
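The expected-value logic behind both paragraphs can be sketched as a toy calculation (a hypothetical illustration only, not a policy model; the function name and numbers are my own): a calculating offender is deterred when the expected cost of punishment exceeds the expected gain, so the minimum deterrent penalty scales inversely with the probability of conviction.

```python
def min_deterrent_penalty(gain, p_conviction):
    """Smallest penalty (in the same units as `gain`) at which the
    expected cost of the crime, p_conviction * penalty, exceeds its
    expected reward.  Purely a toy model of the argument in the text."""
    if not 0 < p_conviction <= 1:
        raise ValueError("p_conviction must be in (0, 1]")
    return gain / p_conviction

# Near-certain conviction: the deterrent penalty barely exceeds the gain.
print(min_deterrent_penalty(100, 0.9))
# A 1% chance of conviction: the deterrent penalty is ~100x the gain --
# the "especially severe" case discussed above.
print(min_deterrent_penalty(100, 0.01))
```

This also makes the closing caveat concrete: as p_conviction falls, the penalty the model demands grows without bound, which is exactly where the “heart-hardening” worry begins to bite.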

My Genome and My Children’s Privacy

Thursday, March 28th, 2013

The immortal cancer cells that were harvested without permission from Henrietta Lacks in 1951 continue to provide valuable lessons – not just for biology and the practice of medicine, but also in medical ethics and even at a more general level.

The issue of consent by family proxy is not unique, but the question of its retroactivity and the impossibility of respecting its denial (due to the already widespread distribution of cells from the culture by the time the family found out) made it especially problematic, and now there is a new wrinkle.

Some family members who agreed to the use of the cells for research purposes are now concerned that when such research includes publication of the full genome it impinges unacceptably on the privacy of those who share large known fractions of that genome.

It is quite reasonable to argue that the previous consent did not cover the genome publication, since at the time that was not a known possibility and so the consent cannot be said to have been fully informed; but again the horse is already out of the barn, as the data have already been circulated, and although the number of known copies is small there is no way of tracing them all and guaranteeing that no further copies will ever be made.

But aside from the issues of retroactivity, proxy, and informedness of consent, we now have also that of third party privacy – which actually applies to the publication of any genome, and indeed the question of whether one has the right to publish one’s own genome in the face of privacy concerns from (present and future?) relatives is itself an ongoing topic of discussion.

Do I, or you, have the right to publish our own genomes without the consent of the unborn descendants about whom those genomes might provide partial information on matters that they might wish not to have revealed?

Rebecca Skloot raises the HeLa issue in an essay in the NYTimes Sunday Review, but a response by Michael Eisen points out that she appears to confound “how to retroactively get Henrietta’s permission to experiment with and publish about her cells” with “the seemingly related issue of whether publication of the HeLa cell genome is an invasion of the privacy of Lacks’ living relatives”. The first involves consent (retroactively, by proxy) on behalf of Ms Lacks for the removal and study of tissue, and the second involves consent on behalf of her relatives for publication of information which might invade their privacy (and would arise even if Ms Lacks had in fact given fully informed consent back in 1951). The latter question of third party privacy is also the clear focus of a subsequent article in ThinkProgress.

The issue is not just a sub-case of the basic proxy consent issue, with the donor (or in this case her proxy) giving consent by proxy for the release of information about the relatives, because here the relatives in question may be available (or about to become available) to give their own consent. For a deceased person, by contrast, the people most affected going forward are the immediate surviving family members, so in matters of what happens to the body their consent might well be seen as sufficient.

Nor is it trivially resolved by arguing that the individual herself, or if she is deceased her closest living relatives, have the right to give consent because of being the one(s) most directly affected by release of genomic information.

There might be some argument that the closest relatives (siblings and children) are most likely to suffer privacy invasion because they share the most genes, but it may be that, combined with other information, the smaller shared genome fraction of a further descendant is particularly revealing (e.g. if HeLa had the red-headed axe murderer gene, then a grandchild with red hair might have greater privacy concerns than the brown-haired child who was his parent – or even than HeLa herself had she been the one giving consent).

If red-headed axe murderer correlations are rare, though, then perhaps we can apply the probable information idea slightly differently – not just to the chance of having a particular gene, but (in advance of the sequencing) to the chance of having any potentially embarrassing information revealed. If we take that view then the closest surviving relatives can give consent for a deceased person, and the individual’s consent can override the privacy concerns of his or her relatives.
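The “shared fraction” intuition in the last two paragraphs can be made concrete with a toy calculation (purely illustrative; real inheritance involves linkage and random segregation, which this ignores): the expected fraction of the genome shared by direct descent halves with each generation, so a child is expected to share 50% and a grandchild 25%.

```python
def expected_shared_fraction(generations_removed):
    """Expected fraction of the (autosomal) genome shared by direct
    descent: halves each generation (child 1/2, grandchild 1/4, ...).
    A toy model only -- actual shared fractions vary around this mean."""
    return 0.5 ** generations_removed

for label, g in [("child", 1), ("grandchild", 2), ("great-grandchild", 3)]:
    print(f"{label}: {expected_shared_fraction(g):.3f}")
# child: 0.500
# grandchild: 0.250
# great-grandchild: 0.125
```

The point of the red-hair example survives the arithmetic: a descendant sharing only 25% may still face the larger privacy risk if outside information (like visible red hair) pins the revealing part of the genome to them in particular.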

But it is not obvious that this is really fair. Just because I am most at risk, does that entitle me to cause a lesser risk for others? It may seem not, but we commonly allow the individual to elect a risky surgery without giving a veto to dependents who may be at financial risk if it goes wrong.

Some people think that professional Philosophers have special skills for actually answering such questions. I don’t.

Crowdsourcing Philosophy

Wednesday, March 13th, 2013

In his latest ‘The Stone’ column, Mary and the Zombies: Can Science Explain Consciousness?, Gary Gutting admits that non-Philosophers might conceivably have something useful to say (even though he has to add the usual BS: “Of course, professional philosophers have technical resources that non-philosophers lack”).

Frankly I doubt that anything useful will be said though, because I suspect that no-one has anything useful (and new) to say on this particularly ill-posed question.

But since I am a no-one, I’ll have a go anyhow.

I “liked” the comment by ‘Jason’ who said

The other comments (thank goodness!) have already outlined all the reasons that these thought experiments don’t hold any argumentative force. I once spent two weeks reading into this literature convinced I had missed something critical about zombies and Mary and Chinese rooms and bats. How else could all these people take seriously such manifestly flimsy arguments? It turns out there is nothing more here than some philosophers incorrectly assuming that their intuitions mean something interesting. Let’s stop talking about bad question-begging thought experiments.

But despite my frustration with the presumptions of (many) professional Philosophers, I do feel that this is a bit too harsh. If the focus was not on “explaining consciousness” but rather on exploring what we mean when we talk about it, then Philosophers, while not uniquely qualified, might have a useful job to do by way of helping to clarify when people are talking at cross-purposes and perhaps seeming to disagree when they really do not. (At least they should be good at it since I see almost the entire history of philosophical argument as being exactly of that nature.)

Commenter ‘Paul M’ says

Both thought experiments assume their conclusions in their premises. 

In Experiment One, what is the “fact” that Mary learns when she sees red? If the answer is that the “fact” is her subjective experience of the color red, then all this thought experiment has done is define subjective experience as a “fact.” It hasn’t demonstrated that it is, and it certainly does not demonstrate that this “fact” is not physical. By saying, in the premises, that Mary knows all physical facts about red, but has not experienced seeing red, an implicit assumption is made that experiencing red is not a physical phenomena. But that is what the experiment is supposed to demonstrate. So the conclusion is assumed in the premises, and the experiment doesn’t demonstrate anything. 

Experiment Two has the same problem. By postulating a physically identical zombie without any of the same subjective experiences, a separation between physical and subjective is simply assumed. If you don’t believe that such a zombie is possible, then this experiment does nothing to establish that there is a separation between subjective experience and the physical world. Again, by assuming the possibility of such a zombie, the conclusion is simply assumed in the premises. 

Most importantly, what is “physical”? As Chomsky points out in his talk “The Machine, the Ghost, and the Limits of Understanding,” we presently have no coherent answer. So the physical/subjective dichotomy is incoherent.

Here I almost decided not to quote the last paragraph because I don’t actually think the dichotomy is “incoherent” although perhaps most expressions of it have been.

The essence of the issue of “qualia” and the experience (as opposed to the phenomenon) of consciousness is that the experience is entirely and essentially subjective. It cannot be communicated by description, or even by direct neuronal stimulation, since even if I create in zombie-you the exact same pattern of neuronal stimulation and hormonal responses that occur in person-me, there is no way for me to tell whether the resulting experience in you is the same as in me. We can agree on what red “looks like” because we will agree on what things look red, and we can understand what it “feels like” on the basis of emotional connections we make with the fact of seeing red (on the basis of either experience or instinct). But we seem unable to imagine that there is not some aspect of the redness we experience which is more than the sum of its associations. The pattern of uncomfortable confusion we feel when we try to imagine a different version of “redness” can probably be dismissed as identifiable with some kind of biochemical response to the “churning” of our neuronal computational circuits, and in that sense “our” private version of “redness” may even be explainable; but to explain something is not necessarily to explain it away, and even when “explained” that sense may be hard (or even impossible) to eliminate.

Commenter ‘Graham Anderson’ says

When Mary awakens from her operation and sees the red roses, she has then become a red-seeing mind. Previously, she understood everything there is to know about red-seeing minds, but was not one herself. When the phenomenon that Mary *is* changes, it does not give her a new fact. She simply has new experiences, the products of the change in state of her mind. 

. . .

I believe we are talking about two distinct concepts: knowing about something, and being that thing. The two concepts are easily confused when we’re talking about a human mind, for which knowing and being are eerily similar.

Similarly, even if consciousness is explained as the process of laying down recoverable memories this does not undo my feeling of consciousness – or even prove that that feeling is not actually unique to me alone.

Ultimately, the “resolution” to this issue may have to be that there is no way I can ever tell whether or not the rest of you are zombies, but the prospect that you are entails such a terrifying sense of godlike loneliness that I have no choice but to credit you with the same consciousness as I feel myself. (And then it’s a normative question – why should I not also credit zombie Mary with consciousness as well? and to a lesser extent my cat? or even this computer?)

Was Wittgenstein Right?

Thursday, March 7th, 2013

It’s an interesting coincidence that just a few days after my posting on the discussion at ‘Butterflies and Wheels‘, the topic of Philosophy’s relevance was taken up by Paul Horwich in ‘The Stone’ (though fortunately with less dismissive rudeness in Michael Lynch’s response).

According to Horwich

Wittgenstein claims that there are no realms of phenomena whose study is the special business of a philosopher, and about which he or she should devise profound a priori theories and sophisticated supporting arguments. There are no startling discoveries to be made of facts, not open to the methods of science, yet accessible “from the armchair” through some blend of intuition, pure reason and conceptual analysis. Indeed the whole idea of a subject that could yield such results is based on confusion and wishful thinking.

(If) Philosophy is respected, even exalted, for its promise to provide fundamental insights into the human condition and the ultimate character of the universe, leading to vital conclusions about how we are to arrange our lives . . . (then) we are duped and bound to be disappointed, says Wittgenstein. For these are mere pseudo-problems, the misbegotten products of linguistic illusion and muddled thinking. So it should be entirely unsurprising that the “philosophy” aiming to solve them has been marked by perennial controversy and lack of decisive progress — by an embarrassing failure, after over 2000 years, to settle any of its central issues. Therefore traditional philosophical theorizing must give way to a painstaking identification of its tempting but misguided presuppositions and an understanding of how we ever came to regard them as legitimate. But in that case, he asks, “[w]here does [our] investigation get its importance from, since it seems only to destroy everything interesting, that is, all that is great and important? (As it were all the buildings, leaving behind only bits of stone and rubble)” — and answers that “(w)hat we are destroying is nothing but houses of cards and we are clearing up the ground of language on which they stand.”


We might boil (Wittgenstein’s position) down to four related claims.

— The first is that traditional philosophy is scientistic: its primary goals, which are to arrive at simple, general principles, to uncover profound explanations, and to correct naïve opinions, are taken from the sciences. And this is undoubtedly the case.

—The second is that the non-empirical (“armchair”) character of philosophical investigation — its focus on conceptual truth — is in tension with those goals.  That’s because our concepts exhibit a highly theory-resistant complexity and variability. They evolved, not for the sake of science and its objectives, but rather in order to cater to the interacting contingencies of our nature, our culture, our environment, our communicative needs and our other purposes.  As a consequence the commitments defining individual concepts are rarely simple or determinate, and differ dramatically from one concept to another. Moreover, it is not possible (as it is within empirical domains) to accommodate superficial complexity by means of simple principles at a more basic (e.g. microscopic) level.

— The third main claim of Wittgenstein’s metaphilosophy — an immediate consequence of the first two — is that traditional philosophy is necessarily pervaded with oversimplification; analogies are unreasonably inflated; exceptions to simple regularities are wrongly dismissed.

— Therefore — the fourth claim — a decent approach to the subject must avoid theory-construction and instead be merely “therapeutic,” confined to exposing the irrational assumptions on which theory-oriented investigations are based and the irrational conclusions to which they lead.


Philosophical problems typically arise from the clash between the inevitably idiosyncratic features of special-purpose concepts — true, good, object, person, now, necessary — and the scientistically driven insistence upon uniformity. Moreover, the various kinds of theoretical move designed to resolve such conflicts (forms of skepticism, revisionism, mysterianism and conservative systematization) are not only irrational, but unmotivated. The paradoxes to which they respond should instead be resolved merely by coming to appreciate the mistakes of perverse overgeneralization from which they arose. And the fundamental source of this irrationality is scientism.

As Wittgenstein put it in the “The Blue Book”:

Our craving for generality has [as one] source … our preoccupation with the method of science. I mean the method of reducing the explanation of natural phenomena to the smallest possible number of primitive natural laws; and, in mathematics, of unifying the treatment of different topics by using a generalization. Philosophers constantly see the method of science before their eyes, and are irresistibly tempted to ask and answer in the way science does. This tendency is the real source of metaphysics, and leads the philosopher into complete darkness. I want to say here that it can never be our job to reduce anything to anything, or to explain anything. Philosophy really is “purely descriptive.”

These radical ideas are not obviously correct, and may on close scrutiny turn out to be wrong. But they deserve to receive that scrutiny — to be taken much more seriously than they are. Yes, most of us have been interested in philosophy only because of its promise to deliver precisely the sort of theoretical insights that Wittgenstein argues are illusory. But such hopes are no defense against his critique. Besides, if he turns out to be right, satisfaction enough may surely be found in what we still can get — clarity, demystification and truth.

Horwich presents (this view of) Wittgenstein’s position as worthy of consideration, but without wholeheartedly endorsing it.

Lynch responds

According to HW (Horwich’s Wittgenstein), we get trapped in our glass cages because we philosophers fetishize science’s success in giving reductive explanations. A reductive explanation of X is one that tells us the underlying essence of X – that says what all and only X’s have in common. As HW points out, the concepts philosophers are interested in seem highly resistant to this sort of analysis. And this is something we could appreciate if we just paid attention to the role such concepts really play in our thought and language. Once we do so, we’ll see that traditional philosophical answers to its traditional questions are “mistakes of perverse overgeneralization.”


First, just because we can’t reductively (“scientifically”) define something doesn’t mean we can’t say something illuminating about it. Go back to HW’s account of truth. He assumes that there is either a single nature of truth (and we can reductively define it) or that truth has no nature at all. But why think these are the only two choices?


So no uniform reductive explanation perhaps, but illumination just the same.

This brings me to the second way that I think HW’s metaphilosophy overgeneralizes. According to HW, philosophy is purely descriptive; it should “leave the world as it is” — only describe how we think and talk, and stop at that.

I think philosophy can play a more radical role. Return to our fly. Wittgenstein was not the first to compare the philosopher to one, nor the most famous. That award goes to Socrates, who claimed that the role of the philosopher was to act as a gadfly to the state. This is a very different metaphor. Leaving the world as it is isn’t what gadflies do. They bite. As I see it, so can philosophers: they not only describe how we think, they get us to change our way of thinking — and sometimes our ways of acting. Philosophy is not just descriptive: it is normative.

This is most obvious with ethical questions. Locke’s view that there are human rights, for example, didn’t leave the world as it was, nor was it intended to. Or consider the question of what we ought to believe – the central question of epistemology. As I’ve argued here at The Stone before, questions about the proper extent and efficacy of reasons aren’t just about what is, they are about what should be. In getting more people to adopt new evidence-based standards of rationality — as the great enlightenment philosophers arguably did —philosophers aren’t just leaving the world as they found it. And that is a good thing.

Lynch ends with

Philosophy is not science. Knowing how we ordinarily use our concepts of truth, or personhood or causation is important. Wittgenstein was certainly right that philosophers get into muddles by ignoring these facts. Yet even when it comes to the abstract concerns of metaphysics, philosophy can and should aspire to be more than just a description of the ordinary. That is because sometimes the ordinary is mistaken. Sometimes it is the familiar from which we need liberating — in part because our ordinary concepts themselves have a history, a history that is shaped in part by certain metaphysical assumptions.

Consider the idea that the real essence of truth is Authority — that is, what is true is whatever God, or the King or The Party commands or accepts. This is a reductive definition, one that still lurks in the background of many people’s worldviews. It has also been used over the centuries to stifle dissent and change. In order to free us from these sorts of thoughts, the philosopher must not only show the error in such definitions. She must also take conceptual leaps. She must aim at revision as much as description, and sketch new metaphysical theories, replacing old explanations with new. She must risk the fly bottle.

Perhaps it’s an annual event there since it’s almost exactly a year since Gary Gutting addressed the same question in the same place.

If you think that the only possible “use” of philosophy would be to provide a foundation for beliefs that need no foundation, then the conclusion that philosophy is of little importance for everyday life follows immediately.  But there are other ways that philosophy can be of practical significance.

Even though basic beliefs on ethics, politics and religion do not require prior philosophical justification, they do need what we might call “intellectual maintenance,” which itself typically involves philosophical thinking.  Religious believers, for example, are frequently troubled by the existence of horrendous evils in a world they hold was created by an all-good God.  Some of their trouble may be emotional, requiring pastoral guidance.  But religious commitment need not exclude a commitment to coherent thought. For instance, often enough believers want to know if their belief in God makes sense given the reality of evil.  The philosophy of religion is full of discussions relevant to this question.  Similarly, you may be an atheist because you think all arguments for God’s existence are obviously fallacious. But if you encounter, say, a sophisticated version of the cosmological argument, or the design argument from fine-tuning, you may well need a clever philosopher to see if there’s anything wrong with it.


The perennial objection to any appeal to philosophy is that philosophers disagree among themselves about everything, so that there is no body of philosophical knowledge on which non-philosophers can rely. It’s true that philosophers do not agree on answers to the “big questions” like God’s existence, free will, the nature of moral obligation and so on. But they do agree about many logical interconnections and conceptual distinctions that are essential for thinking clearly about the big questions. Some examples: thinking about God and evil requires the key distinction between evil that is gratuitous (not necessary for some greater good) and evil that is not gratuitous; thinking about free will requires the distinction between a choice’s being caused and its being compelled; and thinking about morality requires the distinction between an action that is intrinsically wrong (regardless of its consequences) and one that is wrong simply because of its consequences. Such distinctions arise from philosophical thinking, and philosophers know a great deal about how to understand and employ them. In this important sense, there is a body of philosophical knowledge on which non-philosophers can and should rely.

In an interview a month earlier (for 3am magazine), Gutting had said something similar.

Over its history, philosophy has accumulated an immense store of conceptual distinctions, theoretical formulations, and logical arguments that are essential for this intellectual maintenance of our defining convictions. This constitutes a body of knowledge achieved by philosophers that they can present with confidence to meet the intellectual needs of non-philosophers. Consider, for example, discussions of free will. Even neuroscientists studying freedom in their labs are likely to offer confused interpretations of their results if they aren’t aware of the distinction between caused and compelled, the various meanings of “could have done otherwise”, or the issues about causality raised by van Inwagen’s consequence argument. Parallel points apply for religious people thinking about the problem of evil or atheists challenged to explain why they aren’t just agnostics. Philosophers can’t show what our fundamental convictions should be, but their knowledge is essential to our ongoing intellectual engagement with these convictions.

Now it’s my turn.

Telepathic Rats

Thursday, March 7th, 2013

From what I can gather, the experiment referred to in this discussion may have just involved transmitting the excitation pattern of motor neurons associated with pressing the (say) left button, rather than transmitting any “conceptual” association of that button with the subsequent reward. The receiving rat would then reflexively press the left button and, after getting the reward, might be reinforced and so have “learned” the association on the basis of its own experience. What would be needed to demonstrate the transmission of anything close to learning would be for stimulation of the receiving rat prior to exposure to the apparatus to increase its likelihood of making a subsequent correct choice while NOT stimulated.

The Future Is Hers

Thursday, March 7th, 2013

I wish the name of Gordon Brown didn’t always take the lead in these promotions, but I don’t know of any more effective way of promoting the cause of women’s education around the world – and I do think that cause is perhaps the most important one in the air right now.