Blindsight and the Problem of Consciousness 

Source: How blindsight answers the hard problem of consciousness | Aeon Essays

This starts with some interesting stuff that is new to me (though actually quite old) and leads us through a lifetime of thinking towards a view of consciousness and “qualia” that seems to be quite close to (and perhaps an improvement of) my own.

Consciousness, as I see it, seems to involve having a mental model of one’s own mental state. So the qualia of redness corresponds not so much to the sensation of seeing red as to the knowledge that one is seeing red. The evolutionary pressure to evolve such a capacity might arise in any species whose members have to modify their behaviour in response to inferences about their environments – as awareness of the possibility of being wrong might well have survival value. (For example, having a mental state that includes some representation of “I see stripes” may allow the individual to evaluate the appropriate response in a more nuanced way than a built-in algorithm that simply jumps back in fear of a tiger, without considering the possibility that the tall grass may yield edible seeds.)

With this in mind, though, I can imagine that the fish which passes the mirror test and the octopus which learns how to open a lock might also be in a position where there is some reproductive advantage to be gained from having a mental model of one’s own mental state. For the fish I find it hard to imagine where the necessary neural complexity might be located; but for the octopus, as I think I’ve been told, there seems to be neural complexity distributed throughout the entire body. And so I would not be so quick to dismiss the possibility of their “consciousness”.

If scientific investigation shows that there is no evidence to support the notion that human beings have free will, does it become unreasonable to be held accountable for one’s actions, if not why? – Quora

Even if our behaviour is entirely predictable, it is not unreasonable to hold us accountable for our actions.

In fact, an understanding of how our response to an action will change the likelihood of similar or related actions in future is essential in order to justify any adverse or punitive response at all. So only a predictable being is responsible in that sense.

The only situation in which determinism undermines responsibility or accountability is in the case where there is nothing that can be done to reduce the likelihood of future offensive behaviour. But most of us can indeed be manipulated, and it is only that lack of truly “free” will that justifies holding us accountable for our actions.

Source: Alan Cooper’s answer to If we accept that (or perhaps given that) scientific investigation shows that there is no evidence to support the notion that human beings have free will, does it become unreasonable to be held accountable for one’s actions, if not why? – Quora

What can the zombie argument say about human consciousness? | Aeon Essays

My response to the philosophers’ zombie argument is usually to do a kind of “reverse Turing test” – i.e. to challenge the philosopher to prove to himself (and to me) that I am not a zombie. If there were anything that had all the characteristics and (in principle completely predictable – or at least explainable) behaviours and responses of a human, then anyone (other than the solipsistic philosopher himself) might not be “really” conscious. The alternative is that we all are conscious, but that what we perceive as conscious experience is just the physical property of recording memories into a system with some kind of recall and reprocessing mechanism. And if I think about it too much, that’s pretty much what it “feels like” to me… Oops! (Maybe I just failed the real Turing test.)

Source: What can the zombie argument say about human consciousness? | Aeon Essays

Schwitzgebel on Michael Tye on Vagueness 

Eric Schwitzgebel has also written a review of ‘The Splintered Mind’ by Michael Tye in which he…

My comment on that was:

There seems to be an explanation of our lack of experience of partial consciousness in the point (though “it’s not Tye’s emphasis”) “that there must also be transitional, borderline states between non-conscious sleep and conscious waking”. If indeed the groggy waking state is still fully “conscious” of the fuzziness of its experience, then perhaps the levels of lesser consciousness are seen only in less mentally complex organisms. That wouldn’t be surprising, and would make a plausible case for the possible definition of the consciousness of any physical system in terms of its information processing capacity.

But the idea that some other consciousness-related property such as “consciousness*” might exist for quarks just strikes me as completely silly – especially since the proposal is that there is no physical way to detect it. The “philosophical zombie”, I’m afraid, is either physical nonsense or ultimate solipsism. If every physical property and action of the zombie is indistinguishable from that of a non-zombie, then it will tell me it is conscious just like me even if it is not. So any entity that accepts the possibility of such a thing is basically accepting the possibility that its own consciousness is the only one there is.

Borderline Consciousness

Philosopher Eric Schwitzgebel has written a paper, Borderline Consciousness, When It’s Neither Determinately True nor Determinately False That Experience Is Present, that I think is interesting but perhaps somewhat over-plays the idea of “vagueness” as if it were something different from merely being a quantity that does in fact have a continuum of values. We don’t talk of position or temperature as “vague” even though we often use vague language like “hot” and “cold” to describe them. Similarly, I think it plausible that while the description of an entity as “conscious” may be vague, there may nonetheless be a fairly well defined measure of consciousness which has values on a continuum. (And if I were looking for such a measure, I would probably start by considering the amount of information that the entity is capable of storing in a way that influences its future behaviour.)

Maybe I’ll start some pages on this.