Massimo Pigliucci has a new blog about his experience of following a "stoic" philosophy.
As one who strives to live "according to positive values" but "reject the idea of an objective, universal and unchanging moral law", I don't see any contradiction there. But given how often people claim to see one, I will be interested in learning more about how the classical responses to that perception compare to my own.
Sean Carroll identifies some Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics, which is fine but I'd rather he addressed some of the better ones.
I have always thought of (my own experience of) the universe as corresponding to (a very small part of) one particular configuration of a stochastic system, and that having a theoretical model for that system allows me to predict conditional probabilities of certain features (measurements) given others (state preparations). I suppose other configurations could be regarded as alternate worlds which *could* in some sense exist. But why is it necessary (and in fact, what would it mean) to suggest that they *do* exist?
Oh dear! Now I feel a little Feynmanesque "poem" coming on:
We don't know the meaning of "meaning",
And we don't know the meaning of "is".
So how can we possibly claim to know
What the meaning of "the meaning of "is"" is?
Shadi Hamid and William McCants (of the Brookings Institution) object that "John Kerry Won’t Call the Islamic State by its Name Anymore". The article contains two arguments, the first very bad and the second very good.
The good argument is that no non-Muslim has any business dictating what is or is not "true Islam". It may be fine to report on one's understanding of what the majority of Muslims seem to be saying on the matter, but to put that in the form of an authoritative declaration is both patronizing and ridiculous.
But the idea that this forces one to accept the self-declared "name" of an organization is also ridiculous. If the LRA had decided to call itself "Christians of Africa" would anyone seriously think it appropriate to go around saying "the Christians of Africa must be destroyed"? Of course it is appropriate to refuse ISIS the dignity of calling it "The Islamic State", and indeed to accept that designation is an insult to the many Muslims who do not accept it as such. (But of course this refusal can and should be done by reference to the requests of other Muslims rather than to one's own position on what is "true Islam".)
The proper response to Islamic Zealotry (including some criticism of Walzer's position) has also been discussed (again in 'The Atlantic') by Shadi Hamid with particular reference to the situation in France.
Hamid points out some of the real inconsistencies in the French (and much of the European) position, but his analysis also seems to me to be dangerously off-base in some respects.
Apparently the conservative pundits and right wing religious types are all excited about President Obama's having mentioned that using religion to brutalize other people is neither a Muslim invention nor foreign to the American experience.
According to ThinkProgress, Russell Moore, President of the Southern Baptist Ethics and Religious Liberty Commission, said, “The evil actions that he mentioned were clearly outside the moral parameters of Christianity itself and were met with overwhelming moral opposition from Christians.” Would that be the same Southern Baptists whose very raison d'etre was to split off from the main Baptist Church in order to allow their preachers to be slave owners (and who continued resisting integration right through the 1960's)?
The European has an article on the new Greek finance minister - which apparently picks up on a report in The Guardian. There is also news about his discussions with Osborne and other Euro finance ministers but in a profile of the man I would have liked to learn more about his economics background. Anyhow I guess it's nice to see even just one example of a male politician being discussed largely in terms of his appearance and clothing.
From Kenan Malik comes this discussion (to me via 3QuarksDaily) of the 'Kennewick Man' controversy (which, it seems, basically boils down to the question of whether modern indigenous tribes have a right to claim the bones of Starfleet Captain Luc Picard - who was apparently killed by someone he came upon during a time travel excursion to ancient North America).
To what extent should people's religious beliefs and claims be given sufficient credence to interfere with the reasonable activities of others (such as the excavation of a site which any plausible interpretation of the data dates to well before any ancestors of the claimants were likely to be in the area)?
Is it perhaps plausible that North American aboriginal populations of hunter gatherers of 5 to 10 thousand years ago were so much more sedentary in their habits that they occupied the same territories for periods over which almost every other region of the Earth has been occupied by multiple different populations?
Does "respecting" even the irrational and/or harmful aspects of traditional belief systems as appropriate for some racially defined populations not insult the basic humanity of the individuals in those populations by letting them be indoctrinated as children into irrational and often racist beliefs and attitudes?
Has not the urge to throw off the chains of silly oppressive dogma been both expressed and suppressed throughout history in all races and cultures, so that the torch of enlightenment has never been owned by one particular culture and the struggle to maintain and extend its reach is not a recent "clash of civilizations" but an ongoing conflict within each culture and family, and even often within each individual?
Prompted by Michael Walzer's piece on Islamism and the Left in 'Dissent Magazine' (to which I was led by Jeffrey Goldberg's report in 'The Atlantic' on French PM Manuel Valls' resistance to the term "islamophobia" ).
The Zealotry of Righteous Assholes
- is a universal phenomenon to which we are all susceptible
- is the most disproportionally visible external face of all religions
- is also highly visible in various political tendencies
- is often prompted by some kind of imperialist or classist oppression but distorts the response into an excuse for the exercise of excessive violence and other self-indulgent behaviour
The identification of everything that is fair and reasonable as "Western" values to which the rest of the world should not be "subjected" is a cruel "Occidentalist" echo of the simplistic and patronizing "Orientalist" attitude that was condemned (and arguably much too sweepingly attributed) by Edward Said. Or, as Walzer says, “individual liberty, democracy, gender equality, and religious pluralism aren’t really Western values; they are universal values.”
The article is linked to a response by Andrew March and a reply to that from Walzer. March's response strikes me as turning Walzer on his head and interpreting him as accusing the left of refusing to confront Islamism at all, when his main thrust seemed to me just to be against the all-too-frequent pseudo-PC rejection of even appropriate levels of anti-Islamism as "Islamophobic". March correctly identifies that the problem is often "a less black-and-white disagreement about political judgment in specific contexts". And he goes on to identify the "critical motive" of those who "have expressed doubts about the applicability of European conceptions of strict secularism to Muslim countries" as "the freedom, autonomy, and self-representation of the peoples in question"(note peoples not people). But when he says that "The war against violent Islamism is taking care of itself", what he really seems to mean is that it is just fine to let it be handled in the worst possible way - which will indeed turn it into a "clash of civilizations" rather than an appropriate level of support for those who resist zealotry wherever it arises.
In a way this argument is reminiscent of some at the height of the cold war when leftists struggled with their own kinds of zealotry and disagreed on how to respond to the errors and misdeeds of the "communist" world relative to those of our own people and governments.
Briggs is certainly right that much of what is touted as 'Artificial Intelligence' is just the use of electronic machinery to implement the calculations in a method devised by human intelligence. Indeed, for now that really is all that computers can do. But it already goes far beyond playing optimally at tic-tac-toe (trivial), or chess, or simple kinds of poker. Those are what attract popular attention, but they imply far less for the future than do even the earliest attempts at voice and image recognition.
A true "learning" program isn't just the implementation of a previously worked out method of solution to a particular problem, but takes as its input the results of its various earlier responses to similar problems and from that constructs a better solution algorithm than the one it had before[*]. This can be done deterministically by a machine, and perhaps it is only hubris that convinces some of us that what we do is qualitatively different from that.
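The distinction can be made concrete with a toy sketch of my own (not from Briggs, and with an invented target rule): a program that is never handed the classification method it ends up with, but constructs it by revising its weights in response to the outcomes of its earlier guesses.

```python
import random

# Toy "learning" program: it is not given a solution method for the
# classification problem below; instead it revises its own rule (the
# weights) using feedback from its earlier answers.

def target_rule(x, y):
    """The hidden rule the program must discover: is x + y > 1?"""
    return 1 if x + y > 1 else 0

random.seed(0)
w = [0.0, 0.0]   # the program's current (initially useless) rule
b = 0.0

for _ in range(2000):
    x, y = random.random() * 2, random.random() * 2
    guess = 1 if w[0] * x + w[1] * y + b > 0 else 0
    error = target_rule(x, y) - guess   # outcome of the last response
    # Construct a slightly better rule than the one it had before.
    w[0] += 0.1 * error * x
    w[1] += 0.1 * error * y
    b += 0.1 * error

# Check how often the learned rule now agrees with the hidden one.
tests = [(random.random() * 2, random.random() * 2) for _ in range(500)]
accuracy = sum(
    (1 if w[0] * x + w[1] * y + b > 0 else 0) == target_rule(x, y)
    for x, y in tests
) / len(tests)
print(round(accuracy, 2))
```

The procedure is entirely deterministic given the stream of examples, yet no one wrote down the final decision rule in advance, which is exactly the point of the paragraph above.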
So I think Briggs' commenter Larry Geiger nailed it with “The extrapolation from what computers can do to what some people *think* that they can do is foolishness”(emphasis added).
[*]Update(2015-01-13) - like this which coincidentally came to my attention on the morning after I wrote the above.
Suis-je Charlie? Muss ich Charlie werden? Why am I not Charlie?
Perhaps I am *not* Charlie, but if I am not Charlie, then I *must* be Ahmed!
I have often (well, at least occasionally) loved Charlie Hebdo with a love that brought pain to my heart (due primarily to having a diaphragm seized with convulsive laughter at the pricking of an inflated pope or politician). But I never wanted to *be* Charlie because the sharpness of his humour was often (well, at least occasionally) more than I wanted to express. Does the revolting incident in Paris now obligate me to take on a persona that is not my own in order to preserve the right of free expression?
Do I have some obligation to carry forward any benign message that is not my own but would otherwise be suppressed? What if the message was not benign? - eg if the victims were holocaust deniers?
When a man-killing lion is brought caged to my village I bear it little sympathy and in truth would dispatch it quickly if I could. But (unless I am overwhelmed with shameful revenge for attacks on my nearest and dearest) I have no urge to torment it. So when the slavering monster strikes out through the bars of the cage and catches one of the tormenting boys who have been poking it with pencils, why should I feel the need to take that boy's place and start a poking that I was not doing before? Of course I should not. But perhaps this is different. In this case the beast has a mind with malicious purpose and killed the boys as a threat to prevent others from playing a game that was doing it no real harm.
If someone uses threats to deny me the right to go where I do not now need to, does my failure to challenge his proscription encourage him to keep on expanding his claims until, when I start to feel the pinch, I discover that all my freedom to move has been taken and that the beast is engorged and empowered with resources? Do I stand with the Czechs? the Poles? or wait until the French are also gone? (Actually I guess only the first and last turned out to be feasible in 1939.)
I never wanted to draw Mohammed in carnal knowledge of a pig (I don't think Charlie ever did that either - and actually it was some Muslim cleric who circulated those images as a false purported example of the Jyllands-Posten cartoons). And I still don't want to (though I may have had a brief vengeful thought in that direction on first news of the killings). What I really want to do is draw or evoke in words an image of his spirit with a shocked look of shame and a tear in his eye[*]. Should I now be concerned that some rabid fuckhead might want to kill me for that?
Is an image of Benjamin Netanyahu's face on the head of a cockroach anti-semitic?
Would it be so if it was part of a larger image of all world heads of state similarly disposed?
Is it anti-islamic to point out that Islam is the only major religion in the modern world whose *current* adherents include a non-negligible fraction who endorse the rape of an innocent child as a punishment for some alleged offense of her parent? or which is the state religion in places where abandonment of the religion is punishable by death?
How could *any* truly faithful followers of a prophet who forbade them to idolize his image convert that proscription into the idolatrous prohibition of any kind of mockery?
[*]written yesterday, but great minds think alike: Charlie Hebdo Cover Features Muhammad Holding 'Je Suis Charlie' Sign - so, perhaps I am Charlie.
See also these links
I have to admit that I find Oreskes' piece less than compelling and in many ways seriously wrong. Her main point is (or should be) twofold, namely that
1. if the cost of neglecting a possible risk is higher than that of protecting against it, then it may make sense to increase our chances of falsely believing that the risk is real when it is not, if that is necessary in order to reach an acceptable chance of identifying it when it is real, and
2. if we already have good reasons based on well-established theories to expect something is true, then we don't need to demand the same level of direct evidence as we would if that evidence was our only reason for expecting the effect.
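Point 1 is just expected-cost arithmetic. A minimal sketch with purely hypothetical numbers of my own (nothing here comes from Oreskes) shows why a modest credence in a costly risk can rationally justify acting:

```python
# Hypothetical numbers, purely illustrative: if failing to act against a
# real risk costs far more than acting against an unreal one, then acting
# is rational even when the evidence leaves substantial doubt.

cost_of_protection = 1.0   # paid whenever we act
cost_of_disaster = 50.0    # paid only if the risk is real and we did nothing

def expected_cost(p_risk_real, act):
    """Expected cost of acting (or not) when the risk is real with probability p."""
    if act:
        return cost_of_protection
    return p_risk_real * cost_of_disaster

# Even at only 10% credence in the risk, acting is the cheaper policy:
p = 0.10
print(expected_cost(p, act=True))    # 1.0
print(expected_cost(p, act=False))   # 5.0
```

So a willingness to "falsely believe" the risk at well below 95% confidence is not statistical sloppiness; it falls straight out of the asymmetry in the costs.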
But Oreskes attempts to dress these (obvious?) claims in technical language about statistics and scientific practice - which she often garbles into either meaninglessness or outright error. For example she says "Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20." So far as it is even parseable, this is wrong in at least six ways. Scientists don't ever "apply" a confidence "limit" but, when assessing the implications of evidence regarding the value of a parameter, they may use that evidence to construct the limits of a confidence interval by imposing, requiring, or perhaps "applying" a confidence level. This process of estimating a parameter has nothing to do with whether or not they will "accept" a causal claim, and the practice of determining the significance level of evidence with regard to a purported relationship is quite independent of the question of whether or not that relationship is "causal". And the 95% significance level, or whatever other level is applied in that horrible terminology, is complementary not to the "odds" of the relationship, but to the predicted probability of the observed (or more "extreme") data in a stochastic model in which the relationship is not included.
In fact the question of confidence intervals, i.e. the issue of whether or not it is appropriate to use the available data to place narrower bounds on our estimates of parameters (such as the expected change in annual rate of change of temperature per doubling of atmospheric CO2 from current levels), is largely irrelevant to the decisions we need to make with regard to ignoring or attempting to mitigate that effect, since whatever the extremes of what we consider likely, the middle of that range is what will govern our decisions. And the question of significance levels is irrelevant for two reasons. Firstly we already have plenty of data to rule out the hypothesis that temperature is fluctuating randomly without any long term trends - and to do so with a p-value of much less than 5%. Secondly (and much more importantly), the real null hypothesis is not random fluctuations around a constant. For more than a century we have known, with as much certainty as we can predict the orbit of a comet, that the Fourier-Arrhenius effect is pumping energy into the Earth's surface at a rate which, absent unknown effects, would raise the surface temperature by between 2 and 4 degrees per doubling of atmospheric CO2. So the real question that we should be testing by data is whether or not there is evidence for an unknown higher order effect or some outside factor that mitigates the predicted warming. And since the warming could have very serious consequences, we should, if anything, require a higher significance level (i.e. lower p-value) before rejecting that null hypothesis.
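Since I've been throwing the vocabulary around, here is a minimal sketch (entirely synthetic data of my own, standard library only) of what a p-value against a "no trend" null actually is: the probability, under a model in which the time ordering carries no information, of a slope at least as extreme as the one observed.

```python
import random

random.seed(1)

def slope(ys):
    """Least-squares slope of ys against 0, 1, 2, ..."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Synthetic "temperature" series: a genuine upward trend plus noise.
data = [0.02 * t + random.gauss(0, 0.3) for t in range(50)]
observed = slope(data)

# Null hypothesis: no trend, so the time ordering is arbitrary.
# The p-value is the fraction of random reorderings whose slope is
# at least as extreme as the observed one.
extreme = 0
trials = 2000
for _ in range(trials):
    shuffled = random.sample(data, len(data))
    if abs(slope(shuffled)) >= abs(observed):
        extreme += 1
p_value = extreme / trials

print(p_value < 0.05)
```

With a trend this strong relative to the noise, essentially no reshuffling produces as steep a slope, so the p-value comes out far below 5% - which is the sense in which the "random fluctuations" hypothesis is easily ruled out.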
But for all that, Schachtman's criticism has its own weaknesses. He moves quickly to an ad hominem attempt to get the reader to dismiss Oreskes' decision analysis by challenging her history. But in fact he is the one who gets it wrong! What Oreskes actually said is "The 95 percent confidence level is generally credited to the British statistician R. A. Fisher" and this is undoubtedly true, for even though Fisher was not the originator of confidence *intervals*, long before they were invented he did so much to popularize p=5% as an appropriate indicator of significance that our friend Briggs exemplifies the masses by saying "That rotten 95-percent 'confidence' came from Fisher...". After that, Schachtman devotes a lot of attention to Oreskes' reference to EPA's use of a weakened standard (10% p-value or 90% confidence intervals) for early (1990's) analysis of the effects of second-hand cigarette smoke. This appears to be a sore point for him, perhaps because of his 30 years of legal practice "focused on the defense of products liability suits, with an emphasis on the scientific and medico-legal issues that often dominate such cases". But it has little to do with the climate issue.
What all these people seem to share is a very limited view of what "science" is. Although statistical analysis of "noisy" data often plays a role, it is not true that our normal modus operandi is to assume that nothing ever happens unless we see direct evidence for it. Rather we have a whole interlocking body of mutually consistent experiments in a wide range of applied contexts which all support the same basic theoretical structure. Sometimes a situation is so complicated that we cannot predict its behaviour from the basic theory without making simplifying assumptions. In such cases we expect that factors we have neglected will impact the behaviour in ways we cannot predict and so which we "model" by adding small terms whose "random" values are drawn from a probability distribution. The questions of "significance", "p-values", and "confidence intervals" apply only to the question of whether our stochastic terms are adequate to effectively summarise all of the effects that have been left out of our analysis.
I actually thought it was a pretty good movie, but in her review of The Imitation Game: A Smart Person's Fantasy, Emma Green at The Atlantic encourages the reader to confound fantasy and reality. The creators of a movie, presented as a fictionalized account, may perhaps legitimately change the story for dramatic effect, but for a reviewer of a historical drama to talk of the drama without clearly distinguishing it from the history is inexcusable. Green does admit the neglect of prior Polish work on decoding earlier versions of 'Enigma' as an instance of how "(t)he filmmakers tweaked some of the details", and her subsequent discussion could be defended as summarising the plot of the movie rather than the historical facts, but the distinction is nowhere near clear enough. In particular the entire premise of a lone misunderstood genius is contrary to the facts. Turing was always fully accepted as a central part of a team whose work was never threatened with withdrawal of support, and the only person who really had to struggle against skepticism was Tommy Flowers, who persisted against some resistance (though not from Turing) with the idea of substituting electronic for electromechanical switching - the approach realized in 'Colossus', which was actually built to attack the Lorenz cipher rather than to replace the Enigma-breaking 'Bombe' machines.
In what is an extreme anomaly relative to most of my on-line experience, most of the comments (including some that reeked of homophobia) were well worth reading and I learned a lot from many of them (including some of the nasty ones).
As for the characterization of Turing, although I enjoyed the movie I think in retrospect that I am disappointed in Cumberbatch. After seeing 'August: Osage County' I had come to expect him to be one of those rare actors (like Kevin Kline) who can disappear into a role so that you don't immediately know it's the same guy acting. But here he seems to be just replaying his 'Sherlock' character without trying to pick up on the aspects that make Turing completely different. Perhaps he should have taken some lessons from the kid who played him as a child, and who I think came much closer to the mark!
Meanwhile the Atlantic also puts a foot on the other side of the fence by having Kevin O'Keefe bleating about a false happy ending - which is pretty far from how I would characterize the final scene where, alone and sickened by meds, Turing turns to his computer, named "Christopher" after his dead first love, as the only "person" in his life he seems to have any faith in or connection with, and the screen fades to a text-over account of his subsequent suicide. This too is of course another bit of dramatic licence, as Turing apparently had an active social life, including sex on various holidays abroad, in the year that he lived after concluding the hormone treatment. (This is not to deny that the hormone treatment may have induced ongoing depression or unrecorded sexual problems, and/or that the prevailing antipathy and distrust towards homosexuals may have driven him to suicide - if that is indeed how he died.)
P.S. This 'Slate' article gives a lot more detail about the extent to which the movie respects or distorts the facts.
I have been wondering about the possibility of using small rfid tags on fingernails to enable fingertip tracking by a computer or tablet so that free hand signals (and maybe even sign language) could be used for input.
But anyone who could say "Epistemic reductionism is obviously false", at least on the basis of only the feeble grounds provided by Pigliucci, apparently doesn't understand what such reductionism should reasonably be expected to claim.
The simplest model that shows his error is perhaps the reduction of thermodynamics to statistical mechanics, wherein our inability to keep track of all the 10^26-odd microvariables (representing positions and momenta of all the particles in a sample of material) is overcome by identifying suitable combinations of them as macrovariables (such as volume, temperature, pressure etc) with the rules or axioms of the macrotheory being "explained" as theorems of the microtheory. We don't yet have a satisfactory quantum theory of gravity but there is no known logical obstruction to finding one, and if we do then one of the constraints on it will be that its classical limit *does* provide a "quantum theory of planets".
Note: This rant is more about the evils of headline writers and professional C&M types than about the substance of the proposed study (which I think is very interesting):
The University of Washington is rightly proud of the fact that UW statistician, philosopher win prize for detecting bias in peer review. Except that the headline is a lie! The award is for proposing a method for detecting a certain kind of bias if it exists. The researchers quite reasonably guess that it might, and they suggest that it may explain the fact (which really was observed in a different study) that blacks are underfunded compared to whites of the same “educational background, country of origin, training, previous research awards, publication record, and employer characteristics”. And the study they propose (and which will be funded by the award) may well detect it. But it has NOT been "detected" yet.
P.S. I actually came to this via a link in a comment on a blog post in a series by Susanna Siegel in which an earlier post had referred to a study by the psychologists Uhlmann and Cohen, where they were investigating the role of gender stereotypes in hiring decisions and in which a very similar kind of bias to that proposed by the UW team was observed in a mock hiring experiment.
The answer provided by a professional philosopher (at AskPhilosophers.org) to the question "If we have no free will, then is the entire legal system redundant since no one can be held accountable for anything since no one has control over their own actions?" would be shocking if my expectations were not already so low.
Ophelia Benson is often proudly childish in giving vent to her emotions, but rarely falls into the kind of childish pride with which Peter Boghossian asserts the "adult" nature of his comments.
I don't know if Tim Cook really meant to claim that he is proud of his sexual orientation per se rather than of his ability to thrive in a world where even to survive is often challenging for gays, but regardless of whether or not he meant his words to be taken literally, it is meanly small-minded to question that pride by way of a sarcastic "tweet" or "status" comment.
Personally, I walk in the Gay Pride Parade more to support the latter interpretation (so well expressed by Greta Christina and many others who responded to Boghossian's Facebook post), but even if Tim either misspoke or feels a pride that I could not share, I am sure he deserves a more respectful response than was shown by Boghossian.
I do think, though, there is a sense in which some expressions of "gay pride" may be unfortunate - either by being misleading as to the intent, or by actually claiming a sense of superiority that just perpetuates the power of prejudice by seeking to reverse it rather than to end it. And there may be a context in which such concerns can be usefully addressed.
I'm of two minds about this article by Massimo Pigliucci. While I continue to be dismissive of the claim that there is anything of substance in the Gettier examples I can agree that there may be progress in the game of clearly expressing why that is the case. But the role of philosophers in advancing that progress is often more obstructive than constructive.
My own initial response to the Gettier problems was basically what Pigliucci refers to as the false premise objection - namely that the claimed "justification" for the belief in the allegedly problematic cases may make the belief "excusable" or "blameless" but is not justification in the intended (logical) sense because it is based on a false premise. And my respect for the discipline is not enhanced by the proposed example of "more sophisticated Gettier cases that do not seem to depend on false premises".
What Pigliucci proposes is as follows:
I am walking through Central Park and I see a dog in the distance. I instantly form the belief that there is a dog in the park. This belief is justified by direct observation. It is also true, because as it happens there really is a dog in the park. Problem is, it’s not the one I saw! The latter was, in fact, a robotic dog unleashed by members of the engineering team from Bronx High School. So my belief is justified (it was formed by normally reliable visual inspection), true (there is indeed a dog in the park), and arrived at without relying on any false premise.
But here the "justification" clearly involves the false premise that what looks like a dog is a dog. Without that premise, any claim of justification "by direct observation" is just clearly nonsense.
Briggs is right to complain that "natural variability" is an ambiguous and easily abused term, but what I would be most inclined to use it for is different from either of the usages that he identifies.
He notes that some use "natural variability" of a phenomenon (such as some average of temperature measurements) to refer to the actual values taken by the data at different points in time, and others use it for the values that would be expected in the absence of some "unnatural" factor (such as CO2 emissions from human use of technology). But to me it seems much more natural to use it to refer to the unexplained deviations of the data from what would be predicted by a (partially) explanatory model.
I misunderstood Briggs' claim that theoretical and/or statistical modellers claim to "skillfully" predict the natural variability in his first sense as meaning that they claim to predict it completely or accurately, whereas he was referring to the technical definition used in meteorology where one prediction is said to be relatively skillful compared to another if its mean squared deviation from the observed data is less. But this depends both on the reference model used for comparison and on the interval over which the comparison is made. A model that is skillful over a long interval may well have substantial shorter intervals over which it is not skillful, and even though a prediction of an upward trend in global temperature may appear not to be skillful over the interval from 2008 to 2014, that made by Arrhenius in 1898 does seem to be so (and would be even more so if he had predicted a faster rate of increase by reducing his estimated doubling time for CO2 to account for the subsequent increase in both population and per capita energy use).
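To pin down that technical sense of "skill", here is a minimal sketch with synthetic data of my own (not any real temperature record): a prediction is skillful relative to a reference forecast if its mean squared deviation from the observations is smaller, and, as argued above, whether it is can depend on the window examined.

```python
import random

random.seed(2)

def mse(pred, obs):
    """Mean squared deviation of predictions from observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

# Synthetic "temperature anomaly" record: a slow trend buried in noise.
years = list(range(100))
obs = [0.01 * t + random.gauss(0, 0.2) for t in years]

trend_model = [0.01 * t for t in years]   # predicts steady warming
reference = [0.0] * len(years)            # "no change" reference forecast

# Skillful (in the meteorological sense) means: smaller mean squared
# deviation from the observations than the reference forecast has.
skillful_overall = mse(trend_model, obs) < mse(reference, obs)
print(skillful_overall)

# Over a short window the noise dominates the small accumulated trend,
# so over such an interval the same model may or may not look skillful.
w = slice(0, 10)
print(mse(trend_model[w], obs[w]) < mse(reference[w], obs[w]))
```

Over the full record the trend model wins easily, while the short-window comparison is essentially a coin toss - which is exactly why judging a century-scale prediction on a six-year interval is uninformative.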