Main Blog Page

All recent posts are listed here in reverse chronological order. For a more focused view you can use the "Blog Topics" listing on the right - the little icons beside the topic names toggle display of subtopics (if any).


On Zealotry

January 18th, 2015

Prompted by Michael Walzer's piece on Islamism and the Left in 'Dissent Magazine' (to which I was led by Jeffrey Goldberg's report in 'The Atlantic' on French PM Manuel Valls' resistance to the term "islamophobia").

The Zealotry of Righteous Assholes

  • is a universal phenomenon to which we are all susceptible
  • is the most disproportionately visible external face of all religions
  • is also highly visible in various political tendencies
  • is often prompted by some kind of imperialist or classist oppression but distorts the response into an excuse for the exercise of excessive violence and other self-indulgent behaviour

The identification of everything that is fair and reasonable as "Western" values to which the rest of the world should not be "subjected" is a cruel "Occidentalist" echo of the simplistic and patronizing "Orientalist" attitude that was condemned (and arguably much too sweepingly attributed) by Edward Said.  Or, as Walzer says, “individual liberty, democracy, gender equality, and religious pluralism aren’t really Western values; they are universal values.”

The article is linked to a response by Andrew March and a reply to that from Walzer. March's response strikes me as turning Walzer on his head and interpreting him as accusing the left of refusing to confront Islamism at all, when his main thrust seemed to me just to be against the all-too-frequent pseudo-PC rejection of even appropriate levels of anti-Islamism as "Islamophobic". March correctly identifies that the problem is often "a less black-and-white disagreement about political judgment in specific contexts". And he goes on to identify the "critical motive" of those who "have expressed doubts about the applicability of European conceptions of strict secularism to Muslim countries" as "the freedom, autonomy, and self-representation of the peoples in question" (note: peoples, not people). But when he says that "The war against violent Islamism is taking care of itself", what he really seems to mean is that it is just fine to let it be handled in the worst possible way - which will indeed turn it into a "clash of civilizations" - rather than with an appropriate level of support for those who resist zealotry wherever it arises.

In a way this argument is reminiscent of some at the height of the cold war when leftists struggled with their own kinds of zealotry and disagreed on how to respond to the errors and misdeeds of the "communist" world relative to those of our own people and governments.

 

On Computers ‘Learning’

January 12th, 2015

Briggs is certainly right that much of what is touted as 'Artificial Intelligence' is just the use of electronic machinery to implement the calculations in a method devised by human intelligence. Indeed, for now that really is all that computers can do. But it already goes far beyond playing optimally at tic-tac-toe (trivial), or chess, or simple kinds of poker. Those are what attract popular attention, but they imply far less for the future than do even the earliest attempts at voice and image recognition.

A true "learning" program isn't just the implementation of a previously worked out method of solution to a particular problem, but takes as its input the results of its various earlier responses to similar problems and from that constructs a better solution algorithm than the one it had before[*]. This can be done deterministically by a machine, and perhaps it is only hubris that convinces some of us that what we do is qualitatively different from that.
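As a concrete (and entirely hypothetical) illustration of that distinction, here is a minimal sketch in Python of a program whose rule for choosing among options is constructed from the recorded outcomes of its own earlier choices, rather than from a solution method worked out in advance. The numbers and the simple epsilon-greedy strategy are just illustrative assumptions, not anyone's actual system.

    import random

    N_OPTIONS = 3
    true_payoff = [0.2, 0.5, 0.8]      # hidden from the learner
    counts = [0] * N_OPTIONS           # how often each option has been tried
    estimates = [0.0] * N_OPTIONS      # running estimates built from past outcomes

    def choose(epsilon=0.1):
        # mostly exploit the current best estimate, occasionally explore
        if random.random() < epsilon:
            return random.randrange(N_OPTIONS)
        return max(range(N_OPTIONS), key=lambda i: estimates[i])

    for trial in range(10000):
        i = choose()
        reward = 1.0 if random.random() < true_payoff[i] else 0.0
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]   # update from the outcome

    print(estimates)   # converges toward the hidden payoffs; the best option comes to dominate

Everything here is deterministic machinery (plus a pseudo-random number generator), yet the behaviour the program ends up with was not spelled out in advance by its author - which is the sense of "learning" intended above.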

So I think Briggs' commenter Larry Geiger nailed it with “The extrapolation from what computers can do to what some people *think* that they can do is foolishness”(emphasis added).

[*] Update (2015-01-13) - like this, which coincidentally came to my attention on the morning after I wrote the above.

Suis-je Charlie? Si non, il me *faut* être Ahmed!

January 12th, 2015

Suis-je Charlie? Muss ich Charlie werden? Why am I not Charlie?
Perhaps I am *not* Charlie, but if I am not Charlie, then I *must* be Ahmed!

I have often (well, at least occasionally) loved Charlie Hebdo with a love that brought pain to my heart (due primarily to having a diaphragm seized with convulsive laughter at the pricking of an inflated pope or politician). But I never wanted to *be* Charlie because the sharpness of his humour was often (well, at least occasionally) more than I wanted to express. Does the revolting incident in Paris now obligate me to take on a persona that is not my own in order to preserve the right of free expression?
Do I have some obligation to carry forward any benign message that is not my own but would otherwise be suppressed? What if the message was not benign? - eg if the victims were holocaust deniers?

When a man-killing lion is brought caged to my village I bear it little sympathy and in truth would dispatch it quickly if I could. But (unless I am overwhelmed with shameful revenge for attacks on my nearest and dearest) I have no urge to torment it. So when the slavering monster strikes out through the bars of the cage and catches one of the tormenting boys who have been poking it with pencils, why should I feel the need to take that boy's place and start a poking that I was not doing before? Of course I should not. But perhaps this is different. In this case the beast has a mind with malicious purpose and killed the boys as a threat to prevent others from playing a game that was doing it no real harm.

If someone uses threats to deny me the right to go where I do not now need to, does my failure to challenge his proscription encourage him to keep on expanding his claims until, when I start to feel the pinch, I discover that all my freedom to move has been taken and that the beast is engorged and empowered with resources? Do I stand with the Czechs? the Poles? or wait til the French are also gone? (Actually I guess only the first and last turned out to be feasible in 1939.)

I never wanted to draw Mohammed in carnal knowledge of a pig (I don't think Charlie ever did that either - and actually it was some Muslim cleric who circulated those images as a false purported example of the Jyllands-Posten cartoons). And I still don't want to (though I may have had a brief vengeful thought in that direction on first news of the killings). What I really want to do is draw or evoke in words an image of his spirit with a shocked look of shame and a tear in his eye[*]. Should I now be concerned that some rabid fuckhead might want to kill me for that?

Is an image of Benjamin Netanyahu's face on the head of a cockroach anti-semitic?
Would it be so if it was part of a larger image of all world heads of state similarly disposed?

Is it anti-islamic to point out that Islam is the only major religion in the modern world whose *current* adherents include a non-negligible fraction who endorse the rape of an innocent child as a punishment for some alleged offense of her parent? or which is the state religion in places where abandonment of the religion is punishable by death?

How could *any* truly faithful followers of a prophet who forbade them to idolize his image convert that proscription into the idolatrous prohibition of any kind of mockery?

[*] written yesterday, but great minds think alike: Charlie Hebdo Cover Features Muhammad Holding 'Je Suis Charlie' Sign - so, perhaps I am Charlie.

See also these links

Significance Levels and Climate Change

January 6th, 2015

D.G.Mayo has reproduced Nathan Schachtman's post on Naomi Oreskes' op-ed in the NYTimes - and of course Briggs also chimes in.

I have to admit that I find Oreskes' piece less than compelling and in many ways seriously wrong. Her main point is (or should be) twofold, namely that

1.  if the cost of neglecting a possible risk is higher than that of protecting against it, then it may make sense to increase our chances of falsely believing that the risk is real when it is not, if that is necessary in order to reach an acceptable chance of identifying it when it is real (a toy expected-cost comparison is sketched just after this list), and

2.  if we already have good reasons based on well-established theories to expect something is true, then we don't need to demand the same level of direct evidence as we would if that evidence was our only reason for expecting the effect.
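To make point 1 concrete, here is a toy expected-cost comparison - the numbers are purely illustrative assumptions, not Oreskes' own - showing that with sufficiently asymmetric costs it is rational to act even while accepting a high chance of "falsely believing that the risk is real".

    # toy decision analysis with made-up costs and a made-up prior
    cost_of_unneeded_protection = 1.0    # cost if we act and the risk turns out not to be real
    cost_of_neglected_real_risk = 20.0   # cost if we fail to act and the risk is real
    p_risk_is_real = 0.3                 # our current degree of belief that the risk is real

    expected_cost_of_acting = (1 - p_risk_is_real) * cost_of_unneeded_protection    # 0.7
    expected_cost_of_waiting = p_risk_is_real * cost_of_neglected_real_risk         # 6.0

    print(expected_cost_of_acting, expected_cost_of_waiting)
    # acting is far cheaper in expectation even though the risk is more likely not real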

But Oreskes attempts to dress these (obvious?) claims in technical language about statistics and scientific practice - which she often garbles into either meaninglessness or outright error. For example she says "Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20." So far as it is even parseable this is wrong in at least six ways. Scientists don't ever "apply" a confidence "limit" but, when assessing the implications of evidence regarding the value of a parameter, they may use that evidence to construct the limits of a confidence interval by imposing, requiring, or perhaps "applying" a confidence level. This process of estimating a parameter has nothing to do with whether or not they will "accept" a causal claim, and the practice of determining the significance level of evidence with regard to a purported relationship is quite independent of the question of whether or not that relationship is "causal". And the 95% significance level, or whatever other level is applied in that horrible terminology, is complementary not to the "odds" of the relationship, but to the predicted probability of the observed (or more "extreme") data in a stochastic model in which the relationship is not included.
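For readers who want the distinction spelled out, here is a minimal sketch (with made-up data, using Python's scipy.stats) of the two quite different operations being conflated: constructing a confidence interval for a parameter, versus computing the significance (p-value) of the evidence against a null model.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.8, scale=2.0, size=50)    # fifty noisy measurements of something

    # (1) estimation: a 95% confidence interval for the unknown mean
    mean = data.mean()
    sem = stats.sem(data)
    ci_low, ci_high = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)

    # (2) significance testing: p-value against the null hypothesis "the true mean is zero"
    t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)

    print((ci_low, ci_high), p_value)
    # neither of these numbers is "the odds that the relationship occurred by chance"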

In fact the question of confidence intervals, ie the issue of whether or not it is appropriate to use the available data to place narrower bounds on our estimates of parameters (such as the expected change in annual rate of change of temperature per doubling of atmospheric CO2 from current levels), is largely irrelevant to the decisions we need to make with regard to ignoring or attempting to mitigate that effect, since whatever the extremes of what we consider likely, the middle of that range is what will govern our decisions. And the question of significance levels is irrelevant for two reasons. Firstly we already have plenty of data to rule out the hypothesis that temperature is fluctuating randomly without any long term trends - and to do so with a p-value of much less than 5%. Secondly (and much more importantly), the real null hypothesis is not random fluctuations around a constant. For more than a century we have known, with as much certainty as we can predict the orbit of a comet, that the Fourier-Arrhenius effect is pumping energy into the Earth's surface at a rate which, absent unknown effects, would raise the surface temperature by between 2 and 4 degrees per doubling of atmospheric CO2. So the real question that we should be testing by data is whether or not there is evidence for an unknown higher order effect or some outside factor that mitigates the predicted warming. And since the warming could have very serious consequences, we should, if anything, require a more stringent significance level (ie lower p-value) before rejecting that null hypothesis.
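The point about which null hypothesis we should be testing can also be made concrete. The sketch below uses synthetic numbers (not real temperature data) and a purely illustrative predicted trend; it contrasts the conventional test against a "no trend" null with a test of whether the data give any evidence that the physically predicted warming is being mitigated.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    years = np.arange(1980, 2015)
    predicted_trend = 0.02      # degrees C per year from theory (illustrative number only)
    observed = predicted_trend * (years - years[0]) + rng.normal(0, 0.1, size=years.size)

    slope, intercept, r, p_no_trend, stderr = stats.linregress(years, observed)
    print(p_no_trend)    # tiny: the "random fluctuation about a constant" null is easily rejected

    # test the estimated slope against the *physical* null (the predicted warming itself)
    t = (slope - predicted_trend) / stderr
    p_vs_prediction = 2 * stats.t.sf(abs(t), df=years.size - 2)
    print(p_vs_prediction)
    # with data generated from the predicted trend this will usually be large:
    # no evidence here of any mitigating effect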

But for all that, Schachtman's criticism has its own weaknesses. He moves quickly to an ad hominem attempt to get the reader to dismiss Oreskes' decision analysis by challenging her history. But in fact he is the one who gets it wrong! What Oreskes actually said is "The 95 percent confidence level is generally credited to the British statistician R. A. Fisher" and this is undoubtedly true, for even though Fisher was not the originator of confidence *intervals*, long before they were invented he did so much to popularize p=5% as an appropriate indicator of significance that our friend Briggs exemplifies the masses by saying "That rotten 95-percent 'confidence' came from Fisher..." After that, Schachtman devotes a lot of attention to Oreskes' reference to EPA's use of a weakened standard (10% p-value or 90% confidence intervals) for early (1990s) analysis of the effects of second-hand cigarette smoke. This appears to be a sore point for him, perhaps because of his 30 years of legal practice "focused on the defense of products liability suits, with an emphasis on the scientific and medico-legal issues that often dominate such cases". But it has little to do with the climate issue.

What all these people seem to share is a very limited view of what "science" is. Although statistical analysis of "noisy" data often plays a role, it is not true that our normal modus operandi is to assume that nothing ever happens unless we see direct evidence for it. Rather we have a whole interlocking body of mutually consistent experiments in a wide range of applied contexts which all support the same basic theoretical structure. Sometimes a situation is so complicated that we cannot predict its behaviour from the basic theory without making simplifying assumptions. In such cases we expect that factors we have neglected will impact the behaviour in ways we cannot predict and which we therefore "model" by adding small terms whose "random" values are drawn from a probability distribution. The questions of "significance", "p-values", and "confidence intervals" apply only to the question of whether our stochastic terms are adequate to effectively summarise all of the effects that have been left out of our analysis.

 


A Popular Fantasy About a Smart Person

January 5th, 2015

I actually thought it was a pretty good movie, but in her review of The Imitation Game: A Smart Person's Fantasy, Emma Green at The Atlantic encourages the reader to confound fantasy and reality. The creators of a movie presented as a fictionalized account may perhaps legitimately change the story for dramatic effect, but for a reviewer of a historical drama to talk of the drama without clearly distinguishing it from the history is inexcusable. Green does cite the neglect of prior Polish work on decoding earlier versions of 'Enigma' as an instance of how "(t)he filmmakers tweaked some of the details", and her subsequent discussion could be defended as summarising the plot of the movie rather than the historical facts, but the distinction is nowhere near clear enough. In particular the entire premise of a lone misunderstood genius is contrary to the facts. Turing was always fully accepted as a central part of a team whose work was never threatened with withdrawal of support, and the only person who really had to struggle against skepticism was Tommy Flowers, who persisted against some resistance (though not from Turing) with the idea of using electronic rather than electromechanical switching in the 'Colossus' machines (which were built to attack the Lorenz cipher rather than to replace the Enigma-breaking 'Bombe').

In what is an extreme anomaly relative to most of my on-line experience, most of the comments (including some that reeked of homophobia) were well worth reading and I learned a lot from many of them (including some of the nasty ones).

As for the characterization of Turing, although I enjoyed the movie I think in retrospect that I am disappointed in Cumberbatch. After seeing 'August: Osage County' I had come to expect him to be one of those rare actors (like Kevin Kline) who can disappear into a role so that you don't immediately know it's the same guy acting. But here he seems to be just replaying his 'Sherlock' character without trying to pick up on the aspects that make Turing completely different. Perhaps he should have taken some lessons from the kid who played him as a child, and who I think played it much closer to the mark!

Meanwhile the Atlantic also puts a foot on the other side of the fence by having Kevin O'Keefe bleat about a false happy ending - which is pretty far from how I would characterize the final scene where, alone and sickened by meds, Turing turns to his computer, named "Christopher" after his dead first love, as the only "person" in his life he seems to have any faith in or connection with, and the screen fades to a text-over account of his subsequent suicide. This too is of course another bit of dramatic licence, as Turing apparently had an active social life, including sex on various holidays abroad, in the year that he lived after concluding the hormone treatment. (This is not to deny that the hormone treatment may have induced ongoing depression or unrecorded sexual problems, and/or that the prevailing antipathy and distrust towards homosexuals may have driven him to suicide - if that is indeed how he died.)

 

P.S. This 'Slate' article gives a lot more detail about the extent to which the movie respects or distorts the facts.

 

rfid4fingertips

December 5th, 2014

I have been wondering about the possibility of using small rfid tags on fingernails to enable fingertip tracking by a computer or tablet so that free hand signals (and maybe even sign language) could be used for input.

This set of OneTab shared tabs (also saved here in case one-tab.com doesn't keep it) is just some links I want to follow up on re small tags and how to address and locate them.

3quarksdaily: Who knows What

November 20th, 2014

3quarksdaily links today to a two-year-old article by Massimo Pigliucci objecting to a 1998 book by E. O. Wilson.

But anyone who could say "Epistemic reductionism is obviously false", at least on the basis of only the feeble grounds provided by Pigliucci, apparently doesn't understand what such reductionism should reasonably be expected to claim.

The simplest model that shows his error is perhaps the reduction of thermodynamics to statistical mechanics, wherein our inability to keep track of all the 10^26-odd microvariables (representing the positions and momenta of all the particles in a sample of material) is overcome by identifying suitable combinations of them as macrovariables (such as volume, temperature, pressure, etc.), with the rules or axioms of the macrotheory being "explained" as theorems of the microtheory. We don't yet have a satisfactory quantum theory of gravity, but there is no known logical obstruction to finding one, and if we do then one of the constraints on it will be that its classical limit *does* provide a "quantum theory of planets".
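As a toy illustration of that thermodynamics-to-statistical-mechanics reduction (simulated numbers, nothing more), here is a sketch in which the macrovariables temperature and pressure are computed as combinations of microvariables - the velocities of individual (here, simulated argon-like) particles:

    import numpy as np

    k_B = 1.380649e-23        # Boltzmann constant, J/K
    m = 6.63e-26              # mass of one argon atom, kg
    N = 100_000               # far fewer than 10^26, but the idea is the same
    V = 1e-3                  # container volume, m^3

    rng = np.random.default_rng(0)
    # microvariables: velocities drawn from a Maxwell-Boltzmann distribution at a "hidden" 300 K
    sigma = np.sqrt(k_B * 300.0 / m)
    velocities = rng.normal(0.0, sigma, size=(N, 3))

    # macrovariables as averages over the microvariables
    mean_kinetic_energy = 0.5 * m * (velocities ** 2).sum(axis=1).mean()
    temperature = 2.0 * mean_kinetic_energy / (3.0 * k_B)         # equipartition
    pressure = N * m * np.mean(velocities[:, 0] ** 2) / V         # momentum flux on a wall

    print(temperature, pressure)   # recovers ~300 K without tracking any particular particle

The "laws" relating temperature, pressure and volume then show up as theorems about these averages rather than as independent axioms.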

UW Lies about Peer Review Study

November 17th, 2014

Note: This rant is more about the evils of headline writers and professional C&M types than about the substance of the proposed study (which I think is very interesting):

The University of Washington is rightly proud of the fact that "UW statistician, philosopher win prize for detecting bias in peer review". Except that the headline is a lie! The award is for proposing a method for detecting a certain kind of bias if it exists. The researchers quite reasonably guess that it might, and they suggest that it may explain the fact (which really was observed in a different study) that blacks are underfunded compared to whites of the same “educational background, country of origin, training, previous research awards, publication record, and employer characteristics”. And the study they propose (and which will be funded by the award) may well detect it. But it has NOT been "detected" yet.

P.S. I actually came to this via a link in a comment on a blog post in a series by Susanna Siegel; an earlier post in that series had referred to a study by the psychologists Uhlmann and Cohen, who were investigating the role of gender stereotypes in hiring decisions and who observed, in a mock hiring experiment, a very similar kind of bias to that proposed by the UW team.

AskPhilosophers.org

November 15th, 2014

The answer provided by a professional philosopher (at AskPhilosophers.org) to the question "If we have no free will, then is the entire legal system redundant since no one can be held accountable for anything since no one has control over their own actions?" would be shocking if my expectations were not already so low.

...more »

On Pride

November 8th, 2014

Ophelia Benson is often proudly childish in giving vent to her emotions, but rarely falls into the kind of childish pride with which Peter Boghossian asserts the "adult" nature of his comments.

I don't know if Tim Cook really meant to claim that he is proud of his sexual orientation per se rather than of his ability to thrive in a world where even to survive is often challenging for gays, but regardless of whether or not he meant his words to be taken literally, it is meanly small-minded to question that pride by way of a sarcastic "tweet" or "status" comment.

Personally, I walk in the Gay Pride Parade more to support the latter interpretation (so well expressed by Greta Christina and many others who responded to Boghossian's Facebook post), but even if Tim either misspoke or feels a pride that I could not share, I am sure he deserves a more respectful response than was shown by Boghossian.

I do think, though, there is a sense in which some expressions of "gay pride" may be unfortunate - either by being misleading as to the intent, or by actually claiming a sense of superiority that just perpetuates the power of prejudice by seeking to reverse it rather than to end it. And there may be a context in which such concerns can be usefully addressed. ...more »

Progress(?) in philosophy: the Gettier case

October 15th, 2014

I'm of two minds about this article by Massimo Pigliucci. While I continue to be dismissive of the claim that there is anything of substance in the Gettier examples I can agree that there may be progress in the game of clearly expressing why that is the case. But the role of philosophers in advancing that progress is often more obstructive than constructive.

My own initial response to the Gettier problems was basically what Pigliucci refers to as the false premise objection - namely that the claimed "justification" for the belief in the allegedly problematic cases may make the belief "excusable" or "blameless" but is not justification in the intended (logical) sense because it is based on a false premise. And my respect for the discipline is not enhanced by the proposed example of "more sophisticated Gettier cases that do not seem to depend on false premises".

What Pigliucci proposes is as follows:

I am walking through Central Park and I see a dog in the distance. I instantly form the belief that there is a dog in the park. This belief is justified by direct observation. It is also true, because as it happens there really is a dog in the park. Problem is, it’s not the one I saw! The latter was, in fact, a robotic dog unleashed by members of the engineering team from Bronx High School. So my belief is justified (it was formed by normally reliable visual inspection), true (there is indeed a dog in the park), and arrived at without relying on any false premise [2].

But here the "justification" clearly involves the false premise that what looks like a dog is a dog. Without that premise, any claim of justification "by direct observation" is just clearly nonsense.

What is “Natural Variability”?

October 15th, 2014

Briggs is right to complain that "natural variability" is an ambiguous and easily abused term, but what I would be most inclined to use it for is different from either of the usages that he identifies.

He notes that some use "natural variability" of a phenomenon (such as some average of temperature measurements) to refer to the actual values taken by the data at different points in time, and others use it for the values that would be expected in the absence of some "unnatural" factor (such as CO2 emissions from human use of technology). But to me it seems much more natural to use it to refer to the unexplained deviations of the data from what would be predicted by a (partially) explanatory model.

I misunderstood Briggs' claim that theoretical and/or statistical modellers claim to "skillfully" predict the natural variability in his first sense as meaning that they claim to predict it completely or accurately, whereas he was referring to the technical definition used in meteorology where one prediction is said to be relatively skillful compared to another if its mean squared deviation from the observed data is less. But this depends both on the reference model used for comparison and on the interval over which the comparison is made. A model that is skillful over a long interval may well have substantial shorter intervals over which it is not skillful, and even though a prediction of an upward trend in global temperature may appear not to be skillful over the interval from 2008 to 2014, that made by Arrhenius in 1898 does seem to be so (and would be even more so if he had predicted a faster rate of increase by reducing his estimated doubling time for CO2 to account for the subsequent increase in both population and per capita energy use).
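To illustrate that dependence on the comparison interval (with synthetic numbers, not real temperature data), here is a sketch of the usual skill comparison - a forecast counts as skillful relative to a reference if its mean squared error is smaller - showing how the verdict can flip on a short window:

    import numpy as np

    years = np.arange(1900, 2015)
    trend = 0.008 * (years - 1900)                              # a crude long-term warming forecast
    wiggle = 0.15 * np.sin(2 * np.pi * (years - 1900) / 60.0)   # stand-in for natural variability
    observed = trend + wiggle

    def skill(forecast, observed):
        # positive when the forecast beats a "no change" reference (the interval mean)
        mse_forecast = np.mean((forecast - observed) ** 2)
        mse_reference = np.mean((observed - observed.mean()) ** 2)
        return 1.0 - mse_forecast / mse_reference

    print(skill(trend, observed))                  # clearly positive: skillful over the whole century
    recent = years >= 2008
    print(skill(trend[recent], observed[recent]))  # negative: the same forecast looks unskillful over 2008-2014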

Misreading Statistics

October 11th, 2014

Briggs points out a very important real issue here – though I wouldn’t call the confusion between group and individual differences a matter of “exaggeration” exactly.

No matter how carefully one tries to express a claim about population differences, the risk of feeding prejudice about individuals is always substantial – so much so that I think there may be many true statements that would be best left unsaid.

OneTab shared tabs for 2014-10-06

October 6th, 2014

OneTab shared tabs.

The Story of a Disaster

October 5th, 2014

On reading the comments on this old Tyee article I was struck by one particular exchange which follows a pattern that is sadly all too common.

When commenter Chris Abel made the claim that "NOBODY was harmed by radiations at Fukushima" the response by G West included a number of links which were presumably intended to convey the impression that in fact MANY were harmed "by radiation". But if one reads them it becomes clear that the claims of actual and prospective medical harms due to radiation (while obviously non-zero if one includes delayed rather than just immediate effects) are really very modest (and essentially negligible in comparison to the number of deaths and injuries caused by the tsunami itself).

For example the first link is to a report from Physicians for Social Responsibility which, while striking me as somewhat alarmist, does stick reasonably close to actual facts and so does not make any claims as to the actual number of expected morbid or fatal outcomes.

The second link is to the World Health Organization which reports that:

The WHO report ‘Health Risk Assessment from the nuclear accident after the 2011 Great East Japan Earthquake and Tsunami based on preliminary dose estimation’ noted, however, that the estimated risk for specific cancers in certain subsets of the population in Fukushima Prefecture has increased and, as such, it calls for long term continued monitoring and health screening for those people.

Experts estimated risks in the general population in Fukushima Prefecture, the rest of Japan and the rest of the world, plus the power plant and emergency workers that may have been exposed during the emergency phase response.

“The primary concern identified in this report is related to specific cancer risks linked to particular locations and demographic factors,” says Dr Maria Neira, WHO Director for Public Health and Environment. “A breakdown of data, based on age, gender and proximity to the nuclear plant, does show a higher cancer risk for those located in the most contaminated parts. Outside these parts - even in locations inside Fukushima Prefecture - no observable increases in cancer incidence are expected.”

In terms of specific cancers, for people in the most contaminated location, the estimated increased risks over what would normally be expected are:

all solid cancers - around 4% in females exposed as infants;
breast cancer - around 6% in females exposed as infants;
leukaemia - around 7% in males exposed as infants;
thyroid cancer - up to 70% in females exposed as infants (the normally expected risk of thyroid cancer in females over lifetime is 0.75% and the additional lifetime risk assessed for females exposed as infants in the most affected location is 0.50%).
For people in the second most contaminated location of Fukushima Prefecture, the estimated risks are approximately one-half of those in the location with the highest doses.

The report also references a section to the special case of the emergency workers inside the Fukushima NPP. Around two-thirds of emergency workers are estimated to have cancer risks in line with the general population, while one-third is estimated to have an increased risk.

The almost-200-page document further notes that the radiation doses from the damaged nuclear power plant are not expected to cause an increase in the incidence of miscarriages, stillbirths and other physical and mental conditions that can affect babies born after the accident.

For the benefit of any readers who can't read, the "up to 70%" increase (in usually non-fatal thyroid cancers) does NOT mean that 70% of those exposed (in the worst area) will get the cancer but that the number of cancers might increase by 70% of what it was previously - ie from about 1.5 in 200 to 2.5 in 200 - and the increases in other cancers are all by less than 10% of the background rate. So the actual number of expected extra cancers is indeed quite small.
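Spelling out that arithmetic (using the figures quoted from the WHO summary above):

    baseline_lifetime_risk = 0.0075   # 0.75% normal lifetime thyroid-cancer risk in females
    additional_risk = 0.0050          # 0.50% extra assessed for the most exposed infant girls

    print(additional_risk / baseline_lifetime_risk)   # ~0.67, ie the "up to 70%" *relative* increase

    print(200 * baseline_lifetime_risk,                       # ~1.5 cases per 200 without the accident
          200 * (baseline_lifetime_risk + additional_risk))   # ~2.5 cases per 200 with it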

But poor Mr Christian Abel apparently didn't bother to follow the links, and, just like many other readers, assumed the WHO supported G West in contradicting him - and so resorted to foolishly dismissing them without realizing that they essentially supported his assertion (even to the extent that they were criticized in another of G West's links).

G West's third link is to the Health Physics Society which in turn links to many useful sources - most of which are consistent with the assertion by Robert Gale in their panel discussion of the event that (with regard to probable increases in the lifetime cancer rate over Japan's pre-Fukushima rate, which was about 50%) "You can see that these are incredibly small increases that would never be detectable, especially in light of a very steeply increasing incidence in cancer deaths in Japan over the last 60 years."

Next is an article which asserts that:

In theory there is a possibility of cancer among people exposed in the accident at the Fukushima Daiichi NPP. Assuming the LNT model represents the reality of radiation-induced cancer at low doses, however, significant excess risk due to exposure is unlikely to be detected for the emergency workers and the public living around the site unless their doses have been seriously underestimated

and concludes that:

In the Fukushima accident, no acute radiation injuries have been observed even among people associated with the operation of the plant or responding to the accident in contrast to the Chernobyl accident where a number of people suffered acute radiation injuries. The anxiety among most of the civilian population is the future increase in the possibility of tumorigenesis.

West's fifth link is to the rebuttal of WHO that I mentioned above (about which he helpfully says "And I suppose you thing [sic] these guys are biased too?" as if to identify them as even more mainstream than the rest, when in fact, whether right or wrong, they are by far the most extreme in their assessment of the likely harm).

Finally, the last link is to the Science Daily report of a Stanford University study which unfortunately does set up the straw man claim that "There are groups of people who have said there would be no effects", but the effects it does claim are really quite modest (even at the high end of a very wide range).

Radiation from Japan's Fukushima Daiichi nuclear disaster may eventually cause anywhere from 15 to 1,300 deaths and from 24 to 2,500 cases of cancer, mostly in Japan, Stanford researchers have calculated.
...
The numbers are in addition to the roughly 600 deaths caused by the evacuation of the area surrounding the nuclear plant directly after the March 2011 earthquake, tsunami and meltdown.

So the expected number of cancer deaths is anywhere from many times less to maybe a couple of times more than the number of people killed by the decision to evacuate. (Not to mention the almost 20,000 immediate fatalities resulting from the tsunami itself and the totally ignored number of cancers etc that may result from other kinds of pollution caused by the destruction of various toxic chemical repositories!)

God, Darwin and the College Biology Class

September 30th, 2014

David Barash apparently thinks it appropriate to discuss religion in his Biology class.

But his understanding of both religion and biology is flawed (ie not in accord with my own). ...more »

Dawkins Needs Better Friends (not Defenders)

September 23rd, 2014

Adam Lee thinks that Dawkins Needs Better Defenders after having (again) posted a series of tweets that undermine his credibility as a serious thinker.

But so far as I can see, it's not better "defenders" that he needs, but better *friends* - ones with the sense to recognize, and the courage to say, when he has gone off the rails. Sadly he lacks the wit to recognize that Adam Lee and Ophelia Benson are probably the best and most useful friends he could have, and so, like an ageing rock star or corrupt monarch, he continues to rely on the usual crop of groupies and sycophants to support him down the path to perdition (and irrelevance).

On David Frum and the Non-Faked 'Fake' Photos

August 1st, 2014

James Fallows at The Atlantic congratulates his friend and colleague David Frum for finally apologizing (a whole week later despite prompt and incontrovertible correction from several other reliable sources) to one of the four slandered photographers (and none of the traumatized victims). And Frum then has the gall to use his "apology" as an excuse for repeating the accusation against other unnamed parties, and to describe as "skepticism" his uncritical acceptance of a source who is self-identified as unreliable. (See also this from the Washington Post).

More on Learning Theories

July 31st, 2014

Tony Bates on Learning theories (via Stephen Downes)

Measurement in QM

July 27th, 2014

This set of OneTab shared tabs collects some recent blog activity on the measurement process in Quantum Mechanics. It reminds me that, as a graduate student (in the '70s), on hearing once too often that the measurement process was a mystery because unitary evolution cannot take a pure state into a mixed state, I thought I had something useful to say on the matter - but was pointed to the discussion in von Neumann's book, and to later elaborations by Jauch and Hepp, which seemed to deal with the issue along the same lines (namely by modelling the measured and measuring systems together as a tensor product of a pure state of the former and a mixed state of the latter, which could evolve unitarily in such a way that the marginal state of the measured system does evolve from pure to mixed).
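A toy two-qubit version of that idea (just a sketch, not von Neumann's or Jauch and Hepp's actual constructions) can be checked in a few lines: the joint state of the measured and measuring systems evolves perfectly unitarily, yet the reduced state of the measured system alone goes from pure to mixed.

    import numpy as np

    # measured system: the pure state |+> = (|0> + |1>)/sqrt(2)
    plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
    rho_sys = np.outer(plus, plus)

    # measuring apparatus: a (maximally) mixed state
    rho_app = 0.5 * np.eye(2)

    rho_joint = np.kron(rho_sys, rho_app)      # initial joint state: pure (x) mixed

    # a unitary coupling (a CNOT in which the apparatus "registers" the system's basis state)
    U = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    rho_after = U @ rho_joint @ U.conj().T     # strictly unitary evolution of the whole

    def system_marginal(rho):
        # partial trace over the apparatus, leaving the measured system's reduced state
        return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    def purity(rho):
        return np.trace(rho @ rho).real

    print(purity(rho_sys), purity(system_marginal(rho_after)))   # 1.0 -> 0.5: pure has become mixed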


Copenhagen Interpretation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)

Sardonic comment: The measurement problem in QM

Spherical Harmonics: On quantum measurement (Part 2: Some history, and John von Neumann is confused)

Spherical Harmonics: On quantum measurement (Part 3: No cloning allowed)

The Many Worlds of Quantum Mechanics | Sean Carroll

Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)

Quantum Mechanics Smackdown | Sean Carroll

Why Probability in Quantum Mechanics is Given by the Wave Function Squared | Sean Carroll