Radia Perlman is another woman with a major place in the history of computing: her spanning tree algorithm greatly facilitated the creation of the internet. This interview with Rebecca J. Rosen in 'The Atlantic' includes a number of interesting points, but what strikes home the most for me is at the end, where she talks about the questions of luck and priority.
Progressive media are generally approving and maybe even right that it's not a question of censorship. But Ophelia Benson is right about the offensive and dishonest wording of the Brandeis announcement.
Ayaan Hirsi Ali's courage in the face of unspeakable abuse and her resolution to protect others from the same thing may be marred by identifying the source of the abuse both too broadly (all of Islam) and too narrowly (not including similar tendencies in other religions). And she may well be wrong in other ways as well. But there isn't an educated person on this planet who doesn't know damned well that she reviles Islam and seeks its transformation by "defeat".
I don't know if I would ever offer her an honorary degree, but I hope that if I did I would never make the egregious claim that I hadn't known her opinions when I made the offer.
While the predictable reaction to Brendan Eich's appointment as CEO at Mozilla does make one wonder why they made it, it also raises the question of whether either that reaction or the response was appropriate.
One could argue that publicly taking any potentially unpopular political position should be considered as disqualification from any future CEOship of an enterprise that depends on a broad base of customers and contributors, but that would both limit the available talent pool and unfairly restrict the political lives of potential candidates.
Those who take pride in a "progressive victory" here should consider how they would react if the shoe was on the other foot.
Middle that is.
Why should there be only two choices? asks philosopher Barry C Smith in an article/interview at the Institute of Art and Ideas, and I agree that the binary nature of logic may be as much a reflection of how our minds work as an absolute aspect of reality. But his examples are odd.
There are certainly mathematical propositions that are neither provably true nor provably false (from standard axioms by first-order logic), but merely being unproved (like his Goldbach Conjecture example) does not guarantee being one of those. On the other side, he identifies acid vs alkaline, and being Carbon or not, as true binary situations in the real world. But pH varies continuously from one extreme to the other, and some large molecules may behave as acid in some configurations or contexts and alkaline in others. And it seems quite conceivable (in principle) that in a suitable (but possibly impracticable) intersection of electron and neutrino beams a C14 nucleus might undergo a series of stimulated beta emissions and absorptions with oscillations fast enough that, due to quantum uncertainty, one could not say whether at any one instant it was Carbon or Nitrogen. Indeed it is likely that the identity of any particle, no matter how stable, cannot really be said to be absolutely this or that if we take account of all possible real and virtual quantum phenomena.
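The distinction between "unproved" and "unprovable" can be made concrete: for any particular even number the Goldbach Conjecture is easy to check by brute force, even though no proof of the general statement is known. A minimal sketch in Python (my own illustration, not from Smith's article):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n > 2, or None.

    Finding a pair for any given n is trivial; proving that one always
    exists (Goldbach's Conjecture) is the open problem.
    """
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 1000 has such a pair -- but no finite
# check of this kind can ever settle the conjecture itself.
assert all(goldbach_pair(n) is not None for n in range(4, 1001, 2))
```

The point is that failing to find a counterexample (or a proof) leaves the proposition merely unproved, which is a different status from being provably undecidable in the Gödel sense.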
The limits of reason are also explored at AIA in a video (which might be interesting but tl;dw), and Michael Potter discusses the origins and limits of modern logic, including both the "linguistic turn" which seems to be about the attempt to define a perfectly rigorous formal language and the philosophy of ordinary language that is sometimes called "linguistic analysis" (which strikes me as pretty much the opposite so it's no wonder simpletons outside the field get confused - perhaps that's why they do it!).
But when it comes to the "origins" bit he describes Frege's "polyadic quantification logic" as "enormously more powerful than anything ...since Aristotle" - which I would say undervalues Boole (and others of his ilk).
Interestingly Boole's wife helped in his work and considered it to have been influenced (via her uncle George Everest) by ideas about logic from Hindu philosophy. She also wrote this and was an early proponent of both cooperative learning and what illiterate educators now call "manipulatives" (even though what they really are is manipulable).
Hangers-on at the edge of academic philosophy often challenge the lack of respect for their purported discipline in a way that undermines both the respect they want to encourage and the employment prospects that they presumably hope to enhance. Here however is something much better (though I suppose some metaphysicists might not be so keen on it).
Metaphysics is an illusion that besets philosophers and philosophically-minded scientists from generation to generation, which it is the task of good philosophy to dispel. But although periodic fumigation is recurrently necessary for intellectual health, what else is there for philosophy to do? What can it achieve? In the sense in which the sciences have a subject matter, it seems, philosophy has none. In the sense in which the sciences construct theories that are confirmed or infirmed by experiment or observation, there are obviously no theories in philosophy. In the sense in which the sciences make discoveries about the world around us, philosophy clearly does not. So what is its task?
We must challenge the thought that philosophy aims to contribute to human knowledge of the world. Its task is to resolve philosophical problems. The characteristic feature of philosophical problems is their non-empirical, a priori character: no scientific experiment can settle the question of whether the mind is the brain, what the meaning of a word is, whether human beings are responsible for their deeds (i.e. have free will), whether trees falling on uninhabited desert islands make any noise, what makes necessary truths necessary. All these, and many hundreds more, are conceptual questions. They are not questions about concepts (philosophy is not a science of concepts). But they are questions that are to be answered, resolved or dissolved by careful scrutiny of the concepts involved.
Here "scrutiny of the concepts" is intended a bit more strongly than just "clarification of the language" but does not stray into the territory of claiming to establish what concepts really do mean as if that were something more than just what we are meaning when we think of them.
But I do have some quibbles.
For one thing I would have preferred to see some mention of the value of just addressing questions without necessarily expecting to ever fully "resolve" them (either by answer or dissolution).
And concerning the philosophy of science he says
At a more specialised level, philosophy is a technique for examining the results of specific sciences for their conceptual coherence, and for examining the explanatory methods of the different sciences – natural, social and human. The sciences are no more immune to conceptual confusion than is any other branch of human thought. Scientists themselves are for the most part ill-equipped to deal with conceptual confusions.
Though I might balk at the "any" in the second to last sentence (since there are some branches of "human thought" which are so conceptually confused as to be embarrassing to anyone associated with them), my only real concern is with the last, where "for the most part" is, I suspect, an extrapolation from a very biased exposure to actual scientists (in particular dominated by those who are keen on talking to people outside their own discipline). It is not so much the apparent insult to scientists that concerns me though, but rather the presumption by omission that philosophers are better-equipped.
Indeed, the claimed uniqueness of philosophy occurs more explicitly elsewhere as well.
At a very general level, it is a unique technique for tackling conceptual questions that occur to most thinking people
Actually it includes a collection of techniques and strategies that can be called "unique" only if you define philosophy to comprise all thought about "conceptual" questions regardless of whether it has occurred in the mind of someone publicly identified as a "philosopher". Which is fine, but perhaps changes the interpretation of "study Philosophy" from what was intended.
Hacker's last three paragraphs are great and point to the real practical utility of training in the subject - which is more to provide facilitators who may help us understand one another than to send arbitrators to tell us who is right.
But in the end, isn't the best reason for doing anything just "because we enjoy it"?
Both the number of people who share my view (and arguably express it better than I do) and my inability to quickly convince everyone else, in the long discussion thread on this post at Ophelia Benson's blog, make me fear that Russell Blackford too might miss the uniquely devastating brilliance of my demolition of Sam Harris' thesis (for which the most successful argument as judged by Blackford will earn $2000 - and $20000 more if Harris himself concedes the point).
ThinkProgress asking Is The Solution To Climate Change In Vancouver? is for me a sad reminder of how the political party that I supported totally misused my contributions. Thanks a bunch, BillT and CarolJ.
In a post On Returning the Lost | Talking Philosophy, philosopher Mike LaBossiere raises the question as to whether his habit of returning found wallets without removing the money is abnormal.
I should hope not! But in fact one commenter confessed to the opposite practice (which I'm afraid drew me into a rather heated exchange - prompted in particular by his self-serving presumption about the relative wealth of the wallet's owner).
In any case, the question of actual statistics is interesting, and another commenter referred to experiments in which wallets have been deliberately "lost", so I thought it might be worth reporting on another such experiment:
We only have about half a dozen sample points so far, but my wife seems intent on running a long term observational study of this matter and in Vancouver BC has had 100% return of the wallet, 50% with cash included and 50% cash removed (including one case where the wallet returned for reward had been in a dropped bike pannier with other items which were never recovered).
Although the financial benefit of getting the money back is usually quite small (at least in relation to other matters) the sense of faith in one's fellow humans that results from such an event is quite wonderful - and I am pretty sure that benefits of that sort continue to multiply.
In a post about the recent WhatsApp sale, Robert Reich points out what was obvious to everyone when I was a teenager (more than half a century ago!) - namely that the needs of billions can be provided for by hundreds and so that if it weren't for the massive arrogation of communal goods by a greedy few we would all have been living lives of relative comfort and minimal required work well before the end of the last millennium.
Robert Kurland, who self-describes as a "Retired, cranky, old physicist. Convert to Catholicism in 1995.", responded to a comment I made on an earlier post about his path to religion.
My comment on his latest post was as follows:
Thanks Bob for responding - and for helping me in my (still ongoing) effort to gain a better understanding of the nature of your belief. I hope you don't mind my repeating some of what I said already in our email correspondence (partly to explain the context, and partly to gain further clarification from you).
One thing I mentioned in our email exchange was that my own understanding of the word "counterfactual" is that it refers to a hypothesis that we know is false (eg that Mount Baker had a massive eruption in 2012) rather than one which we may think unlikely but about which we do not yet have any confirmed facts (eg a story based on the hypothesis that the now very mildly active Mount Baker will have a massive eruption in 2024 - which I would only consider counterfactual if it were the case right now that Baker had been stone cold dormant for long enough for geologists to be almost certain that such an eruption in 2024 is not possible). The only reason I repeat this quibble is because if we are using words the same way, then your use of "counterfactual" tells me something about the strength of your belief - ie that you are essentially certain that no evidence against the resurrection could ever be found which would be able to alter the strength of your belief. Indeed this is one way that I could interpret your conclusion that "I take Alan's proposition as a counterfactual--conceivable in an alternative (hypothetical) universe, but not possible in ours" - except for the doubt remaining in my mind as to whether it is my hypothetical *premises* which could never be conceivable to you in our universe or just your proposed *conclusion* (which I never actually claimed to follow from the premises - though I certainly meant to imply that they would increase its credibility).
This brings me to a second comment re this post - just that I would like to emphasize again that I wasn't proposing the hypothetical future evidence as necessarily proving the "Conclusion: Jesus was not resurrected", but rather just that if such evidence did transpire then it might serve to weaken the strength of your evidence-based conviction that he was.
I certainly appreciate the sentiment of your final paragraph, and in fact I have often thought that we should draw a distinction between belief and faith. Is it possible to have faith in the resurrection of Jesus as a redeeming concept with which humanity has been blessed, rather than as an actual historical event which we believe really happened? If so, then it may also be possible to retain that faith while believing that the actual physical event did not ever really happen. And if so (and if it were widely advertised to be so) then there would be at least two positive consequences - more people could share in the blessings of faith, and fewer people would feel that the use of reason poses a threat to their receipt of those blessings.
Matt, your very first claim is nonsense: "Evil is a problem for atheists because, for them, it does not exist absolutely"
Lucia bases this response on the claim to be an atheist who does have a definition of absolute evil, but my reasons are different from (in fact in a sense opposite to) lucia’s.
I don’t know if I qualify as an atheist, but for me good and evil do not exist absolutely – and for me that is not a problem.
So far as I have ever been able to tell, “good” and “evil” are just words used by people to label certain behaviours that they feel compelled to encourage or (resp.) discourage (usually on the basis of effects of such behaviours on the perceived welfare of the family, tribe, or super-tribe, rather than on the immediate well-being of the individual); and they tend to have the desired effect by virtue of being connected to approval and shaming since infancy in a brain which evolved over many generations to manage the behaviour of a social animal so as to be successful in its context by responding to approval and shaming signals from its peers.
Certainly Briggs' demand for a definition was odd, since the obligation to define a word must surely fall on the one who uses it. But on further reflection I am tempted to actually use and define the word "evil" as something distinct from merely "bad". Because, for all the risk of harm he brings to the world with his denialism, I would be inclined to say Briggs, though maybe "bad", is not "evil". And the reason I deny him that label is that I don't suspect any real wish to hurt others in his vainly posturing behaviour of picking holes in the arguments of AGW advocates and left-liberal politicos rather than seriously considering the overall picture. So I suppose that it is malicious intent - or any other kind of deliberate overriding of the dictates of conscience - that underlies my own conception of evil (something like what the religious might describe as knowingly making a pact with the devil).
So it seems that for me the distinction between "bad" and "evil" is a matter of intent. But, as usual with words, there is a shading of meaning. We might consider a person who cannibalizes children to be evil even if he was a psychopath without conscience (though perhaps less so if we knew he was delusional). I think though that the reason for this is more out of inability to imagine the truly psychopathic state than out of really extending the definition. (After all we probably wouldn't use the word evil if the psychopath were replaced by a baby-eating lion).
So perhaps I do have an absolute definition of "evil" as deliberate action contrary to the dictates of conscience - even though it is one I can never truly test with regard to another person because I don't have access to their internal mental processes. Despite that untestability, the evilness of any particular act is either true or false independent of the observer (though not the actor). But then being evil is not a property of the act itself but rather of its relation to the conscience (or perhaps, to the religious, the "soul") of the actor. So even though the definition of evil is absolute, the evilness of a particular action is relative.
Of course, if I am going to suggest that Briggs, though probably not evil, is maybe "bad" then I guess I do need to say what I would mean by that as well.
Even though I do use those words I do not have an absolute definition of “good” and “bad”, but as I said before, for me that is still not a problem.
When I say some action is “bad” or “wrong” I am merely expressing my own feelings and I do not believe that anyone else’s such assertions have any more absolute (ie observer-independent) content than my own.
This does not mean that discussions of ethics (and aesthetics) are pointless, but logical argument may play only a small part in them. How we feel about things influences our behaviour and my own sense of ethics does not rule out trying to persuade others to change their ethical and aesthetic positions. Since the object of such arguments is more to change feelings than opinions I have no objection to the use of appeal to emotions in such arguments. (The only problem is that if it’s so blatant that the manipulative intent becomes clear then it might not be effective.)
Note: Really this wouldn't have been a bad post if Briggs could only suppress his tendency to throw in egregious straw men at every opportunity. It's especially discouraging when he raises a topic about which there might be some interesting things to say, which then get smothered and lost in the overwhelming mass of "cleverly" inserted straw.
This leaves me with more sympathy than I expected to have for the Syrian gov't position - specifically with regard to the need for the terms to explicitly disavow terrorist Wahabi elements in the opposition. And I am also puzzled by the exclusion of Iran when the Saudis are included. Ban Ki-moon's weird invitation certainly looked inept though - but perhaps he was playing at a different level from what appeared on the surface. (But from this account I would have to say that the Americans fucked it up - deliberately.)
P.S. The live breaking news format is engaging but in this context I don't like reverse chronological order and would prefer to see chronological order with latest displayed and a scroll-back option.
Kenan Malik starts the year with a preview of FIVE BOOKS TO ARGUE WITH.
The best kind of book, to my mind, is the kind of book you can have an argument with. Not a book so wrong that I want to throw it across the room, but one that I disagree with and yet find challenging enough to force me to re-examine my own views, and often to put down my disagreements in writing to help me better to clarify them.
For me, not being much of a historian, the first, fourth, and last on Malik's list look most interesting.
He starts with Joshua Greene's 'Moral Tribes: Emotion, Reason, and the Gap Between Us and Them' which ties in nicely with my own interest in how we can control the dangerous effects of our tendency to form aggressively competing identity groups.
This ties in with his own forthcoming book 'The Quest for a Moral Compass' parts of which apparently come from a talk on ‘Science, morality and the Euthyphro dilemma’; a review of Sam Harris’ The Moral Landscape; and a critique of Alex Rosenberg’s moral nihilism.
Roger Scruton's 'The Soul of the World', according to the blurb as excerpted by Malik, " ‘defends the experience of the sacred against today’s fashionable forms of atheism’. For Scruton, ‘To be fully alive – and to understand what we are – is to acknowledge the reality of sacred things’. The book is not ‘an argument for the existence of God, or a defense of the truth of religion’ but ‘an extended reflection on why a sense of the sacred is essential to human life – and what the final loss of the sacred would mean' "
I will be curious to see how Scruton sees 'today’s fashionable forms of atheism' as necessarily threatening that basic 'sense of the sacred' - and also to see how Malik responds.
The last book on Malik's list, Nicholas Wade's 'A Troublesome Inheritance: Genes, Race, and Human History' may well be a dangerous and unnecessary exploration, but just because a concept can be misused doesn't mean it has no meaning, and despite Malik's reasonably thoughtful but ultimately disappointing effort at fence-sitting I don't think (as I have previously noted here, here, and here) that we can realistically deny the possibility of defining "race" (in various ways) as a possibly useful scientific concept. Whether we should make use of it is another matter and I remain nervous about where Wade may want to go.
Re Malik on the fence:
It is always a bad sign when people throw around expressions like "85% of variation" without saying what they mean, and Malik's failure to understand the Lewontin Fallacy had to be pointed out to him in the comments by Lou Jost.
I am also inclined to expect nonsense when I read an unsupported bald pedantic claim that starts with "Any scientific classification must.." and was confirmed in that expectation by Malik's insistence on "a classification system must be complete and able to absorb even those entities not yet identified." But even if there were such a well-defined concept of "scientific classification" there are lots of scientific concepts which are not "classifications" in that sense (and any plausible concept of race is almost certainly one of them).
Finally, in his criticism of a 2003 paper in the New England Journal of Medicine that made the case for ‘The importance of race and ethnic background in biomedical research and clinical practice’, he makes much of the use of standard biomedical terminology on one side versus "looseness of the language" on the other. I don't intend to check whether these authors actually provided explicit procedures for how they identified people with "racial" labels (although they certainly could and should have!), because I am more interested (here) in what is possible than what happened in this particular case. But even if it was just self-identification into clearly defined categories, if that self-identification correlates with a choice of optimal treatment that cannot be cost-effectively determined any other way then it is indeed both scientifically and practically meaningful.
This article, like most of Le Nguyen Hoang's work, brings together a wonderful mix of ideas and resources and is mostly great. But I am a bit worried that it might leave some readers with the idea that kin selection and group selection are mutually exclusive alternatives when in fact I doubt that anyone in the "group selection" camp denies the importance of kin selection.
The "war" is I think between those who propose methods of group selection that are not explained by kinship and those who insist that everything can be explained by a concept of "inclusive fitness" defined in terms of kinship defined *only* by the family tree (and who also insist that nothing else can possibly happen).
With regard to what will eventually disappear, I don't think there's any chance that kin selection will disappear, since everyone can see that it sometimes works. As for non-kin-based group selection, I can see ways it could work; and so, given the infinite inventiveness of nature, I am sure we will eventually identify examples where the pure-kin explanation does not suffice. So I like the article's concluding reference to Max Planck. What will eventually disappear are those who are stuck on a currently favourite theory (only to be temporarily replaced of course by those who get stuck on the next one).
To a late riser like me it seems that the days are already getting longer even though it is still almost a week before the actual winter solstice. But this is not an illusion, as the sunsets really are starting to happen later here, even as the length of time from sunrise to sunset continues to shorten.
How can this be? Well, the sunrises are actually getting later too, because in addition to getting longer and shorter, the days actually "move around" a bit (in the sense that the time that a regular clock gives as noon, even on the exact meridian of a time zone, is not exactly the time when the sun is highest in the sky). This is due to a fairly subtle effect of the earth's tilt combined with a slightly smaller effect of its orbital eccentricity (the fact that its orbit is an ellipse rather than a perfect circle). These effects cause the length of the entire (noon to noon) day to vary with the seasons, so that the clock's equal 24-hour days cannot all be exactly centred on solar high points (so noon on a sundial moves back and forth relative to noon on a clock). Popular accounts of this phenomenon frequently relate it only to the eccentricity, which actually gives the smaller effect, but this article in The Atlantic is closer to the mark in that it gets the sizes of the two effects right. (It does however give a wrong impression of how the obliquity effect works, because it implies that it's like the difference between summer and winter when in fact it is between solstices and equinoxes. At both solstices the noon-to-noon time is about 20 seconds longer than average, and at both equinoxes it is about the same amount shorter.)
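The combined effect described above is the classical "equation of time" (sundial noon minus clock noon), and a standard textbook approximation makes the two components easy to see: one term cycles twice a year (the tilt effect, peaking at the solstices and equinoxes) and the others cycle once a year (the eccentricity effect). A sketch in Python, using that widely quoted approximation rather than anything from the article itself:

```python
import math

def equation_of_time(day_of_year):
    """Approximate equation of time, in minutes, for day_of_year in 1..365.

    Positive means the sundial is ahead of the clock. The sin(2B) term
    is the obliquity (tilt) effect, which cycles twice a year; the
    single-frequency cos(B) and sin(B) terms are the orbital-eccentricity
    effect, which cycles once a year.
    """
    B = 2 * math.pi * (day_of_year - 81) / 365
    return 9.87 * math.sin(2 * B) - 7.53 * math.cos(B) - 1.5 * math.sin(B)

# Near the December solstice this value is changing by a couple of minutes
# per week, which is why the latest sunrise and earliest sunset do not
# fall on the shortest day.
```

Plotting this over a year shows the familiar lopsided double wave, with extremes of roughly +16 minutes in early November and -14 minutes in mid-February.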
This effect has been well understood for (at least a couple of hundred) years but first came to my attention when a colleague asked me to look at an essay in which one of his students seemed to have figured it out for himself!
Last week we got several reminders of the significant roles played by women in what is often perceived these days as a rather male-dominated and even somewhat macho field.
On Dec 6 Maria Popova's Brain Pickings site pointed to a tribute video about holocaust survivor and 1960's software entrepreneur 'Steve' Shirley (produced by Google about 3 months ago as part of their 'computing heritage' series). Then on Dec 9 the Google Doodle was in honour of the birthday of Grace Hopper, who created the first compiler in the 1950's, developed COBOL, and popularized the term "debugging". And it may have been that which prompted the Christian Science Monitor to do a piece on a number of important female pioneers of computing - going all the way back to Ada Lovelace, who worked with Charles Babbage on the idea of programmable computers long before electronic versions were available (and whose birthday just happens to have been the very next day - but no doodle for her this year as she already got that honour last year).