I commented at Stephen Downes’ website on Patricia Kuhl’s TED talk about “The Linguistic Genius of Babies”. My quibble was less with the content than with the sentimentalized headline, because, although the babies’ brains do appear to implement a sophisticated statistical algorithm (to identify the phonemes relevant to the language of their community), there is of course no serious suggestion that they actually understand the process, any more than our immune system understands the “algorithms” by which it operates or snowflakes and other crystals understand the symmetry groups which govern the way they construct themselves.
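(To make concrete the kind of “statistical algorithm” I have in mind, here is a toy sketch, entirely made up for illustration and nothing like Kuhl’s actual models, in which two phoneme-like categories fall out of nothing more than the distribution of a single acoustic cue over many heard tokens.)

```python
# Toy sketch of "distributional" phoneme learning (illustrative only).
# Pretend an infant hears many tokens of a consonant varying along one
# acoustic cue (voice onset time, in ms) and simply fits a mixture model
# to the cue's distribution: a two-mode distribution yields two categories
# (roughly /b/ vs /p/), a one-mode distribution would yield only one.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical exposure: half the tokens cluster near 10 ms, half near 60 ms.
tokens = np.concatenate([rng.normal(10, 5, 500),
                         rng.normal(60, 8, 500)]).reshape(-1, 1)

# Compare a one-category model with a two-category model and keep the better fit.
one = GaussianMixture(n_components=1, random_state=0).fit(tokens)
two = GaussianMixture(n_components=2, random_state=0).fit(tokens)
best = two if two.bic(tokens) < one.bic(tokens) else one

print("categories discovered:", best.n_components)
print("category means (ms):", np.round(best.means_.ravel(), 1))
```

The point of the toy is only that the “algorithm” here is nothing but curve-fitting over exposure statistics; at no step does the fitter need to know what it is doing.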
Stephen agreed, saying that to call the babies’ mental process an algorithm (presumably with the understanding that an algorithm is an intentionally arranged process) amounts to “rationalization after the fact of a process that is based in neural networks”. This struck me as very apt (although I might allow for extending the use of “algorithm” to cover also a non-intentional sequence of operations that produces a result we can recognize as the solution to something we might have posed as a problem), but it left me feeling a bit nervous about my dismissal of John Searle’s Chinese Room argument (as well as about my own understanding of statistics, or of anything else for that matter, as these too are, in my opinion, probably all processes based in neural networks – so what makes my understanding different from that of the baby?).
Just to recap: the Chinese Room argument was intended by John Searle to defeat “strong AI” (the hypothesis that human reasoning can be duplicated by an algorithm). The argument runs as follows: strong AI allows us to imagine a (large) room in which someone ignorant of the Chinese language follows the instructions of an (exceedingly complex) algorithm (designed by someone else) for producing sensible Chinese-language responses to questions posed in Chinese; since the person in the room clearly does not understand Chinese, Searle argues that this is somehow a contradiction.
Searle’s argument is silly (at least in the form he presented in Scientific American or as summarized in the SEP), but the Chinese Room is still useful as a kind of thought experiment or pedagogical tool for testing our understanding of some of these issues, and in particular it may help us deal with the distinction between me and a baby with regard to whether “understanding” exists and where it resides. In the Chinese Room the operator is of course a red herring: if the algorithm is effective then the whole system could be managed by a machine, and it is the algorithm or program itself which “understands” Chinese. Correspondingly, in a neural network “theory” (I have to be careful with that word!) it would presumably be some pattern of connections among the neurons of a person’s brain which provides the physical correlate to our notion of that person understanding something. But just as a typical speaker of Chinese may not be able to answer questions about the algorithm for answering questions in Chinese, so the algorithm for responding to normal Chinese conversation need not answer questions about itself, though it may have to be capable of receiving information (in Chinese) which would enable it to do so in future. Similarly, the neural net of a baby may implement a statistical “algorithm” (if we allow the word to refer to a non-intentional process “designed” by evolution) without understanding it, and my understanding of statistics may be represented by a different network structure without my understanding how that network “works”.
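(A crude way to see the point about the algorithm not needing to describe itself, again just a made-up sketch and not a serious model of the Room: a lookup-table “conversationalist” answers the questions it was built for, but a question about its own workings simply is not in the table unless someone, or some further learning, puts it there.)

```python
# Toy illustration: a responder that "converses" purely by lookup.
# Nothing in its rules refers to the rules themselves, so questions about
# how it works fall through to a default, even though a new rule describing
# the system could always be added to the table later.

RULES = {
    "how are you?": "fine, thank you.",
    "what is the weather like?": "it looks like rain.",
}

def respond(question: str) -> str:
    return RULES.get(question.strip().lower(), "i have no idea what you mean.")

print(respond("How are you?"))                       # fine, thank you.
print(respond("How do you decide what to answer?"))  # i have no idea what you mean.

# "Teaching" the system about itself is just one more rule:
RULES["how do you decide what to answer?"] = "i look your question up in a table."
print(respond("How do you decide what to answer?"))  # i look your question up in a table.
```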
But if we ever do get a proper theory of neural networks modelling human understanding, then if I can understand that theory (as it applies to my own brain), my neural network will include a structure which represents my understanding of itself – which must in turn include a structure representing my understanding of that understanding, and so on.
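(For what it is worth, here is a back-of-envelope version of that worry, resting on the admittedly crude assumption that each further level of self-representation costs at least some fixed minimum amount of neural capacity.)

```latex
% Crude sketch, not a theorem about real brains: assume total representational
% capacity C is finite and each extra level of self-representation costs at
% least some fixed \epsilon > 0.
\[
\underbrace{\epsilon + \epsilon + \cdots + \epsilon}_{n \text{ nested self-models}}
\;\le\; C
\quad\Longrightarrow\quad
n \;\le\; \frac{C}{\epsilon} \;<\; \infty .
\]
% So the regress of understanding my understanding of my understanding ...
% can only be carried to some finite depth.
```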
And in the absence of infinite capacity this appears to be a proof that no matter how long I meditate I will never achieve the goal of complete self-knowledge. So I might as well go have another drink!