Or “In a Chinese Room, not far from the loo”

I have been a little unwell. Nothing serious, a stomach bug that my four-year-old daughter shrugged off without so much as a backward glance to check whether there was any puke in her curly locks (there was, we found it later) but which put dad in bed for two days. Rubbish? Me?

Of course being too sick to move far enough from the toilet for long enough to go to work but not so sick that you can’t sit up in bed with an endless supply of weak lemon drink does have advantages – like the chance to read, uninterrupted.

Which is how I came to be the last “fan” in Christendom to read Peter Watts’s novel Blindsight. Now I’m not sure exactly what I was expecting from the book, but whatever it was, Blindsight wasn’t it. That’s not to say there weren’t bits of the book I really liked – but there were also big bits of it that didn’t work at all for me. Like vampires. And perfunctory plotting. And evaporating alien threats.

But I don’t want to concentrate on the things I didn’t like – I want to talk about the stuff that interested me and that clearly interested Peter Watts more than the rest of the book. Particularly his discussion of consciousness, what it means and what it might be for.

I’ve been doing some of my own reading on consciousness and intelligence because I’m thinking about writing a story (that might be my first serious punt at a novel for over fifteen years) that deals with AI and identity and what a thinking machine might really be like.

Blindsight gives a run out to one of the most venerable (if that’s the right word in a field as young as artificial intelligence research) paradoxes in the theory of machine thinking – the Chinese Room.

The Chinese Room problem places a man in an almost completely sealed room. The man has a stack of tiles and a rulebook. The only openings into the room are two slots through which tiles can be passed. On the tiles arriving through one slot are markings that the man inside the room does not understand. The man consults his rulebook and, depending on what the book says about each set of markings, he selects tiles from his own stock and passes them through the other slot to the outside world.

The man doesn’t know that the markings on the tiles that he is receiving are one side of a conversation conducted in Chinese or that the markings on the tiles he is sending out are responses, also in Chinese.

Effectively the man in the room has been having a conversation in Chinese even though he speaks no Chinese. Indeed for the person passing the tiles into the room, not only is the “system” conducting a conversation but it is doing it so well that it meets the criteria for intelligence as set down by the Turing Test.
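The mechanics are easy to caricature in code. Here’s a minimal sketch – the tiles and rulebook entries are entirely invented for illustration – of a room that maps incoming tiles to outgoing ones by pure lookup, with nothing anywhere that could be said to know Chinese:

```python
# A toy "Chinese Room": the rulebook is a lookup table and the man
# applies it blindly. Nothing in here could be said to know Chinese.
# The tiles and rules below are invented purely for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很好。",  # "Nice weather today." -> "Yes, very."
}

def the_room(incoming_tile: str) -> str:
    """Match the markings on an incoming tile against the rulebook
    and push the prescribed tile back out through the slot."""
    return RULEBOOK.get(incoming_tile, "请再说一遍。")  # fallback: "Say that again?"

if __name__ == "__main__":
    print(the_room("你好吗？"))  # looks like conversation from the outside
```

From outside the slot, the replies look like conversation; inside, there is only rule-following – which is exactly the distinction the argument turns on.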

And this is where the controversy starts. The Chinese Room problem was first set down in 1980 by John Searle, who used it to attack the position of advocates of so-called “Strong AI”, who argue that if a system works in a way that is functionally equivalent to an “understanding” being, then it must be considered to “understand”. But no matter where you look in the Chinese Room system, Searle argued, there is no evidence of understanding.

Proponents of Strong AI argue that, far from disproving their case, Searle’s Chinese Room demonstrates that (as with the human brain) understanding, consciousness even, can emerge from a system whose constituent parts are dumb. Neurons don’t “understand” the world, but the mind does.

This, broadly speaking, is the line that Watts has his characters accept in Blindsight.

Watts’s narrator, Siri Keeton, was the subject of radical brain surgery as a child, which has left him somewhere just this side of autistic, his ability to relate to those around him severely impaired. With half a hemisphere full of technology instead of squishy organic brain stuff, Siri’s “disability” has helped create the perfect impartial observer – he watches the world, he applies rules, he responds. Siri is, as one of the characters notes, a walking Chinese Room.

But for me, Searle was right. The Chinese Room is not conscious. Nor does it display understanding. The Chinese Room displays only syntax – the operation of rules – not semantics or meaning.

A conscious mind does more than apply rules to inputs received from the outside and respond with appropriate outputs. For understanding to have any real meaning, it must demonstrate an appreciation of context, the capability to recognise patterns and make predictions from limited knowledge, and some degree of empathy (to understand the likely positions of other actors – human or otherwise – in the system). The “mind” (whatever that is) is not just sorting rules – understanding implies anticipation, adaptation and the reshaping of the environment, not merely reaction to it. All of these elements are missing from the Chinese Room, no matter how far up the system we look.

This is not the fuzzy, warm argument that man is divided from the machine by his ability to write a sonnet or appreciate Wagner – I’d be excluded from higher thinking for a start – but it is to say that I don’t think understanding or consciousness can be divided from the physical sack of mostly water our brains find themselves sloshing around in. The emphasis on the brain as the source of “intelligence” ignores the fact that a great deal of the body’s processing power is distributed around the nervous system, and that the feedback from that system, the demands it makes, the perceptions it provides and the context it sets must play as important a part in human “consciousness” and “understanding” as the sparking neurons in our frontal lobes.

The stimulation of the senses, the context provided by the body, the environment in which the body exists: these are as much part of the human consciousness machine as the brain. And the differences in the physicality of an octopus, a parrot and a human go some way to explaining why these three problem-solving creatures exhibit their intelligence and their “understanding” in very different ways. Think how much more complex understanding will be between a human and an alien, or a human and an AI.

Actually, the acceptance by some Strong AI advocates of a Chinese Room as a real thinking device is a symptom of the malaise that gripped AI research for decades. Another is the concentration on gimmicky talking programmes designed to meet the very narrow (human-centric) view of language as evidence of intelligence. A considerable proportion of AI researchers have spent decades bogged down in coming up with ways to trick an observer into believing that the box they’re talking to is human – even when it is just as stilted and unenlightening in conversation as the average geeky AI researcher. The Turing test has warped AI research and taken it down some unhelpful dead-ends.
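Those tricks have barely changed since the first chatterbots of the 1960s. A toy sketch in that spirit – the patterns here are made up, but the technique (surface pattern-matching with canned templates, and no model of meaning anywhere) is the real one:

```python
import re

# A toy pattern-matching chatterbot in the spirit of the early
# "talking programmes": it reflects the user's words back through
# canned templates. The patterns are invented for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "What makes you think {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def reply(utterance: str) -> str:
    """Return the first template whose pattern matches, echoing back
    whatever the pattern captured. Syntax only; no semantics."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # stock deflection when no rule fires

print(reply("I think the box understands me"))
# -> What makes you think the box understands me?
```

Twenty lines of string-shuffling can keep a credulous observer talking for quite a while – which says rather more about the test than about the box.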

Intelligence, understanding and consciousness are words we’re barely able to define as they might relate to non-human creatures, and by far the most exciting idea in Blindsight is that “consciousness” is an evolutionary dead end. Watts presents us with the superhuman vampire – a highly intelligent predator without any of our “drawbacks” – with military officer Amanda Bates’s robot arsenal – crippled by being slaved to her conscious decision-making – and, of course, with the starfish-like aliens who, freed from consciousness, aren’t just cleverer than humans but display intelligence of a different order of magnitude and are terrifyingly capable of the most minute manipulation of our brains. Watts’s cold, clinically realised conclusion in Blindsight is that consciousness will condemn us to the fate of the dodo.

It’s not a conclusion I agree with – nor, to judge from Blindsight’s appendices (I love hard sf, where else does a novel come with appendices…), do I suspect Watts does – but it is fantastically well argued and cleverly woven into the fabric of the novel, though there’s a fair degree of infodumping as well. Still, Blindsight is one of those novels that restore my sometimes battered faith in hard sf. It takes really BIG ideas, wrestles with them and tries with honest endeavour to make them fit into a rational world, whether they want to fit or not.