There is no point mincing my words. As a work of fiction, Bernard Beckett’s Genesis is a bit of a disaster. While there are interesting philosophical points raised, Beckett has made the fundamental mistake of forgetting that the first task of a novelist is to engage and entertain. If instruction is the author’s goal – and there is nothing fundamentally wrong with that aim – then it should emerge from the plot and characters. Genesis is too didactic. Beckett is too determined to teach us a lesson – even to the point that the story is told through the framing device of a viva voce examination.

Anaximander is a student hoping to enter “the Academy” – a mysterious organisation that guides her nation – by demonstrating her knowledge of Adam Forde (2058-2077). Forde, it appears, is a crucial figure in her people’s history. This is a post-apocalyptic tale. Anaximander lives on Aotearoa (New Zealand’s North Island), which was spared the ravages of the war by the foresight of a man called Plato, who built defences and created an isolationist republic (yes, it is indeed modelled on Plato’s Republic) to preserve civilization while the rest of the world crumbled. Adam Forde, we learn through the questioning of Anaximander, was a trouble-maker: expelled from the philosopher class, he became a soldier, killed his comrade, allowed an outsider to cross the boundaries of the republic (risking the eruption of virulent plagues) and was captured. In a public trial his charisma won him the sympathy of his fellow citizens, who were chafing under the rigid demands of the republic, and he was saved from death. Instead he was imprisoned with an experimental AI robot, Art. The latter sections of the book are dominated by the debate that took place between Adam and Art as their discussions became philosophical and they batter, somewhat crudely, at the question of the difference between artificial and human intelligence, between machine and conscious being.

I disliked pretty much everything about this book – apart from the fact that the UK edition (from Quercus) has lovely cover art and, at 180 pages of (widely spaced) text, it didn’t take long to read.

The basic conceit of telling the story through the medium of an examination seems, to me, almost perverse – placing the reader at the furthest possible remove from the action. The Socratic dialogue may have a long tradition in philosophical exploration, but it is not a storytelling method designed to make a reader’s life easy or to win a book many friends. Beckett seems aware of this problem, so key moments are re-enacted through “holograms” and the action shifts to a more traditional third-person narration. All this really achieves is to highlight the discomfort of reading the rest of the novel.

The final twist – one which Beckett keeps secret only through deliberate obfuscation – is one that most science fiction magazine editors will have seen a hundred thousand times before and will probably warn prospective authors not to try to pull. I found the revelation so annoying I almost threw the book across the room.

But where Genesis fails most seriously is in its pretensions to philosophy.

The core of this book is obviously intended to be the debate between Art and Adam about the nature of intelligence. Adam insists that Art is not capable of true intelligence, that a machine cannot be conscious.

The turning point of the debate comes when Adam invokes John Searle’s “Chinese room” thought experiment. Put simply: a man sits in a room and, through an input slot, receives strings of characters in a language he doesn’t understand – say Chinese. He has, in the room (in this version), a complex machine of levers and pulleys, together with instructions on how to respond to each possible input. By following those instructions an output is created – a Chinese phrase – which, to those outside the room, appears to make sense. A conversation, apparently between two intelligent beings, seems to take place beyond the confines of the room.

Searle’s point in creating the Chinese room was to demonstrate that Turing-type tests are not sufficient to demonstrate “intelligence” or “conscious thought”. Those outside the Chinese room may believe they are conducting a conversation with an intelligent entity – and thus that the Turing Test has been passed – but Searle’s argument is that what goes on within the machine matters. There is no intelligence, no understanding, within the Chinese room, and that makes a difference – because any machine that works in a purely rule-driven way, as the Chinese room does (such as the overwhelming majority of imagined artificial intelligences), cannot be conscious.

Art responds to Adam’s invoking of this problem first by stating:

“I believe I am a Chinese room”

The story then poses a “what if?”: what if the message passed into the machine were “I’m going to burn your building down”? How could the Chinese room respond? Art posits a range of possible responses that convince Adam, with this caveat.

“A thousand things to say, and for each a million ways of expressing them. Your example only works if we can imagine how the machine chooses its response.” (p133)

But Adam is mistaken to concede this point. We have no need to imagine how the Chinese room chooses its responses – we can know precisely how each response is chosen, because we can read all the rules the operator of the machine must obey and study the blueprints of how the machine works, allowing us to predict exactly what it will say in any given circumstance. This is the definition of a Chinese room – the programming can be read; no imagination is required.
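The point can be made concrete with a toy sketch (the rules and phrases here are invented for illustration, not taken from the novel): a Chinese room is, in the end, nothing more than a lookup from input to scripted output, and anyone who can read the rule book can predict every response in advance.

```python
# A toy "Chinese room": a fixed table of rules mapping each possible
# input to a scripted output. The operator understands nothing; the
# rule book does all the work. (All phrases are invented placeholders.)
RULES = {
    "你好": "你好，有什么事？",        # a greeting gets a scripted greeting
    "天气怎么样？": "今天天气很好。",  # a weather query gets a scripted reply
}

def chinese_room(message: str) -> str:
    """Follow the rule book mechanically; no understanding is involved."""
    return RULES.get(message, "请再说一遍。")  # default: "please say that again"

# Because the rule book is readable, every response is predictable:
assert chinese_room("你好") == "你好，有什么事？"
```

No imagination is needed to know what this room will say next; reading `RULES` is enough, which is exactly the concession Adam should not have made.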

Next Art claims that for meaningful conversation the Chinese room:

“must be able to interpret the intentions of the Chinese speaker and it must be able to pursue its own objectives in framing its responses. If it has no intentions, it can make no conversation.”

Art may be right that for an artificial intelligence to genuinely embody consciousness it will need to interpret the intentions of those it is speaking to and have objectives of its own, but it is not legitimate to impute these qualities to a Chinese room. If a Chinese room can be said to have objectives at all, they must come from the creators of the machine and be whatever purpose those creators have decided on. Some commentators have argued that it is legitimate to include the Chinese room’s creators in a “system” response to Searle – arguing that while there may not be understanding within the physical limits of the room, the whole system that allows the machine to respond (which includes the knowledge of the machine’s creators) can be said to include understanding and can therefore be called intelligent.

But the question becomes: where do the key elements of intelligence lie? Do they rest inside the room, or are they in the heads of the creators of the machine? The answer, it seems to me, is the latter. The intelligence in a Chinese room all belongs to humans, not to the machine. All the system’s response to Searle achieves is to reaffirm that human intelligence is distinct from the kind of cleverness embodied in the Chinese room.

Imagine, for a moment, that the Chinese room’s creators had made a mistake. Suppose they had swapped the meaning of two words, so that when the Chinese room saw “burn” it responded as though the word were “wash”, and vice versa. Now let us go back to the earlier example: the first thing the Chinese speaker writes is “I’m going to burn your building down”, but the machine behaves as though it has been told to prepare for a thorough cleaning. Nothing in the machine’s workings can respond intelligently to this mistake.

The operator, seeing the flames suddenly leap about him, might rush outside in terror, but he has no way of knowing what the conversation he has engaged in meant or how the machine has failed. The system’s designers are not present and cannot contribute. The Chinese room is incapable of learning: the operator has no idea how to change the rules appropriately, and the levers and pulleys just do as they are told. Only by reprogramming the device from the outside can the machine respond to its mistakes, but then it ceases to be an independent thinking device and is again reliant on human intelligence to change its behaviour.
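A hypothetical sketch of this swapped-rule failure (the rules and phrases are invented for illustration): if the builders wire the “burn” input to the “wash” response, the room emits its scripted answer with no mechanism for noticing the error, let alone correcting it.

```python
# A toy rule-driven room with a builder's bug: the scripted responses
# for "burn" and "wash" have been swapped. (Rules and phrases are
# invented placeholders, not quotes from the novel.)
BUGGY_RULES = {
    "I'm going to burn your building down": "Wonderful - we'll have the mops ready.",
    "I'm going to wash your building": "Please don't! We're calling the fire brigade!",
}

def buggy_room(message: str) -> str:
    # Pure lookup: nothing here checks whether the response fits the
    # situation, and nothing inside the room can rewrite the rules.
    return BUGGY_RULES.get(message, "Sorry?")

# The arsonist's threat gets the housekeeping reply. No part of the
# room can detect the mismatch, so no part of the room can learn from it.
assert buggy_room("I'm going to burn your building down") == "Wonderful - we'll have the mops ready."
```

The only way to fix `BUGGY_RULES` is from outside the room – which is precisely the reviewer’s point about dependence on human intelligence.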

Yet a genuinely intelligent entity must surely be able to identify when it makes a mistake, respond and learn from the error.

Art concedes:

“That for a simple conversation, of course, the room does not have to be conscious any more than you have to engage your consciousness to grunt your greetings to the guards who clean out your cell. But at some point, when the room is called upon to access its own memories, respond to changing circumstances, modify its own objectives, all the things you do when you engage in a meaningful conversation, all that changes. You think the thing you call consciousness is some mysterious gift from the heavens, but in the end consciousness is nothing but the context in which your thinking occurs.” (p134)

Again Beckett has Art set out precisely those qualities which might be necessary for a machine that could genuinely be credited with possessing understanding and perhaps even consciousness but which are, by definition, absent from Searle’s Chinese room.

If a machine could access its own memories, respond to changing circumstances, modify its own objectives and do all those other things that humans (and, as far as we know, only humans) do when they engage in conversation – and could do all this reflexively, i.e. be aware of what it is doing and why, without relying on preset instructions or outside interference – then we would be close to a general thinking machine. By insisting that a thinking machine requires these faculties – that Art himself possesses them – Beckett is not negating the argument of the Chinese room but confirming it. Art is more than a mechanism responding to inputs with pre-programmed outputs. Art demonstrates that he is – despite his protestations – far more than a Chinese room.

Unfortunately that’s not the lesson he takes. The argument is “settled” when Art claims:

“You don’t have to understand the conversation at all, because the person on the other side of the wall isn’t speaking to you. They are speaking to the machine whose levers you are pulling. And the machine understands just fine.” (p135)

But, of course, as Searle has pointed out, the machine doesn’t understand. The machine appears, because of the outsider’s limited viewpoint, to understand the conversation but it is merely responding mechanically based on preset rules. Searle created the Chinese room to demonstrate that appearances are deceptive, that Turing tests and their like are insufficient to judge understanding/intelligence/consciousness because what happens inside – the process by which responses are created – matters.

Art “wins” the argument by ignoring Searle’s central point.

This has crucial importance for Adam’s position – he is left with only metaphysical responses, ideas of soul and the like, to distinguish human from mechanical thought, which Art quite properly dismisses with ease.

All this might seem esoteric, but since Beckett’s novel makes so much of its philosophical discussion (and offers so little in terms of drama) it seems only fair to demand that the debate be rigorously constructed.

And it isn’t.

Beckett creates in Art a machine that possesses faculties far beyond those that can be encompassed by a Chinese room. To then offer Art as “proof” that a mechanistic, rules-based machine like the Chinese room can possess real understanding – even consciousness – borders on the disingenuous. Art doesn’t disprove the conjecture of the Chinese room. Rather – in being able to access his memories, respond to changing circumstances, modify his own objectives, and examine his actions and thoughts reflexively – Art demonstrates how much more than a Chinese room a system must be before it can be considered a real thinking machine.

Beckett undermines his own point but never seems to realise his error.

And this might be the first book review to be longer than the actual book.

© Beli. All Rights Reserved.