Nick Spencer reflects on recent advances in Artificial Intelligence and why they don’t constitute ‘human beings’. 25/06/2018
I have visions of humiliating IBM. Their new AI initiative, Project Debater, has just been launched. Designed to conduct (and win) an argument, it engaged in mouth-to-mouth combat with two real human foes at IBM's San Francisco office this week. None of the participants had been told about the topics under discussion (they were "We should subsidize space exploration" and "We should increase the use of telemedicine"). Each was given an opportunity to make an opening statement and then respond as appropriate.
This is the next frontier of the human vs AI confrontation. A decade or so ago it was chess. Then it was the fiendishly complex Chinese board game Go. Both were clearly won by our artificial brethren.
Chess is complex and Go even more so. But they are still games. Try using language, marshalling arguments, and deploying rhetoric, my silicon friend. Then we'll see who's really savvy. My years of education, wide reading and raw wit would make Project Debater look like an Atari. Human uniqueness would be reasserted, the line against the machines held. IBM's programmers would slink away, tails between their legs.
I think we both know it probably wouldn’t work quite like that. My ability to remember things I needed to know for exams has not improved with middle age. Project Debater’s capacity to recall, select and retrieve pertinent facts from vast tracts of information would humiliate me. Moreover, in spite of the painstaking attention to logic and reason that shines through every blog post I dash off – and here I must acknowledge the criticisms from valued Theos colleagues that I have dutifully ignored – I do occasionally make minor logical errors. Let’s be honest – and this is just between you, me, and the machines on which I am writing and you are reading this – I think the machine might win. It did in San Francisco, at least according to the audience of IBM staff (who might, to be fair, have been biased), although the Guardian, more cautiously, thought the encounter a score draw.
Does this mean the war is drawing to a close, and that the challenge AI poses to personhood, and its claim on it, will soon succeed? We lost Donkey Kong, chess and Go. We moved back to language. We ceded that. We withdrew to conversation and argument, but it seems we can't hold that line any longer. The machines will be over the Channel before next week.
Perhaps. The most familiar argument against machine intelligence is that it isn’t real intelligence; more like an elaborate tube. You simply get out what you put in. Feed Project Debater false facts and garbage arguments and you’ll hear them back.
I have to confess, I don't see how that distinguishes it from us. Pretty much everything I would say in my imagined titanic clash with Project Debater would have been gleaned from books, articles, and conversations; i.e. other sources. Much of it would have been selected, filtered and processed in my mind, but that is pretty much what the IBM machine does. I can't see any qualitative difference between my slow carbon-based processes and its snappier silicon ones.
So, when this line goes, do we cede the entire kingdom of personhood, and do machines thereby qualify for the full human menu of rights, dignity and respect? I have written elsewhere that I don't see this as an a priori impossibility, but I suspect we remain a long way from it, for one critical reason.
The arguments I might deploy in a debate are my arguments. They may be good or bad, well-formed or ill, logical or full of holes, but they remain mine, an overspill of my view of and position in the world. Whose are Project Debater's? The machine will absorb and process information from many more sources than I could, and will filter and select them accordingly, but on what criteria? The apparent answer is: on their proximity to reality. Project Debater will be more objective than I am because it will filter and select arguments from its sources that better reflect 'that which is the case'.
But how will Project Debater know what is 'the case'? However many sources it absorbs, it will not be able to short-circuit the actual business of knowing or gain some mysterious shortcut to unalloyed reality, primarily because there is no such thing. Everything that is known is known by someone (or perhaps, on occasion, something). There is no view from nowhere. This is not, please, the lazy postmodern argument that there is no such thing as reality. I'll wager there is, and that I know quite a bit about it. But all I know is what I know, not the thing in itself. After Immanuel Kant, we're in the business of phenomena, not noumena.
Project Debater’s feted objectivity is, in fact, an aggregation of many subjective positions. It uses probabilistic statistics to determine the best response from a large number of human sources – many spot on, some no doubt awful. That may well make it better at judging a situation than many a human, but it does not give it a position or even an interest in the debate itself. It does not make Project Debater a participant, a knower, an I. Project Debater is some way from being an end in itself, being rather a very clever means.
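The aggregation described above can be caricatured in a few lines of code. This is a hypothetical sketch, not IBM's actual method: the function, the candidate arguments, and the scores are all invented for illustration. The point is simply that averaging many subjective judgements produces one "objective"-looking ranking, without the aggregator holding any position of its own.

```python
from collections import defaultdict

def aggregate_arguments(source_scores):
    """Collapse many subjective scorings into one ranking.

    source_scores: a list of dicts, one per source, each mapping a
    candidate argument to that source's relevance score in [0, 1].
    Returns candidates sorted by their average score, highest first.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for scores in source_scores:
        for argument, score in scores.items():
            totals[argument] += score
            counts[argument] += 1
    averages = {arg: totals[arg] / counts[arg] for arg in totals}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Three imaginary "sources", each with its own subjective weighting
# of two candidate arguments (scores invented for illustration).
sources = [
    {"subsidise space exploration": 0.9, "fund telemedicine": 0.4},
    {"subsidise space exploration": 0.6, "fund telemedicine": 0.7},
    {"subsidise space exploration": 0.8},
]

ranking = aggregate_arguments(sources)
print(ranking[0][0])  # the highest-scoring candidate argument
```

The "winner" here is just an arithmetic artefact of other people's opinions; the code has no view of its own, which is precisely the essay's point.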
It could perhaps make that leap one day, but I suspect the gap may be bigger than we think, since it has more to do with bodies than brains. Concentrating on minds, as we usually do when discussing AI, we forget that it is our embodiment that enables the knowing in the first place. We are materially invested in reality in such a way that helps us to (begin to) know it. Project Debater isn't, yet.
So for all we might respect and even defer to Project Debater's immense argumentative capacity, I would suggest that the confrontation, such as it is, between the human and AI would not be settled when IBM's son or grandson of Project Debater finally and comprehensively wins the Oxford Union Debating Society Prize, any more than it was when AlphaGo beat the three-time European Go champion Fan Hui or when Deep Blue beat Garry Kasparov in 1997. Ironically, it is its alleged objectivity that is the problem. We admire it; perhaps even need it. But persons are subjects, with all their embodied faults, shortcomings and personal biases. It is not until IBM can design something as fallible, prejudiced and subjective as the average human being that it will deserve to be treated as one.