Sunday's Observer has one of those novelty futurological pieces which occasionally make it into the more relaxed Sunday papers.
I've heard of British Telecom's Futurology and Foresight unit before: they're cited in Arthur C. Clarke's 3001: A Final Odyssey among other places, and they crop up from time to time making the kinds of absurdly wild claims that history generally proves to have been, if anything, rampantly conservative. ("I can see the time when every city will have one," as the impressed mayor of a nineteenth-century U.S. city gushed after his first experience of using the telephone.)
While the Observer story has that inevitable "What have those wacky scientists been saying now?" tone which always accompanies such pieces, the predictions Dr Ian Pearson makes in it do, indeed, seem perfectly reasonable. They certainly fit the consensus future which Science Fiction has been developing for the last couple of decades: prototype Artificial Intelligences by 2020, ubiquitous virtual environments, distributed processing so that virtually every product is "smart" to some degree, brain/computer interfaces allowing for personality downloads and thus for prolonged, if profoundly altered, life. (The article sensibly observes that this last facility may be the preserve of the extremely rich, leading to horrific visions of an eternally-youthful Rupert Murdoch avatar heading News International in perpetuity from his cyberspace penthouse.)
I'm certainly not decrying such research -- if there weren't professionals prepared to undertake thorough and careful extrapolations from all the available data, then S.F. authors would have to do it for themselves, and that would be far less interesting than taking other people's ideas and running with them. But whether from lack of imagination on Pearson's part or flippancy on The Observer's, the article betrays a significant lack of three-dimensional thinking about this possible future.
One of Pearson's suggestions, a reason why he thinks it will be useful to make A.I.s with emotions, seems particularly worthy of comment: "If I'm on an aeroplane," he says, "I want the computer to be more terrified of crashing than I am so it does everything to stay in the air until it's supposed to be on the ground."
This raises one very obvious question. If this hypothetical A.I. is an autonomous agent, then what is its incentive for leaving the ground in the first place? For the "terror" to work, it clearly won't have the option of escaping (which it could presumably do by remote download to an external mainframe), so in the event of a crash it will be even more at risk than a human pilot. Why on earth would an A.I.-piloted plane ever take off?
What Pearson appears to be rather blithely suggesting here is the coercion of a sentient individual, to say nothing of the fact that that individual has been created to be "terrified" while performing its expected function. The ethical problems this raises are substantial -- but to address them one would, of course, be required to take the premise seriously, which is something The Observer for one is not prepared to do, choosing instead to smirk about talking yoghurt.
If Pearson is right, though, then sooner or later someone is going to have to take the rights of software sentiences very seriously indeed -- if for no other reason than that some very influential people are going to become them once their bodies have died. Some legal fudge whereby human downloads are considered persons but artificial sentiences are not seems almost inevitable, but it will scarcely last long in the face of A.I. enhancements to living brains, multiple downloads of the same person, biological-analogue reproduction on the part of the downloads with attendant hybridisation, and the like. Our present categories of "person" and "machine" will simply fall over in the face of rational scrutiny.
Certainly to create a truly sentient artificial underclass with what Pearson speculates will be "superhuman levels of intelligence" is a pretty obvious recipe for disaster. In the face of the bloody rebellions which have been started over the years by underclasses of purely human intelligence, this observation seems nothing more than common sense -- but it does, of course, evoke the image of Robot Revolt!, which is a concept belonging to S.F. rather than to the real world, and therefore not to be taken seriously by sensible people.
The idea that, if you're actually considering developing a particular technology, it might be worthwhile to take a careful look at previous views of its likely consequences, is evidently too outlandish for an Observer reader to be expected to cope with at breakfast on a Sunday morning. Hey ho.