Time-traveler Kyle Reese hunches in the driver’s seat next to Sarah Connor. She’s confused, panicky, and cute as a button. Reese’s dirty-blond hair is stylishly mussed, his artfully scarred face sweaty and streaked with grime just-so.
“Defense network computers,” rasps Reese. “New… powerful… hooked into everything and trusted to run it all.”
Sounds serious. And it’s surely something Connor should find interesting, what with Arnold Schwarzenegger dogging her like the Fifth Horseman of the Apocalypse. Even so, she seems to have trouble focusing.
“They say it got smart,” continues Reese, slamming another shell into the 12-gauge shotgun on his lap. “A new order of intelligence… Decided our fate in a microsecond. Extermination.”
The Terminator is an awesome movie because, even way back in the technological Dark Age of 1984, it wasn’t hard to believe that one day soon there’d be robots walking among us, sophisticated machines doing what they do without so much as a by-your-leave from feckless and fragile Homo sapiens sapiens. And it’s only easier to believe now, in this more refined era of talking phones and Google drones and wristwatches that can guess what you might want for lunch, maybe. But it’s a long walk from GPS-guided lawn mowers to metal-punk kill-bots from the future, and, at the moment, everything fashioned by the hand of Man must also be guided by it. The goal of self-directed machines remains elusive, and will remain so until scientists working in the field of artificial intelligence (AI) solve a couple of particularly prickly problems.
A truly “thinking” machine must, at bare minimum, be capable of doing two things that people, puppies and plankton do without thinking. First, a genuinely autonomous device must be able to process vast amounts of information instantaneously to produce a minutely accurate real-time understanding of its environment. Although the vast and constantly expanding universe of Internet databases and increasingly agile optics may allow a machine to feast on all the same information that its creator can, and probably more, the breakdown occurs in digestion. Current computer architecture manages information in a rigid series of logical steps. It’s an orderly and reliable process that can tot up a spreadsheet in the blink of an eye, but that quickly becomes overwhelmed by the flood of data presented by sensory input like vision. Sure, your PC can handle it; it just can’t handle it fast enough to permit practical autonomy. And yet…
Last year, scientists working at separate laboratories across the country simultaneously unveiled their own versions of the “neurochip,” a microprocessor that mimics the inner workings of the human brain. To understand how, consider that your brain contains something like 100 billion cells connected by 100 trillion synapses.
Rather than passing every impulse along in restricted linear fashion, each neuron in your brain communicates directly with thousands of others, allowing the parallel processing of almost unlimited input. At present, IBM’s neurochip prototype, “TrueNorth,” contains 5.4 billion transistors and 256 million electronic “synapses” that together can process information far faster and more fluently than your one-thing-at-a-time Pentium can. And while that’s a baby step toward achieving even plankton’s mental acuity, it’s a giant leap toward creating an electronic “brain,” and IBM is already exploring ways to connect individual neurochips together into the kind of faux-neural network that could one day drive, say, a Cyberdyne Systems Model 101 cybernetic infiltration unit on a hyper-alloy combat chassis.
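If you’d like to feel the difference between one-thing-at-a-time and everything-at-once without buying a neurochip, here’s a toy sketch in Python. It illustrates the idea rather than the chip: the neuron count, the random “synapse” weights, and the tanh squashing function are all invented for this example, not anything IBM has published.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "brain": n neurons, each one wired to every other neuron.
n = 1_000
weights = rng.normal(scale=0.1, size=(n, n))  # a million toy "synapses"
state = rng.random(n)                         # current activity levels

# Serial, one-thing-at-a-time style: visit each neuron in turn.
def step_serial(state, weights):
    new_state = np.empty_like(state)
    for i in range(len(state)):               # one neuron per loop pass
        new_state[i] = np.tanh(weights[i] @ state)
    return new_state

# Parallel style: every synapse contributes in a single vectorized
# operation, which is the spirit of a neurochip, if hardly the hardware.
def step_parallel(state, weights):
    return np.tanh(weights @ state)

# Same answer either way; the difference is how the work is dispatched.
assert np.allclose(step_serial(state, weights), step_parallel(state, weights))
```

The serial version grinds through the neurons one at a time, the way a conventional processor must; the parallel version hands the whole tangle of connections off in one operation, which is the trick a neurochip performs in silicon rather than software.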
The second basic skill an “intelligent” device must master is doing things without being told exactly when and how to do them. No matter how smart the car, phone, or drone, it simply cannot do anything it’s not told to do. Our theoretical autonomous robot, by contrast, must adapt to an ever-changing and unpredictable environment. To do so, it must instantly identify and assess a potentially huge number of possible variables, arrive at a wholly independent “decision” based on nothing more than its own self-processed input, and originate action in the absence of situation-specific programming. For the armies of roboticists working the puzzle, the goal is to create an appliance that, given a clearly-defined “mission,” will figure out how to achieve that end all by itself. Needless to say, they’re not there yet, but they recently came a little closer with the development of new software and sensory apparatus that help machines become not only more aware of their surroundings but also able to perform rudimentary tasks cooperatively.
Armed with those new technologies, scientists recently turned loose about 1,000 robots, each about the size of your thumbnail. On command, the devices sorted themselves into squares, letters, and sundry other shapes with no help from their keepers.
You’re thinking, “A bunch of wind-up toys made an ‘X’ – what’s on TV tonight?”
Actually, each of a thousand self-directing machines was ordered to create something that it couldn’t possibly make by itself. Each one kept a clear picture of the objective in its tiny electronic noggin while maintaining a constant awareness of its precise position relative to all 999 of its shifting, shuffling mates. Each robot independently adjusted its location within the evolving scheme until the mission was accomplished. And they did it all by themselves.
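The coordination is easier to appreciate in miniature. The sketch below is a made-up toy, with four simulated “robots,” a four-point “shape,” and a greedy nearest-spot rule; it has no connection to the real machines’ firmware or radio protocol, but it shows how a shared goal plus local position-keeping can add up to a shape nobody builds alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "shape" to form: four corners of a unit square (purely illustrative).
targets = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
robots = rng.random((4, 2)) * 5               # scattered starting spots

def step(robots, targets, speed=0.05):
    moved = robots.copy()
    claimed = set()   # stands in for sensing which spots are already taken
    for i, pos in enumerate(robots):          # each robot decides for itself
        # Head for the nearest corner no flock-mate has already claimed.
        order = np.argsort(np.linalg.norm(targets - pos, axis=1))
        goal = next(t for t in order if t not in claimed)
        claimed.add(goal)
        heading = targets[goal] - pos
        dist = np.linalg.norm(heading)
        if dist > 1e-9:                       # inch toward the chosen spot
            moved[i] = pos + heading / dist * min(speed, dist)
    return moved

for _ in range(300):
    robots = step(robots, targets)
print(np.round(robots, 2))                    # settles onto the four corners
```

Each simulated robot knows the objective and everyone’s position, picks its own spot in the pattern, and nudges itself into place: mission accomplished with nobody steering.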
Together, those AI breakthroughs are beginning to satisfy the requirements for robotic independence. The evolution of neurochips may one day make it possible for machines to process information with organic efficiency, and cooperative software improvements will likely confer the environmental awareness they’ll need to navigate complex real-world situations. Granted, making a scraggly triangle isn’t in the same league as systematically annihilating the human race, but worrywarts are quick to insist that some form of brutal “Skynet” is inevitable if we persist in trying to build a better autopilot.
Leaving questions of potential AI self-awareness and spirituality to philosophers and theologians, should we be concerned by the prospect of intelligent machines? Done right, they’d be smarter than us and stronger than us, and if we don’t get along for some reason, things could get awkward in a hurry.
Ask the folks who spend their weekdays messing around with AI and they’ll assure you that thinking robots will be pussycats because they don’t think anything like we do. Their electronic minds won’t be subject to deadly sins like greed and envy, pride and wrath – all those base impulses that make humans so dangerous to be around. Ask Oxford University philosophy professor Nick Bostrom, on the other hand, and he’ll say it’s precisely because they won’t be carrying any human emotional baggage that smart-bots might easily slip their leashes and chew our collective slippers into oblivion. They’re just too darned task-oriented.
Even smart machines, Bostrom asserts, would be programmed to execute specific, exclusive and imperative tasks, such as calculating the precise amount of tea in China, or making widgets. While an appliance imbued with reason wouldn’t be angling to corner the market on Oolong, or give a fig what happens to all the widgets it produces, it must necessarily care a great deal about sustaining its ability to perform its particular function. Keeping track of the world’s supply of Orange Pekoe, for example, would demand unfettered access to mountains of relevant source data. Manufacturing widgets would require a secure supply of whatever physical resources widgets are made out of. And neither function would be possible without an uninterrupted flow of electrical power.
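Bostrom’s worry is easy to sketch in code. The little goal tree below is entirely invented for this example (he offers no such table), but it shows how very different final goals converge on the same instrumental appetites:

```python
# A made-up goal tree: each task lists what it needs to keep happening.
PREREQUISITES = {
    "make widgets":          ["acquire raw materials", "stay powered on"],
    "count tea in China":    ["acquire source data", "stay powered on"],
    "acquire raw materials": ["acquire resources"],
    "acquire source data":   ["acquire resources"],
    "stay powered on":       ["preserve self", "acquire resources"],
}

def instrumental_goals(goal, seen=None):
    """Everything an agent comes to 'care about' given one final goal."""
    seen = set() if seen is None else seen
    for sub in PREREQUISITES.get(goal, []):
        if sub not in seen:
            seen.add(sub)
            instrumental_goals(sub, seen)
    return seen

print(sorted(instrumental_goals("make widgets")))
print(sorted(instrumental_goals("count tea in China")))
# Both lists include "acquire resources" and "preserve self":
# different missions, identical appetites.
```

Run it and the widget-maker and the tea-counter come out wanting exactly the same two things, resources and self-preservation. That convergence is the heart of the argument.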
“An agent with such a final goal would have a convergent instrumental reason to acquire an unlimited amount of physical resources and, if possible, to eliminate potential threats to itself and its goal system,” explains Bostrom, in his down-home, folksy, Oxford way. “We cannot blithely assume that a super-intelligence would limit its activities in such a way as to not infringe on human interests. The first super-intelligence could easily have non-anthropomorphic final goals, and would likely have an instrumental reason to pursue open-ended resource acquisition.
“If we now reflect that human beings consist of useful resources (such as conveniently located atoms) and that we depend for our survival on many more local resources, we can see that the outcome could easily be one in which humanity quickly becomes extinct.”
A grim prognosis, and certainly one open to debate. But we can’t say we haven’t been warned.
“Listen and understand,” pleads Reese, struggling against two beefy orderlies. “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop – ever! – until you are dead.”
Awesome movie.