Famous linguist Benjamin Lee Whorf believed that the words we use influence how we think. If his hypothesis holds true, it seems only fitting that we remain ever critical of our language, of its ability to bias our beliefs. Take, for example, the term “artificial intelligence.”
Some have called AI the next big thing in technology—the next Internet. We’re coming off a year in which IBM’s supercomputer Watson bested Jeopardy!’s most formidable players. A year in which the iPhone’s new natural-language-understanding personal assistant, Siri, reached pockets the world over. There’s momentum, all right, and it has a lot of dollars behind it. That’s why Google—you know, the company that changed the Internet as we know it—plans to be just as involved in the AI revolution.
But should artificial intelligence, a term with no popular synonym, be the one used to describe the technology of tomorrow? For lack of alternatives, I find myself jotting it down often—but I’m wary, and I’m not the only one. In fact, the very man who coined the term “AI,” computer scientist John McCarthy, told a friend late in life that he regretted not having gone with “computational intelligence” instead.
The major issue is this: People often believe that artificial means fake, but there’s no such thing as fake intelligence, and knowing this is important—for reasons I’ll come back to.
I spoke with David Poole, a professor of computer science at UBC and co-author of Artificial Intelligence: Foundations of Computational Agents, in November for an article that ran in our Winners & Losers issue. He directed me to a section from his book where he and co-author and fellow UBC professor Alan Mackworth explain that “the term ‘artificial intelligence’ is a source of much confusion because artificial intelligence may be interpreted as the opposite of real intelligence.”
But while real may be the opposite of fake, they explain, it is not the opposite of artificial. The opposite of artificial is natural. “Natural means occurring in nature and artificial means made by people.”
A tsunami, they say, can occur naturally or can be the result of something human-made—like a bomb exploding in the ocean. The latter would be an artificial tsunami, but still a real one. An example of a fake tsunami would be one that’s computer-generated, like in a movie. But “you cannot have fake intelligence. If an agent behaves intelligently, it is intelligent.”
So why does this matter? Well, because if people believe something is fake, they assume it cannot be exploited or abused. That it cannot feel pain, or at least not real pain.
In the 2001 movie A.I. Artificial Intelligence, human-like robots are taken to an event called the Flesh Fair, where cheering humans watch as robots are brutally torn apart. People satiate their thirst for destruction at what’s perceived to be a low cost: the destruction of unwanted machinery. Even if they look like people, and even if they sometimes act like people—they’re fake, right?
The first step to answering that question is knowing the difference between artificial and fake. All intelligence is real, including that of robots. The question that follows, then, is when does intelligence begin to have moral worth? When it can conceive of seemingly original ideas? When it can care about its own existence? Or perhaps once it’s able to pass the Turing test?
Whatever the case may be, the term AI discourages some from even considering the question. When people believe something isn’t real, how it’s treated matters little, or not at all. (For example, in his review of A.I., Roger Ebert takes it for granted that androids cannot truly feel.)
AI is a business story right now. No one believes that Siri can feel (not even this author). We care only about how useful she is for the consumer. How many iPhones she’ll sell. Right now, that’s fine. But will it always be?
There’s reason to believe that AI will become more than just a major industry. There’s reason to believe we’re toying with life. South Korea, one of the world’s most notable tech markets, has taken a small first step in developing policy around robot ethics, but there’s much more that will need to be put in place—by all countries. And if the very term that categorizes the discussion—artificial intelligence—isn’t serving the issue neutrally, we should perhaps aim for one that causes less confusion.