Blogs & Comment

The biggest hurdle for A.I. may be philosophical

Siri is pretty sweet, but it could still be a while before we have self-aware robots.

Photo: Raysonho/Open Grid Scheduler

Siri, the iPhone 4S’s new personal assistant, has journalists buzzing about artificial intelligence again—just as IBM’s Watson did earlier this year (video below). Resident blogger Peter Nowak pondered Siri’s smarts a week ago, debating whether or not she really was a she—or just an it. Which raises another interesting question: When exactly will robots be, for lack of a better term, human-smart? Well, that’s a very tough question to answer.

The challenge of building really smart robots is twofold. On the one hand, there’s the technology required—which includes a level of processing power not yet reached, advanced mechanical bodies and so on. And on the other hand, there’s the philosophy—and that’s what could be the bigger holdup, at least according to David Deutsch, often considered the forefather of quantum computing.

“Although I’m absolutely certain that artificial intelligence will be discovered, I don’t think it’s a matter of computer science to discover it,” Deutsch told CBC’s Jian Ghomeshi in a radio interview earlier this month. “I think what is lacking is a philosophical breakthrough.”

He added that many of today's thinkers believe A.I. will be the next big advancement, citing the Internet as the most recent major milestone. If it is, in fact, of that scale, surely everyone from venture capitalists to philosophers should be watching closely. Regarding the optimistic timeframe, however, Deutsch has his doubts. "We're seeing amazing progress in the power of computer programs," he told Ghomeshi, but there's more to it than that. Even with adequate processing power and a plethora of complex programs, a human-like robot may not be immediately realized.

The development of complex sensory systems will no doubt be one of the major hurdles. I'm reminded of a cognitive systems class I took in my undergraduate days at UBC. The evolution of robot intelligence, I was told, while blazingly fast compared with biological progression, is happening in a backwards sort of way—again, relative to us. What seems easy for people is hard to replicate in robots (like vision, which actually eats up a lot of brainpower), and what we struggle with, robots tend to be very good at (like memory and computation).

So even though processing power will soon be where it needs to be—some have estimated it will arrive in the 2020s, a prediction supported by Moore's law, which describes a trend whereby processing power doubles about every two years—the battle is far from over.
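As a rough illustration of the arithmetic behind that estimate, here is a minimal sketch of the doubling trend Moore's law describes. The baseline of one "unit" of power and the ten-year window are illustrative assumptions, not measured figures:

```python
# Moore's law sketch: compute power doubling roughly every two years.
# The baseline value and time span below are illustrative assumptions.

def projected_power(base_power: float, years: float, doubling_period: float = 2.0) -> float:
    """Project compute capability after `years`, assuming power doubles
    every `doubling_period` years (the trend Moore's law describes)."""
    return base_power * 2 ** (years / doubling_period)

# From an arbitrary baseline of 1 unit, a two-year doubling
# compounds to a 32x increase over a single decade.
print(projected_power(1.0, 10))  # 2 ** 5 = 32.0
```

The point of the exponent is that the gains compound: each decade under this trend multiplies capability thirty-two-fold, which is why forecasters can name a decade even when the exact hardware is unknown.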

That's because it's not just a matter of power, but of innovation too—of philosophy. How do we give robots vision and have them understand the finer details of what they're seeing? How do we make robots conscious, if, in fact, consciousness is a real thing? We really don't know, and when we get there we may not immediately realize it. Their smarts may not always look like ours. After all, we're a product of one form of evolution, and robots are of another.