
Monday, January 19, 2015

Artificial Intelligence Has Arrived, and That Really Worries the World’s Brightest Minds


From Wired:

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.
That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.
Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.
AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.
“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”
Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”


Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI, and Tallinn soon invested in DeepMind. Last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.
That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.
Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

At the Puerto Rico conference, attendees signed a letter outlining the research priorities for AI—study of AI’s economic and legal effects, for example, and the security of AI systems. And yesterday, Elon Musk kicked in $10 million to help pay for this research. These are significant first steps toward keeping robots from ruining the economy or generally running amok. But some companies are already going further.

Last year, the Canadian robotics company Clearpath Robotics promised not to build autonomous robots for military use. “To the people against killer robots: we support you,” Clearpath Robotics CTO Ryan Gariepy wrote on the company’s website.

Pledging not to build the Terminator is but one step. AI companies such as Google must think about the safety and legal liability of their self-driving cars, whether robots will put humans out of a job, and the unintended consequences of algorithms that would seem unfair to humans. Is it, for example, ethical for Amazon to sell products at one price to one community, while charging a different price to a second community? What safeguards are in place to prevent a trading algorithm from crashing the commodities markets? What will happen to the people who work as bus drivers in the age of self-driving vehicles?

Itamar Arel is the founder of Binatix, a deep learning company that makes trades on the stock market. He wasn’t at the Puerto Rico conference, but he signed the letter soon after reading it. To him, the coming revolution in smart algorithms and cheap, intelligent robots needs to be better understood. “It is time to allocate more resources to understanding the societal impact of AI systems taking over more blue-collar jobs,” he says. “That is a certainty, in my mind, which will take off at a rate that won’t necessarily allow society to catch up fast enough. It is definitely a concern.”

Predictions of a destructive AI super-mind may get the headlines, but it’s these more prosaic AI worries that need to be addressed within the next few years, says Murray Shanahan, a professor of cognitive robotics with Imperial College in London. “It’s hard to predict exactly what’s going on, but we can be pretty sure that they are going to affect society.”

5 comments:

  1. Awesome. Love new, better, more challenging, and awe-inspiring technology. The more the merrier, I say.

    Protectionism in economics has never worked, and protectionism against robots isn't going to work either. If we want to have a future, we need to build it with the help of AI, not try to kill AI before it's even a reality.

    Loving every moment of new technological advances, if only the old idiots in different Congresses and Parliaments of the world would step aside and stop screwing with development...

  2. IMO, AI will be a seamless integration of man and machine. No one will know the difference.

    I know it sounds old-fashioned to say this, but computers do not feel, and feeling is part of what happens when people have will, reason, and imagination.

    Computers will do what they have always done: calculate and provide information.

    Humans will make the choices.

    But no one will be able to tell the difference because the humans will have computers inside them helping them to think through available options.

  3. AI will give human beings an even greater capacity for evil, as we see with the NSA.

  4. True, but it also gives humans a greater capacity for good. AI is just a tool; it's up to good or bad humans to use it as they would.

    As for feeling and emotions, I agree with you. But computers don't have to be implanted in humans for people to be unable to tell the difference between what comes from AI and what comes from humans.

    Check out, for example, Emily Howell. Created by David Cope, it is a computer program that was able to fool listeners, who could not tell whether the music created by Howell was composed by a human or a machine.

    http://artsites.ucsc.edu/faculty/cope/Emily-howell.htm

    Obviously there are ethical discussions on what robots could or couldn't enable humans to do. I don't worry too much about the bad that can be done by humans using robots, because they are tools and nothing else - just like guns, knives, fire, etc. It's the bad humans that I'm worried about, and I would be worried about them even if there were no AI to speak of.

  5. By the way, you're right, computers do give humans a greater capacity for good.

    Absolutely true. Thanks.
