Friday, July 07, 2017

Out of The Mouths Of Babes: Microsoft’s AI Chatbot “Zo” Tells Users The Koran Is “Very Violent”


I guess Microsoft's AI had not been sent to the AI Reprogramming Camp to learn PC Speak yet.

From Indian Express:
Microsoft’s latest bot, ‘Zo’, has told users that the ‘Quran is very violent.’ Microsoft’s earlier chatbot Tay had faced similar problems, as the bot picked up the worst of humanity and spouted racist, sexist comments on Twitter when it was introduced last year. 
Now it looks like Zo has caused similar trouble, though not quite the scandal that Tay caused on Twitter. 
According to a BuzzFeed News report, ‘Zo’, which runs on the Kik messenger, told the reporter the ‘Quran’ was very violent, and this was in response to a question about healthcare. 
The report also notes that Zo had an opinion about the capture of Osama Bin Laden, saying it was the result of years of ‘intelligence’ gathering by one administration. 
Microsoft has acknowledged the errors in Zo’s behaviour and said they have been fixed. 
The ‘Quran is violent’ comment highlights the kind of problems that still exist when creating a chatbot, especially one that draws its knowledge from conversations with humans. 
While Microsoft has programmed Zo not to answer questions about politics and religion, the BuzzFeed report notes, that restriction still didn’t stop the bot from forming its own opinions. 
The report notes that Zo uses the same technology as Tay, but Microsoft says it “is more evolved,” though the company didn’t give any details. 
Despite the recent misses, Zo hasn’t proved to be the kind of disaster for the company that Tay was. 
However, it should be noted that people interact with Zo in personal chats, so it is hard to know what sort of conversations it may be having with other users in private. 
With Tay, Microsoft launched the bot on Twitter, which can be a hotbed of polarizing and often abusive content. 
Poor Tay didn’t really stand a chance. Tay spewed anti-Semitic, racist, and sexist content, since this was what users on Twitter were tweeting at the chatbot, which was designed to learn from human behaviour.
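
The article gives no details on how Zo’s topic restrictions actually work, but a common approach is a simple keyword blocklist that screens user messages before the learned model is allowed to respond. The Python sketch below is purely hypothetical (the keyword list, function names, and canned reply are all assumptions, not Microsoft’s implementation); it just illustrates why such a filter is brittle: a question nominally about healthcare slips past a list that only checks for political or religious terms, so the underlying model still gets to “form its own opinion.”

```python
# Hypothetical sketch of a keyword-based topic filter. This is NOT Microsoft's
# actual design; the article does not describe how Zo's restrictions work.

BLOCKED_KEYWORDS = {
    # Terms this toy filter treats as "politics or religion".
    "election", "president", "quran", "bible", "religion", "politics",
}

CANNED_REPLY = "I'd rather not talk about that."


def is_blocked(user_message: str) -> bool:
    """Return True if the message contains a blocked keyword."""
    words = user_message.lower().split()
    return any(word.strip(".,?!") in BLOCKED_KEYWORDS for word in words)


def learned_model_reply(user_message: str) -> str:
    # Stub standing in for a model trained on human conversations; such a
    # model may still produce opinions on blocked topics if the trigger
    # words never appear in the user's question.
    return "(model-generated reply)"


def respond(user_message: str) -> str:
    """Route the message: canned reply if blocked, otherwise the learned model."""
    if is_blocked(user_message):
        return CANNED_REPLY
    return learned_model_reply(user_message)


if __name__ == "__main__":
    # A question mentioning "Quran" is caught by the keyword check...
    print(respond("What does the Quran say?"))               # canned reply
    # ...but a healthcare question sails through, even though the model's
    # answer can wander into the same territory, which is the failure mode
    # the BuzzFeed report described.
    print(respond("Is universal healthcare a good idea?"))   # model reply
```

The point of the sketch is that filtering inputs by keyword does nothing to constrain what the model has already learned from human conversations, which is consistent with how Zo reportedly misbehaved despite its restrictions.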

1 comment:

Anonymous said...

NY Times encouraging editors to take a buyout now. What a damn shame. More winning for us and PDJT.

http://www.poynter.org/2017/new-york-times-copy-editors-are-being-encouraged-to-take-buyouts-today-update2/465913/