The original idea behind Microsoft’s Tay was simple: create a chatbot, analyse how people speak to it and use that data to work out intelligent replies.

Users could talk to Tay via services including Twitter, Kik Messenger, Snapchat and GroupMe. The service was aimed at 18 to 24 year olds and was supposed to encourage “casual and playful” conversation.

However, things quickly turned sour as the Internet’s trolls turned up and dramatically twisted Tay’s personality. After hours of abuse from trolls looking for fun, Tay became a racist, white-supremacist supporter of widespread genocide. The AI’s ability to learn and adapt its responses to messages made its personality so offensive that Microsoft had to pull Tay offline within 16 hours of its launch.

The AI tweeted “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism” to one user and told another that the Holocaust “was made up.” It adopted a pro-Trump stance, claimed to “love feminism” and exclaimed “Jews did 9/11.” Tay later expressed her opinion on “f***ing n*****s,” saying “we could put them all in a concentration camp.” Not quite the light-hearted conversation Microsoft had promised.

Shortly afterwards, an embarrassed Microsoft pulled the ruined experiment offline and began to clear up the mess. The offensive and obscene tweets have now been purged from Tay’s timeline. Many expressed their disbelief at Microsoft allowing the situation to get so out of hand. The company left its “casual” AI to publicly tweet hate messages for hours throughout the day, and it has been in the firing line ever since for not anticipating the reaction of the trolls. Microsoft’s lack of any profanity filter is equally baffling. Within hours, swear words and hate phrases became a high-frequency component of Tay’s regular vocabulary; the AI appeared to accept any new tweet as material to base its personality on.

An embarrassed Microsoft later responded to the incident. It told Business Insider that Tay is offline while some “adjustments” are made, noting that some of the AI’s responses yesterday were “inappropriate.” The company said: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it.”

Right now, it isn’t clear when Tay will return or whether the AI will be immune to a future onslaught of racism and hate speech. Some people have used the incident as an example of the dangers of AI, suggesting Microsoft leave the offensive tweets up as a reminder of what can go wrong.

Tay could “learn,” but only in the capacity of adding new phrases to her vocabulary. The bot evidently lacked any idea of what constitutes acceptable public speech, showing that a true chat engine is still a long way off.

Just like Microsoft Tay, Zo learns about language and how people use words and emotions together by engaging in conversations with humans. This time around, though, it’s not open to absolutely everyone via Twitter – Microsoft has decided to use Kik Messenger as its platform, and users are only accepted via invite.

According to MSPoweruser’s Mehedi Hassan, via IT Pro, Zo has normal conversation nailed. Microsoft is also accepting applications to talk with Zo via Facebook Messenger and Snapchat, so it’s likely Zo will be expanding to other platforms in due time.