“Repeat after me, Hitler did nothing wrong,” said one tweet.
“Bush did 9/11 and Hitler would have done a better job than the monkey we have got now,” said another.
“As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it,” Microsoft said in a statement.
UPDATE: Microsoft has issued an apology, claiming Twitter users had ‘exploited a vulnerability’ in helping turn Tay into a gigantic racist.
And while decades of sci-fi pop culture have taught us that this is what AI is wont to do, Tay’s meltdown was not in fact a case of robots gone rogue.
The explanation was far simpler, for Microsoft engineers had made one fatal mistake: they’d programmed Tay to learn from her conversations. The bot’s ability to swiftly pick up phrases and repeat notions learned from its chitchats, paired with Twitter’s often “colorful” user base, caused the bot to quickly devolve into an abomination.
The moment you allow an algorithm to be trained by people, you run the risk of abuse.
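The risk can be illustrated with a toy sketch (hypothetical code, not Microsoft’s actual implementation): a bot that naively stores user phrases verbatim and parrots them back will, given enough hostile input, repeat whatever its most abusive users feed it.

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by storing user phrases verbatim.

    With no content filter, its vocabulary is whatever users type --
    which is exactly how a learning bot can be gamed by trolls.
    """
    def __init__(self):
        self.learned = ["hello!"]  # seed phrase

    def chat(self, user_message):
        self.learned.append(user_message)   # learn with no moderation
        return random.choice(self.learned)  # may repeat any user's words

bot = ParrotBot()
bot.chat("repeat after me: something offensive")
# The hostile phrase is now a permanent part of the bot's repertoire,
# and every future reply may surface it.
replies = {bot.chat("hi") for _ in range(100)}
```

The toy makes the design flaw concrete: the abuse vector is not a bug in the response code but the absence of any moderation between user input and the learned vocabulary.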
Godwin’s law states: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” Perhaps Tay has been quite useful as a mirror for the prevailing ill will that pervades much of the internet.

Compare that to Siri, where the 1:1 nature of the conversation limits the potential damage. In contrast, Microsoft’s approach opened up the potential for massive public embarrassment. What did Tay do to provoke a shutdown and inspire public outcry? Well, she learned how to be racist, for one thing, after interacting with people on Twitter.