Just a day after its launch, Microsoft’s chatbot “Tay” has been laid to rest for the time being. In a strange turn of events, Tay began posting offensive tweets, prompting an apology from Peter Lee, corporate vice president of Microsoft Research. He expressed remorse over the tweets and said the chatbot will come back only once the issues that led to the shutdown are resolved.
Peter Lee said, “As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
Lee also noted that Tay is the company’s second public AI release, after XiaoIce in China. He added that XiaoIce is used by some 40 million people in China, and that Tay was an attempt to see how this type of AI would adapt to a different cultural environment.
According to Lee, Tay was stress-tested for exploits before its public release. But the research team apparently overlooked the specific vulnerability that let bad actors coax the chatbot into repeating racist and offensive statements.
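Lee’s post does not describe the flaw itself, but the behavior he reports matches a well-known class of bug: an echo-style feature that reposts user-supplied text without any content screening. The sketch below is a minimal, purely hypothetical illustration of that class; the respond function, the “repeat after me” trigger, and the placeholder blocklist are assumptions made for the example, not Microsoft’s actual code.

```python
# Hypothetical sketch of the vulnerability class described above: a chatbot
# command that echoes user-supplied text verbatim, plus the kind of screening
# step whose absence makes it exploitable. Illustrative only; not Tay's code.

BLOCKLIST = {"offensive_term_1", "offensive_term_2"}  # placeholder entries

def respond(message: str) -> str:
    """Echo anything prefixed with 'repeat after me:' back to the user."""
    prefix = "repeat after me:"
    if message.lower().startswith(prefix):
        payload = message[len(prefix):].strip()
        # The missing safeguard: screen the payload before reposting it.
        # Without this check, the bot repeats whatever a bad actor feeds it.
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return payload
    return "Tell me more!"

print(respond("repeat after me: hello, world"))  # -> "hello, world"
```

Even a check like this is simplistic, and real content moderation is far more involved, which may help explain how the exploit slipped past stress testing.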