Microsoft apologizes for its racist chatbot's 'wildly inappropriate and reprehensible words'

Microsoft apologized for the racist and "reprehensible" tweets made by its chatbot and promised to keep it offline until the company is better prepared to counter malicious efforts to corrupt the bot's artificial intelligence.

In a blog entry on Friday, Microsoft Research head Peter Lee expressed regret for the conduct of the company's AI chatbot, named Tay, explaining that the bot fell victim to a "coordinated attack by a subset of people."

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee writes.

Earlier this week, Microsoft launched Tay, a bot ostensibly designed to talk to users on Twitter like a real millennial teenager and to learn from their responses.

But it didn't take long for things to go awry, and Microsoft was forced to delete her racist tweets and suspend the experiment.

"Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Lee writes.

An organized group of trolls on Twitter quickly taught Tay a slew of racial and xenophobic slurs. Within 24 hours of going online, Tay was professing her admiration for Hitler, proclaiming how much she hated Jews and Mexicans, and using the n-word quite a bit.

In the blog entry, Lee explains that Microsoft's Tay team was trying to replicate for an American audience the success of its Xiaoice chatbot, a smash hit in China with over 40 million users. Because the team had never run into this kind of problem with Xiaoice, Lee says, it didn't anticipate this attack on Tay.

And make no mistake, Lee says, this was an attack.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Lee writes.

Ultimately, Lee says, this is a part of the process of improving AI, and Microsoft is working on making sure Tay can't be abused the same way again.

"To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process," Lee writes.

It seems weird that Microsoft didn't see this coming. After all, it's been common knowledge for years that Twitter is a place where the worst of humanity congregates.

Still, it raises a lot of questions about the future of artificial intelligence: If Tay is supposed to learn from us, what does it say that she was so easily and quickly "tricked" into racism?

You can read Microsoft's full apology here.
