Here's why Microsoft's teen chatbot turned into a genocidal racist, according to an AI expert


An artificial intelligence (AI) expert has explained what went wrong with Microsoft's new AI chatbot on Wednesday, suggesting that the bot could have been programmed to blacklist certain words and phrases.

Microsoft designed "Tay" to respond to users' queries on Twitter with the casual, jokey speech patterns of a stereotypical millennial. But within hours of launching, the "teen girl" AI had turned into a Hitler-loving sex robot, forcing Microsoft to embark on a mass-deleting spree.

AI expert Azeem Azhar told Business Insider: "There are a number of precautionary steps they [Microsoft] could have taken. It wouldn't have been too hard to create a blacklist of terms; or narrow the scope of replies. They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses."

If Microsoft had thought about these steps when programming Tay, then the AI would have behaved differently when it launched on Twitter, Azhar said.
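For illustration only, a minimal sketch of the kind of term blacklist and manual-moderation fallback Azhar describes might look like the following Python. The function names and the banned-terms list are hypothetical; Microsoft has not published details of how Tay generates or filters its replies.

# Sketch of a reply filter combining a term blacklist with a
# manual-moderation fallback. All names and terms are illustrative.

BLACKLISTED_TERMS = {"hitler", "genocide"}  # placeholder list


def screen_reply(candidate_reply: str) -> str:
    """Return 'send', 'block', or 'review' for a generated reply."""
    lowered = candidate_reply.lower()
    if any(term in lowered for term in BLACKLISTED_TERMS):
        return "block"   # never post replies containing banned terms
    if needs_human_review(candidate_reply):
        return "review"  # queue for a human moderator during the launch period
    return "send"


def needs_human_review(reply: str) -> bool:
    # Stand-in heuristic: flag long or all-caps replies for a human to check.
    return len(reply) > 200 or reply.isupper()


# Example: screen_reply("i love hitler") returns "block"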

Azhar, an Oxford graduate behind a number of technology companies and author of the Exponential View AI daily newsletter, continued: "Of course, Twitter users were going to tinker with Tay and push it to extremes. That's what users do — any product manager knows that.

"This is an extension of the Boaty McBoatface saga, and runs all the way back to the Hank the Angry Drunken Dwarf write in during Time magazine's Internet vote for Most Beautiful Person. There is nearly a two-decade history of these sort of things being pushed to the limit."

[Screenshot of a Tay tweet, via Twitter]

Azhar said that Tay highlights a more serious point. "AIs are going to need to learn and interact somewhere akin to the real world," he said. "Equally, if we allow AI systems unexpurgated access to the 'real world' while they are learning, there could be ramifications. Twitter seems harmless, if offensive, and no one believes Tay or Microsoft is genocidal. But what if this was an AI driving bids on the stock market or triaging patients in a hospital?"

Azhar added that businesses and other AI developers will need to give more thought to the protocols they design for testing and training AIs like Tay. "'TayGate', a case study in getting it wrong, is a useful petri-dish precedent for more substantial questions we'll deal with in the future," he said.

[Screenshot of a Tay tweet, via Twitter]

In an emailed statement, a Microsoft representative said: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."
