Microsoft’s chatbot makes MIT’s worst-tech list


Tay turned into a big controversy for Microsoft. (Microsoft Illustration)

Tay, the Microsoft chatbot that pranksters trained to spew racist comments, has joined the likes of the Apple Watch and the fire-prone Samsung Galaxy Note 7 smartphone on MIT Technology Review’s list of 2016’s biggest technology failures.

Tay had its day back in March, when it was touted as a millennial-minded AI agent that could learn more about the world through its conversations with users. It learned about human nature all too well: Mischief-makers fed its artificial mind with cuss words, racism, Nazi sentiments and conspiracy theories. Within 24 hours, Microsoft had to pull Tay offline.

Other technological missteps were rated as fails because they didn’t take off as expected, as was the case for Apple’s smartwatch; or because they took off in flames, like the batteries in the Samsung phone.

Get the full story on GeekWire.

About Alan Boyle

Award-winning science writer, creator of Cosmic Log, author of "The Case for Pluto: How a Little Planet Made a Big Difference," president of the Council for the Advancement of Science Writing. Check out "About Alan Boyle" for more fun facts.
This entry was posted in GeekWire.
