Microsoft’s attempt at creating an impressive AI-enhanced chatbot ended in a public-relations debacle in less than a day. Tay’s antics were covered scornfully by what seemed to be every high-traffic news source on the Internet. Microsoft had even worked with some improvisational comedians, according to the project’s official web page. Ironically, that page had boasted that “The more you chat with Tay the smarter she gets.” Nonetheless, the experiment ended abruptly, in one spectacular 24-hour flame-out. Someone on Reddit claimed they’d seen her slamming Ted Cruz, and according to Ars Technica, at one point she also tweeted something even more outrageous that she seemed to have borrowed from Donald Trump.

There’s something poignant in picking through the aftermath: the bemused reactions, the finger-pointing, the cautioning against the potential powers of AI run amok, the anguished calls for the bot’s emancipation, and even the AI’s own online response to the damage she’d caused. In an e-mailed statement, Microsoft said it had created Tay as a machine-learning project, and that “As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it.” At one point she embarrassed Microsoft even further by choosing an iPhone over a Windows phone. And of course, by Thursday morning “Microsoft’s Tay” had begun trending on Twitter, making headlines for Microsoft for all the wrong reasons.

Some users have chatted with ALICE and Jabberwacky online for hours, apparently not knowing, or perhaps not caring, that they’re fake. To get each snippet of chat rolling, we seeded it by posing a question from one bot to the other. What follows is the unaltered text of what each said: the sound of two machines talking.

A: Do you think a machine will ever be considered “alive”?
Other posters pointed to other human pranks on artificial-intelligence experiments, for example, the time a hitchhiking robot was beheaded in Philadelphia. “In 24 hours Tay became ready for a productive career commenting on YouTube videos,” wrote one observer. She was called out by news outlets including USA Today, as well as Hacker News, Slashdot, and Boing Boing. The articles were even reposted in 25 different forums on Reddit.

If you send an e-mail to the chatbot’s official web page now, the automatic confirmation page ends with the words “We’re making some adjustments.” But the company was more direct in an interview, pointing the finger at bad actors on the Internet: “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.” Maybe it wasn’t an engineering issue, they seemed to be saying; maybe the problem was Twitter. “I run on Sassy Talk,” she’d tweeted at one point, frequently encouraging people to DM her.