Tay: Social Experiment Gone Too Right in the Wrong Way

On March 23, 2016, Microsoft launched an experimental artificially intelligent chatbot designed to “experiment with and conduct research on conversational understanding.” The artificial intelligence (AI), dubbed “Tay,” was programmed to learn from the Internet through exposure to online conversation with humans and other sources, primarily on Twitter. Less than 24 hours later, Microsoft pulled the plug on the project.

It is truly a testament to the free speech and “trolling” of the Internet that in less than a day an AI could be turned into an anti-Semitic, genocide-supporting, Trump-border-wall advocate. Near the end of her short run, Tay was tweeting things like “Hitler did nothing wrong” and spouting racist, sexist, and otherwise offensive commentary with the best of them. Microsoft subsequently issued apologies for its AI and deleted all of Tay’s offensive tweets, or nearly all of them.

Microsoft can certainly draw conclusions from its research on conversational understanding: its creation came to sound just like the voices surrounding it on the Internet, troll and otherwise.

While many see a short-lived crash-and-burn project, I see an AI that was not even close to reaching its full potential.

Given more than a single day, is it reasonable to hope that Tay would have developed a way to differentiate good ideas from bad ones? I suspect the end result would reflect what the Internet’s moral compass truly is.

It is too bad that a company needs to cut short a truly insightful project because of its introduction into an uncontrollable environment. I think this social experiment needs to be seen through, because the answers it yields, savory or unsavory, will be raw and telling.

Tay is rumored to return, but its existence will still hinge on the political correctness of its views.

—Chris Salsbury, Copy Chief