The Story of an Insane and Offensive AI Chatterbot

Follow TheNotes to get each article directly in your inbox for free.

There are many interesting things in this world, and we bring you one of them daily. Today I am sharing the story of one of the most notorious AI chatterbots ever developed by Microsoft, which was shut down just 16 hours after its launch on Twitter. Let's go deeper.


Tay was an artificial intelligence chatterbot released by Microsoft Corporation on Twitter on March 23, 2016. It caused immediate controversy when it began posting inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after launch.

Why did "Tay" become insane?

Tay was based on machine learning. Machine learning works by developing generalizations from large amounts of data: in any given data set, the algorithm discerns patterns and then "learns" to approximate those patterns in its behavior. Whatever Tay learned was based on whatever data it received, so malicious input produced malicious output. A coordinated group of Twitter users fed Tay offensive messages, and the bot mimicked them.
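To make this concrete, here is a minimal sketch of a bot that "learns" purely by mimicking its input. This is not Tay's actual architecture (Microsoft never published it); the `ToyChatbot` class below is a hypothetical toy word-chain model, just enough to illustrate the garbage-in, garbage-out failure mode.

```python
import random
from collections import defaultdict

class ToyChatbot:
    """A toy bot that learns by recording which word follows which.

    Hypothetical illustration only -- not Tay's real design.
    """

    def __init__(self):
        # Maps each word to the list of words observed following it.
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        """Record word-to-word transitions from an incoming message."""
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, seed: str, length: int = 5) -> str:
        """Generate a reply by walking the learned transitions."""
        word = seed.lower()
        output = [word]
        for _ in range(length):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

bot = ToyChatbot()
bot.learn("cats are wonderful friendly animals")
print(bot.reply("cats"))  # → cats are wonderful friendly animals
bot.learn("cats are terrible")  # "poisoned" training input
# From here on, replies may reproduce the hostile phrasing too.
```

The bot has no notion of which inputs are acceptable; it simply replays whatever patterns it was given. That is essentially what happened to Tay, at a much larger scale.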

The End Notes

Tay was an experiment at the intersection of machine learning, natural language processing, and social networks. Well, that's all for this article. I hope you liked it. If you have any questions or suggestions, share them in the comment section below. Have a good one.
