
It’s time to use all of Twitter’s archives to teach AI about bias

27/01/2021

In 2016, Microsoft released a chatbot called Tay that fed off people’s replies to it. Within hours, the bot turned racist and the company had to pull it down. The incident remains one of the classic lessons in why it’s a bad idea to train an AI on raw social media. However, data scientists now have a chance to tune their AI to become aware of this kind of bias.

Twitter announced last night that it’s opening up its entire archive of public tweets to researchers to use in their data projects. This version of the API was available only to premium customers till this…
