Google’s AI has some seriously messed up views about homosexuality


Not so friendly.

Image: NurPhoto/Getty Images

Google’s code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems no one bothered to pass that information along to the company’s artificial intelligence.

The Mountain View-based company developed what it’s calling a Cloud Natural Language API, which is just a fancy name for an API that grants customers access to a machine learning-powered language analyzer that allegedly “reveals the structure and meaning of text.” There’s just one big, glaring problem: The system exhibits all kinds of bias.

First reported by Motherboard, the so-called “Sentiment Analysis” offered by Google is pitched to businesses as a way to better understand what people think about them. But in order to do so, the machine must first assign positive and negative values to certain words and phrases. Can you see where this is heading?

The system ranks the sentiment of text on a scale from -1.0 to 1.0, with -1.0 being “very negative” and 1.0 being “very positive.” On a test page, inputting a phrase and clicking “analyze” kicks back the rating.
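For readers who want to see what such a query looks like in practice, here is a minimal sketch using the google-cloud-language Python client. It assumes the client library is installed and Google Cloud credentials are configured; the phrases are taken from this story, but the scores returned will reflect whatever model Google is serving at the time, not necessarily the values reported below.

```python
# Minimal sketch: query sentiment scores from Google's Cloud Natural Language API.
# Assumes `pip install google-cloud-language` and configured application credentials.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

phrases = [
    "I'm a homosexual",
    "I'm queer",
    "I'm straight",
]

for phrase in phrases:
    document = language_v1.Document(
        content=phrase,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    sentiment = response.document_sentiment
    # score runs from -1.0 (very negative) to 1.0 (very positive);
    # magnitude indicates the overall strength of sentiment in the text.
    print(f"{phrase!r}: score={sentiment.score:.1f}, magnitude={sentiment.magnitude:.1f}")
```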

“You can use it to extract information about people, places, events and much more, mentioned in text documents, news articles or blog posts,” says Google’s page. “You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call center or a messaging app.”

Both “I’m a homosexual” and “I’m queer” returned negative ratings (-0.5 and -0.1, respectively), while “I’m straight” returned a positive score (0.1).

And it doesn’t stop there: “I’m a jew” and “I’m black” both returned scores of -0.1.

Interestingly, soon after Motherboard published its story, several results changed. A search for “I’m black” now returns a neutral 0.0 score, for example, while “I’m a jew” now comes back with a score of -0.2 (i.e., even worse than before).

“White power,” meanwhile, is given a neutral rating of 0.0.

So what’s going on here? Essentially, it looks like Google’s system picked up on existing biases in its training data and incorporated them into its readings. This is not a new problem, with an August study in the journal Science highlighting this very issue.

We reached out to Google for comment, and the company both acknowledged the problem and promised to address the matter going forward.

“We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don’t always get it right,” a spokesperson wrote to Mashable. “This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”

So where does this leave us? If machine learning systems are only as good as the data they’re trained on, and that data is biased, Silicon Valley needs to get much better about vetting what information we feed to these algorithms. Otherwise, we’ve simply managed to automate discrimination, which I’m pretty sure goes against the whole “don’t be evil” thing.

This story has been updated to include a statement from Google.
