Saturday, March 28, 2020

Speech recognition systems from five tech companies are biased against people of color, study reveals

Speech recognition systems carry deep-rooted bias against people of color, a new study reveals.

Stanford researchers found these technologies from Amazon, Apple, Google, IBM and Microsoft make twice as many errors when interpreting speech from black people as they do with speech from white people.

The team fed the systems nearly 2,000 speech samples from 115 individuals, 42 white and 73 black, and found the average error rate was 19 percent for white speakers and 35 percent for black speakers.
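The figures above are word error rates, the standard accuracy metric for speech recognition: the number of word-level substitutions, insertions and deletions needed to turn the system's transcript into the correct one, divided by the length of the correct transcript. A minimal sketch of how that metric is computed (an illustration, not the study's own code):

```python
# Word error rate (WER): word-level edit distance between the
# reference transcript and the system's output, divided by the
# number of words in the reference.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words ≈ 0.17
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A 35 percent rate for black speakers means roughly one word in three was transcribed wrongly, dropped, or invented.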

Apple was found to perform the worst of the group, with a 45 percent error rate for black speakers and 23 percent for white speakers.

Those involved with the study believe the inaccuracies stem from the datasets used to train the systems, which are built predominantly from speech by white people.

Stanford researchers found AI-powered voice recognition technologies from Amazon, Apple, Google, IBM and Microsoft make twice as many errors when interpreting speech from black people as from white people

Stanford University released the study Monday. It used recordings of black speech from the Corpus of Regional African American Language, while samples from white people came from Voices of California, a collection of recorded interviews with residents of different parts of California.

‘Automated speech recognition (ASR) systems are now used in a variety of applications to convert spoken language to text, from virtual assistants, to closed captioning, to hands-free computing,’ wrote the study’s authors. 

‘Our results point to hurdles faced by African Americans in using increasingly widespread tools driven by speech recognition technology.’ 

The study showed Microsoft’s system was most accurate, with a 15 percent error rate for white speakers and 27 percent for black speakers.

The team fed the systems nearly 2,000 speech samples from 115 individuals, 42 white and 73 black, and found the average error rate was 19 percent for white speakers and 35 percent for black speakers

And Apple’s technology was found to perform the worst, with a 45 percent error rate for black speakers and 23 percent for white speakers. 

Lead author of the study, Allison Koenecke, said: ‘But one should expect that U.S.-based companies would build products that serve all Americans.’

‘Right now, it seems that they’re not doing that for a whole segment of the population.’

Koenecke and her team suggest the errors from all five of the tech giants are due to the systems being trained on English-language data as spoken by white Americans.

‘A more equitable approach would be to include databases that reflect a greater diversity of the accents and dialects of other English speakers,’ the researchers shared in a statement.

Sharad Goel, a professor of computational engineering at Stanford who oversaw the work, said the study highlights the need to audit new technologies such as speech recognition for hidden biases that may exclude people who are already marginalized.

Such audits would need to be done by independent external experts, and would require a lot of time and work, but they are important to make sure that this technology is inclusive.

‘We can’t count on companies to regulate themselves,’ Goel said.

‘That’s not what they’re set up to do.’

‘I can imagine that some might voluntarily commit to independent audits if there’s enough public pressure.’

‘But it may also be necessary for government agencies to impose more oversight. People have a right to know how well the technology that affects their lives really works.’

HOW DOES ARTIFICIAL INTELLIGENCE LEARN?

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognise patterns in information - including speech, text data, or visual images

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge. 

A newer approach, known as generative adversarial networks (GANs), pits two neural networks against each other, allowing them to learn from one another.

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
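The 'teach an algorithm by feeding it examples' idea described in this box can be sketched with a toy example: a single artificial neuron that learns the logical AND pattern from four labeled examples (an illustration of the principle, not any company's system):

```python
# Each example pairs an input with the correct label (the AND of the inputs).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias term
lr = 0.1        # learning rate: how big each correction is

for _ in range(20):  # repeatedly show the neuron the examples
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        # Nudge the weights toward the correct answer when wrong
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the neuron reproduces the pattern it was shown
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The same weakness the study identifies follows directly from this setup: the neuron can only learn patterns present in the examples it is fed, so speech it was never shown is handled poorly.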

 

Powered by: Daily Mail
