Bias in Machine Learning

By Data Science Renee | Articles about bias in machine learning algorithms and data science, and how to combat it

You Say Data, I Say System

In the fall of 2009, I wrote a pair of algorithms to place nearly 3,000 names on the 9/11 memorial in Manhattan. The crux of the problem was to …

Big Data

Two Giants of AI Team Up to Head Off the Robot Apocalypse

There’s nothing new about worrying that superintelligent machines may endanger humanity, but the idea has lately become hard to avoid. A spurt of …

Artificial Intelligence

A pioneering computer scientist wants algorithms to be regulated like cars, banks, and drugs

It’s convenient when Facebook can tag your friends in photos for you, and it’s fun when Snapchat can apply a filter to your face. Both are examples of algorithms that have been trained to recognize eyes, noses, and mouths with consistent accuracy. When these programs are wrong—like when Facebook …

'A white mask worked better': why algorithms are not colour blind

When Joy Buolamwini found that a robot recognised her face better when she wore a white mask, she knew a problem needed fixing. Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in …

Color Blindness

Why UX Design For Machine Learning Matters

How you make machine learning transparent to users is one of the great design challenges of our time—but a necessary one. Machine learning is going to radically change product design. But what is the future of machine learning? Is it the singularity, flying cars, voiceless commands, or an Alexa that …

Python Meets Plato: Why Stanford Should Require Computer Science Students to Study Ethics

by Antigone Xenopoulos. When he was 21, Bill Sourour, a programmer and teacher, was hired by a marketing firm to build a website for a pharmaceutical …

Computer Science

Investigating Bias In AI Language Learning

A new study has revealed that AI systems, such as Google Translate, acquire the same cultural biases as humans. While this isn't a surprising finding, …

Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies”

These two female technologists discuss the promises of artificial intelligence — and how to diversify the field. Artificial intelligence has a …

Taser Will Use Police Body Camera Videos “to Anticipate Criminal Activity”

When civil liberties advocates discuss the dangers of new policing technologies, they often point to sci-fi films like “RoboCop” and “Minority …

Courts Are Using AI to Sentence Criminals. That Must Stop Now

There is a stretch of highway through the Ozark Mountains where being data-driven is a hazard. A WIRED opinion piece by Jason Tashea (@justicecodes), a …

Intelligence

New England Machine Learning Hackathon: Hacking Bias in ML

Icelandic language at risk; robots, computers can't grasp it

When an Icelander arrives at an office building and sees "Sólarfrí" posted, they need no further explanation for the empty premises: The word means …

Linguistics

AI programs exhibit racial and gender biases, research reveals

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say. An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking …

Machine Learning
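
For a hands-on feel for the finding above, here is a minimal Python sketch. It uses invented three-dimensional "embeddings" rather than the study's actual data or method, and simply shows how cosine-similarity associations between words can differ across gendered terms, which is the kind of pattern these studies measure at scale.

    # A minimal sketch (invented toy vectors, not the study's data or method):
    # cosine-similarity associations can encode bias absorbed from text.
    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Hypothetical 3-d "embeddings"; real models learn hundreds of dimensions
    # from large text corpora.
    vectors = {
        "career": np.array([0.9, 0.1, 0.0]),
        "family": np.array([0.1, 0.9, 0.0]),
        "he":     np.array([0.8, 0.2, 0.1]),
        "she":    np.array([0.2, 0.8, 0.1]),
    }

    def association(word, attr_a, attr_b):
        # Positive: the word sits closer to attr_a than to attr_b.
        return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

    for w in ("he", "she"):
        print(w, round(association(w, "career", "family"), 3))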

Bias test to prevent algorithms discriminating unfairly

Computers are getting ethical. A new approach for testing whether algorithms contain hidden biases aims to prevent automated systems from …

When Recommendation Systems Go Bad (Estola, MLConf Seattle 2016)

My talk from MLConf Seattle 2016. When Recommendation Systems Go Bad: Machine learning and recommendation systems have changed the way we interact with not just the internet, but some of the basic products and services that we use to run our lives. While the reach and impact of big data and …

Can Alexa Lie?

There was a recent tabloid piece featuring a video of a woman asking Alexa if it was connected to the CIA. At the time, the Echo Dot she was speaking …

Artificial Intelligence

How I'm fighting bias in algorithms

MIT grad student Joy Buolamwini was working with facial recognition software when she noticed a problem: the software didn't recognize her face — …

Machine Learning

Data-driven crime prediction fails to erase human bias

Big data is everywhere these days and police departments are no exception. As law enforcement agencies are tasked with doing more with less, many are …

Machine Learning

Data and Discrimination

Despite significant political and cultural transformations since the Civil Rights movement and other social upheavals of the Sixties and Seventies, …

Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say

The racial bias that ProPublica found in a formula used by courts and parole boards to forecast future criminal behavior arises inevitably from the …

Research
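
The arithmetic behind that result can be sketched in a few lines. The Python snippet below uses invented numbers, not the actual risk-score data, to show that if a score keeps precision and recall equal across two groups whose underlying reoffense rates differ, the false positive rates come out different.

    # A small arithmetic sketch (all numbers invented). If a risk score has the
    # same precision (chance that a flagged person reoffends) and the same
    # recall in two groups with different base rates, confusion-matrix
    # identities force different false positive rates.

    def false_positive_rate(prevalence, precision, recall):
        true_pos = prevalence * recall            # true positives per person
        flagged = true_pos / precision            # everyone flagged per person
        false_pos = flagged - true_pos            # false positives per person
        return false_pos / (1.0 - prevalence)

    precision, recall = 0.7, 0.6                  # held equal for both groups
    for name, prevalence in [("group A", 0.5), ("group B", 0.2)]:
        print(name, round(false_positive_rate(prevalence, precision, recall), 3))
    # group A 0.257, group B 0.064: equal calibration, unequal error rates.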

Socially Responsible Algorithms at Data Science DC

Hey Microsoft, the Internet Made My Bot Racist, Too

It all happened so quickly! First, Microsoft reveals an amazing new bot that learns from you, the public! A sophisticated, deep-learning Twitter bot. …

When an algorithm isn’t…

The popular press is full of articles about “algorithms” and “algorithmic fairness” and “algorithms that discriminate (or don’t)”. As a computer …

Why Machines Discriminate—and How to Fix Them

IRA FLATOW: This is Science Friday. I’m Ira Flatow. Let’s say you apply for a new job. Would you rather have your resume judged by a person or an …

Face recognition failure: Georgia DMV denies twins' permit 'cause computer sees them as one

Twin sisters attempting to apply for driving permits have exposed a flaw in facial-recognition software used by the state of Georgia. The program …

Keynote: Consequences of an Insightful Algorithm - Ruby Conference 2015

Coders have ethical responsibilities. We can extract remarkably precise intuitions about people. Do we have a right to know what they didn't consent …

[1412.3756] Certifying and removing disparate impact

Statistics > Machine Learning. Abstract: What does it mean for an algorithm to be biased? In U.S. law, …
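
The U.S. legal notion the abstract alludes to is commonly operationalised as the "four-fifths" (80%) rule: compare the rate of favorable outcomes between groups and flag ratios below 0.8. Here is a minimal Python check with made-up counts; the function name and numbers are just for illustration.

    # Disparate impact ratio: selection rate of the unprivileged group divided
    # by that of the privileged group. Counts below are invented.

    def disparate_impact_ratio(selected_unpriv, total_unpriv, selected_priv, total_priv):
        rate_unpriv = selected_unpriv / total_unpriv
        rate_priv = selected_priv / total_priv
        return rate_unpriv / rate_priv

    ratio = disparate_impact_ratio(selected_unpriv=30, total_unpriv=100,
                                   selected_priv=60, total_priv=100)
    print(f"disparate impact ratio: {ratio:.2f}")              # 0.50
    print("below the 0.8 threshold" if ratio < 0.8 else "at or above 0.8")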

Bias in Sensitivity and Specificity Caused by Data-Driven Selection of Optimal Cutoff Values: Mechanisms, Magnitude, and Solutions | Clinical Chemistry

Abstract. Background: Optimal cutoff values for test results involving continuous variables are often derived in a data-driven way. This approach, …
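
The mechanism is easy to reproduce in simulation. The sketch below, which uses synthetic data rather than anything from the paper, picks the cutoff that maximises Youden's J on a small sample and then shows that the apparent sensitivity and specificity are optimistic compared with a large independent sample.

    # Simulation sketch (synthetic data, not the paper's analysis) of optimism
    # bias from data-driven cutoff selection.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n):
        # Synthetic biomarker: diseased values shifted upward by one unit.
        return rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)

    def sens_spec(cutoff, diseased, healthy):
        sensitivity = np.mean(diseased > cutoff)
        specificity = np.mean(healthy <= cutoff)
        return sensitivity, specificity

    train_d, train_h = sample(50)        # small derivation sample
    valid_d, valid_h = sample(100_000)   # large independent sample

    # "Optimal" cutoff chosen on the small sample by maximising Youden's J.
    cutoffs = np.linspace(-3, 4, 500)
    j = [sum(sens_spec(c, train_d, train_h)) - 1 for c in cutoffs]
    best = cutoffs[int(np.argmax(j))]

    print("apparent (derivation sample):", sens_spec(best, train_d, train_h))
    print("validated (new sample):      ", sens_spec(best, valid_d, valid_h))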

Precision and recall

In pattern recognition and information retrieval with binary classification, precision (also called positive predictive value) is the fraction of relevant …
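
The definitions reduce to two ratios over the confusion matrix. A minimal Python example with arbitrary counts:

    # Precision: fraction of flagged items that are truly relevant.
    # Recall: fraction of relevant items that were flagged.
    # Counts below are arbitrary illustration values.

    def precision(tp, fp):
        return tp / (tp + fp)

    def recall(tp, fn):
        return tp / (tp + fn)

    tp, fp, fn = 40, 10, 20
    print(f"precision = {precision(tp, fp):.2f}")   # 0.80
    print(f"recall    = {recall(tp, fn):.2f}")      # 0.67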