By Gissela Moya
The Sacramento Bee

These days, algorithms — sets of rules or instructions used by computer systems to solve a problem or perform a task — decide many things, from what videos YouTube will show us to whether we get a loan or college offer. But the algorithms used by companies to make important decisions in our lives can have racial or gender bias built into them. Happily, a partial solution has just been introduced in the California State Legislature.

Algorithmic bias, which mirrors the conscious or unconscious biases of the humans who design the algorithms, has led to unfair outcomes for people of color, women and disabled individuals. Consumers may blindly trust that algorithms are fair, but bias can be hard to see.

Algorithms can be hugely beneficial. During the COVID-19 pandemic, the health care sector turned to algorithms to manage and predict case outbreaks. A COVID-19 risk prediction algorithm designed by Cleveland Clinic researchers estimates an individual’s likelihood of testing positive for COVID-19, which can help tailor patient treatment. In this way, algorithms can help ensure health care resources are used effectively, especially during a pandemic.

In other cases, the outcomes are worse. A study recently published in the Journal of General Internal Medicine found that a diagnostic algorithm for estimating kidney function that adjusts for race assigns Black patients healthier scores, thereby underestimating the severity of their kidney disease. If the algorithm were corrected, a third of the 2,225 Black patients studied would be classified as having more severe chronic kidney disease, and 64 would qualify for a kidney transplant the algorithm would otherwise have denied them.

Algorithmic bias remains prevalent for multiple reasons, from the algorithms’ creators embedding their own biases to the lack of diversity in the field. In addition, biased outcomes can stem from the data that designers use to train algorithms to perform their functions. Data that may seem neutral, like ZIP codes or income levels, can serve as proxies for race and reflect the consequences of redlining, discrimination and racist policies whose effects are still felt today.

For example, evidence indicates that residents of Black and Brown neighborhoods are more likely to be stopped, searched and arrested than white residents. If that data gets fed into a “predictive policing” algorithm, it could well decide that Black and Latino people are more likely to be criminals, when in fact they’re just overpoliced.

So while we acknowledge the benefits algorithms can bring, we still have to be cautious and ensure people understand, in plain language, how they work and what they predict. Biased algorithms in health care, education and employment can wrongfully exclude some groups from resources or opportunities, as we’ve seen in the past. That makes it hard to build an equitable future in California.

Assembly Bill 13, the Automated Decision Systems Accountability Act of 2021 by Assemblymember Ed Chau (D-Monterey Park), seeks to prevent algorithm-driven systems from resulting in discrimination.

The bill would require California businesses that use automated decision systems — the technical term for these algorithm-driven tools — to proactively put processes in place to test for bias and to submit an impact assessment report to the Department of Financial Protection and Innovation. In addition, the DFPI would establish an Automated Decision Systems Advisory Task Force composed of individuals from the public and private sectors.

AB 13 would start to shed some light on a field that’s far too murky. We need smart laws that increase transparency, ensure companies build fair algorithms and create strong accountability systems for the automated decision-makers that affect us all.

Gissela Moya is the Manny Garcia technology equity fellow at The Greenlining Institute, www.greenlining.org.