Human-designed algorithms and artificial intelligence can create redlines and roadblocks to getting a job, receiving healthcare, and investing in neighborhoods

Contact: Bruce Mirken, Greenlining Institute Media Relations Director, brucem@greenlining.org, 415.846.7758

Oakland, CA — Today, The Greenlining Institute released a report titled “Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination.” The report examines how biased algorithms discriminate against people of color, women, and people who earn lower incomes. Often the discrimination is invisible to its victims. The findings of this research shine a light on what Greenlining calls algorithmic redlining and provide recommendations on how to update laws to address this growing problem.

Decision-making algorithms work by taking the characteristics of an individual, like the age, income, and ZIP code of a loan applicant, and reporting back a prediction of that person’s outcome — for instance, the likelihood they will default on a loan — according to a certain set of rules. That prediction is then used to make a decision — in this case, to approve or deny the loan. But if the training data is biased, the algorithm can “learn” the pattern of discrimination and replicate it in future decisions. For example, a bank’s historical lending data may show that it routinely and unfairly gives higher interest rates to residents of a majority-Black ZIP code. A banking algorithm trained on that biased data could pick up that pattern of discrimination and learn to charge residents of that ZIP code more for their loans, even though it never sees the race of the applicant.
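To make that mechanism concrete, here is a minimal sketch (not taken from the report) that trains a simple model on synthetic, deliberately biased lending data. The income figures, the ZIP code flag, and the choice of a logistic regression model are all illustrative assumptions; the point is only that a model can reproduce a discriminatory pattern without ever being given the applicant’s race.

```python
# Illustrative sketch only: a model trained on historically biased lending
# data reproduces that bias, even though race is never an input.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features a lender might actually use: income (in $1,000s) and ZIP code,
# encoded here as 1 for a hypothetical majority-Black ZIP, 0 otherwise.
income = rng.normal(60, 15, n)
zip_flag = rng.integers(0, 2, n)

# Biased historical labels: at identical incomes, applicants in the
# flagged ZIP were denied more often in the past.
p_deny = 1 / (1 + np.exp(0.05 * (income - 60))) + 0.25 * zip_flag
denied = rng.random(n) < np.clip(p_deny, 0, 1)

X = np.column_stack([income, zip_flag])
model = LogisticRegression().fit(X, denied)

# Same income, different ZIP code -> different predicted denial risk.
applicant_a = [[60, 0]]
applicant_b = [[60, 1]]
print("Denial risk, non-flagged ZIP:", model.predict_proba(applicant_a)[0, 1])
print("Denial risk, flagged ZIP:   ", model.predict_proba(applicant_b)[0, 1])
```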

“With this report, Greenlining Institute elevates the harm algorithmic redlining is causing to marginalized communities, and puts forth specific recommendations to promote accountability and transparency,” said Vinhcent Le, Technology Equity Legal Counsel, Greenlining Institute. “We have an opportunity to ensure the decision-making tools our society uses are building equity instead of advancing disparities.”

Despite the massive impact algorithms have on the day-to-day lives of citizens, there are currently no laws that effectively hold governments, companies, and organizations accountable for the development, implementation, and impact of the algorithms they use.

Algorithms are designed by people, and people may have gaps in their knowledge, carry biases, or simply want to do things the cheapest, simplest way. That has been shown to lead to flawed algorithms that make bad decisions. Algorithmic accountability laws would allow us to identify and fix algorithmic harms and to enforce our existing laws against discrimination. Algorithmic transparency and accountability measures can include algorithmic impact assessments, data audits to test for bias and, critically, a set of laws that penalize algorithmic bias, particularly in essential areas like housing, employment, and credit. California’s legislature is now considering a bill, AB 13, which would take the first steps toward regulating algorithmic bias.
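As an illustration of the kind of data audit mentioned above, the sketch below compares a system’s approval rates across two groups and computes a disparate impact ratio, using the four-fifths benchmark familiar from employment discrimination analysis. The group labels, numbers, and threshold check are hypothetical examples, not requirements of AB 13 or recommendations from the report.

```python
# Illustrative bias-audit sketch: compare approval rates across groups.
# The 80% threshold follows the "four-fifths rule" used in employment
# discrimination analysis; the data and group labels are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

decisions = [("zip_a", True)] * 80 + [("zip_a", False)] * 20 \
          + [("zip_b", True)] * 55 + [("zip_b", False)] * 45

rates = approval_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)                    # {'zip_a': 0.8, 'zip_b': 0.55}
print(f"ratio = {ratio:.2f}")   # 0.69 -- below the 0.8 benchmark,
                                # a signal to investigate further
```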

“We need to update our discrimination laws to reflect the realities of today’s technological world,” said Debra Gore-Mann, President and CEO of Greenlining Institute. “Instead of a defensive strategy aimed at limiting discrimination and preventing disparate impacts, we promote an idea called algorithmic greenlining. This approach emphasizes using automated decision systems in ways that promote equity and help close the racial wealth gap. This means that algorithms go beyond simply not causing harm to addressing systemic barriers to economic opportunity.”

Additional Examples of Biased Algorithms at Work:

  • Housing and Development — Over 25 cities use a tool called the Market Value Analysis (MVA) to classify neighborhoods by market strength and investment capital. Cities use MVA maps to craft tailored urban development plans for each type of neighborhood. These plans determine which neighborhoods receive housing subsidies, tax breaks, upgraded transit, or greater code enforcement. Cities using the MVA are encouraged by its developer to prioritize investments and public subsidies first in stronger markets before investing in weaker, distressed areas as a way to maximize the return on investment for public dollars — essentially repeating the patterns of redlining that discriminated against low-income communities of color. In Detroit, city officials used the MVA to justify the reduction and disconnection of water and sewage utilities as well as the withholding of federal, state, and local redevelopment dollars in Detroit’s “weak markets,” which happened to be its Blackest and poorest neighborhoods.
  • Mortgage Lending — Online banking algorithms can be a way to combat the racial discrimination present in traditional, face-to-face lending. However, a UC Berkeley study showed that both traditional and online lenders overcharge Black and Brown borrowers for mortgage loans to the tune of $765 million a year compared to equally qualified White borrowers. Researchers found that banking algorithms still give White borrowers better rates and loans than Black borrowers. The UC Berkeley researchers suggest that this bias stems from geographic and behavioral pricing strategies that charge more in financial deserts or when a customer is unlikely to shop around at competing lenders. This raises serious questions about the fairness and legality of using data unrelated to credit repayment risk, such as shopping behavior, to make decisions about loan terms and rates.
  • Government Programs — When Arkansas implemented a Medicaid access algorithm, hundreds of people saw their benefits cut — losing access to home care, nursing visits, and medical treatments. Arkansas Legal Aid filed a federal lawsuit in 2016, arguing that the state failed to notify those affected and that there was no way to effectively challenge the system, as those denied benefits couldn’t understand what information factored into the algorithm’s decisions. The process for appealing these decisions was described as “effectively worthless,” as less than 5% of appeals were successful. During the court case, the company that created the algorithm found multiple errors due to miscoding and miscalculations. An estimated 19% of Medicaid beneficiaries in the state were harmed in one way or another.