By Vinhcent Le
Governing

Imagine you live in a neighborhood that has long been under-resourced — “redlined” back in the days when such overt discrimination was both legal and encouraged by the federal government, and which has never fully recovered. And suppose your local government provides funding to support neighborhoods with everything from transit upgrades to rehabilitating dilapidated homes. You might expect your troubled neighborhood to be first in line for funding.

In more than two dozen U.S. cities, you could well be wrong. And it would be even more frustrating if you discovered that your neighborhood had been deprioritized not by a human official you could hold accountable but by an algorithm — an automated decision-making system that decided your community was a bad investment.

Yes, that has really happened.

Cities across the United States have begun using urban planning algorithms to classify neighborhoods by market strength and investment value, and then create tailored development plans for each — plans that determine which neighborhoods receive funding for services or infrastructure upgrades. But at least one widely used algorithm encourages users to prioritize investments and public subsidies in stronger, more prosperous markets before investing in weaker, distressed areas.

That is seen as a way to maximize return on investment for public dollars, but it can channel vitally needed funding away from the communities that need it most, typically those that have been subjected to both overt and covert discrimination. In Detroit, for example, city officials used a planning algorithm known as Market Value Analysis (MVA) to justify the reduction and disconnection of water and sewage utilities, plus the withholding of federal, state and local redevelopment dollars, in the city’s “weak markets,” which happened to be its Blackest and poorest neighborhoods. In Indianapolis, MVA recommendations made small-business support, home repair and rehabilitation, homebuyer assistance, and foreclosure-prevention programs unavailable to the city’s most distressed neighborhoods.

This illustrates a fundamental pitfall of algorithms: the risk that they will be misused or produce unintended consequences. The MVA was created to help revitalize distressed neighborhoods, and it uses variables like average home prices, vacancy rates, foreclosures and homeownership to determine neighborhood “value.” But those data points are neither ahistorical nor objective; they reflect a history of systemic bias. Redlining accounted for 30 percent of the gap in homeownership and 40 percent of the gap in home values for Black Americans between 1950 and 1980. Even today, maps of economically disadvantaged or under-resourced areas bear a startling resemblance to the Federal Housing Administration’s redlining maps from the 1930s. Algorithms can perpetuate or amplify long-standing human biases.

One major source of algorithmic bias can be found in the “training data” used to teach such a system to recognize patterns in bits of information. For example, if a Black or Latino neighborhood is overpoliced, leading to skewed arrest rates, a predictive-policing algorithm could “learn” that Blacks and Latinos are more likely to be criminals, when in fact they’re just more likely to be arrested.

Often, the victims of algorithmic redlining don’t know what happened to them, because information on algorithms and their use is generally not publicly available. California’s Legislature is considering a first step toward remedying this problem: If passed, AB 13 would bring transparency to the use of algorithms by state agencies and programs. For example, it would require a prospective contractor to submit an “automated decision system impact assessment” evaluating the privacy and security risks to personal information, as well as the risks of inaccurate, unfair, biased or discriminatory decisions affecting individuals.

That’s an essential start, but America can do better. We can go from algorithmic redlining to algorithmic greenlining — using the powerful tools of artificial intelligence to promote equity and help close the nation’s yawning racial wealth gap.

In the words of Cathy O’Neil, author of Weapons of Math Destruction, “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead.”

California has modeled a first step with a tool known as CalEnviroScreen. A law that we at The Greenlining Institute helped pass, SB 535, prioritized funds from the state’s cap-and-trade program for communities with the greatest economic and environmental challenges, and directed the state to create a scientific tool to decide which communities to prioritize. CalEnviroScreen, developed with extensive community consultation, examines multiple indicators such as unemployment rates and exposure to pollution. Based on this data, the algorithm outputs a CalEnviroScreen score that quantifies the environmental and socioeconomic burdens within a community and determines its eligibility for targeted investments.
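
For technically minded readers, here is a minimal sketch, in Python, of the general pattern CalEnviroScreen embodies: rank communities by a composite burden score so that the most burdened, rather than the most “market-ready,” rise to the top of the funding queue. The indicator names, weights and cutoff below are hypothetical illustrations, not CalEnviroScreen’s actual methodology.

```python
# Illustrative sketch only -- not CalEnviroScreen's official formula.
# Indicators, equal weighting and the cutoff are hypothetical; the point is
# that the equity logic is explicit: higher burden means higher priority.
from dataclasses import dataclass

@dataclass
class Tract:
    name: str
    unemployment_pctile: float  # percentile vs. all tracts, 0-100
    pollution_pctile: float     # percentile vs. all tracts, 0-100
    poverty_pctile: float       # percentile vs. all tracts, 0-100

def burden_score(t: Tract) -> float:
    """Average the indicator percentiles; higher means greater burden."""
    return (t.unemployment_pctile + t.pollution_pctile + t.poverty_pctile) / 3

def prioritized(tracts: list[Tract], cutoff: float = 75.0) -> list[Tract]:
    """Return tracts whose burden clears the cutoff, most burdened first."""
    ranked = sorted(tracts, key=burden_score, reverse=True)
    return [t for t in ranked if burden_score(t) >= cutoff]

# Two hypothetical census tracts
tracts = [
    Tract("Tract A", unemployment_pctile=92, pollution_pctile=88, poverty_pctile=95),
    Tract("Tract B", unemployment_pctile=30, pollution_pctile=25, poverty_pctile=40),
]
for t in prioritized(tracts):
    print(f"{t.name}: burden score {burden_score(t):.1f}")
```

The value of making the scoring explicit is that it is auditable: anyone can read the formula and see exactly what the score rewards and which communities it moves to the front of the line.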

CalEnviroScreen is a simple example of what’s possible if we consciously put equity metrics into algorithms used to make complex decisions. Imagine how much further human creativity could take this idea if we try. Algorithmic greenlining can happen — if we have the will to do it.

Vinhcent Le is the technology equity legal counsel at the Greenlining Institute | vinhcentl@greenlining.org | @VinhcentLe


Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.
