Over the last decade, algorithms have increasingly replaced human decision-makers at all levels of society. Judges, doctors and hiring managers are shifting their responsibilities onto powerful algorithms that promise more data-driven, efficient, accurate and fairer decision-making. However, poorly designed algorithms threaten to amplify systemic racism by reproducing the patterns of discrimination and bias found in the data those algorithms use to learn and make decisions.
The goal of this report is to help advocates and policymakers develop a baseline understanding of algorithmic bias and its impact on socioeconomic opportunity across multiple sectors. To this end, the report examines biased algorithms in healthcare, the workplace, government, the housing market, finance, education and the pricing of goods and services. The report closes by discussing solutions to algorithmic bias, exploring the concept of algorithmic greenlining and providing recommendations on how to update our laws to address this growing problem.
What Is Algorithmic Bias and Why Does It Matter?
Algorithmic bias occurs when an algorithmic decision creates unfair outcomes that unjustifiably and arbitrarily privilege certain groups over others. This matters because algorithms act as gatekeepers to economic opportunity. Companies and public institutions use algorithms to decide who gets access to affordable credit, jobs, education, government resources, health care and investment. Addressing algorithmic bias, particularly in critical areas like employment, education, housing and credit, is essential to closing the racial wealth gap.
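The mechanism behind this can be sketched with a toy example (all data and feature names below are hypothetical, invented for illustration). A model fit to discriminatory historical decisions reproduces the disparity even when it never sees race directly, because correlated proxy features, such as zip code, carry the pattern forward:

```python
# Hypothetical historical hiring records: (years_experience, zip_code, hired).
# Suppose zip code 1 proxies for a historically excluded group, and past
# managers hired almost no applicants from it regardless of qualifications.
history = [
    (5, 0, 1), (4, 0, 1), (3, 0, 1), (2, 0, 0),
    (5, 1, 0), (4, 1, 0), (3, 1, 0), (2, 1, 0),
]

def train_threshold(records):
    """Learn, per zip code, the lowest experience level that was ever hired.
    A stand-in for any model that faithfully fits historical labels."""
    thresholds = {}
    for exp, zipc, hired in records:
        if hired:
            thresholds[zipc] = min(thresholds.get(zipc, exp), exp)
    return thresholds

def predict(thresholds, exp, zipc):
    # Zip codes with no hires in the training data can never pass.
    return exp >= thresholds.get(zipc, float("inf"))

model = train_threshold(history)

# Two equally qualified applicants, differing only by zip code:
print(predict(model, 4, 0))  # True  -- approved
print(predict(model, 4, 1))  # False -- rejected: the bias is reproduced
```

The model is "accurate" by the usual standard, since it predicts past decisions well, yet it automates the very discrimination embedded in those decisions.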
Recommendations for Fixing Algorithmic Bias
There is a growing body of research outlining the solutions we need to end algorithmic discrimination and build more equitable automated decision systems. This report will provide recommendations on three types of solutions as a starting point:
- Algorithmic transparency and accountability
- Race-aware algorithms
- Algorithmic Greenlining