LAW.com
By Victoria Hudgins

Opponents say HUD’s move to amend the agency’s interpretation of the Fair Housing Act makes it harder to fight discrimination, while advocates argue it helps limit race-based decision-making.

A proposed rule change by the U.S. Department of Housing and Urban Development might make algorithmic decision-making more prominent, despite bias concerns.

Earlier this month, HUD closed the comment period on its proposal to amend the agency’s interpretation of the Fair Housing Act’s disparate impact standard. The agency wrote that the proposed changes were intended to better reflect the U.S. Supreme Court’s 2015 ruling in Texas Department of Housing and Community Affairs v. Inclusive Communities Project.

“[This] would really make it much more difficult to bring a housing discrimination case,” said Linda Morris, an ACLU Women’s Rights Project attorney. (The American Civil Liberties Union also submitted comments opposing the proposed rule change.)

Among other updates, the housing agency proposed three defenses that would shield an algorithm used in a practice or policy from claims of discrimination. Such algorithms can be used to determine credit scores, for example, or to create targeted housing advertisements.

Defenses under the proposal include using a “statistically sound” algorithm, relying on a third party that creates or maintains the algorithm, or using an algorithm whose inputs are not substitutes for a protected characteristic. Such defenses “provide[] a huge loophole” for defendants, Morris said. She noted that credit agencies, insurers, housing companies, advertisers and other institutions use algorithms that can significantly affect someone’s ability to buy or rent a home.

She added that while the debate over the objectivity of algorithm-powered decisions continues, sometimes “there is no discrimination intended but oftentimes the inputs in the algorithm are biased.”
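
To make that proxy-variable concern concrete, the sketch below is a purely hypothetical illustration; it is not drawn from the HUD proposal, the ACLU’s comments, or any real scoring model. It shows a scoring rule that never looks at race but penalizes a ZIP code that, in this invented data, lines up with a protected group, so applicants with identical credit scores receive different outcomes even though no discriminatory intent is coded into the rule.

```python
# Hypothetical illustration only: all names, ZIP codes, and numbers are invented.
applicants = [
    # (zip_code, group, credit_score)
    ("10001", "group_a", 700),
    ("10001", "group_a", 690),
    ("20002", "group_b", 700),
    ("20002", "group_b", 690),
]

def score(zip_code, credit_score):
    # A facially neutral rule: penalize ZIP 20002 based on historical default
    # rates there -- a pattern that can itself reflect past discrimination.
    penalty = 50 if zip_code == "20002" else 0
    return credit_score - penalty

# Approve anyone whose adjusted score clears 680.
approved = [(group, score(z, c) >= 680) for z, group, c in applicants]

# Identical credit scores, different approval rates, because ZIP code
# stands in for the protected characteristic in this made-up data.
for group in ("group_a", "group_b"):
    decisions = [ok for g, ok in approved if g == group]
    rate = sum(decisions) / len(decisions)
    print(group, f"approval rate: {rate:.0%}")
```

In this toy data the rule approves 100 percent of one group and none of the other despite never receiving race as an input, which is the kind of disparate outcome critics say the proposed defenses could insulate from challenge.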
