Last September, the Department of Housing and Urban Development (HUD) published a final rule fundamentally changing the framework for pursuing disparate impact claims under the Fair Housing Act. The rule was immediately challenged in court by the Massachusetts Fair Housing Center and Housing Works, and in October a federal judge in Massachusetts issued an order enjoining it from taking effect “pending conclusion of these review proceedings.” Then, in late January, President Biden issued an executive order directing HUD to “examine the effects” of the rule, including the impacts “on HUD’s statutory duty to ensure compliance with the Fair Housing Act.”
Put simply, the new rule is concerning. As we explain in a new law review article published in the Columbia Science and Technology Law Review, it “stands to significantly complicate the challenges faced by FHA plaintiffs, particularly in cases involving algorithms.” Algorithm-driven discrimination is a concern in many fields, and particularly so in housing—a domain with extensive historical and continuing patterns of discrimination. With algorithms playing a growing role in home financing, leasing, marketing, sales, and zoning, there will be inevitable cases where the resulting decisions have a disparate impact on a protected group. When that occurs, litigation under the FHA provides a critically important mechanism for redress.
And that’s where the rule is problematic. It requires plaintiffs at the earliest stages of litigation to “sufficiently plead facts to support” a highly specific set of assertions, including “that there is a robust causal link between the challenged policy or practice and the adverse effect on members of a protected class.” These sorts of detailed showings can require knowledge of the internal workings of the challenged algorithm—information that will generally be proprietary, and thus unavailable to plaintiffs who have not yet had access to discovery.
The rule also offers defendants a broad set of affirmative defenses, including the opportunity to show that an allegedly discriminatory policy is “reasonably necessary to comply with a third-party requirement.” Given that many algorithms used in the housing sector will be developed at least in part based on such requirements (examples given in the rule include “binding or controlling regulatory, administrative or government guidance”), this will give defendants what amounts to an “it’s not my fault” excuse.
The upshot is that if the rule takes effect, it will undermine the power of the FHA as a mechanism to combat housing discrimination. While it does indeed make sense to rethink the framework for FHA disparate impact litigation in light of the growing use of algorithms, a balanced approach is needed.
Of course, any new rule must place sufficient hurdles in front of plaintiffs to disincentivize speculative litigation. But a more appropriate rule would also avoid requiring plaintiffs to file claims containing information about the internal workings of algorithms to which they do not yet have access, and would ensure that disparate impact litigation maintains its vital role in combating housing discrimination and segregation.