Why a proposed HUD rule could worsen algorithm-driven housing discrimination

In 1968, Congress passed, and President Lyndon B. Johnson signed into law, the Fair Housing Act (FHA), which prohibits housing-related discrimination on the basis of race, color, religion, sex, disability, familial status, and national origin. Administrative rulemaking and court cases in the decades since the FHA’s enactment have helped shape a framework that, for all of its shortcomings, has nonetheless been a vital tool for combating housing discrimination. But a proposed rule published in August 2019 by the Department of Housing and Urban Development (HUD) threatens to undermine attempts to combat algorithm-driven housing discrimination.

The FHA will play a key role as algorithms for financing, zoning, underwriting, and other housing sector activities gain widespread adoption over the next few years. While some algorithms will be designed and implemented in ways that help mitigate bias and the resulting discriminatory housing patterns, others will have the opposite effect. When that occurs, the FHA should facilitate—not impede—the adjudication of discrimination claims.

The FHA, which is administered by HUD, addresses not only intentional discrimination but also unintended discrimination. For instance, intentional discrimination occurs if a property owner refuses “to sell or rent a dwelling” on the basis of a protected characteristic such as race. Unintended discrimination can arise if, for example, a lender has policies that appear neutral with respect to protected characteristics but inadvertently disadvantage loan applicants from a protected group. As the regulations developed under the FHA explain, “[l]iability may be established under the Fair Housing Act based on a practice’s discriminatory effect . . . even if the practice was not motivated by a discriminatory intent.”

Frameworks for addressing unintentional discrimination are particularly important in the context of algorithms. While few developers of algorithms for use in the housing sector would purposely engage in unlawful discrimination, the combination of biases in data and blind spots among even the most well-intentioned programmers means that some algorithms will nonetheless end up producing discriminatory outcomes. This could reinforce or even amplify existing patterns of segregation and discrimination.

The court system will play a vital role in adjudicating claims that an algorithm is biased in ways that violate the FHA. Unfortunately, HUD’s proposed rule stands to dramatically increase the burden on plaintiffs in ways that will severely limit the FHA as a tool for combating discrimination.

The proposed rule lays out a stringent set of requirements—called a “prima facie burden”—that a plaintiff filing a “discriminatory effect” claim must meet. While this burden is not specific to discrimination allegations involving algorithms, it will be particularly hard to satisfy when algorithms are involved. One of the requirements is for a plaintiff to “state facts plausibly alleging” that “there is a robust causal link between the challenged policy or practice and a disparate impact on members of a protected class that shows the specific practice is the direct cause of the discriminatory effect.” This will be very difficult to do without access to information about the internal workings of the algorithm, which would often be necessary to identify a discriminatory “policy.” But that information would generally not be available to a plaintiff at the early stages of litigation, given the trade secret protections that defendants will almost always invoke.

The plaintiff will also need to “state facts plausibly alleging” that the “challenged policy or practice is arbitrary, artificial, and unnecessary to achieve a valid interest or legitimate objective such as a practical business, profit, policy consideration, or requirement of law.” This can place an insurmountable burden on plaintiffs who, lacking access to the details of the algorithm, may be unable to show that a practice or policy is all three of “arbitrary, artificial, and unnecessary.” To take one example, showing that a policy is unnecessary will typically require sufficient knowledge of the associated algorithm to demonstrate that the same policy goal can be achieved by a different, less discriminatory approach.

In addition to making onerous demands of plaintiffs, the proposed rule provides defendants with an array of affirmative defenses, any one of which is sufficient to lead a court to dismiss the case. With respect to an allegedly biased algorithm, a defendant can escape liability by showing that the “challenged model is produced, maintained, or distributed by a recognized third party that determines industry standards, the inputs and methods within the model are not determined by the defendant, and the defendant is using the model as intended by the third party.” This amounts to an “it’s not my fault” defense, allowing defendants to deflect blame to entities farther up the algorithm supply chain.

Yet another way that defendants can escape liability is by providing “the material factors that make up the inputs used in the challenged model and show[ing] that these factors do not rely in any material part on factors that are substitutes or close proxies for protected classes under the Fair Housing Act.” A defendant will often be able to put forward a theory under which this showing is made, especially when various inputs considered by the algorithm are processed in complex ways.
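
One reason such a showing can be superficial: inputs that are individually poor proxies for a protected class can become a strong proxy once a model combines them. The short Python sketch below illustrates this with entirely synthetic data and hypothetical inputs; it is not drawn from any actual housing model or from the proposed rule’s methodology.

```python
# Illustrative sketch only (synthetic data, hypothetical features): two inputs
# that are each nearly uncorrelated with protected-class membership can,
# once combined by a model, track that membership almost perfectly.

import random
from statistics import correlation  # available in Python 3.10+

random.seed(0)
n = 10_000

# Hypothetical model inputs taking values -1 or +1.
x1 = [random.choice([-1, 1]) for _ in range(n)]
x2 = [random.choice([-1, 1]) for _ in range(n)]

# Synthetic protected-class membership that depends on the *combination* of inputs.
member = [1 if a * b > 0 else 0 for a, b in zip(x1, x2)]

# A derived feature a model might construct internally from the two inputs.
combined = [a * b for a, b in zip(x1, x2)]

print(f"corr(x1, membership):    {correlation(x1, member):+.2f}")        # roughly 0.00
print(f"corr(x2, membership):    {correlation(x2, member):+.2f}")        # roughly 0.00
print(f"corr(x1*x2, membership): {correlation(combined, member):+.2f}")  # roughly 1.00
```

In a real model, the combined feature would be implicit in the model’s learned structure rather than an explicit product, which is exactly why examining inputs one at a time can miss a proxy that the model as a whole constructs.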

Under the proposed rule, the combination of 1) the high hurdles facing plaintiffs and 2) the array of affirmative defenses available to defendants will make FHA claims involving alleged algorithmic discrimination exceedingly hard to pursue. A better approach would be to formulate a rule that recognizes the information disadvantages that plaintiffs will face. For example, plaintiffs at the prima facie stage should not be required to identify a particular policy embedded within the algorithm as suspect. Instead, courts should consider statistical evidence regarding whether the algorithm—in its entirety—has a disparate impact on a protected group.
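
As a rough illustration of what such whole-algorithm statistical evidence could look like, the sketch below computes an adverse impact ratio over hypothetical approval decisions. The group labels, the data, and the four-fifths benchmark are assumptions made for illustration; they are not standards drawn from the FHA, HUD’s regulations, or the proposed rule.

```python
# Illustrative sketch only: an outcome-level disparate impact check on
# hypothetical lending decisions, comparing per-group approval rates.

from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def adverse_impact_ratio(decisions, protected_group, reference_group):
    """Ratio of the protected group's approval rate to the reference group's."""
    rates = approval_rates(decisions)
    return rates[protected_group] / rates[reference_group]

# Hypothetical decisions: group "A" approved 80 of 100, group "B" approved 55 of 100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)

ratio = adverse_impact_ratio(decisions, protected_group="B", reference_group="A")
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.69 in this example

# The 0.8 threshold is a common rule of thumb, used here purely for illustration.
if ratio < 0.8:
    print("Disparity is large enough to warrant closer scrutiny")
```

A check of this kind looks at outcomes rather than at any specific internal policy, so it does not presuppose the access to proprietary model details that the proposed rule effectively demands of plaintiffs.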

Moreover, at the prima facie stage plaintiffs should not be required to show that the algorithm operates in a manner unnecessary for the achievement of a legitimate, non-discriminatory goal. Rather, once the plaintiff has satisfied the prima facie burden, the burden should shift to the defendant to affirmatively show that the policies within the algorithm serve a legitimate, non-discriminatory purpose. A more balanced framework for litigating FHA claims would help ensure that algorithms are used to mitigate—as opposed to potentially perpetuate—discrimination in the housing sector.
