Housing Discrimination: Big Data, AI, and Algorithmic Models

With the rise of artificial intelligence, big data, and highly attuned algorithmic models, more and more insurance providers, landlords, and mortgage brokers are turning to AI programs to screen tenants, determine rates, and offer loans.

However, the prevalence of bias and discrimination within these algorithms is only exacerbating the challenge of safe, affordable housing access for people of color. Screening programs and mortgage application tools appear to provide unbiased, neutral data; in practice, these tools are imperfect and contribute to housing inequality.

Unpacking AI Tools and Housing Access

Tenant Application Screenings 

In recent years, many tenant screening applications—leveraging AI—have been introduced to the market. These programs are designed to evaluate potential tenants and assess any risk when renting to a particular individual.

Unfortunately, however, algorithms are not unbiased or neutral third-party opinions. Many experts argue that algorithms actually amplify existing biases. One 2021 study from the U.S. National Bureau of Economic Research sent rental inquiries from bots with names associated with different demographic groups to more than 8,000 landlords. The results uncovered significant discrimination against renters of color, particularly Black Americans.

One example of bias and inaccuracy is Chris Robinson, a 75-year-old California man who applied to move into a senior living community in 2018. The property manager ran his name through an automated screening program, which assigned him a “high-risk” score based on a past conviction for littering.

Not only was the littering conviction entirely irrelevant to whether Robinson would make a good tenant, but further investigation found that the system had flagged the wrong Chris Robinson altogether. Although the error was eventually corrected, Robinson lost the apartment and his application fee during the lengthy process.

In response, Robinson filed a class action lawsuit against TransUnion—one of the largest organizations within the multibillion-dollar tenant screening industry—which agreed to pay $11.5 million to resolve claims that its programs violated fair credit reporting laws.

Insurance Application Screenings 

Potential homebuyers are also at risk of discrimination from insurance application platforms that use AI technology. According to the National Association of Insurance Commissioners, 70 percent of home insurers are either using AI in their businesses or have an interest in doing so. Of those surveyed, 54 percent are using AI for claims, 47 percent for underwriting and marketing, 42 percent for fraud detection, 35 percent for ratings, and 14 percent for loss prevention.

In the wake of George Floyd’s murder in 2020, new reports and studies have begun to examine the potential discriminatory impact that algorithmic bias may have on insurance application screenings. A series of papers from property and casualty actuaries delves into the impact of historical and ongoing bias in insurance pricing—and one study explores four specific rating factors used in AI algorithms.

These factors—credit-based insurance scores, geographic location, homeownership, and motor vehicle records—have been heavily scrutinized for potential bias. Unfortunately, basing an insurance price on these four factors is by no means an objective, neutral measure.

For example, the location of a person’s home is historically the result of discriminatory policies and practices—such as redlining by banks or racially restrictive covenants. During the first half of the 20th century, neighborhoods were color-coded and many Black communities were considered “undesirable,” leading to chronic disinvestment. Today, the legacy of redlining lives on—and leveraging geographic location may lead to bias and discrimination.

Another example is basing insurance rates on credit scores, which are themselves an inequitable measure. Decades of discriminatory lending practices have resulted in 54 percent of Black Americans reporting no credit or a poor to fair credit score, compared to 37 percent of white Americans and 18 percent of Asian Americans.
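To make the mechanics concrete, here is a deliberately simplified sketch of how a rating formula can reproduce those disparities even when race is never an input. Every weight, ZIP code, and credit score below is hypothetical and invented for illustration; it is not drawn from any actual insurer’s model.

```python
# Hypothetical illustration: a toy insurance rating formula that never sees
# race, yet still produces disparate prices because its inputs (ZIP code and
# credit score) carry the residue of redlining and discriminatory lending.
# All weights and values are invented for this example.

BASE_PREMIUM = 1200.0  # annual base premium in dollars (made up)

# Made-up "territory factors": a formerly redlined ZIP code carries a
# surcharge in this toy model because of its historical loss experience.
ZIP_FACTOR = {
    "60637": 1.35,  # hypothetical formerly redlined neighborhood
    "60614": 0.95,  # hypothetical neighborhood that was never redlined
}

def credit_factor(score: int) -> float:
    """Toy credit-based insurance score adjustment: lower credit, higher price."""
    if score >= 740:
        return 0.90
    if score >= 670:
        return 1.00
    return 1.25

def quote(zip_code: str, score: int) -> float:
    """Price a policy from ZIP code and credit score only; race is never used."""
    return BASE_PREMIUM * ZIP_FACTOR[zip_code] * credit_factor(score)

# Two applicants with identical homes and claims histories. Their ZIP codes and
# credit scores differ in ways shaped by redlining and unequal access to
# credit, so the "race-blind" formula still quotes them very differently.
applicant_a = quote("60637", score=640)
applicant_b = quote("60614", score=760)

print(f"Applicant A pays ${applicant_a:,.2f} per year")
print(f"Applicant B pays ${applicant_b:,.2f} per year")
print(f"A pays {applicant_a / applicant_b:.2f}x what B pays for the same coverage")
```

In this toy example, Applicant A pays nearly twice what Applicant B pays even though the formula never asks about race—the disparity enters entirely through the proxy variables.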

Mortgage Applications

The Black homeownership gap is as wide today as it has been at any point since the 1960s, and Black homebuyers are more than twice as likely as white applicants to be denied a mortgage, a denial that can close off the path to homeownership altogether.

Mortgage lending practices and AI bias only exacerbate these inequities: One investigation found lenders were far more likely to deny home loans to people of color than to white people with similar financial characteristics. Black applicants were 80 percent more likely to be rejected than comparable white applicants, Latino applicants 40 percent more likely, and Native American applicants 70 percent more likely.

In addition to national statistics of loan denials, researchers also examined cities and towns individually—and found staggering disparities in 89 metropolitan areas. For example, in Charlotte, lenders were 50 percent more likely to deny loans to Black applicants than white applicants with similar financial profiles; in Chicago, Black applicants were 150 percent more likely to be denied compared to their white counterparts.
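For readers wondering how to interpret “more likely to be denied,” the short example below converts those relative figures into denial rates using an assumed 10 percent baseline denial rate for white applicants. The baseline is hypothetical—the investigation reports only the relative disparities, not the underlying rates.

```python
# Hypothetical worked example of relative denial-rate disparities. The 10%
# baseline denial rate for white applicants is invented for illustration; the
# investigation cited above reports only the relative figures (50% and 150%).

baseline_white_denial_rate = 0.10  # assumed, for illustration only

disparities = {
    "Charlotte (50% more likely)": 0.50,
    "Chicago (150% more likely)": 1.50,
}

for city, extra in disparities.items():
    black_denial_rate = baseline_white_denial_rate * (1 + extra)
    print(f"{city}: white denial rate {baseline_white_denial_rate:.0%}, "
          f"Black denial rate {black_denial_rate:.0%}")
```

Under that assumed baseline, a 50 percent disparity means 15 of every 100 Black applicants are denied versus 10 of every 100 white applicants; a 150 percent disparity means 25 of every 100.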

The Rise of AI—and What’s Next 

With the rise of AI as a way to streamline and automate daily tasks, more and more companies are embracing the technology. As a result, most major organizations now use some form of AI in their decision-making processes.

Although many companies argue that their algorithms are unbiased and objective, the confidentiality of AI models prevents true scrutiny. One of the many reasons that landlords, insurance companies, and mortgage lenders opt to use AI reporting and scoring systems is that they appear to offer neutral, objective information.

While some companies may say that these numbers and scores are used only as suggestions, one behavioral study of landlords found that they rely primarily on the scores returned, often without examining the underlying data, even when that data provided critical context.

Regardless of the details, one thread is clear: AI models and algorithms cannot offer unbiased, objective results. Screening programs and reporting tools contribute to the historical legacy of housing discrimination, further exacerbating the staggering housing and wealth gap between white people and people of color.

Many associations and states are focusing on policies and practices to mitigate the bias and discrimination baked into AI models. The National Association of Insurance Commissioners, for example, has been exploring bias, AI, and machine learning through its Special Committee on Race and Insurance.

Other federal initiatives are also in progress: The White House released a Blueprint for an AI Bill of Rights, a set of principles designed to protect citizens from algorithmic discrimination in areas like housing, health care, finance, and other benefits.

One of its principles, Algorithmic Discrimination Protections, outlines how companies, nonprofits, and federal government agencies can ensure the public is protected from algorithmic bias and discrimination, including by combating discrimination in mortgage lending and AI models.

To learn more about new developments in housing accessibility and our work in supporting affordable for-sale homeownership, reach out to us and join the conversation on social media. 

Published On: October 10, 2023 | Categories: Housing News