Beyond the blank slate: Why Black women’s trust is critical for equitable AI
Commentary | October 20, 2025

The rapid integration of artificial intelligence into the fabric of society has sparked a national conversation about its potential and its perils. Yet, the critical voices of Black women have been largely absent from this dialogue. As developers, policymakers, and ethicists chart the future of AI, they often operate on the assumption of a neutral, “blank-slate” user, overlooking the complex histories, lived experiences, and institutional mistrust that have all shaped how technology is perceived and adopted. In February and March 2025, the Trustworthy, Intelligent, and Explainable Robotics (TIER) Lab at Hunter College, CUNY ran a preliminary, not-yet-published study with 58 respondents to understand how Black women across the United States perceive and interact with AI, particularly in relation to trust, bias, and lived experience. This marks the initial phase of design and data collection for our broader research studying the impacts of AI on demographic groups. The findings offer a clear preliminary mandate: Building a truly equitable AI ecosystem requires moving beyond technical fixes and toward a fundamental rebuilding of trust among impacted groups.
For Black women, AI emerged within a landscape already marked by a long history of medical malpractice, research abuse, and state-sponsored surveillance. The shadows of the Tuskegee Syphilis Study and the exploitation of Henrietta Lacks, whose cells were taken and used without her consent, have long fostered a deep and rational skepticism toward institutions that promise progress through scientific innovation. This historical context is not merely academic; it is a living trauma that informs present-day interactions with technology.
The mistrust is compounded by the contemporary reality of algorithmic bias. AI systems, often trained on data sets that underrepresent or misrepresent Black individuals, have become powerful engines for perpetuating and even amplifying systemic inequality. Facial recognition systems have demonstrably higher error rates for Black women, a disparity that has led to wrongful arrests. Hiring algorithms have been found to penalize resumes with “Black-sounding” names or affiliations with historically Black organizations. In health care, algorithms have relied on past health care spending as a proxy for health needs, systematically underestimating the severity of illness in Black patients and denying them access to critical care.
For too long, the exclusion of marginalized communities from the design and data pipelines of technology has created a cycle of underrepresentation, biased outcomes, and disengagement. As mentioned, our research surveyed 58 Black women across the United States with the intention of centering their perspectives in the AI debate. The Institutional Review Board (IRB)-approved study, conducted by the Marlborough School and the TIER Lab, recruited participants through community and personal networks. Eligible participants were at least 18 years old and completed a digital survey; as compensation, they were entered into a raffle for a $20 gift card. The survey instrument was a modified combination of the General Attitudes towards Artificial Intelligence Scale (GAAIS) and the Historical Intergenerational Trauma Transmission Questionnaire (HITT-Q), both of which use a five-point Likert scale, and it captured participants’ general attitudes toward AI, levels of institutional mistrust, and the role of intergenerational trauma in shaping their perspectives. The results paint a picture not of technophobia, but of a profound and justified skepticism rooted in these historical and ongoing experiences. The investigation was designed to open further exploration of Black women’s trust in AI systems and of the variables that raise concern when emerging technologies are deployed.
A striking contradiction emerged from our data: While an overwhelming 94% of participants are impressed by AI’s capabilities, a significant 53% are anxious about its future use. Such a nuanced response highlights the need to better understand the factors that deter AI use.
This duality is further reflected in Black women’s aspirations and anxieties. A strong majority of respondents (73%) want to use AI in their jobs, and 81% believe it has many beneficial applications. Yet this optimism is tempered by a deep-seated mistrust of the institutions deploying AI, a mistrust that is a direct consequence of lived experience. For example, the study found a strong correlation between the belief that AI will cause suffering to “people like me” and the belief that it will increase unnecessary surveillance. Participants who reported harm from institutions, particularly law enforcement and data collection groups, were far more likely to express mistrust of AI.
The data also reveals a stark lack of faith in the entities developing and deploying AI. Only 15% of respondents support AI use by law enforcement, and only 42% support its use by the military. Respondents were similarly skeptical of technology creators: 45% believe that the people who create AI do not care about their community’s well-being, and a mere 6% believe organizations use AI ethically.
These numbers reflect how Black women are interacting with AI and unveil what could be seen as a “trust deficit” deeply rooted in history. Sixty-seven percent of surveyed Black women believe AI is used to spy on people, and 69% say it does not make them feel safe. These responses suggest that the harms of techno-racism and surveillance are not theoretical but lived realities that directly shape the reception of new technologies, which supports our hypothesis that proximity to tech-related trauma, from discriminatory algorithms in hiring to biased facial recognition in policing, is a powerful predictor of negative sentiment toward AI.
Furthermore, a common assumption in research is that age is the primary driver of tech adoption. However, we found no significant correlation between age and overall negative sentiment toward AI, which suggests that the tech industry’s focus on appealing to younger generations may be misplaced. Earning communities’ trust in the use of AI, especially among populations that have been historically and repeatedly harmed by it, should be an explicit policy goal.
Our findings suggest that companies working toward inclusive technologies should prioritize a paradigm shift from designing for communities to designing with them. The insights from our study provide a clear roadmap for policymakers, developers, and civil society to address the individual and collective concerns of Black women. Centering their experiences and deploying the recommendations below could lead to more equitably designed and deployed AI models.
This section begins from the premise that Black women’s dual relationship to AI, seeing both its promise and its threat, must influence the next phase of responsible AI development. These recommendations move beyond theory toward practical steps that companies, regulators, and communities should implement to build systems worthy of public trust.
AI companies must establish community-led governance
Trust cannot be built from the top down. Instead, public accountability mechanisms, such as civilian oversight boards and ethics councils, should be jointly established by AI companies and community organizations to give consumers more agency over AI models. These bodies must also have independent authority to audit, investigate, and halt high-risk systems with the potential to perpetuate harm. Regulators should require their formation, companies should fund and implement their recommendations, and community leaders should co-chair and define the standards of accountability.
Regulatory frameworks should restrict high-risk AI use and mandate audits
Mistrust of law enforcement applications of AI and fears of surveillance also demand immediate action. A moratorium should be placed on high-risk government uses of AI, particularly facial recognition and predictive policing, until independent civil rights and equity audits, conducted by certified third-party auditors and overseen by regulators, demonstrate that these systems are safe, accurate, and non-discriminatory. These technologies must also meet a higher standard of accuracy and confidence when applied to communities of color. The documented cases of wrongful arrests, all involving Black individuals misidentified by facial recognition, underscore the urgent need for civil rights-centered audits mandated by legislatures, enforced by regulators, and verified by independent evaluators before such technologies can be deployed in law enforcement.
Invest in inclusive and culturally relevant AI pipelines
It’s important to continue the work of uncovering the “(un)Hidden Figures” initiated by Brookings’ AI Equity Lab. Black women like Joy Buolamwini, Timnit Gebru, and Safiya Noble, as well as AI Equity Lab founder and Brookings scholar Nicol Turner Lee, are already leading the charge for justice and accountability. Additionally, public and private investments in culturally relevant STEM education, apprenticeships, and upskilling initiatives can ensure better representation. These programs should be co-designed with community organizations and academic partners to provide equitable access to the skills and opportunities needed to thrive in the AI economy, and they should be evaluated by independent researchers to measure long-term impact.
Increased transparency and bias reporting should be a requirement
Increased transparency is a major driver of trust. Federal legislation should require AI developers and deploying organizations to provide clear, accessible disclosures on AI training data, intended use cases, and potential equity impacts. Regulators such as the Federal Trade Commission, Equal Employment Opportunity Commission, and Consumer Financial Protection Bureau should enforce disclosure and anti-discrimination standards, while independent auditors and academics should conduct and publish periodic bias assessments. Companies must be compelled, not merely encouraged, to report known biases, errors, and mitigation efforts, creating a public record of accountability and progress.
Design to augment, not replace, human connection
Our study revealed a strong desire for human interaction, with 67% of participants preferring to engage with a human over an AI for routine tasks. This should serve as a powerful reminder that efficiency should not come at the cost of empathy and connection. Policy set by federal and state agencies should protect face-to-face options in essential services like health care, education, and social services, and service providers should ensure AI is used to support human professionals, not replace them.
Fund community-centered AI literacy programs
Building trust requires honest and accessible communication. Trusted community messengers, including local nonprofits, libraries, faith-based organizations, educators, and Black-led media outlets, should lead AI literacy and outreach efforts to demystify AI and help individuals understand their digital rights. These efforts should be supported by public funding, philanthropic grants, and company partnerships, and they should be evaluated by academic institutions to ensure effectiveness and inclusion.
Ignoring the insights of Black women in building public trust is not only a failure to acknowledge their participation in the AI economy, but also a strategic mistake for the broader AI ecosystem. Building an AI future that is truly innovative, equitable, and trustworthy requires moving beyond the myth that all users are homogeneous and intentionally listening to, learning from, and partnering with those who have the most at stake. Black women are among the demographic groups most impacted by AI across sectors, and their trust must be foundational to the creation of just and trustworthy AI.