Commentary

Journalism needs better representation to counter AI

December 23, 2024


  • In July 2024, the Brookings AI Equity Lab convened news staff, other content stewards, and technologists to assess the opportunities and threats that AI presents to traditional journalism, with a focus on equity. 
  • Opportunities include AI support for interview transcription, data analysis, and automated drafting, while threats include the homogenization of narratives, the spread of misinformation, and newsrooms' deepening reliance on Big Tech companies. 
  • Experts recommend improving equitable hiring practices alongside technological innovation, providing professional development and training for using AI tools, and developing standards and research across the field for appropriate, ethical AI use. 

      Across the United States, newsrooms are cutting staff as the rippling effects of digitization debilitate traditional operations and revenues. Earlier this year, Politico reported that more than 500 professionals from print, broadcast, and digital media were laid off in January 2024 alone. This number continues to grow as artificial intelligence (AI) and other automated reporting functions see more use in the sector. Journalists of color have been most affected by these cuts: In a 2022 survey, the Institute for Independent Journalists found that 42% of laid-off journalists were people of color, even though people of color comprised only 17% of the total workforce. As newsrooms increasingly turn to AI to manage staff shortages or increase efficiency, how will journalistic integrity be affected? More importantly, how will newsrooms navigate the underrepresentation of diverse voices who contribute to a more informed universe of news perspectives?  

      Launched in 2023, the Brookings AI Equity Lab is committed to gathering interdisciplinary insights and approaches to AI design and deployment. In July 2024, we convened news staff, other content stewards (e.g., library professionals, academics, and policymakers), and technologists (“contributing experts”) to assess the opportunities and threats that AI presents to traditional journalism. While the debate is far from over, the contributing experts concluded that AI can radically modernize newsrooms, but that it must be implemented in support of journalists and other content creators of color, who bring their own lived experiences to the news and can help quell the growth of mis- and disinformation in an increasingly digital world. 

      What really is journalism these days? 

      To start, it is important to situate the evolution of journalism within the context of technology and discuss what has become of the field over the years. One of many reasonable definitions of journalism is “the art and science of gathering, compiling, and presenting news via various forms of mass media.” While technology has not radically changed this understanding, it has altered how news and other information are distributed and interpreted by consumers. 

      Unlike in the era of Johannes Gutenberg’s 15th-century printing press, when analog copies defined the provenance of materials, news today is accessible over digital platforms, including social media. Journalists and consumers can spread a range of information related to current events in real time and even share personal and professional experiences. The focus on accuracy, education, and literacy has also shifted with digital innovation, expanding the universe of storytellers to include citizen journalists. 

      When journalism and technology collide, other concerns arise around who decides what constitutes substantive stories and perspectives. When AI is involved, it is difficult to determine where content and/or ideas originate, because generative AI systems often do not attribute the information sources on which they are trained. The increased use of technology in journalism also affects the sustainability of traditional media business models. AI upends these models as more reporting functions become automated, reducing the need for human reporters and entrenching the dominance of publishers who may forgo staff diversity, in both perspective and demographics, to enable greater “datafication,” reduce costs, and improve newsroom efficiencies. 

      These and other consequences of more modernized news platforms, particularly the expectation that information be available at all times and on any internet-enabled device, have minimized some of the defining values of the American journalism profession, especially those that may run counter to status-quo or mainstream reporting. One such value is investment in more stringent research practices, including investigative methods. 

      More than a decade ago, the Knight Foundation and the Aspen Institute formed the Knight Commission on the Information Needs of Communities in a Democracy to explore such shifts. The Commission examined the information needs of American communities in the digital age and offered recommendations to strengthen the free flow of information, recommendations that resonate even more strongly today. In particular, the Knight Commission’s report emphasized the importance of maximizing the availability of relevant and credible information by advocating for policies that encourage innovation in journalism, support public service media, and promote access to diverse and local news sources. Equally vital was the report’s focus on strengthening individuals’ capacity to engage with information: a goal achieved by embedding digital and media literacy in education systems and leveraging libraries and community centers as hubs for adult learning. The recommendations also called for promoting public engagement by creating opportunities for citizens to actively participate in governance and fostering digital platforms that reflect and connect to the diverse realities of local communities. These recommendations remain highly relevant as we navigate the challenges of maintaining journalistic integrity and civic engagement in the digital age. 

      How AI is advancing newsroom reporting and curation 

      Contributing experts in the roundtable discussion acknowledged these and other realities and detailed specific ways in which AI is affecting workflows. Whether it is automating routine tasks or providing new tools for investigative and production processes, AI is becoming a go-to resource for journalists. News organizations are increasingly using AI for transcription, data analysis, content personalization, and audience analytics. Tools such as automated transcription services and natural language processing save journalists significant time on tedious tasks, enabling them to focus on more strategic and creative efforts. For instance, transcription tools have reduced the time needed to process interviews, while machine learning algorithms streamline investigative reporting by helping uncover patterns in large datasets.  
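      To make the transcription use case concrete, here is a minimal sketch of how a newsroom might transcribe an interview recording with the open-source Whisper speech-to-text model. It is one illustrative option among many, not a recommendation; the audio file name and model size below are placeholder assumptions.

          # Minimal sketch: transcribing an interview with the open-source Whisper
          # model (pip install openai-whisper). The file "interview.mp3" and the
          # "base" model size are placeholder assumptions for illustration.
          import whisper

          model = whisper.load_model("base")          # small, CPU-friendly model
          result = model.transcribe("interview.mp3")  # text plus timed segments

          print(result["text"])  # full transcript, ready for human review
          for segment in result["segments"]:
              # Timestamps help a reporter locate and verify quotes quickly.
              print(f"[{segment['start']:.1f}s] {segment['text']}")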

      The growing use of generative AI systems, like ChatGPT, is aiding in content production: drafting, summarizing texts, and tailoring content for diverse platforms. However, as previously noted, there are challenges: AI outputs require careful vetting to avoid inaccuracies, and their use raises ethical questions regarding autonomy, editorial control, implicit bias, and the impact on human labor. Moreover, Big Tech companies, like Google, Meta, Microsoft, and Amazon, are selling newsrooms AI infrastructure, which can introduce concerns about dependency or the “platformization” of news media, as these companies wield significant influence over the data that newsrooms compile and use.  
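      As a hedged illustration of the drafting and summarizing use case described above, the sketch below asks a hosted generative model to condense a reporter's draft for another platform. It uses OpenAI's official Python SDK, but the model name, prompt, and draft text are assumptions for illustration, and the output would still require careful human vetting.

          # Illustrative sketch: summarizing a draft story with a hosted generative
          # model via the OpenAI Python SDK (pip install openai). Assumes an
          # OPENAI_API_KEY in the environment; the model name is an assumption.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          draft = "Full text of the reporter's draft story goes here..."

          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder; any chat-capable model would do
              messages=[
                  {"role": "system",
                   "content": "Summarize this news copy in three sentences."},
                  {"role": "user", "content": draft},
              ],
          )
          # An editor must vet this output before any of it is published.
          print(response.choices[0].message.content)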

      As technology companies gain access to valuable data to train AI systems, journalism becomes entrenched in a dependence on “black box” AI products. This concentration of resources and the opacity of generative AI systems should raise worries about the potential influence of corporate actors on both editorial outputs and the sustainability of the business models that support independent journalism. 

      For AI to be less opaque and more efficient in newsrooms, it must be designed and deployed thoughtfully, with experts from the field at the table and with awareness of its biases and shortcomings. On this latter point, journalists and other content stewards should discuss the use of AI more thoroughly before adopting it. For example, some experts argue that newsrooms could replace journalists with AI, while others suggest that the technology complements the human intuition and judgment necessary for quality journalism. By understanding AI’s implications for the sector, newsrooms can meet the growing demand for varied, accurate, and engaging content and adapt to the economic pressures of the digital age without compromising journalistic integrity or reducing the human capacity to filter and interrogate these systems.  

      Not all journalists and newsrooms have equal access to these tools or know how to use them. Under-resourced organizations, independent or minority-led newsrooms, and the journalists working in them may not even have basic access to these advanced tools. This imbalance will hinder their engagement in an increasingly digital field and limit the perspectives available to readers. Further, content ownership could be compromised when archives are fed into the data that AI systems learn from. For example, many Black newspapers hold archives that curate decades of historic events and experiences. When AI intersects with those archives, minority-led news organizations should be raising concerns around ownership, compensation, and unfettered use with AI developers and companies.  

      AI and the erosion of journalistic integrity  

      The manipulative deployment of AI can also lead to the rapid proliferation of AI-driven misinformation, which poses dual challenges for journalists. For example, deepfake technologies, which use AI to superimpose audio, video, or images to fabricate events or statements, have fundamentally disrupted journalism and created new challenges in maintaining truth and public trust. They have been used to manipulate public opinion, discredit political figures, and sow confusion during pivotal moments like elections and crises. AI tools, such as generative adversarial networks (GANs), enable the creation of hyper-realistic synthetic media and fake news, making it increasingly difficult for journalists and consumers of information to distinguish between authentic and fabricated content, while also intensifying concerns about online harassment and weaponized information operations.  

      The sheer speed at which fake content circulates on digital platforms often outpaces traditional verification methods. Fact-checking tools and processes, while increasingly reliant on AI themselves, struggle to keep up with the volume and sophistication of fabricated material. This environment pressures journalists to prioritize speed over accuracy, particularly in “breaking news” scenarios, and can lead to misinformation or superficial storylines that lack depth. Such tensions undermine journalistic integrity, as mounting pressure to publish faster can lead to the dissemination of unverified or false reports, further eroding public trust in journalism. 

      Beyond these and other technical challenges, AI use in journalism raises ethical dilemmas regarding transparency and accountability. It raises questions about originality and editorial responsibility, and consumers may not even be aware of AI’s role in news production. Reliance on AI in journalism also risks homogenizing narratives: Because generative AI uses predictive statistics to produce the most likely outputs, it trends toward the generic. Furthermore, algorithms often prioritize trends over deeply investigated ideas, reducing the range of perspectives presented to the public. These challenges highlight the urgent need for robust verification tools, ethical guidelines, and media literacy initiatives to combat the erosion of trust in journalism in the AI era. 
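      The homogenization concern is visible at a mechanical level: a language model scores possible next words, and deterministic decoding always picks the most probable one, which pulls many different prompts toward the same generic phrasing. The sketch below, which assumes the Hugging Face transformers library and the small open GPT-2 model purely for illustration, contrasts that behavior with sampling.

          # Sketch of why generated text trends toward the generic: greedy decoding
          # always takes the single most probable next token, while sampling keeps
          # more variety. Assumes pip install transformers torch; GPT-2 is used
          # only because it is small and freely available.
          from transformers import pipeline

          generator = pipeline("text-generation", model="gpt2")
          prompt = "The election results show that"

          # Greedy decoding: deterministic, "most likely" continuation every time.
          generic = generator(prompt, do_sample=False, max_new_tokens=20)
          # Sampling with temperature: more varied, less averaged phrasing.
          varied = generator(prompt, do_sample=True, temperature=0.9,
                             max_new_tokens=20)

          print(generic[0]["generated_text"])
          print(varied[0]["generated_text"])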

      Why diverse voices still matter in journalism to curb negative AI influence 

      Given the shifts in news development, reporting, and distribution, the inclusion of diverse voices remains essential in countering AI’s limitations, biases, and challenges. First, AI systems reflect the datasets on which they are trained. These digitized data can underrepresent groups culturally and linguistically and can reflect systemic inequalities, especially when they are based on race- or class-based surveillance or stereotyping. Such inherent biases can result in AI amplifying stereotypes or marginalizing underrepresented groups. Without diverse human oversight, the narratives produced or promoted by AI risk perpetuating existing power dynamics rather than challenging them. 

      Second, it is fair to compensate diverse media outlets that have spent lifetimes curating and archiving content. In 2011, Google partnered with one of the oldest Black newspapers, The Afro, based in Baltimore, Maryland, to digitize its collections. Over one million artifacts of news content have been collected and maintained by the publisher. While AI will enable better search and other efficiencies, it is imperative that these collections are not unfairly used to train more sophisticated AI systems without attribution or compensation to an already under-resourced independent newspaper. 

      The field of journalism has also worked hard to remove proxies that may elicit such stereotypes, and diverse voices act as a critical safeguard against these biases, bringing varied lived experiences, cultural insights, and perspectives to the storytelling process. For example, journalists from marginalized communities may recognize context-specific manipulations, such as culturally coded misinformation or harmful stereotypes, that AI might overlook. Their contributions enrich the reporting process, ensuring that journalism remains a platform for authentic and inclusive storytelling. More diversity among newsmakers and storytellers can help uncover and challenge manipulative narratives designed to exploit societal divisions. 

      Finally, diverse representation within newsrooms matters as audiences grow increasingly skeptical of media influenced by AI. Authentic human perspectives rooted in lived experiences resonate more deeply, which is why newsrooms that reflect the demographics of their audiences foster credibility and connection, demonstrating a commitment to accurately representing varied societal experiences. Diversity also serves as a counterbalance to the homogenization of AI-generated content, preserving the plurality of voices necessary for a healthy public discourse. 

      To truly harness AI’s potential while mitigating its risks, newsrooms must prioritize equitable hiring and retention practices alongside technological innovation. Integrating diverse voices into editorial processes not only enriches journalism but also ensures that AI’s influence aligns with the democratic ideals of inclusivity and fairness. By valuing human creativity and critical thinking, journalism can adapt to the digital age without sacrificing its integrity or commitment to serving the broader public good.  

      The contributing journalists, content stewards, and technologists provided feedback on how to address journalism’s persistent diversity challenges while balancing the use of AI. They declared that journalism requires a new strategy for imagining the use of AI, one that may not always involve partnerships with Big Media and Big Tech. As large incumbent technology companies infiltrate media, they are consuming the data generated by news sources and, in some instances, exploiting it for profit. The viability of diverse journalistic outlets depends largely on the economics of their business models, and without a diverse array of journalistic sources, their work risks erasure. This is also problematic for AI itself: News is a significant source of training data, and without access to human-generated data, AI models can become more homogeneous. 

      Where do we go from here? 

      While some of the expert contributors spoke to how AI assists in the accessibility and widespread distribution of their stories, the absence of equitable and fair AI business models that credit and compensate journalists for their content produces power imbalances. Several additional barriers arise when AI is integrated into the journalistic process. 

      • Language translation is still limited. The limitations of AI in multilingual translation restrict full use by journalists, other content creators, and consumers of information. Large language models (LLMs) are primarily built for English-speaking audiences and consequently have gaps when it comes to translating interviews and conveying content in other languages.
      • Many journalists are not equally adopting and using generative AI in their storytelling. AI is readily available to many of the large newsrooms, like the Associated Press and News Corp, which also offer training to ensure basic AI fluency. Among smaller newsrooms and independent, more diverse media, this is less likely to be the case. Furthermore, most newsrooms are leaning into mainstream tools, like ChatGPT, without considering other generative AI resources that might be less costly, more easily available, and more localized.
      • The widespread professional development of journalists is lacking. Journalists have limited opportunities for professional development, due in part to constrained newsroom resources. Furthermore, very few journalists are data scientists, meaning that they lack the background to understand developments in AI. Some content creators are not using these tools at all—even if they would like to—because they have limited expertise. Newsrooms of smaller and more independent media tend to be excluded from leveraging new technologies simply due to budget constraints and/or limited staff and expertise.
      • Diverse newsmakers often lack the resources or support to curate their experiences. Participants representing more diverse journalists shared that, compared to their colleagues, they are often unable to leverage the same tools, and when asked to do so, they are robbed of the ability to tell more culturally nuanced stories that reflect their experiences. These newsmakers, often from some of the oldest publishing houses in the United States, may not have their perspectives included in AI systems. To address this, there should be more consortiums and collaborations to devise less disruptive business models that support Black, Hispanic, Indigenous, and LGBTQ+ journalists who struggle to maintain both voice and presence in a highly digitized media landscape.
      • Journalists and consumers require AI literacy and fluency. AI must be formally introduced to both communities with some level of transparency and awareness of the provenance of digital content (in terms of who wrote the relevant training data and whether the content is from an AI or a human), while recognizing that there is no single approach to doing so. Because of the robust contribution that news makes to the AI sphere, it is imperative that everyday consumers and journalists can make rapid decisions about the authenticity of news sources.

      These and other ideas surfaced during our discussion and led to pragmatic solutions for addressing the growing use of AI in newsrooms and improving the balance of voice and representation among those crafting and distributing stories. One such solution is to develop standards within the journalism community that align on ethics, integrity, bias mitigation, and the scope of tasks for which AI is a reliable and legitimate resource for reporting. As with other claims about excessive reliance on AI, it was widely agreed that “humans must stay in the loop” to maintain the authenticity of storytelling, which goes back to the original intentions of the profession. Participants also thought that a uniform and widely adopted style guide could help journalists who lean on AI to perform certain tasks. 

      AI literacy and fluency were important to the group, as was ensuring that equitable resources exist in newsrooms across the country that use or want to use AI. The tech community must be engaged in these conversations with journalists and outlets as equal partners in the design and deployment phases, not as those pushing their products and power on the profession.  

      The group also believed that more research needs to be done to understand the interface between traditional media outlets and this evolving, AI-inflected digital ecosystem. Such research would fill in blind spots and help companies stop engaging in behavior that is detrimental to journalistic ethics and editorial independence. 

      This first convening broke ground in bringing together experts who are affected by AI in journalism and in helping the field navigate the latest iteration of the digital revolution. Concern about digital gatekeepers is a long-standing debate, and with the integration of AI, the conversations have become murkier in terms of where lines should be drawn on the ethics and integrity of reporting. How do we move toward a more participatory framework in the evolution of the field? These and other questions should be addressed, but not just to advance the interests of large incumbent reporting entities. Instead, the conversation about AI in journalism should focus on ensuring that more outlets and journalists can enjoy the benefits of this technology to address their manifold needs, while ensuring that the experiences of diverse sources and outlets are not erased. 

      Acknowledgements and disclosures

        Amazon, Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation. The authors would also like to acknowledge the research support of Isabella Panico Hernández and Xavier Freeman-Edwards.
