Commentary

The online information ecosystem during the Israel-Gaza crisis

Palestinian journalists cover the conflict between the Islamic Jihad movement and Israel in Gaza City in May 2023. Calm returned to Gaza as a fragile ceasefire ending five days of fighting held, leaving Palestinians and Israelis to count the cost of cross-border fire that killed dozens. (Photo by Atia Darwish/APAimages)

For weeks, the tragic events unfolding in Israel and Gaza have blanketed both social and legacy media. In parallel, the volume of false and unverified information circulating online has ballooned.

Maintaining the integrity of the online information space has always been a challenge in rapidly shifting environments, including most recently Russia’s invasion of Ukraine, where social media has become an important front in the war and propaganda narratives are rife. In these periods of acute crisis, there is a clear supply-and-demand problem: demand for information skyrockets, but the supply of credible, fact-based information lags or may be absent altogether. In this void, false, exaggerated, and decontextualized claims can pervade the information space.

Unsurprisingly, the Israel-Gaza crisis has become the next digital battlefield. But recent shifts in the online ecosystem have made an already difficult arena even more challenging. What changes have further stressed the online information space? And is there anything we can do to reverse course?

Challenges to the online information space

When a blast rocked the area around Al-Ahli Baptist Hospital in Gaza on October 17, 2023, likely killing hundreds, reports out of the region failed to accurately convey the uncertainty around the event. The reasons for this failure are certainly wide-ranging. Among them, covering evolving events in Gaza remains extremely difficult, with only a small number of journalists on the ground. For over a decade, social media provided an imperfect solution to this challenge: it operated as a “megaphone” to disseminate real-time information about events unfolding in places where journalists were unable or unwilling to go en masse. This type of hyper-localized content has often complemented more traditional reporting, providing a unique window on conflict zones, protests, and other rapidly evolving events that might otherwise go undercovered.

In the past, X, formerly known as Twitter, offered a forum for users on the ground to disseminate these insights at scale, but recent policy changes not only make it harder to identify authoritative content but may also actively boost false claims. With each new policy change at X, experts and commentators warned of potential harm to information integrity. At the time, the cumulative consequences of these changes may have been hard to pinpoint. What has become clear over the past few weeks, however, is that their totality has made accurate, timely reporting about the ongoing crisis exceedingly scarce on a platform that once served as a valuable venue for gleaning on-the-ground insight.

Take X’s announcement in October 2022 that it would lean more heavily on a crowdsourced content moderation approach, known as Community Notes. This reliance on a wisdom-of-the-crowd strategy is not inherently problematic when coupled with other safeguards, and it is common across other popular online spaces like Reddit and Wikipedia. Yet features of X’s new approach make content moderation a slow and cumbersome process.

For example, Community Notes relies on consensus among users with different perspectives, rather than majority rule, when making algorithmic decisions about whether to attach additional crowdsourced context to a tweet. In a highly polarized environment, reaching that consensus is exceedingly difficult, even when the content being moderated is decoupled from political events that evoke strong reactions. My recently co-authored research found that although the expansion of the Community Notes contributor base increased the percentage of tweets with crowdsourced context attached to them, that number was still fairly low, and the added context seemed to have little impact on users’ subsequent behavior.
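To make the mechanism concrete, the sketch below contrasts simple majority rule with a bridging-style rule that requires agreement across perspective groups. It is a deliberately simplified illustration, not X’s actual algorithm, which learns rater viewpoints from the full rating matrix via matrix factorization rather than using pre-assigned group labels; the group names, thresholds, and rating data here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    rater_group: str  # hypothetical perspective label, e.g., "group_a"
    helpful: bool     # did this rater find the note helpful?

def majority_rule(ratings: list[Rating], threshold: float = 0.5) -> bool:
    """A note is shown if most raters, of any persuasion, call it helpful."""
    helpful_count = sum(r.helpful for r in ratings)
    return helpful_count / len(ratings) > threshold

def bridging_consensus(ratings: list[Rating], threshold: float = 0.5) -> bool:
    """A note is shown only if every perspective group independently finds
    it helpful -- the property that makes consensus hard to reach when
    opinion splits along partisan lines."""
    groups: dict[str, list[bool]] = {}
    for r in ratings:
        groups.setdefault(r.rater_group, []).append(r.helpful)
    return all(sum(votes) / len(votes) > threshold
               for votes in groups.values())

# A polarized note: one side endorses it overwhelmingly, the other rejects it.
ratings = [Rating("group_a", True)] * 60 + [Rating("group_b", False)] * 40
print(majority_rule(ratings))       # True  -- 60% overall support
print(bridging_consensus(ratings))  # False -- no cross-perspective agreement
```

The same rating pattern that easily clears a majority vote fails the bridging test, which is why notes on hotly contested claims can sit unpublished even as a crisis unfolds.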

This assessment was largely a review of the everyday churn of content across X. In a crisis such as the one unfolding in Gaza and Israel, however, the volume and speed of information flows are amplified. Thus far, corrective information has moved far too slowly and has been too limited in scope to effectively contend with the torrent of false and decontextualized claims being made about the conflict online. Worse still, the crowdsourced information is sometimes completely wrong. This highlights the flaws of a purely community-driven approach to content moderation and the need for it to work in tandem with professionally trained staff.

Other changes, such as the elimination of “verified” accounts for public figures (known as “blue checkmarks”), the sale of “verified” status to anyone willing to pay for it, and monetization policies that pay users for ad impressions embedded within their posts, have produced a torrent of clickbait, with little regard for facts, from users who may not be notable at all. Recent research found that in the first week of the conflict, 74% of the 250 most-engaged-with posts containing false claims came from newly “verified” accounts, whose content is also algorithmically prioritized across the platform.

This is particularly challenging when users purporting to rely on open-source intelligence (OSINT) capitalize on the viral boost afforded to verified users on X. Broadly, OSINT research draws on publicly available information, such as satellite imagery, social media photos, and other open sources, to assess important events. In the past, the OSINT community played a critical role in uncovering wrongdoing and challenging official narratives, and it is currently attempting to uncover what exactly happened at the Al-Ahli Baptist Hospital in Gaza. But in this ongoing conflict, accounts claiming to analyze key events on the ground using OSINT may selectively share information and rely on shoddy evidence, further muddying the facts of an already uncertain situation.

In tandem, the dismantling of prominent research programs focused on the online information ecosystem, restrictions on social media data access, and intimidation of researchers working on issues related to disinformation and coordinated inauthentic behavior have made it even more difficult to explore these challenges in-depth and at scale.

An even more fragmented online information ecosystem

Policy shifts at X, along with very public spats between X’s owner Elon Musk and journalists and media outlets, have soured some users on the platform, though it is unclear how many have exited the space entirely. Those who do leave scatter across a wide array of other online options, from decentralized platforms such as Mastodon and the invitation-only Bluesky, which face their own content moderation challenges, to the Meta-operated Threads, which has publicly resisted becoming a venue for politics and hard news.

The movement away from X has left journalists, researchers, and commentators with an even more fragmented information ecosystem and little consensus about where to find the on-the-ground voices they once could reliably locate in a single online newsfeed. As recently as last year, nearly 70% of U.S. journalists cited X as one of the two most useful online platforms for job-related tasks, compared to just 13% of the general public.

This reliance on X to glean and disseminate insights about the world is not just a journalistic phenomenon. In 2023 alone, searches for X on Google Scholar still returned more citations than searches for any other social media site, despite a concerted effort by academics to research other platforms.

Beyond platform changes and information fragmentation

Although platform shifts have in many ways exacerbated problems in the online information ecosystem, they are by no means exclusively to blame for the current situation. The nature of the Israel-Gaza crisis, which evokes strong emotional reactions on both sides, makes information consumers more prone to confirmation bias and motivated reasoning. In other words, people search for information that confirms what they already believe and discount information that contradicts their prior perceptions, particularly in the absence of verifiable facts. This phenomenon is not new, but coupled with the difficulty of identifying verifiable information online and the sheer volume of false claims in circulation, it has put stress on the information space at a moment when verification, high-quality research and journalism, and trusted on-the-ground voices are sorely needed.

Addressing the degradation of the online information environment is of vital importance. Moving forward, it may be useful for researchers to pool resources to identify and document disinformation campaigns, influence operations, and other coordinated actions that were once more readily discoverable through easy data access.

For the media, thorough scrutiny of the credibility of sources and information, whether gleaned from platforms such as Telegram or even X, will be even more important at a time when the boundaries between true and false are hard to determine or altogether unknowable.

For public consumers of information, recognizing their own information processing biases and, as some have counseled, maybe just not hitting “share,” will help to lower the temperature at a time when verifiable information is critically important.

The fragile ecosystem built up around the online information space — and the ways in which it has fueled the media — may be irreparably broken. But perhaps we can build something better, though we must do so quickly, before the next crisis unfolds.