The deadly attacks by Hamas on Israeli towns, bases and civilians near the Gaza Strip border on October 7, and the war that broke out across Gaza and the rest of Israel in their wake, exposed in an unprecedented way how terrorist organizations and hostile states turn social media platforms and commercial communication technologies into weapons of mass manipulation, sowing fear, spreading terror and widening the circle of harm caused by terrorist acts in the physical world.[1] Hamas broadcast its actions on social networks in real time, distributed disturbing graphic content, and published manipulative details about the fate of those it had abducted. It deliberately used the platforms to expose Israeli and international audiences to the murders and kidnappings, and soon afterward to deny the atrocities it had committed and to spread false information about Israel’s actions in Gaza.
The atrocities committed by Hamas on October 7 and in the days that followed involved not only terrorism and physical warfare on the ground, but also an unprecedented digital assault: harsh graphic content (including documentation of murders), disinformation, manipulation, and malicious content distributed through social media platforms, including Facebook, Instagram, X (formerly Twitter), TikTok, YouTube, Telegram, WhatsApp and others. As the events unfolded on the ground and online on October 7 and in the weeks that followed, it became clear that the platforms were struggling to cope with the vast scope and grave nature of the harmful and pro-terror content being distributed through them. Digital content platforms and communication technologies, primarily commercial social networks, which in recent years have become central in shaping the social and economic fabric of the world at large, and of Israel in particular,[2] proved to be especially fertile ground for harming innocent civilians.[3] Following the October 7 terrorist attack by Hamas on Israel and the outbreak of the Israel-Hamas War, social networks had to contend with a huge and unprecedented volume and variety of harmful content that violated their community rules, degraded the safety and experience of their users, and in certain cases posed real danger to human life. As a result, they had to adjust their policies on removing and labeling harmful content, as well as the means and resources devoted to enforcing community guidelines and addressing such content, especially in the first months of the fighting.
Although the major social media platforms have claimed for years that they do their best to monitor and clear their networks of harmful and illegal content, periodically issuing public announcements about policies and measures for detecting and responding to prohibited content, they remain flooded with such content, especially in regions and countries whose languages are relatively marginal globally, as is the case in Israel. The platforms do not allocate sufficient resources to automatic content filtering and monitoring systems in these languages, and their handling of such content is inadequate both in scope and in speed.[4] Thus, in the first months of the war it became clear that social media platforms had become unsafe spaces for users in Israel, owing both to harmful activity by malicious actors and to the exploitation of their algorithmic architecture by political, commercial or hostile entities seeking to influence the information presented to users. The threat to user safety, present even in normal times, is felt all the more strongly in times of emergency. Meanwhile, even before the events of October and the current war, the Israeli public made only limited use of the platforms’ reporting mechanisms to flag offending accounts and content. According to data from the Israel Internet Association as of January 2022, the vast majority of the Israeli public seldom or never uses these mechanisms, primarily out of skepticism that reports will be taken seriously and insufficient familiarity with how they work.[5]
In contrast to the European Union’s Digital Services Act (DSA), which came into effect in 2023, and similar legislation in Australia and the United Kingdom, Israel has no regulation requiring digital platforms and social networks to be answerable to the public and the authorities or to disclose the measures they employ to protect users. In practice, the various platforms enjoy immunity and are not required to accept responsibility for damages or injuries caused to users by content distributed through them by independent users or interested parties. This immunity disincentivizes the platforms from working to create a clean and safe online space for users in Israel.[6]
In the absence of any regulation or obligation of transparency, neither the State nor the user is afforded a clear empirical picture regarding the scope of the harmful and pro-terror content distributed on the various platforms in Israel and exposed to users, and it is unclear how the platforms manage the content or to what extent they are involved in its handling. In the face of this reality, the Israel Internet Association has been operating its Safety Hotline since 2013, providing assistance, guidance, information and tools to users of the Internet and social networks in Israel, and responding to a wide range of online threats and network vulnerabilities. The Hotline handles hundreds of monthly requests for help and support, and is recognized as an official Trusted Flagger by leading global platforms and network services such as Facebook, Instagram, TikTok, YouTube, Snapchat, X (Twitter) and others. Since the Internet Safety Hotline is defined and recognized as a long-standing Trusted Flagger, it stands to reason that the response it receives to its carefully verified reports through the various platforms’ Trusted Flagger reporting channels should be better and more efficient than the response received by the ordinary user. The Hotline team’s expertise and deep understanding of platform policies enable them to send hundreds of verified reports annually, mostly based on public inquiries about harmful incidents that the team has checked and filtered. The Hotline sends its reports in strict conformity with each platform’s criteria, thereby minimizing the need for platform moderators to sift through content that does not meet their threshold for removal or labeling. As a result, the Hotline’s reports are generally met with very high response and removal rates. The Hotline team is also able to make use of its familiarity with local events and context to alert the platforms to especially urgent cases, in particular those that involve real and immediate harm or threat to human life.
Against this backdrop, the current study offers an empirical and critical examination of the manner in which various social media platforms responded to inquiries and reports regarding the publication of harmful and prohibited content in violation of their policies and community rules during the first two months of the Israel-Hamas War (October 7-December 7, 2023), and examines whether the content was removed or otherwise dealt with. The characteristics and contents of the reports submitted by the Internet Safety Hotline to the various platforms during the first two months of the war serve as the empirical basis for this examination. As mentioned above, the Hotline’s experienced professional staff thoroughly checked all inquiries before forwarding them to the platforms, drawing on their extensive familiarity with the content removal policies and reporting mechanisms of the various platforms to ensure compliance with each platform’s standards.
According to the Israel Internet Association Hotline’s 2023 summary report, the number of monthly inquiries received by the Hotline doubled after the outbreak of the war compared to the usual volume.[7] These inquiries included reports of graphic and pro-terror content, violent content, scams, lies and conspiracies, including: denial of the massacre and its details; accusations that Israel harvests Palestinian organs; accusations that the IDF murdered Israelis at the Nova festival and in the surrounding towns; false reports that the kidnapped Israeli children were being held in cages or abused; false information misidentifying one of the kidnapped as a senior army officer; tales and false claims about “traitors from within” who assisted Hamas in planning and carrying out the massacre; rumors that Arab citizens of Israel were photographing residential buildings in order to plan another attack; and false notices about cyber-attacks and preparations for further widespread terrorist attacks. All of these caused panic, tension and anxiety among the public.
Examples of conspiracies and fake content disseminated widely on social media:
The hundreds of public inquiries received through the Hotline’s official channels and reported to the platforms in these two months comprise a sample of the harmful content published on the platforms during the war, and of the response the platforms provided to the public. This report provides an empirical, well-founded and practical glimpse into the conduct of the platforms and the information they refrain from providing to the Israeli public regarding their functioning during crisis and emergency situations in general, and during the current Israel-Hamas War in particular.
The flood of harmful and dangerous content on social media from the morning of the October 7 massacre and throughout the months of the war did not go unnoticed by the platforms’ leadership. Since October 7, the platforms have been quick to publish updates on actions taken to improve the protection and safety of users in Israel and the region against harmful, disturbing or unwanted content, alongside efforts to enforce their existing community rules against fake or misleading information.[8] Meta (which owns Facebook, Instagram, WhatsApp and more) announced on October 13 that it had established a Special Operations Center staffed with Hebrew- and Arabic-speaking experts, in order to increase the rate of removal of harmful content and to protect local users from misleading information that violates its community rules. The company added that in the three days following October 7 it removed 795,000 harmful content items, seven times the amount removed over the two months prior, largely for violations of its policy on “dangerous organizations and individuals.”[9] TikTok announced that it had established a special Command Center with local experts to act in real time against violent and controversial content and to enforce its policies against violence, hate and disinformation. On October 25, the company said that as part of these special enforcement operations it had removed more than 775,000 videos and stopped more than 14,000 live broadcasts that promoted violence, terrorism, hatred or false information.[10] Similarly, the Cyber Department of the State Attorney’s Office–which effectively represents the State of Israel vis-à-vis the social media platforms, submitting voluntary requests for the removal of illegal or terror-promoting content–announced on December 12 that following the October 7 attack its content removal team had been expanded and was working in cooperation with the national security forces to detect harmful content and remove terror-promoting content related to the Israel-Hamas War. The announcement also stated that the team had examined approximately 39,000 content items published on social networks and websites and submitted more than 26,000 removal requests to the various platforms, and that in response, over 92% of the problematic content reported to Meta, TikTok and YouTube was removed.[11]
Despite these achievements, Israel falls short of accepted standards worldwide when it comes to a regulatory framework that requires platform accountability and ensures swift and effective action against illegal or dangerous content. Many countries and international bodies have redefined the parameters of platforms’ immunity and responsibility, conditioning immunity on meeting certain criteria. At the forefront of this trend is the European Union, which enacted the Digital Services Act (DSA) in 2022. The law redefines the relationship between platforms, the public and government through several policy components; notably, it imposes an obligation of fairness toward users and requires maximum transparency regarding platform content rules and the monitoring and removal practices they employ.
By contrast, Israel does not require platforms to be transparent about their monitoring and content removal practices. The platforms may choose what information and data to publish (if any), so neither the State nor the public has reliable data on user reports, the nature of the content, or how it is handled. This undermines any attempt to oversee the platforms’ activity. Accordingly, in the weeks following the events of October 7, the public and state authorities received no account of how the platforms were handling the flood of harmful content on their networks: what approach was being taken, what resources were being allocated to implement it, and what the removed posts and channels had contained.
A review of the first DSA-compliant transparency reports published by leading social media platforms (X, Meta, TikTok, LinkedIn, Snapchat and Google) reveals the number of content moderators each company employs and specifies their language abilities. Although the DSA only requires companies to provide information about EU languages, X and Snapchat also addressed Hebrew and Arabic: X stated that it employs 12 Arabic-speaking moderators and two Hebrew-speaking moderators, while Snapchat employs 529 Arabic-speaking moderators and only one Hebrew-speaking moderator. The other platforms made no mention of Israel’s local languages.[12]
The multitude of social-media-related initiatives launched by Israeli citizens, civil society organizations and technology companies in the days after the October 7 attack reflected the public’s panic and distress in the face of the unprecedented flood of harmful content and psychological terrorism on social media. These initiatives arose out of a widespread sense among users that the platforms were unable to provide an adequate response, and they attempted to fill the void in various ways. Independent collaborations formed between high-tech entrepreneurs, technologists, researchers and social activists, resulting in projects such as FakeOff,[13] IronTruth,[14] DigitalDome[15] and others. Their shared goal was to develop ways of monitoring, identifying, reporting and removing harmful and hostile anti-Israel content from social media. Most of them established reporting interfaces intended to supplement the platforms’ built-in mechanisms and to escalate reports to the platforms more effectively, in the hope of obtaining a faster response. These initiatives collected harmful content and accounts and employed various methods to have them removed, including coordinating mass reporting by activists and volunteers, forwarding content to platform safety and security teams through formal channels, or contacting Israelis employed by the various platforms, who then reported the harmful content through internal channels accessible only to employees.
Below is yet another example of the prevailing sense of powerlessness and disappointment in Israel amidst the unchecked deluge of harmful content.
The following WhatsApp message, initially sent by a TikTok employee to fellow parents of children in his son’s kindergarten three days after the massacre, quickly went viral, reaching thousands of Israelis on the same day:
All of these point to users’ sense of frustration and powerlessness regarding the official reporting mechanisms of the social media platforms, exacerbated by the absence of a governmental intermediary to whom they can turn.
Internews study: Disparities in Meta responses to trusted flaggers worldwide[16]
Each of the Internet Safety Hotline’s reporting channels, like those of other Trusted Flaggers around the world, is specific to the platform or network service that operates it, and the channels differ from one another. For example, Meta’s Trusted Partners program (Meta operates Facebook and Instagram) includes over 400 organizations from 113 countries. The organizations recognized as Trusted Flaggers are described on the company’s website as essential partners who help the company understand the local context of communities around the world affected by harmful content on its platforms.[17]
In a study conducted by the Internews organization, Meta’s responsiveness to reports from Trusted Partners in various countries was examined, based on interviews with the organizations’ representatives and on information collected about the reports and the platforms’ responses. The study found that Meta’s response times varied across countries and contexts, at times failing to meet reasonable standards for handling safety incidents, crises and harmful content. Trusted Partners in various countries said that although some cases were dealt with on the same day, others awaited a response for weeks or even months.
Trusted Partners in Ukraine, for example, received exceptional and heightened attention from Meta representatives after the Russian invasion in 2022, and in most cases received a response to reports submitted through their official reporting channels within 24 to 72 hours. These rapid response times to reports of harmful content in Ukraine during the war stand in stark contrast to Meta’s conduct and policies in other parts of the world, including many difficult conflict zones, especially in the Global South. In the Tigray War in Ethiopia, the deadliest war of the 21st century, approximately 600,000 civilians were killed in 2021-2022; yet according to Trusted Partners, Meta’s platforms responded to the reports they submitted only after weeks, and sometimes months–despite the fact that these reports, like those in Ukraine, included threats, disinformation, incitement and promotion of violence.
The positive experience of Meta’s recognized Trusted Partners in Ukraine demonstrates that when sufficient resources are allocated, the platform is able to respond quickly, providing a consistent and effective response within three days at most. Meta met these goals in Ukraine, and it can meet them anywhere, if it chooses to allocate the necessary resources.
[1] Ring, E. (2024, January 22). Op-ed for Haaretz: How Telegram, Twitter and TikTok have become lethal tools of Hamas psychological warfare. Israel Internet Association (ISOC-IL). https://en.isoc.org.il/about/news-room/op-ed-for-haaretz-how-telegram-twitter-and-tiktok-have-become-lethal-tools-of-hamas-psychological-warfare
[2] Israel Internet Association. (n.d.). Use of social networks and online services in Israel: 2024 data with demographic segments. https://www.isoc.org.il/sts-data/online_services_index
[3] Greenwood, S. (2023, June 1). Internet, smartphone and social media use around the world. Pew Research Center’s Global Attitudes Project. https://www.pewresearch.org/global/2022/12/06/internet-smartphone-and-social-media-use-in-advanced-economies-2022/
[4] The Israel Democracy Institute. (2023). An outline for the regulation of social networks in Israel. https://www.idi.org.il/books/49130
[5] Israel Internet Association. (n.d.). The characteristics of violent discourse and dealing with it on the Internet and social networks in Israel (December 2022). https://www.isoc.org.il/sts-data/violent_discourse_survey_2022
[6] An outline for the regulation of social networks in Israel (see note 4 above).
[7] Israel Internet Association. (2023). Internet Safety Hotline 2023 summary report. https://www.isoc.org.il/sts-data/helpline-isoc-2023
[8] Israel Internet Association. (2024, February 1). Regulating content and platform policy for the Israeli arena in the Iron Swords War. https://www.isoc.org.il/regulating-digital-services/israel/platform-policies-iron-swords
[9] Company, F., & Meta. (2023, December 7). Meta’s ongoing efforts regarding the Israel-Hamas war. Meta. https://about.fb.com/news/2023/10/metas-efforts-regarding-israel-hamas-war/#hebrew-translation
[10] TikTok. (2024, April 7). Our continued actions to protect the TikTok community during the Israel-Hamas war. TikTok Newsroom. https://newsroom.tiktok.com/en-us/our-continued-actions-to-protect-the-tiktok-community-during-the-israelhamas-war
[11] People and Computers. (2023, December 12). Can the prosecution succeed in removing terrorist content from the internet? People and Computers – Hi-Tech News Portal. https://www.pc.co.il/featured/399459/
[12] Carvin, A. (2023, December 6). Learning more about platforms from the first Digital Services Act transparency disclosures. DFRLab. https://dfrlab.org/2023/12/06/learning-more-about-platforms-from-the-first-digital-services-act-transparency-disclosures/
[13] FakeOff | Cleaning up the social networks. (n.d.). FakeOff. https://www.fake-off.com/
[14] IronTruth. (n.d.). Telegram. https://t.me/irontruthgroup
[15] DigitalDome – a technological platform for reporting malicious content from all networks, with an HML working around the clock to verify reports and submit them to the networks for removal. (n.d.). Digital Dome. https://www.digitaldome.io/
[16] Internews. (2023, August 2). Safety at stake: How to save Meta’s trusted partner program–Information Saves Lives | Internews. Information Saves Lives Internews. https://internews.org/resource/safety-at-stake-how-to-save-metas-trusted-partner-program/
[17] Bringing local context to our global standards | Transparency Center. (n.d.). https://transparency.fb.com/en-gb/policies/improving/bringing-local-context/