This study is based on data collected by the Israel Internet Association’s Internet Safety Hotline. It covers all the reports the Hotline team sent to the platforms between October 7 and December 7, 2023, the responses the platforms returned through official channels, and the actions the platforms took on the reported content. The study does not cover every report the Hotline received; it includes only reports of content whose problematic nature was checked, verified, and escalated to the various platforms.[1] The study deliberately includes reports of “routine” online offenses not directly related to the war, such as instances of fraud and sexual harassment reported during the observed timeframe. These are included in order to assess how the platforms handle reports that are not necessarily war-related but arrive during wartime and may be influenced by the ongoing conflict.
During the first two months of the war (October 7–December 7, 2023), the Internet Safety Hotline received 1,004 inquiries and reports from various parties: the general public and social media users, various initiatives, civil society organizations, and state authorities. Of these, 581 inquiries concerned offenses, content, or accounts on the five platforms examined in this study: Facebook, Instagram, YouTube, TikTok, and X. It is important to clarify that the number of reports the Hotline sent to the platforms is not determined solely by the number of inquiries received from the public. Inquiries reached the Hotline in various ways, and some included a large number of content items or accounts. These underwent inspection and filtering, and the relevant details were forwarded to the platforms’ safety teams. Some inquiries also led to independent follow-up investigations by the Hotline team, which uncovered additional content items or accounts that were then reported to the platforms.
This study describes and analyzes the manner and quality of each platform’s response to the 447 reports that the Israel Internet Association’s Hotline forwarded to them during the first two months of the war: Facebook (103 reports), Instagram (253 reports), YouTube (8 reports), TikTok (60 reports), and X (23 reports).[2]
The Hotline team classified the reports into thematic categories by content area. Each report falls under at least one category, and some fall under two or more (see the illustrative sketch after the table below):
| Category | Description |
| --- | --- |
| False info | Misleading information, conspiracies, and outright lies |
| Graphic content | Photos and videos of gruesome injuries, dead bodies, acts of violence, or kidnappings |
| Hate speech | Hateful speech, including racism and blatant anti-Semitism |
| Terrorism | Content and channels of terrorist organizations, support and encouragement for the actions of terrorist organizations, and content related to the terrorist acts themselves |
| Scam / fraud | Use of the networks for financial fraud and phishing |
| Incitement and call to violence | Content that incites or facilitates violence, and credible threats to public or personal safety |
| Impersonation or violation of privacy | Stealing photos, impersonation, and distribution of personal information |
| Sexual harassment | Sextortion (sexual extortion), and the use of personal and intimate images of women and men for sexual fraud and impersonation |
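Because the categories are not mutually exclusive, per-category tallies can sum to more than the total number of reports. The following is a minimal sketch of how such multi-label tagging can be counted; the report IDs and tag assignments are hypothetical examples, not the Hotline’s actual data or tooling:

```python
from collections import Counter

# Hypothetical multi-label tagging: each report ID maps to one or more
# of the thematic categories used in this study.
reports = {
    "r001": {"Graphic content", "Terrorism"},  # a report may carry several tags
    "r002": {"Scam / fraud"},
    "r003": {"Hate speech", "Incitement and call to violence"},
}

# Per-category tallies count each report once per tag it carries...
per_category = Counter(tag for tags in reports.values() for tag in tags)

# ...so the tallies can sum to more than the number of distinct reports.
print(sum(per_category.values()))   # 5 tag occurrences
print(len(reports))                 # 3 distinct reports
```

The same logic explains why the category breakdowns reported later in the study need not add up to the 447 forwarded reports.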
X has undergone significant changes, including the reduction of its safety, moderation, and policy teams;[3] the platform has become more permissive, intervenes less in the flagging and removal of content, and has closed its dedicated reporting interface for Trusted Flaggers. These changes have made it more difficult to report problematic and harmful content distributed on X. The Hotline team did send several reports through the reporting mechanism meant for regular users, but this mechanism does not allow analysis of the actions the platform took in response to the Hotline’s reports, so a comparative examination of X’s official reporting channel is impossible.
Additionally, to provide a broader and more complete picture, the research included conversations and interviews with representatives of civic initiatives and internet-monitoring technology groups, representatives of the platforms and their safety teams in Israel and abroad, fact-checkers working with the platforms, and the Israel Internet Association’s Hotline team.
[1] During the first months of the war, the Hotline team received thousands of inquiries from citizens, organizations, and various initiatives. The team examined each inquiry meticulously against the relevant policy clauses of each platform, and forwarded to the platforms only reports about content or accounts that, upon thorough examination, were found to genuinely violate those clauses or to constitute a real risk of harm, and that could therefore be expected to be acted upon.
[2] The reports sent to YouTube are too few to produce a clear picture of the situation.
[3] eSafety Commissioner. (n.d.). Report reveals the extent of deep cuts to safety staff and gaps in Twitter/X’s measures to tackle online hate. https://www.esafety.gov.au/newsroom/media-releases/report-reveals-the-extent-of-deep-cuts-to-safety-staff-and-gaps-in-twitter/xs-measures-to-tackle-online-hate