The deadly attack by Hamas on towns and bases adjacent to the Gaza Strip on October 7, 2023, and the war that broke out in its aftermath in Gaza and across Israel, revealed in an unprecedented way the destructive use of social media platforms and commercial online communication technologies by terrorist organizations and other malicious actors, who turned these platforms into a weapon of mass manipulation: sowing fear, spreading terror, and extending the circles of impact into the real world.
Social media platforms, which enable and shape the public discourse and information environment of our time, were flooded with graphic, toxic, and harmful content during the war, and struggled to cope with the volume and nature of the material to which users were exposed. Within moments of the start of the Hamas attack, the platforms became an unsafe environment for their users because of malicious actors who sought to distort users’ perception of reality. Although the platforms claimed they were making efforts to monitor their networks and purge this content, the flood of harmful content continued for weeks and months, deepening the anxiety and secondary trauma that the war inflicted on many people in Israel and around the world.
In the absence of local regulation or transparency requirements regarding the safety measures the platforms implement in Israel, state institutions and users lack a factual picture of the scope of harmful content and are left in the dark as to how, and to what degree, the platforms address it. Against this backdrop, the Israel Internet Association initiated the present study, whose purpose is to examine critically, empirically, and comparatively how the various platforms handle the harmful content and false information disseminated through them. The study examines the procedures and effectiveness of the actions taken by leading social media platforms in response to inquiries and reports sent to them by the Israel Internet Association’s Internet Safety Hotline regarding policy-violating content during the first two months of the most recent Israel-Hamas War. The Internet Safety Hotline is officially recognized by the platforms as a reliable and trusted reporting entity, also known as a Trusted Partner or Trusted Flagger.
The research findings reveal that the platforms’ functioning and overall response throughout this period were lacking: they did not take sufficient enforcement action in response to many of the inquiries and reports, their response times to most reports were long and unreasonable given the state of emergency and the gravity of the violations, and they rarely provided a satisfactory response on weekends. The findings also show significant disparities in the manner and speed of response across platforms, depending on the nature of the content: the response to content requiring human review was distinctly deficient, and the response to disinformation and false information was also particularly poor.
The analysis of the findings indicates that, in general, the platforms did not do enough to provide a transparent, detailed, and comprehensive response to reports of harmful content and activity that violated their policies and community rules during the war. Most of them responded partially, slowly, and inconsistently even to urgent, verified reports sent by the Internet Safety Hotline, despite officially recognizing it as a reliable, experienced, and preferred reporting party. This is particularly alarming when one considers how the platforms must have treated reports submitted by the general public through their built-in reporting mechanisms. And yet, the data shows that in the first week of the war the platforms acted quickly and handled harmful content satisfactorily, demonstrating that when the platforms dedicate appropriate resources, personnel, and effort to removing content and creating a safer environment, they are able to meet the task.
Based on the conclusions of the research, several important steps must be taken to keep users protected, both in ordinary circumstances and in future emergencies. Tackling the challenges uncovered by this study will require the involvement of all players in the field: social media platforms, the State and legislators, as well as research organizations, media outlets, and civil society groups. If the platforms increase the resources invested in moderation and improve its transparency, and if sound legislation regulating the platforms’ activity is passed, we could see a dramatic improvement in the safety of social media users in Israel. These recommendations hold true for everyday life, and all the more so in times of emergency, such as the wars and military and security conflicts that unfortunately characterize life in Israel and the region, as in many other countries around the world.