This section presents the findings that arise from the analysis of the platforms’ responses to the Israel Internet Association’s Internet Safety Hotline reports during the first two months of the Israel-Hamas War (October 7 – December 7, 2023). The analysis categorizes the different types of harmful content, notes the response times and days of the week on which responses were provided, and examines the nature and extent of the treatment of these reports by each platform.
C.1: Volume and Classification of Reports Received During the War
The reports sent by the Hotline to the platforms were sorted by type of harm: sexual harm (105), incitement and calls to violence (93), terrorist content (80), hate speech (78), false information (69), graphic content (58), violation of privacy and impersonation (23), and scams and fraud (8).
The following chart highlights the differences between the most frequently reported categories across the various platforms: false information, incitement, and calls for violence are most common on Facebook; hate speech, incitement, and calls for violence on TikTok; and sexual harassment, graphic content, and terrorism on Instagram.
C.2: Examining Response Times to Hotline Reports
The average response time to the Hotline’s reports for all platforms combined was approximately 5 days during the first two months of the war. This calculation is based exclusively on reports that received a response by January 11, 2024, when the study’s processing period closed, and does not include reports that were never answered. Therefore, when considering the average response time, one must also take into account the percentage of reports that received no response.
The following are the percentages of unanswered Hotline inquiries as of mid-January 2024 (more than a month after the end of the evaluated timeframe) for the various platforms: Facebook – 1 out of 103 (1%), Instagram – 9 out of 253 (3.6%), TikTok – 6 out of 60 (10%), and YouTube – all inquiries received a response. X sent only automated responses, which makes time measurement meaningless, so it has been excluded from this comparison.
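For illustration, the following Python sketch shows how the two figures above can be computed side by side: an average response time that counts only reports answered by the January 11, 2024 cutoff, and a separately tracked share of unanswered reports. The field names and sample records are hypothetical placeholders, not the Hotline’s actual data.

```python
from datetime import date

# Minimal sketch of the response-time methodology described above.
# Sample records are hypothetical; the real Hotline dataset is not reproduced here.
CUTOFF = date(2024, 1, 11)  # end of the study's processing period

reports = [
    # (platform, date_sent, date_answered or None if unanswered by the cutoff)
    ("Facebook",  date(2023, 10, 9), date(2023, 10, 18)),
    ("TikTok",    date(2023, 10, 9), date(2023, 10, 11)),
    ("Instagram", date(2023, 11, 2), None),
]

def average_response_days(records, platform):
    """Average response time in days, counting only reports answered by the cutoff."""
    delays = [
        (answered - sent).days
        for p, sent, answered in records
        if p == platform and answered is not None and answered <= CUTOFF
    ]
    return sum(delays) / len(delays) if delays else None

def unanswered_share(records, platform):
    """Share of a platform's reports that received no response by the cutoff."""
    sent = [r for r in records if r[0] == platform]
    unanswered = [r for r in sent if r[2] is None or r[2] > CUTOFF]
    return len(unanswered) / len(sent) if sent else None
```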
The graph above (“Average Response Time by Platform”) demonstrates considerable differences between the platforms’ response times. While TikTok responded to Hotline reports within three days on average, Facebook’s average response time was over a week. Notably, Facebook and Instagram differ significantly in their average response times: although both platforms are owned by Meta and are apparently subject to the same content policy, Facebook’s response time to Hotline reports was almost three days longer on average, a noteworthy disparity between platforms under the same ownership.
Response Times to Hotline Reports During the First Weeks of the War
In the first week of the war, Facebook, Instagram, and TikTok responded to the Hotline’s reports within 24-72 hours. TikTok continued to respond within 24-48 hours over the following two weeks, but Instagram’s and Facebook’s average response times lengthened to about 7 days in the second week. The following diagram shows the response time of each platform, segmented by the nature of the content reported:
On Facebook, reports about misleading information, hate speech, and issues related to impersonation and privacy received particularly slow responses, taking over 10 days on average. On Instagram, inquiries about incitement and calls to violence were handled relatively slowly compared to other issues, with response times of over a week. On YouTube, reports of graphic content and terrorism received a relatively quick response, within 48 hours, while the remaining reports waited over a week. On TikTok, average response times were short, mostly hovering around the 72-hour mark.
Platforms Unresponsive on Weekends
In contrast with the platforms’ pledges to open special operations centers that would treat the emergency created by the war with special consideration and improve the rate and speed of response, the platforms’ response volume on weekends dropped to about 10% of their average weekday response volume. The average number of reports submitted by the Internet Safety Hotline to the platforms also dropped during the Israeli weekend (Friday-Saturday), but only by 60%, which does not explain the dramatic 90% drop exhibited by the platforms.
The above chart demonstrates that, unlike the other platforms, TikTok and YouTube provide a partial response even on weekends. In contrast, Facebook and Instagram provide no response at all on weekends, making it clear that their moderation and safety teams are available only on standard working days in North America and Europe.
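As a back-of-the-envelope illustration of why the drop in submissions cannot account for the drop in responses, the following sketch uses hypothetical round numbers: if weekend submissions fall by 60%, a proportional fall in responses would leave roughly 40% of the weekday response volume, not the roughly 10% observed.

```python
# Illustrative arithmetic only; the baseline figures are hypothetical round numbers,
# not the Hotline's raw data.
weekday_reports_per_day = 100      # hypothetical weekday submission volume
weekend_reports_per_day = 40       # a 60% drop in submissions over the weekend
weekday_responses_per_day = 100    # hypothetical weekday response volume

# If responses fell only in proportion to submissions, we would expect:
expected_weekend_responses = weekday_responses_per_day * (
    weekend_reports_per_day / weekday_reports_per_day
)
print(expected_weekend_responses)  # 40.0 per day expected, versus roughly 10 observed
```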
C.3: Assessment of Platform Response Effectiveness
As part of the study, a comparative examination was conducted of the percentage of cases on each platform in which adequate enforcement actions were not carried out (out of all the reports sent to the platforms throughout the first two months of the war), as compared with the percentage of cases that did receive proper handling. “Proper handling” is defined as removal or labeling of the reported content, or, alternatively, a satisfactory explanation of why the reported content is not considered a policy violation. Since each report is examined by the Hotline’s experienced representatives before being forwarded to the platforms in accordance with their stated policy rules, a platform’s failure to remove or flag the reported content without offering a sufficient explanation or justification is defined here as a failure to take adequate enforcement action in response to a report of harmful content.
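As a rough illustration of how this metric can be computed from case records, the following Python sketch classifies each report’s outcome as proper or improper handling and derives a per-platform failure rate. The outcome labels, field names, and sample records are hypothetical placeholders, not the study’s actual data.

```python
# Minimal sketch of the "proper handling" metric defined above.
# Outcome labels and sample records are hypothetical placeholders.
PROPER_OUTCOMES = {"removed", "labeled", "satisfactory_explanation"}

reports = [
    # (platform, outcome)
    ("Facebook", "removed"),
    ("Facebook", "no_action_no_explanation"),
    ("Instagram", "labeled"),
    ("TikTok", "satisfactory_explanation"),
]

def improper_handling_rate(records, platform):
    """Share of a platform's reports that did not receive a proper enforcement response."""
    outcomes = [o for p, o in records if p == platform]
    improper = sum(1 for o in outcomes if o not in PROPER_OUTCOMES)
    return improper / len(outcomes) if outcomes else None

for platform in {p for p, _ in reports}:
    print(platform, improper_handling_rate(reports, platform))
```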
The following chart shows the percentage of reports that did not receive a proper enforcement response, broken down by platform:
In a comparative view of the percentages of poorly handled reports, Facebook stands out negatively with 25% poor handling (that is, only 75% proper handling). In contrast, the other platforms failed to take sufficient enforcement actions for 11-13% of the reports (that is, 87-89% proper handling).
When comparing types of harmful content, false information is the category with the highest average rate of poor handling: the platforms failed to take sufficient enforcement actions for 42% of these reports (see the next subsection for further elaboration). The rates of poor handling are also high for reports of invasion of privacy and impersonation, hate speech, and incitement and calls for violence (12-19% of the reports). On the other hand, the vast majority of reports on graphic content and terrorist-supporting content received proper handling (95-99%).
Platform Responses to Hate Speech
Hate speech includes posts or direct attacks targeting people based on protected characteristics such as race, religion, gender, or sexual orientation, using aggressive language, slander, offensive stereotypes, claims of innate inferiority, and the like. Hate speech creates a discourse environment of intimidation and polarization and sometimes even encourages direct physical violence. Most social media networks prohibit the promotion of content defined as hate speech on their platforms.[1][2]
Hate speech reports resulted in enforcement actions at particularly low rates: only 26% of reports in this category led to enforcement actions on TikTok, only 15% on Instagram, and only 5% on Facebook.
Platform Responses to Graphic Content
Graphic content includes disturbing images or videos that contain extreme violence or abuse, such as gruesome injuries, dead bodies, acts of violence, or the infliction of severe suffering, as well as sexually explicit content not suitable for general audiences. Since the outbreak of the war, social media platforms have been flooded with this kind of horrific content, documenting the massacres and kidnappings of October 7 in Israeli communities and the subsequent IDF bombings in the Gaza Strip.
An examination of the platforms’ treatment of graphic content shows that, unlike the other platforms, Instagram responds to this category of posts in varying ways: sometimes it removes the content, and in cases where the content is not removed, it usually labels it with a warning, alerting users to the graphic content displayed in the post. 94% of the reports forwarded to Instagram regarding graphic content were properly handled; of these, half were removed from the platform and the other half were flagged and given a viewing-warning label.
It should be noted that on the other platforms the sample of reports on posts containing graphic content was small.
C.4: Platforms’ Failures in Handling Misinformation
Facebook’s handling of reports concerning false content is significantly deficient–the platform failed to properly handle 57% of the reports it received from the Hotline regarding false and misleading information.
When focusing solely on reports categorized only as False Info (excluding those belonging to multiple content categories, such as false information that also involves incitement to violence), Facebook’s rate of poor handling rises notably to 72% of cases. In contrast, Instagram’s and TikTok’s rates of poor handling remained consistent with those observed for the broader group of false-information reports, including those associated with additional content categories. This figure reinforces the impression that, compared with the other platforms, Facebook either has difficulty handling reports on content defined as disinformation or false content, or systematically avoids handling them.
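To illustrate the distinction drawn above, the following Python sketch computes the poor-handling rate for false-information reports twice: once over all reports tagged with that category, and once restricted to reports tagged with it alone. Category names, field names, and sample records are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch of the single-category breakdown described above.
# Category labels and sample records are hypothetical placeholders.
reports = [
    # (platform, set of content categories, handled_properly)
    ("Facebook", {"false_info"}, False),
    ("Facebook", {"false_info", "incitement"}, True),
    ("Instagram", {"false_info"}, True),
]

def poor_handling_rate(records, platform, only_single_category=False):
    """Poor-handling rate for false-information reports, optionally restricted
    to reports tagged with the 'false_info' category alone."""
    relevant = [
        handled
        for p, cats, handled in records
        if p == platform
        and "false_info" in cats
        and (not only_single_category or cats == {"false_info"})
    ]
    failures = sum(1 for handled in relevant if not handled)
    return failures / len(relevant) if relevant else None

print(poor_handling_rate(reports, "Facebook"))                            # all false-info reports
print(poor_handling_rate(reports, "Facebook", only_single_category=True)) # single-category only
```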
C.5: Quality of Platform Responses to Trusted Flagger Reports
The Israel Internet Association’s comparative analysis also addresses the nature and content of the platforms’ written responses to reports received from the Hotline, an entity they have recognized as a Trusted Flagger in recent years. The Hotline relies on the platforms’ responses to monitor each report’s progress and outcome, and thus to manage the inquiries it receives, to close those that have been resolved, and to learn from the feedback, improving its inquiry-filtering process and refining how reports are forwarded according to each platform’s relevant policy sections. As a general rule, the responses are sent to the Hotline’s dedicated email address, the same address used to submit reports to the platforms. Apart from the nature of the response (accepted/rejected) and the action taken (removed/labeled, etc.), the content and wording of the platforms’ responses were also reviewed, in order to examine their attitude toward reports received from official reporting bodies defined as Trusted Flaggers.
In general, the comparative examination found that Meta (owner of Facebook and Instagram) provides fixed, templated, and superficial answers in response to reports, which include no information about, and do not refer to, the specific content of the report.
Meta’s responses to the Internet Safety Hotline’s reports of harmful content:
The wording of Meta’s email response in cases where content reported by the Hotline was removed or labeled:
Thank you for your report. Please note that we have now reviewed the related content and taken appropriate action. Do not hesitate to let us know if you require further assistance.
The wording of the Meta email response typically received when Meta decides not to act against the reported content:
Our team has done an in-depth investigation of the content in question, but has found that it does not violate our Community Standards.
The wording of the Meta email response typically received if the reported content was removed before the teams had addressed the Hotline’s report:
It looks like the content you reported is no longer accessible on our site. If you’re still seeing this content on the site, please reply to this email with a web address (URL) that links directly to the potentially violating material, in order for our teams to review it.
The wording of Meta’s response to Hotline reports in cases where the reported content was forwarded to independent fact-checkers:
We have reviewed your report. It unfortunately isn’t clear to us that any of the reported content violates Facebook’s Community Standards or Instagram’s Community Guidelines. We have passed the content to third party fact-checkers (3PFC) for review for potential misinformation.
Similarly, TikTok sent templated and laconic responses to Hotline reports, despite its recognition of the Hotline as a reliable reporter.[3]
In contrast, YouTube provides a detailed and specific response to each report, noting the policy section that was violated and the action taken accordingly. The responses from YouTube’s safety teams are noteworthy for their level of detail and personal tone, acknowledging the reported content and the relevant policy sections. They address the reporter (the Hotline representative) by name, and the email is signed by one of the platform’s employees. Such responses help the Hotline team characterize, clarify, and improve future reports.
Examples of Responses Received from YouTube Regarding Hotline Reports:
C.6: The “Warning Label” Approach in Response to Graphic Content and False Information
Platforms take various approaches in dealing with graphic content that violates their policies and poses potential harm or risk to users. Sometimes a platform will opt not to remove the content but rather to label the content in some way. One method employed by platforms is to blur the content and add a warning label or contextual note before viewing, which serves as an alternative to complete removal and allows users to make an informed decision about whether or not to view it.
Examples of graphic content that was labeled rather than removed (in response to 22 reports on Instagram and two on YouTube):
Example of false information labeled by external fact-checkers (in response to a single report on Instagram):
Unlike the comments and labels added to content deemed problematic, when content is removed from a platform, a message is displayed to the user stating that they have reached unavailable content. However, most platforms do not state what the removed content was, why it was removed or under which policy section, or whether the removal was an enforcement action or an action taken by the user who uploaded it, such as changing privacy restrictions or viewing permissions, or deleting the post.
The minimal detail and absence of transparency prevent users from knowing what happened to the content and why, impeding public accountability both for accounts that face enforcement measures and for users who delete regrettable posts. Furthermore, this lack of transparency hampers awareness among followers and other users of illegitimate content that has been removed.
The lack of detail and transparency also prevents Trusted Flaggers from examining the enforcement actions taken by the platforms and the practical response to the reports they submitted. This affects the work processes of these Trusted Flaggers, especially in light of the meager, templated, and generic response to reports via email.
In general, the comparative examination found that TikTok and the Meta platforms provide vague information about the fate of reported content. It is impossible to know whether the content was removed by the platform or by the user who posted it, whether the entire profile was removed or only the specific content, and, if enforcement action was taken, which policy section served as grounds for the action. TikTok even uses the removal notice as an opportunity to encourage users to continue consuming content on its platform. In contrast, YouTube offers slightly more detail and specifies that enforcement action has been taken against the content or account, but does not specify the nature of the violations or the policy sections that define them as such.
Examples of How Removed Content is Displayed on Each Platform
Instagram – Sorry, this page is not available (the link you clicked may be broken or the page has been removed)
Facebook – This content is currently unavailable (usually because the owner has shared the content with a small group of people, redefined who can see the content, or because the content has been deleted)
TikTok – The video is currently unavailable (Looking for videos? Try browsing our creators, hashtags and trending sounds)
YouTube – Clarification that the account has been closed and details of the possible reasons for this
[1] TikTok | Countering hate on TikTok – link
[2] Hate speech | Meta Transparency Center – link
[3] The TikTok response template for content removal reports:
Thank you for bringing this matter to our attention. We have reviewed your report and have now removed the content for violating our Community Guidelines.
The TikTok email template indicating content removal or labeling:
We have reviewed your report and have taken appropriate action on the content.
Response indicating that the reported content was removed before the teams had a chance to address the report:
We have reviewed your report and found that the content in question has already been removed. If there are further concerns regarding this report, please respond to this email.