Recommendations for Social Media Platforms
This study’s analysis of the platforms’ responses to harmful content reports during the war yields critical insights and points to several changes that must be made to ensure the protection and safety of social media users amid current threats and challenges. The recommendations presented here are most consequential in times of emergency (violent conflicts, wars and disasters), though the platforms would do well to adopt them as standard practice under normal circumstances as well. They were formulated on the basis of experience accumulated over recent months and years in Israel and the region, but they are likely just as applicable to other regions and contexts across the world, and they should be seriously considered and implemented as soon as possible.
Special plans for conflict and war
Emergencies, wars and violent conflicts require social media platforms to prepare for action and response in a faster, more decisive and more comprehensive manner than during normal times. The preparations should include:
- Expansion of days and hours of operation: Resources must be allocated to extend operating days and hours so as to provide a continuous response to the increased volume and variety of content during an emergency, by adding skilled local personnel who understand the language, contexts and nuances of the situation in real time.
- Adapting moderation mechanisms to deal with a continuous flow of content: Enforcement and content-handling mechanisms must be adapted to cope with a sharp, sustained increase in the volume and severity of harmful content during a crisis. As the findings show, the platforms quickly reverted to their normal mode of operation and did not allocate sufficient resources to the reporting channel designated for the Internet Safety Hotline and other official reporters.
- Shortening response times: Response times must be shortened and a regular, consistent and quick response provided, even on weekends, in order to handle problems and crises in real time. This real-time moderation approach must include both platform-initiated, proactive monitoring and removal of harmful content, and swift reaction to reports from the public and Trusted Flaggers.
- Prioritizing Trusted Flaggers: High priority should be given to the reporting channels of recognized and reliable Trusted Flaggers. This should be reflected in faster processing, greater weight given to the reliability of their reports, and greater transparency in the responses they receive (a minimal sketch of such prioritization appears after this list).
- Improving user reporting mechanisms: The mechanisms for reporting current events and emergency situations must be improved and adapted, with an emphasis on content that qualifies as false information, incitement and support for terrorism. Users must also be offered a clear explanation as to why content has been labeled or removed, to enable more effective monitoring of enforcement processes.
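To make the prioritization principle concrete, below is a minimal sketch in Python of an intake queue in which Trusted Flagger reports are processed ahead of regular user reports and carry a tighter response deadline. All names, tiers and SLA values here are illustrative assumptions, not any platform's actual implementation.

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative priority tiers; a real platform would define its own.
PRIORITY = {"trusted_flagger": 0, "regular_user": 1}

@dataclass(order=True)
class Report:
    sort_key: tuple = field(init=False, repr=False)
    source: str = field(default="regular_user", compare=False)
    category: str = field(default="other", compare=False)  # e.g. "incitement"
    submitted_at: datetime = field(default_factory=datetime.utcnow, compare=False)

    def __post_init__(self):
        # Trusted Flagger reports sort ahead of regular ones;
        # within a tier, older reports are handled first.
        self.sort_key = (PRIORITY[self.source], self.submitted_at)

    def deadline(self) -> datetime:
        # Hypothetical SLA: 2 hours for Trusted Flaggers, 24 hours otherwise.
        hours = 2 if self.source == "trusted_flagger" else 24
        return self.submitted_at + timedelta(hours=hours)

queue: list[Report] = []
heapq.heappush(queue, Report(source="regular_user", category="disinformation"))
heapq.heappush(queue, Report(source="trusted_flagger", category="incitement"))

first = heapq.heappop(queue)  # the Trusted Flagger report comes out first
print(first.category, "due by", first.deadline().isoformat())
```

The design point is that priority is a property of the reporting channel, not of the individual report, so a recognized flagger's queue position and deadline are guaranteed in advance rather than decided case by case.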
Dealing with disinformation and fact-checking
The study found that false and misleading information (especially dangerous during times of war or violent conflict) posed a significant and widespread challenge for all the platforms examined. Steps to improve the handling of this issue must include:
- Expansion of cooperation with local entities: The platforms must form and expand collaborations with a variety of local fact-checking bodies and independent experts who understand the local language and context.
- Improving reporting mechanisms and response to emergency-related false content in real time: The current reporting mechanisms do not allow precise and specific reporting of false content related to current events, and must be adapted to allow the reporting of disinformation specific to a state of war, disaster or crisis.
- Making information and tools accessible to research and academic institutions: It is imperative that the platforms make their data and data analysis tools accessible to research organizations that monitor and analyze content patterns and coordinated inauthentic behavior (CIB) in the realm of false and misleading content. The platforms must also disclose to the research community and to the public the internal studies they conduct. This will foster mutual learning, improve common understanding of how to address the issue, and give researchers access to data enabling external, independent examination of the current state of affairs; one simple example of the kind of analysis such access enables is sketched below.
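Here is a minimal sketch in Python of one common CIB heuristic: flagging texts posted by many distinct accounts within a narrow time window. The post records and thresholds are invented for illustration; in practice such analysis would run on data obtained through the platforms' research tools.

```python
from collections import defaultdict

# Invented post records of the kind a platform data tool might expose.
posts = [
    {"account": "a1", "text": "same message", "ts": 0},
    {"account": "a2", "text": "same message", "ts": 40},
    {"account": "a3", "text": "same message", "ts": 90},
    {"account": "b1", "text": "organic post", "ts": 500},
]

def flag_cib(posts, min_accounts=3, window_seconds=120):
    """Flag identical texts posted by many distinct accounts within a
    short window -- one simple signal of possible coordination."""
    by_text = defaultdict(list)
    for p in posts:
        by_text[p["text"]].append(p)
    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = sorted(p["ts"] for p in group)
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window_seconds:
            flagged.append((text, sorted(accounts)))
    return flagged

print(flag_cib(posts))  # [('same message', ['a1', 'a2', 'a3'])]
```

Exact-text matching is only a first pass; fuller CIB analysis also relies on near-duplicate detection, shared links and account-creation patterns, which is precisely why researchers need richer data access.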
Strengthening the relationship with recognized and local Trusted Flaggers
In view of the important role played by local Trusted Flaggers like the Internet Safety Hotline, the platforms must invest resources in strengthening these relationships by providing training and data tools, issuing updates in times of emergency, prioritizing their reports, enhancing response transparency, and adhering to fair procedures and good practices.
- Training and feedback for local Trusted Flaggers: The platforms must expand their circle of Trusted Flaggers by establishing relationships with other civil society organizations in the field and conducting periodic training sessions for Trusted Flaggers as a matter of routine to keep them up-to-date on the platforms' enforcement policies and guidelines. In addition, platforms need to provide regular feedback on rejected reports clarifying the reasons for rejection, and consistent updates on policy changes and enforcement in times of emergency.
- Development, enhancement and provision of technological tools: The platforms must provide Trusted Flaggers with sophisticated reporting tools and interfaces for monitoring and analyzing reports and responses, allowing them to optimize their work. Reporting interfaces should offer accessible, simple-to-fill-out online forms (rather than email reporting, which is extremely difficult to organize and monitor) and include a feature that lets reporters track how their reports are handled and appeal decisions; a sketch of such a report-tracking data model appears after this list.
- Increased attention to reports: The platforms must establish binding, well-publicized time targets for responding to Trusted Flagger reports, and each Trusted Flagger must receive a specific, named representative who is to serve as its designated contact in emergency cases. Additionally, the platforms should clearly define expected response times for regular users and publish clear instructions as to courses of action available to them in the event of a delayed or unsatisfactory response.
- Increasing public transparency: The platforms must publish clear policies delineating the procedures for handling reports, including expected response times and well-defined handling goals. In addition, the platforms must follow up on reports by updating reporters with the actions taken in response to each report and the relevant policy section on which the decision rests.
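As a concrete illustration of the tracking and transparency features recommended above, below is a minimal Python sketch of the report-lifecycle data such an interface would need to expose: a published SLA, a visible status, the policy basis for the decision, and an appeal path. All names and values are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class Status(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    ACTION_TAKEN = "action_taken"   # content removed, labeled, etc.
    REJECTED = "rejected"
    APPEALED = "appealed"

@dataclass
class TrackedReport:
    report_id: str
    content_url: str
    submitted_at: datetime
    sla: timedelta                         # published, binding response target
    status: Status = Status.RECEIVED
    decision_reason: Optional[str] = None  # the policy section relied upon

    @property
    def overdue(self) -> bool:
        # Lets a Trusted Flagger see at a glance which reports missed the SLA.
        still_open = self.status in {Status.RECEIVED, Status.UNDER_REVIEW}
        return still_open and datetime.utcnow() > self.submitted_at + self.sla

    def appeal(self) -> None:
        # An appeal is only meaningful once a final decision has been made.
        if self.status in {Status.REJECTED, Status.ACTION_TAKEN}:
            self.status = Status.APPEALED
```

Making the SLA and the decision's policy basis explicit fields, rather than free text in an email thread, is what turns the reporting channel into something reporters can actually monitor and audit.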
Recommendations for Lawmakers and Government Authorities
In order for the State to ensure the public’s safety on social media platforms, it must be able to monitor the platforms’ operations and require them to meet standards and demands established by local legislation.
- Require platforms to maintain high levels of service and availability: The study shows that the levels of responsiveness and availability of the platforms in times of emergency are insufficient, causing harm to public welfare. Lawmakers should obligate the platforms to provide high levels of service and availability, both on weekdays and on weekends, especially during periods of emergency.
- Require platforms to understand local language and context: The platforms should be obligated to employ qualified local personnel with a deep understanding of Israeli language and context as a prerequisite for their operations in the country.
- Demand transparency: Lawmakers should require the platforms to maintain transparency regarding the scope and caliber of resources devoted to safeguarding users in Israel, including details on enforcement and removal actions in both emergency and routine situations.
- Invest in education and digital literacy: Since it would be imprudent to rely solely on the platforms, the State should invest systematically in education and in promoting smart, safe internet use. Safe internet advocacy should target the general public as well as government bodies, academia, and the media. This will strengthen public resilience in both routine and emergency situations.
Recommendations for Academia, Media and Civil Society
The State and the platforms are not the only ones who can work to enhance online user safety; so can communication and academic researchers, research institutes, nonprofits, foundations, activists, media outlets, and fact-checking organizations.
- Strengthen information collection and monitoring capabilities: It is important that academic, media, and civil society bodies develop advanced skills and tools for fact-checking, digital research, and monitoring of content and disinformation, capabilities that are currently in short supply in Israel. Improved communication and collaboration are needed among technology companies developing tools for social media information collection and monitoring, academic research bodies, and civil society organizations.
- Collect data and track platform performance: Emphasis should be placed on encouraging independent research initiatives in the academic, research and civil society sectors that monitor and collect data on platform behavior in the Israeli context. Suitable research methods should be developed, consortiums dedicated to this purpose cultivated, and the resulting insights and information disseminated to the public and to relevant decision makers; a minimal sketch of such tracking appears after this list.
- Increase pressure on the platforms and represent public demands: Civil society and research bodies must take an active and critical stance toward the global platforms operating in Israel, as well as toward legislators and decision makers. They must emphasize the public demand for improving the platforms and adapting them to the situation in Israel by increasing transparency vis-à-vis users and the authorities, fortifying protective mechanisms, and raising the standard of action and responsiveness when it comes to protecting their users in Israel.
- Intensify civil and media activity against false content on social media: The civil and media landscape responding to the dissemination of false and misleading content on social networks must be strengthened and expanded. This can be achieved by providing support to independent media organizations, research entities, and civil bodies dedicated to internet research and fact-checking. Strengthening existing players in the field, while also establishing additional independent organizations and media units devoted to fact-checking and internet research, will result in more comprehensive and effective solutions.
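As a minimal sketch of what such independent tracking could look like, the Python snippet below computes a per-platform median response time from a monitoring log. The log entries and platform names are invented for illustration; in practice they would come from systematically recorded reports and platform responses.

```python
from statistics import median

# Invented monitoring log: (platform, hours from report to final response).
log = [
    ("PlatformA", 5.0), ("PlatformA", 30.0), ("PlatformA", 12.0),
    ("PlatformB", 72.0), ("PlatformB", 48.0),
]

def response_summary(log):
    """Median response time per platform -- a simple, publishable metric
    that decision makers and the public can compare across platforms."""
    by_platform: dict[str, list[float]] = {}
    for platform, hours in log:
        by_platform.setdefault(platform, []).append(hours)
    return {p: median(h) for p, h in by_platform.items()}

print(response_summary(log))  # {'PlatformA': 12.0, 'PlatformB': 60.0}
```

Even a simple, consistently published metric of this kind gives civil society and researchers a factual basis for the public pressure recommended above.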