While the Internet has enabled countless advances in education, workforce productivity, consumer services, social networking, and community connectedness, like all technologies it can be misused in a variety of ways, including elder abuse, security attacks, fraud, and harassment. One such abuse is the use of websites, applications, and other technologies to distribute and publicize sexually intimate or explicit images of individuals without their consent, often with the intent to do emotional, financial, or personal harm. This practice is abhorrent to the technology industry and has led many industry leaders to ban it and to create mechanisms to assist victims. This document chronicles those efforts and provides a broad overview of these practices to help all industry participants adopt best practices in this area.
At the outset, it is important to note that in responding to this abhorrent behavior, industry leaders have sought to develop an aggressive response that is mindful of both the powers and the limitations of technology, while demonstrating their intolerance of this behavior and their commitment to its victims. The following overarching themes have informed all of the best practices cataloged here.
The first step in combating the non-consensual distribution of intimate images is to develop a strong, clear, and readily discoverable policy that addresses the topic. Policies will vary depending on the nature of the product, good, or service provided, but they should focus on prohibiting the conduct.
For content providers or hosts, sharing platforms, or social media products, the policy should prohibit the public storage or distribution of intimate images intended to be private on the device, platform, application, or website. There are some differences in how such conduct is banned, but the most common policies are typically subsets of broader prohibitions on nudity, pornography, or harassment. Policies should be easy to find, clear to understand and follow, and include specific instructions for victims. When the policy is violated, the offending content should be removed, and user-level or even device-level bans should be imposed on actors who violate these terms.
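To make those enforcement steps concrete, the following is a minimal sketch, in Python, of how a platform might model them; the report fields, action names, and review flag are hypothetical illustrations, not a description of any particular company's system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class EnforcementAction(Enum):
    REMOVE_CONTENT = auto()
    BAN_USER = auto()
    BAN_DEVICE = auto()


@dataclass
class ViolationReport:
    content_id: str
    uploader_id: str
    device_id: Optional[str]  # not every platform can identify a device
    confirmed: bool           # set once review finds the content violates the policy


def enforce(report: ViolationReport) -> list[EnforcementAction]:
    """Remove confirmed violating content and ban the responsible account,
    escalating to a device-level ban where a device is known."""
    if not report.confirmed:
        return []
    actions = [EnforcementAction.REMOVE_CONTENT, EnforcementAction.BAN_USER]
    if report.device_id is not None:
        actions.append(EnforcementAction.BAN_DEVICE)
    return actions
```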
For search engines and content aggregators, policies should, where possible, focus on assisting the victim of the conduct and reducing the harm. These policies may include removing offending URLs from search results and/or banning content providers that feature these images from aggregator sites and search engines.
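As an illustration only, a de-indexing step of the kind described above might look like the following sketch; the blocklists, their contents, and the filtering function are assumptions made for the example.

```python
from urllib.parse import urlparse

# Hypothetical blocklists maintained from victims' removal requests.
BLOCKED_URLS: set[str] = set()   # individual URLs removed on request
BANNED_HOSTS: set[str] = set()   # providers barred from aggregation entirely


def filter_results(results: list[str]) -> list[str]:
    """Drop results that were removed on request or whose host is banned."""
    kept = []
    for url in results:
        host = urlparse(url).netloc
        if url in BLOCKED_URLS or host in BANNED_HOSTS:
            continue
        kept.append(url)
    return kept
```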
As a complement to strong, clear terms of service that prohibit such conduct, best practices include consumer-facing tools that are easy to access and use, with multiple access points and avenues to report harassment or abuse and to request available remedies. These consumer tools could include:
Given the practical and legal challenges identified above, abuse-reporting and response systems must be complaint-driven. The request-for-removal process should be designed for ease of use and quick resolution, should keep the complainant informed of progress, and should note target timeframes for resolution. Practices vary, but timelines as short as 48 hours have been adopted, with efforts to exceed that expectation. While additional communications may be needed or helpful, a sufficient process would include updates to the complainant during any unexpected or necessary delays and a notice of final resolution.
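Purely as an illustrative sketch, the complaint-driven workflow above could be modeled by a small request object that tracks status, notifies the complainant of delays, and records the final resolution; the 48-hour target, the field names, and the notification stub are assumptions for illustration rather than any company's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum


class RequestStatus(Enum):
    RECEIVED = "received"
    IN_REVIEW = "in review"
    DELAYED = "delayed"      # complainant is told about the delay
    RESOLVED = "resolved"    # complainant receives a notice of final resolution


TARGET_RESOLUTION = timedelta(hours=48)  # example target; actual practices vary


@dataclass
class RemovalRequest:
    complainant_contact: str
    content_url: str
    filed_at: datetime = field(default_factory=datetime.utcnow)
    status: RequestStatus = RequestStatus.RECEIVED
    updates: list[str] = field(default_factory=list)

    def notify(self, message: str) -> None:
        # Stand-in for an email or in-app notification to the complainant.
        self.updates.append(f"{datetime.utcnow().isoformat()} {message}")

    def overdue(self) -> bool:
        return datetime.utcnow() - self.filed_at > TARGET_RESOLUTION

    def mark_delayed(self, reason: str) -> None:
        self.status = RequestStatus.DELAYED
        self.notify(f"Resolution delayed: {reason}")

    def resolve(self, outcome: str) -> None:
        self.status = RequestStatus.RESOLVED
        self.notify(f"Final resolution: {outcome}")
```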
Best practices include consumer and policymaker awareness-raising to increase knowledge and educate the public about the issue, its challenges, and its difficulties, and to encourage appropriate policy responses and coordination with law enforcement officials. Typical activities such as stakeholder roundtables, awareness-raising and educational campaigns, non-profit outreach and partnerships, and training are all considered valuable community-related efforts.
The complaint process will require a level of verification before images are blocked, removed, or identified as violating a particular company's terms of use. Some companies have agreed to a self-identification and verification system, which is the easiest for a victim to navigate. Other suggestions include treating a filed police report as sufficient verification.
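As a hedged sketch of that verification step, either form of evidence mentioned above could be checked before any removal action is taken; the evidence fields and the sufficiency rule below are illustrative assumptions, not a description of any company's actual process.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class VerificationEvidence:
    self_identification: bool = False         # complainant attests to being the person depicted
    police_report_number: Optional[str] = None


def is_verified(evidence: VerificationEvidence) -> bool:
    """Treat either form of evidence mentioned above as sufficient."""
    return evidence.self_identification or evidence.police_report_number is not None
```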
Some companies have budgeted resources to assist victims in understanding their rights, minimizing their exposure, and addressing their issues with webmasters and website operators.
The following companies contributed to the content and review of this guide: Facebook, Google, Microsoft, Pinterest, Twitter, and Yahoo!
A product of the Attorney General’s Cyber Exploitation Task Force http://oag.ca.gov/cyberexploitation/