How AI can protect children from online abuse

The magnitude of the online abuse problem
Of the 4.5 billion Internet users today, one in three is under the age of 18 and often unsupervised (Livingstone et al., 2016). While online, children and young people face dangers such as cyberbullying, online abuse, exposure to disturbing content, and unwanted contact from adults. From 1998 to 2018, reports of suspected online child abuse grew more than 6,000-fold, from roughly 3,000 to 18.4 million (Cordua, 2019). The rapid growth of technology has facilitated online abuse by providing low-cost platforms for peer-to-peer file sharing, fast live-streaming capabilities, end-to-end encryption, and easier access to potential victims. A survey from the communications regulator Ofcom indicated that 79% of internet users aged 12 to 15 claimed “to have had at least one potentially harmful experience online” in the past year.

The application of artificial intelligence (AI)
While technology is the source of the problem, advances in AI and mobile hardware provide opportunities to build intelligent child-safeguarding solutions. Based on image, language, and data inputs, several AI techniques can help combat online child sexual abuse, including text and image analytics, natural language processing, computer vision, and predictive AI. AI is an emerging tool being piloted to prevent, detect, and prosecute crimes more effectively and efficiently, assisting human investigators with early detection at scale and reducing the burden of manually reviewing child sexual abuse material (CSAM).

Firstly, image analytics tools help detect abusive images and videos by analyzing their underlying pixels or metadata and matching each file’s unique digital fingerprint against databases of known material. When new suspected CSAM is reported, it is compared with the dataset and classified into categories based on patterns detected in the image. If it falls into a CSAM category, the relevant authorities are contacted to take it down. Computer vision tools include image classification, which scans for nudity, estimates age, and detects abuse. Nudity can be detected through skin-tone detection, while facial recognition and texture analysis are used to determine whether the people depicted are children or adults. Programs also assess body motion to determine whether a video is explicit. After processing, an output score is generated to classify whether the material is CSAM for further investigation.
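The fingerprint-matching step above can be sketched in a few lines. This is a deliberately simplified illustration using a naive average hash and Hamming distance; production systems rely on robust perceptual hashes such as PhotoDNA, and all function names here are hypothetical.

```python
# Simplified sketch of fingerprint-based detection of known material.
# Real systems use robust perceptual hashes (e.g. PhotoDNA), not this
# toy average hash.

def average_hash(pixels):
    """Compute a tiny perceptual hash from a grid of grayscale pixel values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image average,
    # so small edits to the image barely change the hash.
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_material(image_pixels, known_hashes, threshold=3):
    """Flag an image if its hash is near any hash in the known database."""
    h = average_hash(image_pixels)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

Because the hash captures coarse brightness structure rather than exact bytes, a slightly re-encoded or resized copy of a known image still lands within the distance threshold, which is the property that makes fingerprint databases useful.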

Secondly, text analytics and natural language processing tools analyze language to help identify potential online grooming behavior and suspicious trafficking activity. While messages are often encrypted to avoid detection, much abuse and advertising still takes place on the surface web. By analyzing text, natural language processing tools can alert investigators to suspicious abusive language online. Language-generation techniques such as chatbots mimic human communication behavior in online dialogues. Such chatbots have been used to decode online abuse behavior and help prosecute pedophiles who seek to groom children online.
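As a minimal sketch of how such text screening might work, the rule-based scorer below flags conversations that accumulate multiple risk signals. The patterns, signal names, and threshold are invented for illustration; real NLP systems use trained classifiers over far richer features than keyword matches.

```python
import re

# Illustrative rule-based screening of chat messages for grooming signals.
# The patterns and threshold are placeholders, not from any real system.
RISK_PATTERNS = {
    "secrecy": re.compile(r"\b(our secret|don'?t tell|delete this)\b", re.I),
    "age_probe": re.compile(r"\bhow old are you\b", re.I),
    "isolation": re.compile(r"\b(are you alone|parents (home|around))\b", re.I),
}

def risk_score(message):
    """Count how many distinct risk signals a single message triggers."""
    return sum(bool(p.search(message)) for p in RISK_PATTERNS.values())

def flag_conversation(messages, threshold=2):
    """Flag a conversation for human review once enough signals accumulate."""
    return sum(risk_score(m) for m in messages) >= threshold
```

Scoring whole conversations rather than single messages matters: individually innocuous lines often only reveal a grooming pattern in aggregate, which is also why such tools alert a human investigator instead of acting automatically.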

Thirdly, network analytics and predictive AI allow investigators to analyze and triangulate heterogeneous datasets and make predictions about suspected cases of abuse. Network analysis reveals key relationships within criminal structures, offering police the best chance to disrupt or dismantle a trafficking network. The social networks of at-risk persons can likewise be analyzed to determine which contacts have critical influence over others, enabling early identification of victims. Predictive AI helps develop models based on unstructured data to quickly search for and identify specific victims and offenders across websites and raise flags for law enforcement officials.
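One elementary form of the network-analysis idea is degree centrality: rank the people in a contact graph by how many connections they have, on the assumption that the most connected nodes are the most influential. The sketch below is illustrative only, with an invented contact list; investigative tools use far richer graph metrics and data.

```python
from collections import defaultdict

def build_graph(edges):
    """Build an undirected adjacency map from (person, person) contact pairs."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def most_connected(edges, top_n=1):
    """Return the top_n nodes ranked by degree (number of distinct contacts)."""
    graph = build_graph(edges)
    ranked = sorted(graph, key=lambda node: len(graph[node]), reverse=True)
    return ranked[:top_n]
```

Removing or monitoring the highest-degree node disrupts the most relationships at once, which is the intuition behind targeting key figures to disintegrate a trafficking network.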

Practical use of AI tools
One example is a platform described as the world’s first AI system to automatically observe conversations and intervene when child abuse content appears. Using chatbots, it searches for sex-trafficking patterns on online escort services, gathers information such as pricing and location, and raises alerts when a transaction likely involves buyers and providers of sexual services of minors. Law enforcement officials can use these automated conversations to detect suspected cases of unlawful child sex trafficking.

Thorn is known for its collaboration with national and international law enforcement officials to develop technology that detects and eliminates online child sexual abuse material. Its Spotlight tool analyses web traffic and escort services to identify minor victims, monitor trafficking networks, and flag leads for law enforcement officials. Experience from the US shows that child sex ads contain cues, such as use of a third-person voice, obfuscated faces in images, and certain keywords, which help train models to detect sex services involving minors. With the unprecedented surge of CSAM, the tool helps optimize scarce human resources while targeting investigations effectively.
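To make the cue idea concrete, the snippet below turns two of the cues mentioned above into features that a classifier could be trained on. The keyword list and function names are placeholders invented for illustration, not drawn from Spotlight or any real system.

```python
import re

# Placeholder keyword list for illustration only; real cue lists are
# curated from investigative experience and are not public.
PLACEHOLDER_KEYWORDS = {"new in town", "young"}

def ad_features(text):
    """Extract simple cue-based features from ad text for model training."""
    lowered = text.lower()
    return {
        # Third-person voice suggests the ad was written about someone
        # else, one of the cues described above.
        "third_person_voice": bool(
            re.search(r"\b(she|her|he|him|his)\b", lowered)
        ),
        "keyword_hits": sum(k in lowered for k in PLACEHOLDER_KEYWORDS),
    }
```

In a real pipeline, feature dictionaries like these would be vectorized and fed to a supervised classifier alongside labeled examples; no single cue is decisive on its own.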

Tellfinder, another tool used by law enforcement officials, combats child sex trafficking using a different technique. By building a dataset of online sex ads and visually mapping them by shared information such as phone numbers, e-mail addresses, and locations, investigators can identify groups of ads posted by the same traffickers and monitor their activity over time. This provides important clues for linking evidence across commercial sex services and for establishing jurisdiction over offences.
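The linking step can be sketched as grouping ads that share contact details, transitively, so two ads with no field in common still end up in the same group if a third ad connects them. The records and field names below are invented for illustration and do not reflect Tellfinder’s actual implementation.

```python
from collections import defaultdict

def link_ads(ads):
    """Group ad ids that share a phone number or e-mail, transitively."""
    parent = {}

    def find(x):
        # Union-find with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        parent[find(a)] = find(b)

    seen = {}  # identifier value -> first ad id that used it
    for ad in ads:
        parent.setdefault(ad["id"], ad["id"])
        for field in ("phone", "email"):
            value = ad.get(field)
            if not value:
                continue
            if value in seen:
                union(ad["id"], seen[value])
            else:
                seen[value] = ad["id"]

    groups = defaultdict(set)
    for ad in ads:
        groups[find(ad["id"])].add(ad["id"])
    return sorted(sorted(g) for g in groups.values())
```

Each resulting group is a candidate set of ads posted by the same trafficker, which investigators can then track over time and across jurisdictions.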

Possessing these powerful capabilities, AI is significantly assisting the work of law enforcement officials, NGOs, and government bodies in prosecuting offenders and protecting victims. Though AI is promising, its application in practice is still largely limited to developed countries. Mapping the current landscape of AI solutions shows that most initiatives are concentrated in the USA, UK, Canada, and Europe, while regions such as sub-Saharan Africa and Southeast Asia, where online abuse is most prevalent, remain neglected. Meanwhile, technology companies such as Microsoft, Facebook, Google, and Twitter have actively launched CSAM-focused, AI-powered tools for early detection and elimination of online child abuse.

The battle against online child abuse requires collaboration across sectors, from academia, private technology companies, and NGOs to schools, families, and individuals. Children, families, and community members need to be aware of the risks of sexual abuse, develop mechanisms to protect themselves online, and voluntarily report incidents when they happen. Regional collaboration between law enforcement officials and international task forces allows cross-border investigation and prosecution of transnational criminal networks. Academia, NGOs, and international organizations play a significant role in research, generating evidence, and building victim-support systems. Strengthening legal frameworks and cybersecurity laws ensures a safe and secure online environment and imposes appropriate penalties for such offences. All in all, AI-powered tools are only beneficial if stakeholders make joint efforts across sectors.

1. Livingstone, S., Carr, J., & Byrne, J. (2016). One in Three: Internet Governance and Children’s Rights. UNICEF Office of Research – Innocenti.

2. Cordua, J. (2019). A Bold Goal: Eliminating Child Sexual Abuse from the Internet. Thorn.