Technology has undoubtedly revolutionized education, from the internet and online learning to social media, video platforms like YouTube and educational apps.
Today, the rise of generative artificial intelligence (AI) is the latest innovation that’s top-of-mind for educators globally.
AI, like many technological advancements before it, can be a powerful tool for learning. However, we cannot ignore the ways it can be misused by predators and students themselves, creating a landscape where child sexual abuse material (CSAM) and explicit content are more prevalent and easily accessible than ever before.
A recent report from Qoria, titled “Addressing the Risks of AI-Enabled CSAM and Explicit Content in Education,” explores the challenges of AI – both inside and outside the classroom – and offers potential response strategies for schools. With the goal of better understanding how this technology is impacting students’ safety and wellbeing, the report draws on data and insights from more than 600 of Qoria’s school partners around the world and uses those findings to begin carving out a path forward.
How Predators Are Using AI
AI, with its ability to mimic human behavior, opens a new door for child grooming.
Predators can use AI-powered tools to create fake profiles, posing as children and teens to connect with and manipulate vulnerable young people. This technology can learn from data, allowing predators to tailor their language, behavior and tactics to specific student demographics and interests. The result is a chillingly realistic online persona that can exploit trust and gain access to a student’s personal world.
This manipulation can quickly escalate the relationship between predator and child, leading to the exchange of explicit content or worse. The anonymity afforded by the internet emboldens predators, while AI tools further remove the human element.
According to Qoria’s report, a staggering 91.4% of U.S. respondents are concerned about online predators using AI to groom students. Educators are rightfully alarmed by this new weapon in the hands of those with malicious intent.
Schools can proactively combat AI-powered grooming by implementing comprehensive digital citizenship education and safety protocols for students. Additionally, robust mental health support will help students feel comfortable reporting inappropriate behavior without fear of consequences, so these interactions can be stopped before they escalate.
Early Exposure to Explicit Content
The report reveals another trend: the shockingly young age at which students are encountering and sharing explicit content. Among U.S. schools, 67.6% reported witnessing students as young as 11 to 13 years old possessing, requesting and sharing nude content online. This exposure can have a profound impact on young minds, potentially distorting their understanding of healthy relationships and desensitizing them to harmful or abusive situations.
The preferred platform for sharing this content is Snapchat, which is known for its disappearing messages. The fleeting nature of such content creates a false sense of security, diminishes users’ sense of responsibility and makes it harder for parents and educators to monitor and intervene. The ease of access and false sense of privacy fostered by Snapchat and similar platforms create a perfect storm for the normalization of sharing explicit content amongst young people.
By utilizing proper monitoring tools, schools can detect – and empower parents to detect – this behavior in its earliest stages. For example, parental control applications enable parents and guardians to monitor their children’s personal devices, including how they’re using apps like Snapchat.
Undetected Harmful Behavior
Qoria’s report also underscores how difficult it is for schools to detect harmful behavior involving AI, CSAM and explicit content, with 73.5% of U.S. respondents citing detection as the most significant challenge in addressing these issues.
We know that deepfake technology enables the creation of highly realistic fake images and videos by manipulating existing photos or footage. Often created by students, deepfakes can depict peers or even school staff members. With limited resources and the widespread inability to accurately identify deepfakes, schools struggle to stay ahead of this dangerous threat.
Unsurprisingly, deepfakes can have severe emotional and long-term social repercussions for young people. Just imagine a student falling victim to the creation and distribution of a compromising image – the psychological damage caused by such an experience can be lasting.
To address this, EdTech companies are continually developing and releasing advanced tools designed to help schools combat the misuse of emerging technology like AI-generated explicit content.
Parental Involvement
Despite the importance of parental engagement, 70.6% of U.S. schools reported a lack of awareness amongst parents when it comes to AI, CSAM and explicit content. While parents can and should play a crucial role in safeguarding their children online, it’s difficult to do so without the proper knowledge of how AI is being used by predators and young people alike.
To bridge this gap, open and honest communication between schools and parents, alongside education, is essential. Schools must play an active role in educating parents about the online landscape, its potential dangers and how to implement safe digital practices at home.
The Right Framework
It’s clear that the growing presence of AI in our daily lives and the normalization of explicit content online can create a dangerous environment for children and teens.
At the same time, with the right tools and strategies in place, AI holds immense potential for learning and safeguarding.
This is why schools must find effective ways to equip students, parents and school staff with the knowledge and skills they need to navigate the online world and mitigate risks, including:
● Policy Reviews and Updates: Schools should prioritize reviewing and updating their policies and procedures for addressing AI-based incidents on a consistent basis.
● Staff Training: Teachers and school staff need ongoing training on how to identify and address online threats, particularly those involving AI and CSAM.
● Parental Engagement: Workshops and seminars for parents are necessary to educate them about online predators, CSAM and the dangers of AI-generated content so they can provide support at home.
● Student Education: Schools must also educate students on the benefits and risks of AI to foster a culture of good digital citizenship.
● EdTech Tools: Digital monitoring, content filtering, classroom management and student check-in tools are more important than ever to help school staff identify and mitigate digital risks.
When schools, parents, students and broader communities work together, we can foster a safer and more positive learning environment for young people and ensure that future generations are well-positioned to face the ever-evolving digital world.
About the author
Yasmin London is a Global Online Safety Expert with Qoria.