Deepfake pornography is becoming a disturbing problem in schools, with one student reportedly so horrified by what she saw that it made her physically ill.

“It worries me that it’s so normalised. He obviously wasn’t hiding it. He didn’t feel this was something he shouldn’t be doing. It was in the open and people saw it. That’s what was quite shocking.”

A headteacher is describing how a teenage boy, sitting on a bus on his way home from school, casually pulled out his phone, selected a picture from social media of a girl at a neighbouring school and used a “nudifying” app to doctor her image.

Ten years ago, it was sexting and nudes causing havoc in classrooms. Today, advances in artificial intelligence (AI) have made it child’s play to generate deepfake nude images or videos featuring what appear to be your friends, your classmates, even your teachers. This may involve removing the clothes from an ordinary photo, or swapping a person’s face into explicit footage.

In the case of the boy on the bus, the headteacher does not know why this particular girl was targeted, whether the boy knew her, or if it was random. Another pupil saw what was happening and reported it. The school contacted the parents, traced the boy and called the police. However, because of the stigma and shame associated with image-based sexual abuse, it was decided not to tell the girl who was targeted. “The girl doesn’t actually even know,” the head said. “I talked to the parents and the parents didn’t want her to know.”

This incident is just one example of how deepfakes and easily accessed “nudifying” technology are affecting schoolchildren, often with devastating consequences. In Spain last year, 15 boys were given probation for using AI to create fake naked images of female classmates and sharing them on WhatsApp, affecting about 20 girls, most aged 14, with the youngest being 11. In Australia, about 50 high school students reported their images had been faked and shared; one mother said her daughter was so horrified she vomited. In the US, more than 30 female students discovered deepfake pornographic images of them had been shared among male classmates on Snapchat.

This is also happening in the UK. A new poll of 4,300 secondary school teachers in England found about one in ten were aware of students creating “deepfake, sexually explicit videos” in the last school year. Three-quarters of these incidents involved children aged 14 or younger, with one in ten involving 11-year-olds, and 3% involving even younger children, showing how easy the technology is to access and use. Among teachers, 7% were aware of a single incident, 1% said it happened twice, and a similar proportion said it happened three times or more.

Earlier this year, a Girlguiding survey found one in four respondents aged 13 to 18 had seen a sexually explicit deepfake image of a celebrity, a friend, a teacher, or themselves.

“A year ago I was using examples from the US and Spain to talk about these issues,” says Margaret Mulholland, a specialist at the Association of School and College Leaders. “Now it’s happening on our doorstep and it’s really worrying.”

Last year, The Times reported that two UK private schools were at the centre of a police investigation into the alleged making and sharing of deepfake pornographic images. Police were investigating claims that the deepfakes were created at a boys’ school using images from the social media accounts of pupils at a girls’ school.

The Children’s Commissioner for England, Dame Rachel de Souza, has called for nudification apps like ClothOff to be banned. “Children have told me they are frightened by the very idea of this technology even being available, let alone used,” she says.

It is difficult to find teachers willing to speak about these incidents. Those who agreed to be interviewed insisted on strict anonymity. Other accounts were provided by academics.

Tanya Horeck, a professor of film and feminist media studies at Anglia Ruskin University, has been researching deepfakes in schools and speaking with headteachers and sex education providers to understand the scale of the problem. “All of them had incidents of deepfakes in their schools and they saw this as an emerging problem,” she says. In one case, a 15-year-old girl who was new to a school was targeted by male students who created a pornographic deepfake video of her. She was so distressed she initially refused to go to school. “Almost all the examples they told me about were boys making deepfakes of girls,” Horeck notes.

“There’s also a real tension around how to handle these issues,” Horeck adds. “Some teachers said, ‘We just get the police in right away and students are expelled’—that kind of approach. Then other teachers said, ‘Well, that’s not the way to handle it. We need more of a restorative justice approach, where we talk to these young people and find out why they’re doing these things.’ So there seems to be inconsistency and uncertainty on how to deal with these cases—but I think it’s really hard for teachers because they’re not getting clear guidance.”

Laura Bates, founder of the Everyday Sexism Project, says deepfake images are particularly shocking. In her book The New Age of Sexism: How the AI Revolution Is Reinventing Misogyny, she writes: “Of all the forms of abuse I receive, they are the ones that hurt most deeply—the ones that stay with me. It’s hard to describe why, except to say that it feels like you. It feels like someone has taken you and done something to you and there is nothing you can do about it. Watching a video of yourself being violated without your consent is an almost out-of-body experience.”

Among school-age children, the impact can be huge. Girls and young women are left feeling violated and humiliated. School friendship groups are shattered, and there can be a deep sense of betrayal when one student discovers another has created a sexualized deepfake image of them and shared it around the school. Girls may avoid lessons, while teachers with little training do their best to support and educate. Meanwhile, boys and young men are being drawn into criminal behavior, often because they don’t understand the consequences of their actions.

“We do see students who are very upset and feel betrayed and horrified by this kind of abuse,” says Dolly Padalia, CEO of the School of Sexuality Education, a charity providing sex education in schools and universities. “One example is where a school got in touch with us. A student had taken images of lots of students within the year group and was making deepfakes. These had then been leaked, and the fallout was quite significant. Students were really upset. They felt very betrayed and violated. It’s a form of abuse. The police were involved. The student was removed from school, and we were asked to come in and support. The school responded very quickly, but I would say that’s not enough. To really prevent sexual violence, we need to be more proactive.”

It is estimated that 99% of sexually explicit deepfakes accessible online are of women and girls, but there are cases of boys being targeted. The charity Everyone’s Invited (EI), which collects testimonies from survivors of sexual abuse, has encountered at least one such case: “One student shared with the EI education team that a boy in their year group, who was well-liked and friends with many of the girls, was targeted when another boy created an AI-generated sexual image of him. That image was then circulated around…”

The incident caused significant distress and trauma. EI also highlights how these tools are being trivialised and used in disturbing ways, such as filters that “change your friend into your boyfriend”. On platforms such as TikTok and Snapchat, they are increasingly accessible and normalised. While this may seem playful or harmless to some, it reflects and reinforces a culture in which consent and respect for personal boundaries are undermined.

Against a backdrop of widespread misogyny in schools, a growing number of teachers are also being targeted, according to EI and others. “This is something we urgently need to confront as a society. Education must stay ahead of technology, and adults need to feel equipped to lead these conversations rather than shy away from them.”

Seth James, a designated safeguarding lead and author of the DSL Blog, says, “For everyone working in schools, it feels like new challenges and risks are constantly emerging from technological developments. AI in general—and particularly deepfakes and nudify apps—feel like the next train coming down the track. ‘More education’ is an appealing solution because it’s intuitive and relatively easy, but on its own, it’s like trying to hold back a forest fire with a water pistol. Likewise, the police seem completely overwhelmed by the scale of these issues. As a society, we need broader solutions and better strategy.”

He adds, “Imagine how we would have felt 20 years ago if someone suggested inventing a handheld device that could create realistic pornographic material featuring people you know in real life—and then giving one to every child. That’s basically where we are now. We’re letting these things become ‘normal’ on our watch.”

Jessica Ringrose, a professor of sociology of gender and education at University College London’s Institute of Education, has worked in schools on issues including masculinity, gender inequality, and sexual violence. She is also co-author of Teens, Social Media, and Image-Based Abuse and is now researching tech-facilitated gender-based violence.

“The way young people are using these technologies isn’t necessarily all bad,” she says, “but they need better media literacy.” She welcomes the government’s updated relationships, sex, and health education guidance, which “recognized that misogyny is a problem that needs to be tackled in the school system.” However, she adds, “They need to connect the dots. They must link concerns about gender and sexual-based violence with technology. You can’t rely on Ofcom or regulators to protect young people. We need proactive, preventive education.”

When asked about the government’s role, a Department for Education spokesperson said, “Our new relationships, sex, and health education guidance will ensure all young people understand healthy relationships, sexual ethics, and the dangers of online content such as pornography and deepfakes. As part of our Plan for Change mission to halve violence against women and girls, we are also providing schools with new funded resources to help teachers explain the law and harms related to online content in age-appropriate lessons.”

Ringrose stresses the urgency: “These issues are happening: non-consensual creation and distribution of images are occurring. These technologies are at people’s fingertips. It’s super-easy for any kid to access them.”

She is skeptical about banning smartphones in schools, concerned that such bans could make it harder for young people targeted with abusive imagery to seek help. “Abstinence doesn’t work with things like technology,” she says. “You have to teach people how to use it properly. We need to treat this as a vital part of the curriculum.”

This brings us back to the boy on the bus, where this story started. He was stopped because a girl on the same bus, who had recently learned about online safety in her PSHE (personal, social, health and economic) class, recognized what he was doing and told her teachers. Education works.

Support is available. In the UK, the NSPCC offers help for children on 0800 1111, and for adults concerned about a child on 0808 800 5000. Adult survivors of childhood abuse can contact the National Association for People Abused in Childhood (Napac) on 0808 801 0331. In the US, call or text the Childhelp abuse hotline at 800-422-4453. In Australia, children, young adults, parents, and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831. Adult survivors can contact the Blue Knot Foundation on 1300 657 380. Further resources can be found at Child Helplines International.


Frequently Asked Questions

Understanding the Basics

Q: What exactly is a deepfake in this context?
A: It’s a video, image or audio clip that uses artificial intelligence to convincingly swap one person’s face and/or body onto another person in a pornographic video. It can make it look like someone is doing something they never did.

Q: Why is this suddenly a problem in schools?
A: The AI tools to create convincing deepfakes have become cheap, easy to use and widely available online. This means students can target classmates with just a few photos from social media.

Q: Is this illegal?
A: In many places, yes, it is becoming illegal. Laws are changing rapidly. Creating or sharing deepfake pornography without consent is increasingly being prosecuted as a form of image-based sexual abuse, harassment or child pornography.

Impact and Harm

Q: Why is it so harmful? It’s just a fake video.
A: The psychological harm is very real. Victims experience severe trauma, including anxiety, depression, shame and social isolation. It’s a profound violation of privacy and autonomy, as seen in the case of the student who was physically ill; her body reacted to the extreme distress.

Q: Who is usually targeted?
A: While anyone can be a target, it disproportionately affects girls and young women. It’s often used as a tool for bullying, harassment and retaliation.

Q: What are the common problems it causes at school?
A: It creates a toxic environment of fear and mistrust. It leads to bullying, social ostracisation of victims, severe distraction from learning and major disciplinary issues for perpetrators. It can also spark wider online harassment.

For Students and Parents: Practical Questions

Q: What should I do if I see a deepfake of me or a friend circulating?
A: 1. Don’t share it further. 2. Tell a trusted adult immediately. 3. Report it to the platform and save evidence. 4. Consider reporting to the police, especially if you are a minor.