Of course, teachers are already incredibly busy. And of course, with every societal issue there's an immediate call for it to be covered in education. And yes, there's already plenty of fuss about sexual education in schools. So, should this really be added? Yes, it should. By 'this,' I mean deepfake porn. And the only thing that shouldn't find a place in the curriculum is how to make it. Because that's precisely the problem: many young people already know how to do that. And their numbers are only growing.
A few weeks ago, all the Dutch news bulletins were filled with the latest details about 'banga lists.' Students in Utrecht had created a PowerPoint (which in itself is quite cute, actually) rating the sexual attractiveness of several female fellow students. The outrage was intense, and rightly so; there was firm intervention. But imagine it hadn't been about young university students, but about high schoolers. And imagine it hadn't been a PowerPoint, but a fake porn video indistinguishable from the real thing.
It's not that no attention is paid to the potential dangers of deepfake images and videos. Usually, though, the focus is on how malefactors, acting on behalf of foreign states, could use such material to produce and spread fake news. At the same time, it's striking how rarely this actually happens. The main reason is that producing deepfake material simply isn't necessary to spread disinformation on a large scale and drum up support for far-fetched conspiracy theories. At the end of such an article or news item, you might go to bed slightly disillusioned about your fellow human beings, but you still go to bed peacefully.
But with deepfake porn, it's different. Where for misinformation deepfake imagery is an (unnecessary) means, in pornography it is the goal itself. And that goal is becoming ever easier to achieve, with an end result that is harder and harder to distinguish from reality. In 2022, presenter Welmoed Sijtsma made a documentary about a deepfake porn video that had been made of her. The documentary received a lot of attention, but afterward things went quiet again, while the technology made enormous strides. Without much publicity, an enormous amount of deepfake material can now be found: on purpose-built websites and in Telegram groups, but also through a simple Google search, or via social media like Reddit and Tumblr.
In many cases, it no longer involves the laborious process of 'the good old fake porn days,' when the creator had to sweat it out night after night in a dark room with video-editing software. If you're not satisfied with the apps in your app store, a simple search is enough to find dozens, if not hundreds, of AI tools that let you easily create your own deepfake porn material. Tools that, like ChatGPT and Microsoft's Copilot, use OpenAI's models, guaranteeing quality for a pittance. And if you want something more specific and are a bit handier, you can also look for open-source code on GitHub.
Back in 2018, American actress Scarlett Johansson already sighed that the fight against deepfake porn was a lost cause. And that was before the AI revolution. At the time, the victims were celebrities, of whom plenty of still and moving images were available. Now the victim can be any random person, since producing the material takes hardly any time or effort. And it takes just as little effort to share your handiwork anonymously with the world. So it's a big and growing problem. But I can well imagine you're now thinking there are much better ways to tackle this than through education. This can't all be legal, can it? Why don't we just send the police after them?
Like all AI developments, deepfakes have emerged so quickly that doing something about them legally isn't easy, at least for now. In the case of Welmoed Sijtsma, the perpetrator was indeed arrested and convicted, but that was unfortunately the exception. The Dutch government concluded after research in 2022 that the problem lies not so much in legislation as in enforcement. And that research took place before the AI revolution. Not only has this revolution led to much more material being produced, it has also made it possible to generate entirely fictional characters for a deepfake video. Try proving, then, that it is your likeness that was placed in such a video, and not a fictional character who just happens to look like you.
Besides the creators of deepfake porn, you could also try to tackle the creators of the technology. But that turns out not to be so simple. Tech multinationals have deep pockets for a legal battle and are spread across different continents, making them even harder to tackle. And besides, they can of course hide behind the position that they merely develop the technology and cannot be held responsible for what users subsequently do with it.
A more promising route seems to be asking tech companies to voluntarily build safety measures (guardrails) into their systems. However, even if they are willing to do so (which is very doubtful; it remains a man's world), it is almost impossible to make such guardrails sufficiently effective. Restrictions like these only work until someone is creative enough to figure out how to bypass them.
So if it ever becomes possible to prevent deepfakes from being made at all, it will certainly take many years. In the meantime, we must do what we can to keep the number of victims as low as possible. Education is an important route in that fight. Through it, we can teach young people, before they become perpetrators, that what may seem innocent to them certainly isn't for the victims. It can feel like a victimless crime, since the video isn't real, but the consequences for the victims are often just as severe.
At the same time, we must be realistic enough to understand that the production of deepfakes will probably never disappear completely. The demand will persist, and there will always be ways to meet it. That makes it all the more important that we learn, together, how best to deal with an incident when it occurs: how we respond to a perpetrator, but above all how we keep the damage to the victim as small as possible, for example by reacting with support and understanding.
One of the most counterproductive responses to deepfakes is to promote the far-reaching oppression of women. In ultra-conservative circles, women are sometimes already discouraged from being findable online at all, supposedly the only way to ensure you never end up in a deepfake. But in a time when you cannot make yourself heard without being online, or at least visible in mass media, this effectively means that these women are being silenced. And then the cure is worse than the disease.
And so AI-generated deepfakes are indeed a threat to democracy and universal human rights. Not directly, but, not for the first time, via a detour: through the taboo surrounding sexuality.
Image created with Adobe Firefly

