AI for Civil Society

This article was originally written for and published by Disrupt Development.

You have probably noticed that this ‘Artificial Intelligence’ thing has become quite a big deal over the past few years. Everywhere you look there seems to be a scientist, tech entrepreneur, journalist or futurist telling us how it’s going to change our world, in good ways and not-so-good ways. While many of us are either dreaming about how AI will bring more equal opportunities and solve complex social issues, or having nightmares about how AI will put an end to privacy or become a tool for oppression, CIVICUS* also had a pragmatic question: ‘How can our communications department make use of AI right now?’ Together with the CIVICUS communications team, Disrupt Development consultants Irene Siaw and I put together an online, half-day introductory workshop (online, because the CIVICUS team is spread out across the globe) to explore this question.

What is AI really?
While at first sight it appears a pretty straightforward question, there is actually quite a bit more to it. It begins with the definition of Artificial Intelligence: what do we mean by it? Even within the scientific AI community this debate has not been settled, and that also showed in the outcomes of the survey we sent around to the CIVICUS team members. Unfortunately, we were not able to settle the debate once and for all. But we did conclude that, while it would be useful to give an overview of the different types of AI during the workshop, we would zoom in on generative AI: AI tools that are able to create text, images, audio and video.

Opportunities or threats, which wins?
Obviously, for an NGO like CIVICUS there are many worries about the potential risks attached to AI, like surveillance, increasing economic inequality or a concentration of power in the hands of non-democratic entities. By zooming in on generative AI, for which many tools and uses are known and freely available, the session gained focus and could actually become a workshop, instead of a lecture or a debate. Not that generative AI does not have its own unique set of hopes and fears. It is well known that generative AI has bias issues, with outcomes that tend to favour the groups that currently hold more economic and political power. There are also worries about generative AI producing inaccurate outputs, violating copyrights, or posing an IT security threat. The question then is: do these threats and worries outweigh the potential benefits? In other words, should we be using generative AI at all?

Understanding instead of prohibiting
Instead of prohibiting the use of generative AI until all threats and dangers are known and contained, you can also take the route of helping people understand where these threats and dangers come from, which gives them the knowledge to make better decisions themselves. So during the workshop we gave a lot of attention to explaining how generative AI works. How is a Large Language Model (LLM) able to take a question (prompt) from a user and come back with an output (in the form of content) so impressive that it makes you think ‘the machine understands you’? Obviously, it does not make sense to explain that in a purely technical way. The biggest challenge is deciding what a professional working at an NGO or civil society organisation needs to know to understand enough of the technology behind the computer screen. So that, for example, they don’t just know that they should always check ChatGPT’s answers (even though ChatGPT’s makers do everything they can to make you feel it isn’t necessary), but they know why they should always check. And, on the more positive side, when you understand how generative AI works, it helps you think of potential uses for it, without depending on continuously outdated lists of new tools developed by others. Thinking up potential uses ourselves was therefore a valuable part of the workshop that provided added insight and deepened the learning experience.

Learning and doing
It’s one thing to tell people how generative AI tools try to convince you that they are an actual person; it’s another thing to have them experience it. By having the team work with ChatGPT (as the most accessible tool with the best-known user interface), trying out various assignments with different prompts and evaluating their experiences together, the theoretical part of the workshop came to life. It’s a bit like having a really cool magic trick explained and then seeing it performed before your eyes: you might have a better understanding of how it works, but it’s still amazing.

After the workshop, we created a document for the team so they could go over what we had worked on together, but also share it with other interested colleagues. We thought it was a great journey and we are confident that CIVICUS is a bit better equipped for the future.

Three Reasons Why You Shouldn’t Use DeepSeek

A new AI product from China, called DeepSeek, has been making waves. With DeepSeek R1 as its chatbot, it’s certainly an unfortunate name, but that’s a cultural thing. DeepSeek is getting a lot of attention because AI, and because China. It is also claimed that DeepSeek had much lower development costs than its mostly American competitors like OpenAI (the creators of ChatGPT). Whether that is truly the case remains to be seen; quite likely the Chinese government lent a helping hand. Anyway, three reasons not to use DeepSeek:

Data Risks 

The Chinese government pursues a different societal order than what we are accustomed to in the Western hemisphere. That’s their prerogative, but it has certain consequences that we’re not entirely comfortable with. For example, when you install the Temu app, you most likely agree to the terms without much thought. If there were something wrong with it, it would be banned, right? Wrong. Beyond the privacy violations we’ve gotten used to from all our apps, such as allowing access to your camera and microphone and viewing your contacts and photos, you also agree that Temu can activate your phone (at night), read your messages, and take screenshots of them. When reporters asked questions about this, Temu’s response was: “Well, we don’t do that.” Which is obviously not convincing at all. If you don’t plan on using it, why ask for permission? At the very least, this clearly shows that if they want to, they can take over your phone (which, let’s face it, contains your entire life nowadays) from the other side of the world. If Temu ever decides to stop selling stuff, they have all the information they need to take everything from you: your money, your house, your identity. They could even do this on a large scale with a bit of help from AI, like DeepSeek’s AI.

DeepSeek emphasizes that their model is fully open-source and free to download onto your own device, which theoretically means your data isn’t shared with them. But as Temu has demonstrated, the data you share while interacting with the app might not be the most important thing you have to worry about.

App builders can take control over your smartphone, which effectively means your life, from the other side of the world.

Censorship 

There are no completely objective AI chatbots; they all have some bias in a particular direction. Usually this bias is quite subtle, but not in the case of DeepSeek. The DeepSeek chatbot is clearly censored on topics the Chinese government prefers not to be open about: things like Taiwan, the events at Tiananmen Square in 1989, or the protests in Hong Kong. It’s a bit clumsy to apply such crude and visible censorship on topics that users don’t need a chatbot for and that journalists will definitely look into. If such obvious censorship is applied, it’s very likely that the DeepSeek bot is also biased in favour of the Chinese government in other, subtler ways.

This will not have an enormous short-term impact. But as we are seeing with the rise of extremist right-wing populism in Western societies: the revolution does not happen with one big event, but with a continuous stream of small ones.

Cheaper. Not better.

From various online reports, it seems that DeepSeek performs better in calculations than OpenAI and that OpenAI beats DeepSeek in creative tasks and news generation.
However, DeepSeek is significantly cheaper for users than OpenAI. DeepSeek charges about $0.14 for the same amount of text (around 750,000 words) for which OpenAI charges $7.50. Business users with a $20 per month OpenAI subscription could manage with a free DeepSeek subscription in terms of text volume, but with less creativity. In the end, that still seems like a small price to pay.
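As a back-of-the-envelope check on those numbers, here is a minimal sketch in Python. The rates and the rough rule of thumb that 750,000 words is about a million tokens are taken from the figures quoted above and are purely illustrative, not current price lists.

```python
# Rough cost comparison using the figures quoted above.
# The rates and the words-per-token ratio are assumptions for illustration,
# not current price lists.
WORDS_PER_TOKEN = 0.75  # rule of thumb: ~750,000 words is roughly 1,000,000 tokens

PRICE_PER_MILLION_TOKENS = {
    "DeepSeek": 0.14,  # USD, as quoted above
    "OpenAI": 7.50,    # USD, as quoted above
}

def estimated_cost(provider: str, words: int) -> float:
    """Estimate the cost in USD of generating `words` words of text."""
    tokens = words / WORDS_PER_TOKEN
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS[provider]

for provider in PRICE_PER_MILLION_TOKENS:
    print(f"{provider}: ${estimated_cost(provider, 750_000):.2f} for ~750,000 words")
```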

However, I often see self-proclaimed keynote speakers (seriously, what sane person would refer to themselves as a keynote speaker?) and business gurus on LinkedIn praising the likes of Temu and other Chinese business models, completely ignoring (or being completely ignorant of, which is also plausible) the human rights and environmental violations, the consumer manipulation and the poor product quality. From the many enthusiastic responses they receive, it is clear that there are plenty of people shortsighted enough to give up everything of long-term value to make an extra euro today.

Mistral

Like almost everyone, I have been comparing DeepSeek to OpenAI here. OpenAI’s ChatGPT made AI known to the general public and for many people it is synonymous with AI. But obviously it isn’t; there are more options out there.

Among all these options, I wouldn’t choose OpenAI either. In recent weeks, we’ve clearly seen that European interests and ideals aren’t necessarily well protected by the American government or by Big Tech companies. And even though Europe is far from perfect, it is closest to the democratic, fair and free world in which I would want to live.

Fortunately, there’s a wonderful European alternative to OpenAI: the French Mistral. Their model Mistral 7B is just as open-source and free as DeepSeek R1, just not as scary.

AI? Better worry about super-wealthy, creepy white men

Image created by Adobe Firefly

Me: Copilot, can you please translate this article to English?
Copilot: I am sorry, I can not translate the article since it may be copyrighted
Me: Sorry. Can you please translate this text I wrote myself into English?
Copilot: Sure, here is the translation:

We don’t need artificial intelligence to destroy ourselves. Our own mediocre intelligence is more than enough.

Many people have a rather unrealistic view of artificial intelligence (AI), and that doesn’t only apply to people who know little about AI. Programmers who are deeply involved with it, or professors who deal with it daily, can have unrealistic expectations as well. The idea is that through AGI (artificial general intelligence), where AI systems can learn things on their own and machines become more intelligent than humans, the machine will develop self-awareness. Once that happens, it won’t be long before the AGI decides that humans are undesirable, or at least redundant. And this AGI will be so advanced that no human will be able to stop it from wiping us all out.

An extremely advanced calculator

We’re not even talking about the distant future. Sam Altman, the head of OpenAI (the creators of ChatGPT), predicts that this year will be the year of AGI. If that’s not scary, I don’t know what is. What’s really happening is that the amount of data, probability calculations, and computing power that AI throws at a task is so staggering that we can’t even imagine it. So much is being calculated so incredibly fast that it’s easier for us to believe there’s consciousness involved than an extremely advanced calculator. Through movies and (before that) books, we’ve been familiarized with the idea of self-aware robots for about a hundred years. We can imagine an AI with consciousness, but not the other way around. But that doesn’t make it true.

Being right is not bias

AI models only get ‘smarter’ by adding even more data, more rules, and more computing power. And that’s happening on a large scale now. And that brings dangers with it. I’m not talking about the environmental impact of all the energy AI consumes. I’m talking about that enormous computing power in the hands of a small group of super-rich, creepy white men.

Recently, Mark Zuckerberg, the head of Meta (which includes Facebook, Instagram, WhatsApp and Meta Quest), was in the news because he announced that Meta’s social media platforms, like Musk’s Twitter, will stop moderation by fact-checkers. With a weak story that mainly highlighted his social shortcomings once again, he tried to sell the idea that he was driven by lofty ideals: free speech must triumph over what he suggested was the left-wing bias of fact-checkers. Now, it’s logical that from an (extreme) right-wing populist perspective, fact-checkers seem to have a left-wing bias. The movement of types like Trump and Wilders relies on lies, distrust, and division, which often don’t pass the test of unbiased fact-checking. What Zuckerberg wants to dismiss as ‘bias’ actually means that the left-wing progressive movement is often right. For profit’s sake, Meta chooses to treat facts as ‘just another opinion.’

Independent fact-checkers correcting ‘right-wing’ messages more often than ‘left-wing’ ones is not bias. It’s called ‘being right.’

A sympathetic online slut-database

Besides social media, Meta is also involved in AI. Their AI model is called Llama, which is a misleadingly cute name. Llama is an open-source model because Meta hopes to gain more sympathy this way than closed AI models like OpenAI’s (ChatGPT). Meta knows all too well how important sympathy is, because they became so rich on the initial sympathy for Facebook, Instagram, and WhatsApp. What doesn’t help here is that Meta knowingly uses copyrighted content to train Llama, without permission. Using user data to build their model is unsympathetic but still legal: your consent was hidden somewhere in the thousand pages of terms and conditions.

Despite this, Meta has chosen blatant theft, with Zuckerberg’s permission. It’s important to emphasize that Meta originated from Facebook, and that Facebook started as an online slut-list for Harvard students. From these misogynistic roots, Facebook became a ridiculous, accidental success. After that, Meta never had another good idea of its own. All other successes were bought with other people’s money (investors’). Bezos and Musk also owe their fortunes not to their own brilliant ideas but mainly to their ability to make money from others’ ideas, without too many scruples about customers’ and users’ interests. What these three Techketeers finally seem to have in common is how easily they give in to the whims of bullies, as if they are reliving their schoolyard days. In short: not people you want to entrust your well-being to.

Money is computing power is power

Once upon a time, when you participated in a demonstration with thousands of people, your privacy was naturally protected by the sheer number of people around you. All those faces provided so much data that hardly any information could be extracted from it anymore. But due to enormous investments in data and computing power, this argument has almost completely disappeared. A company like Clearview can compile your entire profile based on a supplied photo: age, place of residence, income, political and sexual preference, whether you’ve ever cheated and with whom, where you went on vacation, at which party you got drunk. But this still happens based on supplied photos. Or names. The only reason why this hasn’t yet happened with all the faces in a video of a few thousand pro-Palestine demonstrators? Computing power. So it’s very sensible to wear a mask when you’re demonstrating; you never know who might otherwise run off with your identity.

Even if you’re completely uninteresting, an undesirable, framed profile can be made of you, if there’s enough computing power.

Who has never created a profile at one of Meta’s companies: Facebook, Instagram or WhatsApp? And while Meta watched along on your smartphone and laptop without you knowing, you’ve watched extreme YouTube videos, created Second Life accounts, visited porn sites, insulted people on internet forums, googled tips on how to evade taxes, searched for an abortion clinic, ordered a rainbow flag from an online store, and done all sorts of other things that may not be illegal but that your neighbors don’t necessarily need to know about. All this metadata about what you did online (and when you sent a WhatsApp message, to whom and how often) has been stored at the appropriately named Meta. And you never worried about it, because you thought: “there’s so much data there, and I’m so uninteresting, they’re really not going to catch me.” But soon this argument will disappear, because soon profiles can be made even of all the uninteresting people; it was always just a matter of ‘enough computing power.’

Oh, they won’t do that

Such a profile of you isn’t just annoying if criminals get hold of it. Profiles in government hands can also turn out quite unfavorably; think, for example, of the benefits scandal. And that’s just ‘our own’ government. All this data lies with American companies, subject to American laws. It seems fair to wonder whether the Americans are still our friends; random citizens from other countries who weren’t on their friends list have sometimes ended up in Guantanamo Bay.

Now, such a gruesome scenario isn’t really to be expected yet. But with enough computing power, everyone can be followed: where you are, when, and for how long, continuously looking for a moment when you do something that can be labeled as ‘suspicious.’ And anyone who potentially becomes slightly troublesome can soon, even without anyone pressing a single button, be subjected to character assassination: all data from everyone can selectively be bundled, interpreted and presented as news fact. And if the story doesn’t add up? No one finds out, because content moderation no longer exists, and extreme-right populists have long since hollowed out the concept of ‘truth,’ after which they filled the gap with unfounded, general distrust. And for those who don’t fear government, it’s still naive to think that American Big Tech would miss any opportunity to make extra money off you. And there’s more money to be made off you if you’re angry and confused. Chaos and division are lucrative.

I’m not worried at all about an AI with self-awareness. Super-rich, creepy white men with far too much computing power: that’s what keeps me awake at night.


Why we need to start teaching about deepfake porn.

Of course, teachers are already incredibly busy. And of course, with every societal issue, there’s an immediate call for it to be included in education. And yes, there’s already a lot of fuss about sexual education in schools. So, should this really be added? Yes, it should. By ‘this,’ I mean deepfake porn. And the only thing that shouldn’t find a place in the educational curriculum is how to make it. Because that’s precisely the problem: many young people already know how to do that. And their numbers are only increasing.

A few weeks ago, all the Dutch news bulletins were filled with the latest details about ‘banga lists.’ Students from Utrecht had created a PowerPoint (which in itself is quite cute, actually) discussing the sexual attractiveness of several female fellow students. It was all very intense and, rightly so, there was a strong intervention. But imagine if it wasn’t about young female university students, but about high school students. And imagine it wasn’t a PowerPoint, but an indistinguishable fake porn video.

It’s not that there’s no attention to the potential dangers surrounding deepfake images and videos. Generally, the focus is on how malefactors, on behalf of foreign states, could produce and spread fake news with such materials. At the same time, it’s noticeable that this hardly happens. The main reason is that it’s not necessary to produce deepfake materials to spread disinformation and garner support for far-fetched conspiracy theories on a large scale. At the end of such an article or news item, you might go to bed slightly disillusioned about your fellow human beings, but still, you go to bed peacefully.

But with deepfake porn, it’s different. Where deepfake imagery is an (unnecessary) means for spreading misinformation, in pornography it is the goal itself. And that goal is becoming ever easier to achieve, with an end result that’s harder and harder to distinguish from reality. In 2022, presenter Welmoed Sijtsma made a documentary about a deepfake porn film that had been made of her. The documentary received a lot of attention, but afterward it became quite quiet, while the technology made huge strides. Without much publicity, there is now an enormous amount of deepfakes to be found: on specially built websites and in Telegram groups, but also through a simple Google search, or via social media like Reddit and Tumblr.

In many cases, it’s no longer about laborious processes like in ‘the good old fake porn days,’ when the creator had to sweat it out night after night in a dark room with video editing software. If you’re not satisfied with the apps you can download from your app store, a simple search is enough to find dozens, if not hundreds, of AI tools with which you can easily create deepfake porn material. Tools that, like ChatGPT and Microsoft’s Copilot, use models from OpenAI, guaranteeing quality for a pittance. And if you want something specific and are a bit handier, you can also look for open-source code on GitHub.

In 2018, American actress Scarlett Johansson already sighed that the fight against deepfake porn was a lost cause. And that was even before the AI revolution. Back then, the victims were celebrities, with plenty of still and moving images available. Now the victim can be any random person, as it takes hardly any time or effort to produce the material, and just as little effort to share your handiwork anonymously with the world. So, it’s a big and growing problem. But I can very well imagine that you now think there are much better ways to tackle this problem than through education. This can’t all be legal, can it? Why don’t we send the police after them?

Like all AI developments, deepfakes have come about so quickly that it’s not easy to do something about them legally, at least for now. In the case of Welmoed Sijtsma, the perpetrator was indeed arrested and convicted, but that was unfortunately the exception. The Dutch government concluded after research in 2022 that not so much the legislation, but especially the enforcement, would be a problem. And that research took place before the AI revolution. Not only did this revolution lead to much more material being produced, it also made it possible to develop completely fictional characters for a deepfake video using AI. Just try to prove that it’s your likeness that has been placed in such a video, and not a fictional character who just happens to look like you.

Besides the creators of deepfake porn, you could also try to tackle the creators of the technology. But that turns out not to be so simple. Tech multinationals have deep pockets for a legal battle and are also located on different continents, making it even harder to tackle them. And besides, they can naturally hide behind the stance that they only develop the technology and cannot be held responsible for what users subsequently do with it.

It seems a more promising route to ask tech companies to voluntarily build safety measures (guardrails) into their systems. However, even if they are willing to do so (which is very doubtful; it remains a man’s world), it is almost impossible to make these guardrails sufficiently effective. Such restrictions only work as long as no one has been creative enough to discover how to bypass them.

So, if it ever becomes possible to prevent deepfakes from being made, it will certainly take many years. And in the meantime, we must do what we can to prevent as many victims as possible. Education is an important route in that fight. Through it, we can teach young people, before they become perpetrators, that it may seem innocent to them, but it certainly isn’t for the victims. It can feel like a victimless crime, as it’s not a real video, but the consequences for the victims are often just as severe.

At the same time, we must be realistic enough to understand that the production of deepfakes will probably never completely disappear. The demand will continue to exist, and there will continue to be ways to meet that demand. And then it’s also important that we learn together how to best deal with it when such an incident occurs. How we react to a perpetrator, but especially: how we ensure that the damage to the victim remains as minimal as possible. By acting supportively and understandingly, for example.

One of the most counterproductive ways to do something about deepfakes is to promote the far-reaching oppression of women. In ultra-conservative circles, women are sometimes already discouraged from being findable online at all. That would indeed be the only way to ensure you never end up in a deepfake. But in a time when you cannot afford not to be heard, and being heard means being online or at least present in mass media, it means that these women are effectively being silenced. And then the cure is worse than the disease.

And so, AI-generated deepfakes are indeed a threat to democracy and universal human rights. Not directly, but, not for the first time, via a detour: the taboo surrounding sexuality.

Image created with Adobe Firefly

SRHR, behaviour change & using AI. Today.

In the field of SRHR (Sexual & Reproductive Health and Rights) we should be used to the phenomenon of “everyone says they’re doing it, but they’re actually not. Or not really.” But we often forget that mechanisms like this are neither specific to teenagers, nor to sex. It is common human behaviour, and we also see it when it comes to AI in our own professional domains, like health promotion, the NGO sector or SBCC (Social & Behaviour Change Communication). Just like the cool kids in high school are the ones who claim sexual successes in Hollywood scripts, cool NGOs and social enterprises claim AI successes in presentations and LinkedIn posts.

The holy grail of SBCC
If there were a holy grail of SBCC, I think it would be ‘mass personalisation’. Traditionally, interventions either target individual behaviour or the behaviour of large groups. Individual interventions can be much more effective in achieving actual and lasting change, but they are also hugely expensive if you want to achieve change within groups or even whole societies. Interventions using mass media, on the other hand, are much more affordable if you want to reach an entire population, but the effect on individuals is much lower (if you can measure it at all). It’s SBCC’s own uncertainty principle: it’s either one or the other, never both.

At least, that has always been the case. Artificial Intelligence, in theory, makes it possible to create a personal behaviour change intervention for everyone. The only limitations seem to be sufficient computing power and sufficient energy, and both of those conditions will most likely be met in the near future.

It’s probably not AI
Healthcare systems all over the world face enormous challenges, especially when it comes to capacity. Self-care and prevention will only become more important if we want to keep healthcare accessible for the people who need it. Considering both this and the theoretical potential of AI, it’s not surprising that so many efforts are being made to develop mass-personalised, AI-based interventions. And seeing how much is at stake, it is also not surprising that many claim successes which, on second glance, are false, incomplete and/or premature.

There is a step that comes before mass personalisation, which is mass customisation. A mass-customised intervention is not for ‘everyone’, but it is tailored to a specific group. Typical examples of mass-customised interventions use social media (and/or ‘influencers’) or chatbots. It is extremely likely that when you hear of AI being successfully used in SRHR promotion, it is one of these two: ‘We have developed a chatbot that uses AI to provide personal coaching’ or ‘We are using AI to directly communicate with high-risk populations on social media.’ It is also extremely likely that the preventative health intervention being talked about is not really using AI at all. It’s probably machine learning.

Machine learning
If you are more of an AI professional than an SBCC and/or SRHR professional, then this blog is probably not for you anyway. But for everyone else: even though the terms ‘AI’ and ‘machine learning’ are used interchangeably a lot nowadays, they are not the same thing. AI imitates actual intelligence.

Machine learning is one of several methods that AI can use to achieve that. Machine learning works by recognising patterns in data. And while machine learning is still extremely complex and sophisticated, the way it is used in SBCC is similar to the way YouTube uses it to recommend a video to you: predicting what you might watch, based on your measurable past behaviour. So, if you ask a chatbot several questions about a certain topic, you will most likely be interested in certain other information we have to offer. And the longer you use the chatbot, the more of your actions are collected, and the better the chatbot will be able to predict what kind of information you need and want.
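To make that concrete, here is a deliberately tiny sketch in Python of the kind of pattern-based prediction described above. The topic names and content titles are made up for illustration, and real systems use far more sophisticated models, but the principle is the same: count what someone has asked about and suggest related content accordingly.

```python
from collections import Counter

# Hypothetical mapping from question topics to follow-up content (names are invented).
RELATED_CONTENT = {
    "contraception": ["Comparing contraceptive methods", "Where to get free condoms"],
    "sti_testing": ["What to expect at an STI test", "Anonymous testing locations"],
    "relationships": ["Talking about consent", "Recognising unhealthy relationships"],
}

def recommend(past_topics: list[str], n: int = 2) -> list[str]:
    """Suggest content based on which topics the user has asked about most often."""
    counts = Counter(past_topics)
    suggestions: list[str] = []
    for topic, _ in counts.most_common():
        suggestions.extend(RELATED_CONTENT.get(topic, []))
    return suggestions[:n]

# A user who mostly asked about contraception gets contraception-related suggestions.
print(recommend(["contraception", "contraception", "sti_testing"]))
```

The more interactions are logged, the better such predictions become: that is the ‘learning’ in machine learning, and it is pattern recognition rather than understanding.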

Impressive as this already is, predicting new content on the basis of your interaction with past content is not always sufficient anymore to land a conference presentation, LinkedIn likes or even funding. Machine learning is yesterday’s news; using AI sounds way more interesting.

ChatGPT & Large Language Models
What is generally seen as the most advanced application of AI, ChatGPT, is actually also not ‘AI’. ChatGPT is based on a so-called Large Language Model (LLM), which is a form of machine learning. LLMs are very advanced and specifically based on text. They make billions of calculations to interpret text and to generate textual responses. In fact, ChatGPT is so good at this that it is almost impossible to believe that it has no idea what it is doing. LLMs do not understand your question and they do not understand their own answers. It is just math.
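To illustrate ‘it is just math’ on a toy scale, here is a minimal sketch: it counts which word tends to follow which in a tiny training text, and then ‘generates’ a sentence by repeatedly picking the statistically most likely next word. Real LLMs use billions of parameters instead of a lookup table, but the core idea of predicting the next token from statistics, without any understanding, is the same.

```python
from collections import Counter, defaultdict

# A toy 'language model': count which word follows which in a tiny training text.
training_text = (
    "the chatbot answers the question the chatbot generates the answer "
    "the answer sounds confident"
)

next_word_counts: dict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 6) -> str:
    """Repeatedly pick the most likely next word. No understanding is involved."""
    output = [start]
    for _ in range(length):
        counts = next_word_counts.get(output[-1])
        if not counts:
            break
        output.append(counts.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # prints a plausible-looking but meaningless continuation
```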

That is also where the fundamental problem lies for using tools like ChatGPT, Bard, Claude or whatever. They mimic understanding so well, and generate responses with so much confidence, that it is easy to be fooled and forget that the models themselves have no idea what they are typing. And since they are trained on openly available online datasets, mostly conversations on social media, it is impossible, even for the models themselves, to know for sure that they are citing credible sources. LLMs have access to all the information, but none of the wisdom.

If it is not possible to guarantee that you are providing correct and up-to-date information, a tool is not suitable for implementation. This shortcoming means that it will be quite a while before a reliable AI health chatbot is released. The only example so far was not much of a success.

AI is not ready
So, you can relax. The other NGOs are not way ahead of you. Most likely they just have more confidence (or ignorance, or both, actually) than you when it comes to selling their digital interventions as ‘AI’.

This does not mean that AI is not already affecting the way you (should) work. First of all, just because SRHR professionals cannot yet use AI-powered chatbots because of quality issues does not mean that our target audiences don’t use them either. This means they might be getting incorrect information. And various AI techniques also make it possible to easily spread reliable-looking disinformation. It’s important for health-promoting organisations to work together to build trusted brands, so that people know where they can turn for help and information of the highest quality.

But not everything about AI is alarming. There are many good uses for AI in general, and for ChatGPT specifically, if you are working in the SRHR/NGO sector. I will tell you how you can start using it in my next blog.

Header image by OpenAI Images