SRHR, behaviour change & using AI. Today.

In the field of SRHR (Sexual and Reproductive Health and Rights) we should be used to the concept of “everyone says they’re doing it, but they’re actually not. Or not really.” But we often forget that mechanisms like this are neither specific to teenagers nor to sex. It is common human behaviour, and we see it too when it comes to AI in our own professional domains, such as health promotion, the NGO sector or SBCC (Social and Behaviour Change Communication). Just as it is the cool kids in a Hollywood high-school script who claim sexual successes, it is the cool NGOs and social enterprises who claim AI successes in presentations and LinkedIn posts.

The holy grail of SBCC
If there were a holy grail of SBCC, I think it would be ‘mass personalisation’. Traditionally, interventions target either individual behaviour or the behaviour of large groups. Individual interventions can be far more effective in achieving actual and lasting change, but they are also hugely expensive if you want to achieve change in groups or even whole societies. Mass media interventions, on the other hand, are much more affordable if you want to reach an entire population, but their effect on individuals is much smaller (if you can measure it at all). It’s SBCC’s own uncertainty principle: it’s either one or the other, never both.

At least, that always used to be the case. Artificial intelligence, in theory, makes it possible to create a personal behaviour change intervention for everyone. The only limitations seem to be sufficient computing power and sufficient energy, and both will most likely be available in the near future.

It’s probably not AI
Healthcare systems all over the world face enormous challenges, especially when it comes to capacity. Self-care and prevention will only become more important if we want to keep healthcare accessible for the people who need it. Considering both this and the theoretical potential of AI, it is not surprising that so many efforts are being made to develop mass-personalised, AI-based interventions. And given how much is at stake, it is also not surprising that many claim successes which, on second glance, are false, incomplete and/or premature.

There is a step that comes before mass personalisation: mass customisation. A mass-customised intervention is not for ‘everyone’, but it is tailored to a specific group. Typical examples of mass-customised interventions use social media (and/or ‘influencers’) or chatbots. When you hear of AI being successfully used in SRHR promotion, it is extremely likely to be one of these two: ‘We have developed a chatbot that uses AI to provide personal coaching’, or ‘We are using AI to communicate directly with high-risk populations on social media.’ It is also extremely likely that the preventive health intervention in question is not really using AI at all. It’s probably machine learning.

Machine learning
If you are more of an AI professional than an SBCC and/or SRHR professional, this blog is probably not for you anyway. But for everyone else: even though the terms ‘AI’ and ‘machine learning’ are often used interchangeably nowadays, they are not the same thing. AI imitates actual intelligence.

Machine learning is one of several methods that AI can use to achieve that. It works by recognising patterns in data. And while machine learning is extremely complex and sophisticated, the way it is used in SBCC is similar to the way YouTube uses it to recommend you a video: predicting what you might watch, based on your measurable past behaviour. So, if you ask a chatbot several questions about a certain topic, you will most likely be interested in certain other information on offer. And the longer you use the chatbot, the more of your actions are collected, and the better the chatbot will be able to predict what kind of information you need and want.
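To make that concrete, here is a deliberately tiny sketch of the pattern behind such recommendations. Everything in it is invented for illustration (the topic names, the “related topics” table standing in for patterns a real model would learn from many users’ data); real systems are vastly more sophisticated, but the principle is the same: count past behaviour, score candidates, suggest the highest-scoring content.

```python
from collections import Counter

# Hypothetical interaction log: topics one user has asked a chatbot about.
history = ["contraception", "contraception", "sti_testing", "contraception"]

# Hypothetical table linking each topic to related content. In a real system
# these links would be learned from the behaviour of many users.
related = {
    "contraception": ["emergency_contraception", "side_effects"],
    "sti_testing": ["clinic_locations", "partner_notification"],
}

def recommend(history, related, top_n=2):
    """Score candidate topics by how often the topic that links to them
    appears in the user's history, and return the highest-scoring ones."""
    scores = Counter()
    for topic, count in Counter(history).items():
        for candidate in related.get(topic, []):
            scores[candidate] += count
    return [topic for topic, _ in scores.most_common(top_n)]

print(recommend(history, related))
```

The more interactions the log contains, the sharper the scores become, which is all the chatbot’s apparent ability to “know what you need” amounts to.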

Impressive as this already is, predicting new content on the basis of your interaction with past content is no longer always enough to get you an oral presentation, LinkedIn likes, or even funding. Machine learning is yesterday’s news; ‘using AI’ sounds far more interesting.

ChatGPT & Large Language Models
What is generally seen as the most advanced application of AI, ChatGPT, is actually not ‘AI’ either. ChatGPT is based on a so-called Large Language Model (LLM), which is a form of machine learning. LLMs are very advanced, and specifically based on text. They make billions of calculations to interpret text and generate textual responses. In fact, ChatGPT is so good at this that it is almost impossible to believe it has no idea what it is doing. LLMs do not understand your question and they do not understand their own answers. It is just math.
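You can see the “just math” point in miniature with a toy next-word predictor. This is not how an LLM is actually built (real models use neural networks trained on billions of words, not simple counts, and the corpus below is invented for illustration), but it shows the underlying idea: the next word is chosen by statistics, with no understanding anywhere.

```python
from collections import Counter, defaultdict

# A tiny invented training corpus. Real LLMs train on billions of words,
# but the principle is the same: predict the next word from statistics.
corpus = "the pill is safe . the pill is effective . the condom is safe .".split()

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word.
    No meaning is involved: it is purely counting."""
    return following[word].most_common(1)[0][0]

print(predict_next("pill"))  # 'is' always follows 'pill' in this corpus
print(predict_next("is"))    # 'safe' (seen twice) beats 'effective' (seen once)
```

The model “confidently” completes sentences it cannot possibly understand, which is exactly the trap the next paragraph describes.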

That is also where the fundamental problem lies in using tools like ChatGPT, or Bard, or Claude, or whatever. They mimic understanding so well, and generate responses with so much confidence, that it is easy to be fooled and forget that the models themselves have no idea what they are typing. And since they are trained on openly available online datasets, mostly conversations on social media, it is impossible, even for the models themselves, to know for sure that they are citing credible sources. LLMs have access to all the information, but none of the wisdom.

If you cannot guarantee that a tool provides correct and up-to-date information, that tool is not suitable for implementation. This shortcoming means it will be quite a while before a reliable AI health chatbot is released. The only example so far was not much of a success.

AI is not ready
So, you can relax. The other NGOs are not way ahead of you. Most likely they just have more confidence (or ignorance, or actually both) than you do in selling their digital interventions as ‘AI’.

This does not mean that AI is not already affecting the way you (should) work. First of all, just because SRHR professionals cannot yet use AI-powered chatbots because of quality issues does not mean that our target audiences don’t use them either. This might mean they are getting incorrect information. And various AI techniques also make it easy to spread reliable-looking disinformation. It is important for health-promoting organisations to work together to build trusted brands, so that people know where they can turn for help and information of the highest quality.

But not everything about AI is currently alarming. There are many good uses for AI in general, and for ChatGPT specifically, if you are working in the SRHR/NGO sector. I will tell you how you can start using it in my next blog.

Header image by Open AI Images