Note: This post was originally published back in 2022 in German. I decided to translate the text with the help of DeepL in order to raise awareness of the issues before the upcoming vote in the EU Council on Thursday.

Imagine a world in which there are no more confidential conversations. In this world, every private communication between two or more people is read and scrutinised by government agents. Automated processes record every message and look at every single image sent - billions and billions of files and words, every day. If something is found that is categorised as problematic, the local authorities are automatically notified and will arrive at your door shortly afterwards, search warrant in hand.

What sounds like a dystopian nightmare is a world that the European Commission is currently actively trying to implement. In May 2022, the Commission proposed new legislation "laying down rules to prevent and combat child sexual abuse", which would force providers of social networks and messenger services to monitor their users' communications without cause and, if certain offences are detected, to report them automatically. The proposed procedure is known as client-side scanning and has become colloquially known as chat control. Essentially, the aim is for the state to gain insight into end-to-end encrypted communications that are currently inaccessible, in order to use AI algorithms to automatically detect offences against children and young people such as the distribution of child pornography and cyber grooming.

The proposed procedure has been sharply criticised by many parties - and rightly so. Critics include civil rights organisations, the German Bar Association, IT security researchers, digital activists and even the legal service of the EU Council of Ministers. A good summary of the numerous reasons that make chat control so problematic can be found on the campaign website Stop Scanning Me. Despite this criticism, the project has broad political support, with many member states of the European Union backing the proposal.

Despite its promises, chat control does little to protect children and instead threatens many groups, not least the very people it is supposed to protect. There is one group though that has not been considered at all in the debate about the negative effects of chat control, namely the group of paedophiles. It is hardly surprising that paedophiles have not yet been mentioned when it comes to the dangerous consequences of chat control. All too often, paedophiles are seen as the natural enemies of children and young people, and therefore any measure that is "directed against paedophiles" is automatically seen as an effective measure for child protection. This follows from the widespread prejudice that paedophilia and abuse are strongly connected. 

In fact, however, the proposed measures hit those hardest who actually deserve support and help: namely people who behave in accordance with the law. In this post I want to explain some of the implications and problems that arise from chat control, especially for paedophiles.

How would chat control actually work?

Chat control is intended to be an instrument for automatically recognising and reporting criminally relevant behaviour. Among other things, the idea is to automatically recognise when files with child pornography content are sent via a messaging service. In principle, this is technically possible without analysing the content of the communication in detail, by comparing digital fingerprints of the files sent with a directory of known illegal files.
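A minimal sketch of what such fingerprint matching could look like, assuming a pre-distributed set of known fingerprints. Note that this is only an illustration: real deployments use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas the exact cryptographic hash used here changes completely if a single byte of the file changes.

```python
import hashlib

# Hypothetical directory of fingerprints of known illegal files,
# distributed to the client by a central authority.
# This example entry is simply sha256(b"test").
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute an exact cryptographic fingerprint of a file."""
    return hashlib.sha256(data).hexdigest()

def is_known_illegal(data: bytes) -> bool:
    """Check an outgoing file against the directory of known fingerprints."""
    return fingerprint(data) in KNOWN_HASHES

print(is_known_illegal(b"test"))           # True: fingerprint is in the directory
print(is_known_illegal(b"harmless file"))  # False: no match
```

The fragility of exact hashes is precisely why the more invasive perceptual and AI-based approaches are being proposed instead.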

However, the EU proposal goes even further. Not only already known, but also new and therefore unknown abusive images shall be detected. Additionally, any attempt at grooming, i.e. messaging minors with sexual intentions, is to be automatically recognised and reported. For this, the content and context of the conversations have to be analysed and understood to a certain extent.

This means that, according to the EU Commission's plans, every message sent via a social platform has to be checked by an algorithm that decides whether it is harmless or contains potentially criminal content. For this to work, the algorithm must contain a form of artificial intelligence that can understand and evaluate the content of messages.

Most AI algorithms work in such a way that, in a first step, they learn to distinguish between questionable and harmless content using a massive amount of labelled training messages. The algorithm analyses complex patterns in the training data and is then able to apply these patterns to new, unknown data. This means that every decision the algorithm later makes essentially depends on the training data with which it was fed in the first step. And it is precisely at this point that a huge problem arises that is difficult or impossible to solve.
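A deliberately simplistic sketch of this dependency, using nothing but word counts (real systems use far more sophisticated models, but the principle is the same: whatever correlations the labelled training data contain, the model will reproduce on new data). All messages and labels here are invented toy data.

```python
from collections import Counter

def train(labelled_messages):
    """Learn per-class word counts from labelled training messages."""
    counts = {"harmless": Counter(), "suspicious": Counter()}
    for text, label in labelled_messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score a new message by how often each class saw its words in training."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Toy training set: whatever correlations (or biases) the labels contain,
# the model will faithfully reproduce.
training = [
    ("see you at the park later", "harmless"),
    ("send me the photos now", "suspicious"),
    ("how old are you", "suspicious"),
    ("the meeting is at noon", "harmless"),
]
model = train(training)
print(classify(model, "are the photos ready"))  # classified as suspicious
```

The point of the sketch: the model has no notion of intent, only of which words co-occurred with which label during training. An innocuous question about photos inherits the suspicion attached to superficially similar training messages.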

Automatically reproduced prejudice

An AI learns patterns in its training data and is thus able to recognise them later, even in unknown data. But what happens if the training data already contains implicit or explicit biases? Correct: an AI that has been trained with this data will end up reproducing these prejudices.

This is already a huge problem in areas where AIs are used. A real-life example: in 2015, Amazon discovered that an internally developed AI for pre-filtering applicants' CVs was predominantly rejecting female applicants. The reason: the positions had mostly been filled by men in the past. As a result, the "successful" CVs used to train the AI were mainly from men. The AI recognised this pattern and learned to reject CVs from women more readily than those from men.

Since I originally wrote this article, many more examples of algorithmic discrimination have surfaced as AI has exploded following the release of ChatGPT. In the meantime, the EU has also adopted the AI Act, the world's first law specifically for artificial intelligence, which is meant to regulate the use of AI in areas sensitive to discrimination.

Another example that is even more relevant to the topic of chat control: in the USA, an AI system is used to assess the reoffending risk of offenders. As it turns out, this system also has discrimination virtually built in and consistently rates the risk of recidivism higher for people of colour than for white people.

It is a racist prejudice that black people are more likely to be criminals, which the AI has learnt and reproduced. Another prejudice that is probably even more deeply rooted in society is that paedophiles are all abusers.

While discrimination by biased AIs is a fundamental problem that affects all minorities and socially disadvantaged groups, it is particularly problematic when it comes to paedophilia. No other minority is more strongly associated with offences against minors in the eyes of the public. Many people do not even distinguish between paedophilia and child sexual abuse, believing the two to be synonymous. Paedophiles are constantly under suspicion of committing the worst possible offences against children and young people and are portrayed as a constant danger.

These prejudices are present at every level of social discourse: from discussions on the internet to political debate, in media reports and even in therapy programmes and educational projects that claim to support paedophiles. It can be assumed that AIs developed to protect children from sexual assault will soon learn this association and automatically classify paedophiles as a massive danger.

Incidentally, automated classification procedures are never perfect, but always have a certain margin of error. According to a leaked document, the EU Commission expects up to 10% of all reports to be wrong. This would mean that billions of reports would have to be sent to already overstretched authorities and processed, with countless innocent people being targeted by police and prosecutors. Due to the prevalent prejudice against paedophiles, it is very possible that this error rate will be significantly higher for paedophiles.
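To get a feeling for the scale, here is a back-of-the-envelope calculation. The 10% error rate is the figure from the leaked document; the daily message volume and the fraction of messages flagged at all are purely illustrative assumptions of mine.

```python
# Assumed figures for illustration only.
messages_per_day = 10_000_000_000   # rough guess at EU-wide daily message volume
flag_rate = 0.001                   # assumed fraction of messages flagged by the scanner
error_rate = 0.10                   # "up to 10% of reports wrong", per the leaked document

flagged_per_day = messages_per_day * flag_rate
false_reports_per_day = flagged_per_day * error_rate
print(f"{false_reports_per_day:,.0f} false reports per day")
```

Even with these conservative assumptions, the authorities would face on the order of a million wrong reports every single day, each one pointing at an innocent person.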

Machines running AI algorithms are not neutral or impartial. They are what some have called "stochastic parrots", reproducing (and reinforcing) discrimination that is already prevalent in society. However, as machines they are often perceived as neutral and their decisions as factual and objectively correct. In addition, the decisions made by AI algorithms are usually not transparent and can often not be understood by humans, making them difficult to criticise. This harbours the risk that prejudices and stigmatisation are not only reproduced, but also seemingly legitimised through reproduction and thus become even more deeply rooted in society.

When seeking help becomes dangerous

It is therefore not unlikely that chat control would massively discriminate against paedophiles, for instance by classifying the same chat history between an adult and an underage relative as problematic much more quickly if the adult has been identified as a paedophile. At this stage you may be asking how an algorithm implementing chat control is supposed to determine whether an adult is a paedophile. One possibility: the paedophile tells it themselves.

Let's imagine that a paedophile comes out to someone they trust, their best friend maybe. This is followed by further questions or conversations about the subject. Right now it is still reasonably safe to have conversations like this via messenger, as most messengers today are end-to-end encrypted and therefore no-one other than the people taking part in the conversation can read the chat. Investigators, state agents, and even the operator providing the messaging service have no access to the content of these messages, if they are properly encrypted.

So as long as it is possible to communicate securely, it is not possible to analyse the content of conversations and scan them for potential criminal offences. Supporters of chat control try to "solve" this conundrum by abolishing secure end-to-end encryption. Instead, the operators of communication platforms are supposed to install backdoors on users' end devices that check messages for criminal content before they are encrypted. Supporters also like to claim that this somehow does not violate secure end-to-end encryption, when in reality secure and confidential communication is broken the moment anyone other than the participants is allowed to access and read messages.
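The flow that client-side scanning requires can be sketched schematically. The scanner and reporting functions below are placeholders of my own, not any real API, and the XOR cipher merely stands in for real end-to-end encryption; the point the sketch makes is that the scanner sees the plaintext before any encryption happens, so the "end-to-end" guarantee is void no matter how strong the cipher is.

```python
import secrets

def scan(plaintext: bytes) -> bool:
    """Placeholder for the mandated classifier (hypothetical, not a real API)."""
    return b"flagged-term" in plaintext

def report_to_authority(plaintext: bytes) -> None:
    """Placeholder: the unencrypted message leaves the device."""
    print("reported:", plaintext.decode())

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for real end-to-end encryption."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

def send_message(plaintext: bytes, key: bytes) -> bytes:
    # The crucial point: the scanner runs BEFORE encryption,
    # so it always sees the full plaintext, however strong the cipher is.
    if scan(plaintext):
        report_to_authority(plaintext)
    return encrypt(plaintext, key)

key = secrets.token_bytes(64)
ciphertext = send_message(b"private message containing a flagged-term", key)
```

Nothing about the encryption step changes; the confidentiality is lost one line earlier, on the user's own device.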

If these ideas are implemented, they would make it very dangerous for paedophiles to use digital communication to talk about their feelings, seek support or ask for help. Conversations with your trusted best friend, to whom you have come out, suddenly become an existential security risk, as every message you send is scanned by a biased AI that does not sufficiently differentiate between paedophilia and criminal offences. So the smartest thing a paedophile can do in that situation is to remain silent. In this way, non-offending paedophiles, who already form an almost invisible fringe group, are pushed further into the shadows and marginalised to the point of being unable to talk to anyone in a confidential and safe environment.

Elimination of safe spaces

However, it is not only private 1:1 chats and personal conversations with close friends and family members about paedophilia that are threatened by chat control. There are currently several (far too few) online resources offering help and support to non-offending paedophiles. This works well because you can remain anonymous online, which keeps the barrier to entry relatively low. This work, too, will become de facto impossible if the EU Commission's plans are implemented as proposed.

Another part of chat control is age verification. In order for grooming to be recognised and reported in the way the EU Commission envisions, providers shall be forced to verify the age of their users. In practice, this means that the operators of self-help platforms would be obliged to verify the identity of their users, for example by checking their ID or credit card. This would effectively eliminate anonymity on social platforms and communication channels through the back door, and with it the only effective protection paedophiles have against stigmatisation, discrimination and abuse.

Effectively, this means that any law-abiding self-help platform for paedophiles might have to cease operation once chat control comes into force. In particular, this includes platforms whose members want to support each other in not committing offences: forums such as VirPed, or chats such as Die P-Punkte and the MAP Support Club. Given the massive and omnipresent hatred that paedophiles face, registering on a platform for paedophiles effectively under your real name can only be described as suicidal. This deprives paedophiles of the last safe spaces in which they can talk confidentially and without fear of rejection. For many, it means that they no longer have any opportunity at all to receive support, understanding and sympathy, or to talk to somebody about their problems.

As a bitter irony, this of course only applies to legitimate self-help groups whose members behave in accordance with the law. Criminal groups that organise themselves on the Internet for the purpose of exchanging illegal images or planning crimes will certainly not be impressed by the planned EU directives and will simply continue to communicate anonymously via encrypted channels. The planned directive will therefore affect legal organisations and legitimate offers of help, while criminal groups will continue to exist as before. This alone should raise critical questions about the effectiveness of the directive.

Obstruction of therapies

These issues affect not only self-help, but also professional help. For instance, if you live in or around Germany and want to go to Don't Offend for therapy, you might attempt the first contact via email. This becomes significantly more difficult if you have to assume that every email is scanned and potentially reported to the authorities. The creation of anonymous email inboxes, which many paedophiles use for email communication with therapists, would also be made impossible by mandatory age verification. Truly anonymous and confidential contact would no longer be possible, significantly raising the inhibition threshold for seeking help.

Even if initial contact is made, the permanent monitoring of all messages without cause would hang like a sword of Damocles over every communication. Imagine a critical situation in which a patient in a state of extreme psychological distress contacts their therapist and admits to having consumed CSAM as a coping strategy in the crisis. What if this admission is scanned and triggers an automated report to the police? What if the patient, fearing this, decides not to contact their therapist at all and instead doubles down on the maladaptive coping strategy? Being able to establish contact promptly and without barriers can prove very important for the further course of therapy and prevent an escalation of consumption behaviour.

Confidentiality is the prerequisite of any successful therapy. This should also apply to online conversations with therapists. Breaking this confidentiality will make communication between clients and therapists much more difficult. It seems likely that many patients will, for self-protection, no longer mention many things to their therapists, at least not in electronic communication. Switching to alternative communication channels (telephone, letter post) is not an option either, as most patients are treated completely anonymously by Don't Offend, meaning their real name, postal address and telephone number are unknown. This leaves electronic communication, which would then be monitored by the state, as the only medium for contact between therapist and patient.

Of course, this does not only affect paedophiles. Other patients are affected too, and, particularly problematically, so are people who are undergoing treatment because of their own experiences of abuse. Chat control makes it practically impossible to maintain doctor-patient confidentiality in electronic communication.

Backdoors for bad actors

All of the above is based on the assumption that chat control works as intended and that only the providers subject to reporting requirements and the responsible state authorities will gain access to the data. This idea is naive and unrealistic. In reality, the Internet is swarming with bad actors with malicious intentions. Chat control eliminates some of the most effective measures of digital self-defence by weakening encryption technologies.

There can no longer be secure encryption in digital communication if chat control becomes a reality, as otherwise the automated scanning of messages is not possible. Encryption, however, is one of the few truly binary things in life: either nobody (except the participating parties) can read a chat, or everyone can, at least potentially. There are no compromises here. Softening encryption so that third parties can read encrypted content always requires the installation of backdoors that can also be found and exploited by unauthorised parties and criminals.

So there is also a risk that unauthorised persons will gain access. And in the case of paedophiles in particular, hacker groups and vigilantes are very interested in discovering their real identities for extortion or public shaming. The hacker collective Anonymous, for example, has repeatedly boasted of hunting down paedophiles. There is a real danger that such groups will exploit the same backdoors that state-legitimised actors are supposed to use for client-side scanning, decrypt confidential conversations and use them to compile and publish lists of paedophiles. There are already precedents in which innocent people have been forcibly outed in public. The consequences of such outings can be existential and life-threatening: the loss of jobs and social relationships, or even physical assault. It would not be the first time that groups have organised themselves to attack and inflict life-threatening injuries on alleged paedophiles.

Chat control as a tool for censorship

Another serious danger of chat control is the potential for censorship it offers. The proposed directives would allow state institutions to maintain centralised blocklists of content (files and text) and suppress its dissemination across the EU using a new and unprecedented technical infrastructure.

The pretext of wanting to protect children is already being used to suppress and censor unwanted social groups, especially LGBT organisations. Poland has declared LGBT-free zones under the pretext of wanting to stop "paedophilia". Hungary has passed a so-called anti-paedophilia law which in reality restricts the rights of queer individuals. And Texas has started banning books dealing with queer themes from school libraries, again justifying it with the need to protect children.

There are also precedents in Germany. In 2009, the then Minister for Family Affairs, Ursula von der Leyen, pushed through internet blocks for a short time, which blocked access to many websites on a state-provided block list. This was meant to be applied to websites caught distributing CSAM. The highly controversial case earned her the nickname "Zensursula" (literally: censorship-Ursula). Today, a good decade later, she is the President of the European Commission, and as such one of the main people responsible for the chat control proposal.

Many online platforms are already implementing censorship measures against paedophiles on their own initiative. The underlying justification is that even just allowing paedophiles to voice their opinions already endangers children. If all operators of social platforms are forced to filter and report communication according to government guidelines, there is a risk that there will soon be no platform at all on which paedophiles can express their opinions.

In addition, freedom of expression is also restricted by more subtle methods. It is likely that people will be very careful what they say, especially in relation to paedophilia, if they know that every word will be scanned and could lead to them being classified as potential sex offenders. When in doubt, people with more nuanced attitudes are more likely to remain silent or join in drastic fantasies of violence against paedophiles in order to avoid attracting attention or making themselves suspect. 

This is the chilling effect of chat control: people are inhibited in the exercise of their basic communication rights if they can no longer trust the integrity and confidentiality of the communication process. The result is that certain opinions and attitudes, especially those that are already barely visible in public discourse, are pushed even further underground and are effectively eradicated.

Final comment

In order to justify massive surveillance many people argue that "if you have nothing to hide, you have nothing to fear". As a paedophile, you have a lot to fear, even if you behave in a legally and morally impeccable manner. Privacy and confidentiality are part of the fundamental and human rights in the digital space. Nobody should have to justify making use of their basic rights. 

Paedophiles in particular are a vulnerable group and are more at risk of exclusion, discrimination and violent attacks than any other minority in Europe. For all paedophiles living in the EU, chat control is therefore probably the most threatening and dangerous idea that has been discussed at a political level for a long time. If it is adopted as it is currently being presented, it threatens to abolish the last truly safe spaces for paedophiles, while restricting opportunities for help and therapy and leading to even more severe discrimination and criminalisation.

Tomorrow will see a very important vote. After months of mostly fruitless negotiations, Belgium has presented a modified version that actually has a chance of being accepted by the European Council. Despite some changes and attempts at reframing the issues to make them seem more acceptable, the critical points of the original proposal still stand. If you live in the EU and want to help stop this madness, you can find more information on what you can do here.