For millions of people, and particularly for women and young users, receiving non-consensual sexual material has become a disturbing reality. The practice, known as cyberflashing, is the sending of unsolicited sexual images through online platforms such as dating apps and social media. Long dismissed as a nuisance of online life, it is now recognised as a form of harassment. From January 2026, a new measure takes effect in the UK to address the problem. As reported by Reuters and the UK Government, under strengthened regulations in the Online Safety Act, companies are required to prevent users from receiving such material. This marks a significant shift in how online harassment has traditionally been handled.
UK cyberflashing rules tighten: How new online safety laws target tech companies
Cyberflashing has been a crime in England and Wales, punishable by a prison sentence, since 2024. The recent legal update elevates it from an ordinary offence to a 'priority offence' under the Online Safety Act. This is significant because priority offences subject technology companies to the most stringent legal duties: platforms must take proactive steps to block unwanted sexual images before they ever reach the intended recipient.
The change reflects a growing understanding that reactive moderation is no longer enough. By the time an explicit image arrives, the harm may already be done; the law aims to ensure that moment never arrives. Dating apps, social media platforms, and content-sharing sites are therefore expected to build preventive mechanisms, such as image-detection tools and redesigned messaging features, to curb cyberflashing. The authorities have made clear that platforms must treat these safety mechanisms seriously rather than settle for box-ticking compliance.
As reported by the UK Government, failure to comply with these duties carries significant penalties: fines of up to 10% of a company's annual global revenue or, in some cases, restriction or outright blocking of its services in the UK. The tone is clear: online safety is no longer optional.
How UK tech platforms are already adapting to tougher cyberflashing laws
Some platforms have already begun making changes consistent with the new law's approach. Bumble, for instance, has developed artificial-intelligence technology that detects and blurs nude photos sent through private messages, pairing the blurred images with warnings that let recipients decide whether to view them.
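The detect-blur-warn flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Bumble's actual implementation: the classifier is a stub (a real system would run a trained image model), and the threshold value is an assumption for demonstration.

```python
from dataclasses import dataclass

# Hypothetical threshold; real platforms tune this against
# false-positive / false-negative trade-offs.
BLUR_THRESHOLD = 0.7


@dataclass
class InboundImage:
    sender: str
    pixels: bytes  # stand-in for decoded image data


def nudity_score(image: InboundImage) -> float:
    """Stub for a trained classifier that would return the
    probability that the image contains explicit nudity."""
    return 0.0  # placeholder: a real system runs an ML model here


def deliver(image: InboundImage, score_fn=nudity_score) -> dict:
    """Decide how an inbound image is presented to the recipient."""
    score = score_fn(image)
    if score >= BLUR_THRESHOLD:
        # Blur the preview and let the recipient choose to reveal it.
        return {
            "blurred": True,
            "warning": "This image may contain nudity. View anyway?",
        }
    return {"blurred": False, "warning": None}
```

The key design point mirrors the law's intent: the decision happens on delivery, before the recipient sees anything, rather than after a report is filed.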
These platforms rely on trained artificial-intelligence models that attempt to distinguish consensual from non-consensual nudity. Though imperfect, they show how technology can be used to safeguard, rather than endanger, individuals. The tightened legislation is part of a wider government plan to tackle violence against women and girls, both online and offline. Studies have repeatedly shown that women and teenage girls are the most affected victims of cyberflashing, with surveys finding that one in three teenage girls has received unsolicited sexually explicit images.
With the responsibility shifted onto technology companies, which are now required to maintain a safe online space, it is hoped that victims, who have traditionally had to report abuse after the fact, will no longer shoulder as heavy a burden.
UK regulator Ofcom to define platform safety duties
The UK communications regulator Ofcom will now begin consulting on detailed codes of practice setting out what platforms must do to fulfil their new obligations. The development places the UK among the leading countries fighting online abuse, with particular attention to the rise of AI-generated sexual imagery. The expanded enforcement powers target a long-standing form of online abuse, promising stronger accountability, faster takedowns, better victim protection, and safer digital spaces.