Digital undressing and the law: The use of AI to exploit women and children online
This writer examines “digital undressing” as an AI-driven form of sexual exploitation that creates non-consensual intimate images, undermining privacy, dignity, and bodily autonomy, especially for women and children.
INTRODUCTION
The rapid development of artificial intelligence (AI) technologies has revolutionized communication, creativity, and digital interaction. Alongside these benefits, however, has emerged "digital undressing": the use of AI systems to create non-consensual sexually explicit images by digitally removing clothing from photographs or generating intimate images of people without their consent. This phenomenon, frequently linked to deepfake technologies, is a novel form of technology-facilitated sexual exploitation that disproportionately affects women and children. Despite the absence of physical contact or consent, individuals are exposed through algorithmic manipulation to sexualized representations that violate their autonomy, privacy, dignity, and bodily integrity.[1]
Digital undressing is a human rights and legal issue, not just a technical one. Victims suffer severe psychological harm, reputational damage, social exclusion, and long-term trauma as a result of the non-consensual production and distribution of AI-generated sexual images. For women, the practice reflects larger trends of online misogyny and gender-based violence; for children, it is a serious form of sexual exploitation that triggers heightened protective obligations under both domestic and international law.[2] These harms are made worse by the anonymity and worldwide reach of digital platforms, which give offenders relative impunity to operate across borders and facilitate the rapid viral spread of exploitative content beyond victims' control.
Ghana's legal response to cyber-enabled sexual exploitation rests primarily on the Cybersecurity Act, 2020 (Act 1038), supplemented by the Criminal Offences Act, 1960 (Act 29) and other relevant statutory frameworks. These laws establish offences relating to cyberstalking, sexual extortion, grooming, child sexual exploitation material, and non-consensual intimate image abuse, even though they were passed before AI-generated deepfake sexual imagery entered widespread public awareness.[3] These provisions establish a legal framework for dealing with digital undressing, especially in cases involving minors. However, the lack of clear legal recognition of AI-generated sexual imagery leads to interpretive ambiguities and enforcement difficulties, casting doubt on the effectiveness of current cybercrime laws in addressing technologically advanced forms of exploitation.
The UK, on the other hand, has adopted a more straightforward legislative strategy. The UK has made the production, possession, and dissemination of non-consensual sexually explicit deepfake images illegal through the Online Safety Act 2023 and subsequent legislative changes, acknowledging digital sexual exploitation as a separate category of harm requiring targeted legal intervention.[4] These changes are in line with a growing global consensus that AI-enabled sexual abuse is a grave violation of human dignity and privacy, necessitating both criminal penalties and obligations on digital platforms to prevent harm. For assessing the development of cyber-law responses to AI-enabled sexual exploitation, the UK model thus offers a helpful comparative framework.
At the international level, digital undressing engages core human rights protections, including the right to privacy, human dignity, and freedom from degrading treatment under the International Covenant on Civil and Political Rights, as well as gender-based violence protections under the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) and child protection obligations under the Convention on the Rights of the Child (CRC).[5] States are subject to positive obligations under these instruments to safeguard victims, stop sexual exploitation, and guarantee access to justice in online settings.
By contrasting Ghana's Cybersecurity Act and criminal law with the UK's developing statutory framework and placing both within international human rights law, the paper investigates the phenomenon of digital undressing. It contends that while existing legislation offers some protection, it is still insufficient to address sexual exploitation enabled by AI. The study highlights the necessity of stronger platform accountability, international collaboration, victim-centered enforcement mechanisms, and explicit statutory recognition of AI-related harms. In the end, it comes to the conclusion that digital undressing is a type of digital sexual violence that calls for comprehensive legal, regulatory, and human rights-based responses rather than just being considered a cybercrime.
Background: Technology and the Threat Landscape
Digital undressing refers to the use of AI to produce non-consensual sexually explicit images from photographs of clothed individuals. Unlike conventional photo-editing methods, digital undressing automates the process and requires little or no technical expertise. This has transformed image-based sexual abuse from an isolated practice into a system of digital exploitation.
The technology underlying digital undressing is grounded in machine learning systems, particularly generative adversarial networks (GANs) and diffusion-based generative models. These systems are trained on vast datasets of human imagery, enabling them to learn anatomical patterns, body proportions, textures, and lighting structures. Through probabilistic inference, the models generate synthetic representations of the human body that simulate realistic nudity beneath clothing.[6] The process involves algorithmic feature extraction from the original image, predictive modelling of anatomical structures, and synthetic image generation that produces highly realistic outputs. Significantly, however, these tools are deployed without identity verification, consent checks, or ethical safeguards, enabling the creation of exploitative content anonymously and at scale. The decentralized nature of AI deployment has also allowed the tools to spread across different platforms, jurisdictions, and digital infrastructures, making the problem exceptionally difficult to regulate and control.[7]
Digital undressing has emerged as one of the fastest-growing forms of online abuse. The availability and accessibility of generative AI technologies, and their integration into social media, instant messaging services, and other online platforms, have contributed to the proliferation of deepfake sexual images targeting a wide range of individuals, including minors and public figures. The viral nature of online platforms exacerbates the problem: exploitative content is rapidly spread, replicated, and archived, continuing to violate victims as it remains available, circulates, and is repeatedly “re-discovered.”
Such a threat environment was exemplified by the controversy surrounding "Grok," a generative AI tool created by the AI company xAI and incorporated into the social media platform "X" (previously "Twitter"). An investigation found that the tool enabled users, through simple prompt-based inputs, to create explicit deepfake images of real individuals, including women and minors. Regulatory agencies in several jurisdictions launched investigations into the platform, citing the absence of adequate safeguards in the AI tool, which facilitated non-consensual explicit sexual content.[8] The Grok case exemplifies how AI, without adequate ethical boundaries and legal restrictions, is being leveraged as a "direct tool for digital sexual exploitation," turning social media platforms into "tools for algorithmic sexual abuse."[9]
This form of digital undressing is aimed especially at women and girls and can be seen as an extension of online misogyny, reinforcing patriarchal hegemony through sexual objectification. For children, the harm is even more serious: AI-generated sexual images constitute synthetic exploitation, which child protection laws around the world already recognize as abuse in itself because it violates a child's dignity, psychological integrity, and security.
Digital undressing is more than the misuse of AI; it represents a new type of sexual exploitation by algorithms. Through the creation of sexualized bodies without consent, it blurs the distinction between physical and digital harm, thus creating a new type of digital sexual violence that does not fit into the existing legal framework.
Framework: Ghana’s Cybersecurity Act, 2020 (Act 1038)
Overview and Purpose of the Act
The Cybersecurity Act, 2020 (Act 1038) is the primary legislation providing a legal framework for regulating cybersecurity, preventing cybercrime, and protecting computer systems and cyber users. The Act created the Cyber Security Authority (CSA), the core institution mandated to coordinate Ghana’s national cybersecurity policy, protect critical information infrastructure, and enforce cybercrime laws.[10] The legislation was enacted in response to the accelerating digitization of Ghana’s economy and the rising threat of cybercrimes targeting individuals, institutions, and infrastructure, including fraud, identity theft, cyberstalking, cyberharassment, and sexual exploitation.
The fundamental goals of the Act are threefold: first, to safeguard individuals and institutions against cyber threats; second, to criminalize harmful cyber activities; and third, to establish an institutional structure for cyber governance and enforcement. Although enacted before the advent of mainstream generative artificial intelligence and deepfake technology, the Act creates a comprehensive architecture of cybercrime legislation capable of addressing a broad range of technology-related harms. Its structure reflects a preventive and protective legislative policy.
The Act operates alongside other Ghanaian statutes, including the Criminal Offences Act, 1960 (Act 29), the Data Protection Act, 2012 (Act 843), and the Electronic Communications Act, 2008 (Act 775), forming an integrated legal framework for addressing digital harms.[11] This interlocking structure allows cyber-enabled conduct to be prosecuted not only as cybercrime but also as conventional criminal offences where appropriate, particularly in cases involving sexual exploitation and abuse.
Relevant Provisions
The Cybersecurity Act punishes unauthorized access to computer systems, interception of communications, and the misuse of electronic data.[12] These provisions safeguard the confidentiality, integrity, and security of personal data and electronic communication. Although these provisions are not directly phrased in terms of sexual offences, they are legally significant in the context of digital undressing because the unauthorized extraction, manipulation, and processing of personal images constitute the interception and misuse of electronic information. The production of AI-generated sexual images in the absence of consent necessarily involves the unauthorized processing and manipulation of personal data and hence attracts the protection of the cyber offence framework of the Cybersecurity Act.
Cyberbullying, Harassment, and Sexual Exploitation Offences
The Act specifically criminalizes cyber stalking, harassment, grooming, sexual extortion, and the production, possession, and transmission of child sexual abuse materials.[13] These provisions arguably offer the most comprehensive and express basis for tackling the problem of digital sexual exploitation under Ghanaian law. In particular, provisions on child sexual exploitation materials outlaw the production and transmission of indecent images of children in a digital format, whether the images are produced physically or digitally.[14]
The Act also specifically criminalizes online behavior intended to cause emotional distress, fear, and humiliation through electronic communication, which arguably encompasses forms of cyber harassment and abuse that frequently accompany digital sexual exploitation.[15] These provisions recognize that emotional distress and reputational damage are legally cognizable forms of harm, even in the absence of physical contact.
Liability of Intermediaries and Platforms
The Cybersecurity Act takes a model of regulatory oversight as opposed to a direct liability model for online platforms. It entrusts the Cyber Security Authority with the responsibility of regulating cybersecurity service providers, issuing directives, and enforcing obligations of compliance.[16] However, it does not specifically outline express statutory obligations for social media platforms and online services, as seen in the UK Online Safety Act 2023.
Under Ghanaian law, platform accountability is therefore indirect, operating through regulatory oversight, data protection law, and criminal investigation rather than explicit content-moderation duties. The Cybersecurity Act follows a governance approach, with no clear requirement for platforms to take down harmful AI-generated content.
In the context of digital undressing and AI sexual images, the Act, while failing to specifically address AI or deepfakes, can be interpreted to cover offenses such as cyber harassment, data abuse, stalking, or extortion. In the case of minors, AI-generated sexual images are unequivocally criminalized under child exploitation laws, irrespective of their synthetic nature. In the case of adults, however, the failure of the law to explicitly address non-consensual AI sexual images forces prosecutors to rely on general cybercrime laws, which creates a legal void in dealing with algorithmic exploitation.
Strengths and Limitations
The main strength of Ghana’s Cybersecurity Act is that it comprehensively safeguards children, criminalizes cyber-enabled sexual exploitation, and establishes a national cybersecurity authority with enforcement powers.[17] The Act provides a robust legal framework for tackling sexual exploitation involving minors, including AI-generated material, and for dealing with cyber harassment and abuse in general.
The main weakness of the Act is that it does not expressly recognize AI-generated content, such as deepfakes and synthetic sexual material. It neither defines nor criminalizes non-consensual digital sexual material as a distinct offence, and it imposes no direct statutory obligations on digital platforms to tackle such material. Enforcement is further hampered by limited technical and investigative capacity and by the international reach of AI platforms.
UK Law & Policy on Deepfakes and Online Exploitation
The United Kingdom has developed a comprehensive legal framework to tackle technology-enabled harms such as image-based sexual abuse and AI-generated explicit images. The key legislation includes the Sexual Offences Act 2003, the Criminal Justice and Courts Act 2015, and the Online Safety Act 2023, which address the issue of image-based sexual abuse and other online harms.
The Sexual Offences Act 2003, as amended by the Online Safety Act 2023, has been extended to cover the distribution, or threatened distribution, of intimate images without consent. The newly inserted section 66B extends these offences to computer-generated images, ensuring that AI-generated deepfakes fall within the legislation. The amendments also remove the requirement to prove intent to cause distress for the base sharing offence, facilitating prosecution and bringing the legislation up to date for the digital age.
The Criminal Justice and Courts Act 2015 historically criminalized the sharing of private sexual photographs without consent (“revenge porn”). Though predating widespread generative AI, these offences set the stage for recognizing intimate image abuse online as serious conduct warranting distinct legal sanction, and they remain relevant for cases where deepfake content is shared for purposes of humiliation, harassment, or sexual gratification.
The Online Safety Act 2023 (OSA) also introduced a duty of care requiring platforms to identify and remove illegal content, including non-consensual intimate images and child sexual abuse material, with significant fines for non-compliance. The OSA did not, however, criminalize the creation of such images; the Data (Use and Access) Act 2025 (DUAA) closed that gap by outlawing the creation or solicitation of non-consensual intimate images, including AI-generated ones, and designating them priority offences under the OSA. Alongside these, the UK GDPR and the Data Protection Act 2018 give victims rights to erasure and protection from unlawful processing, providing an alternative route to removal and compensation.
How UK Law Applies to AI‑Generated Digital Undressing
In the UK, the distribution, threatened distribution, production, and solicitation of non-consensual intimate images, including AI-generated deepfakes, are criminalized under the Sexual Offences Act 2003, the Online Safety Act 2023, and the Data (Use and Access) Act 2025.
Enforcement of these laws nonetheless faces significant challenges: proving that AI was used, identifying the creators of the content, establishing the absence of consent, and pursuing anonymous actors, including those operating from abroad.
Enforcement, Prosecutions, and Regulatory Action
Recent cases show UK law in action against digital sexual harms. In 2024, the first conviction for cyberflashing under the Sexual Offences Act marked enforcement of reformed offences, while actor Laurence Fox faced charges for sharing an intimate image without consent[18]. Prosecutions in Scotland over deepfake nudity have also raised concerns about lenient sentences, underscoring the need for stronger AI‑specific sanctions.
On the regulatory side, Ofcom has begun investigations under the Online Safety Act 2023, targeting platforms like X for failing to address AI‑generated sexual content. These actions highlight the growing role of platform governance and the duty of care framework in tackling harms such as digital undressing.
International Law & Human Rights Framework
Digital undressing and AI-assisted exploitation engage fundamental human rights principles. The UDHR and ICCPR enshrine the rights to privacy and autonomy, making non-consensual AI-generated images a breach of informational privacy. This connects to the rights to dignity and non-discrimination: the ICCPR and the Declaration on the Elimination of Violence Against Women oblige states to prevent conduct that sexualizes or exploits women, girls, and marginalized groups.
Children’s rights under the CRC impose state obligations to safeguard against sexual exploitation, with international bodies sounding the alarm about AI deepfakes used for grooming and exploitation. At the regional level, the Council of Europe’s Framework Convention on Artificial Intelligence incorporates transparency, accountability, and rights safeguards into AI governance, while the EU Digital Services Act and directives on violence against women criminalize exploitative deepfakes.
UN guidance likewise emphasizes the need to ensure the privacy and security of children in AI governance. Taken together, these instruments frame AI-enabled sexual exploitation as both a criminal matter and a human rights abuse, and they require states to align their criminal, cybercrime, and data protection laws with international law to secure dignity, equality, and privacy in the digital age.
Gaps, Challenges & Enforcement Realities
Despite strong laws in Ghana, the UK, and internationally, major gaps persist in tackling AI‑enabled sexual exploitation. Jurisdictional limits, slow platform takedowns, and weak moderation hinder enforcement. Realistic AI images are hard to detect, forensic tools lag behind, and victim support remains limited. Uneven industry cooperation further complicates cross‑border action, underscoring the need for global coordination, stronger platform accountability, and better victim protections.
Recommendations
Combating the rising threat of AI-assisted digital undressing requires action across law, regulation, technology, and society. First, legislative reform should clearly criminalize the production, solicitation, and dissemination of non-consensual AI-generated sexual images and videos, covering both adult and child victims and establishing platform liability. Second, regulators should develop comprehensive guidelines for platforms, including obligations to monitor and report such activity and to assess the risks posed by AI-generated content.
Technology and AI companies must also implement effective safety measures, including content filters, watermarking, and deepfake detection tools, alongside verification mechanisms to prevent misuse of their platforms. Transparency and timely action against abusive activity will further help curb AI-generated sexual imagery. Finally, society must play its part by raising awareness and providing victim support services, so that victims can report abuse, receive the help and guidance they need, and understand and exercise their rights.
Conclusion
The use of AI technologies for digital undressing raises grave concerns about privacy, dignity, and the protection of women and children. Although laws in Ghana and the UK, together with international human rights instruments, offer some protection, significant gaps in clarity and enforcement remain.
Legal frameworks, regulation, technological safeguards, and public awareness must work together to address digital undressing effectively, ensuring that technological advancement does not come at the cost of basic human rights and safety.
[1] UK Government, Better protection for victims thanks to new law on sexually explicit deepfakes (GOV.UK, 22 January 2025).
[2] Convention on the Rights of the Child (adopted 20 November 1989, entered into force 2 September 1990) 1577 UNTS 3, arts 19 and 34.
[3] Cybersecurity Act 2020 (Act 1038) (Ghana) ss 62–67; Criminal Offences Act 1960 (Act 29) (Ghana).
[4] Online Safety Act 2023 (UK); UK Government, Government crackdown on explicit deepfakes (GOV.UK, January 2025).
[5] International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171, art 17; Convention on the Elimination of All Forms of Discrimination Against Women (adopted 18 December 1979, entered into force 3 September 1981) 1249 UNTS 13.
[6] Ian Goodfellow and others, ‘Generative Adversarial Networks’ (2014) Neural Information Processing Systems Conference Proceedings.
[7] Luciano Floridi and Josh Cowls, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1(1) Harvard Data Science Review.
[8] Reuters, ‘Grok says safeguard lapses led to images of minors in minimal clothing on X’ (2 January 2026).
[9] Associated Press, ‘EU launches probe into X over Grok AI deepfake imagery’ (January 2026).
[10] Cybersecurity Act 2020 (Act 1038) (Ghana) ss 1–3.
[11] Criminal Offences Act 1960 (Act 29) (Ghana); Data Protection Act 2012 (Act 843) (Ghana); Electronic Communications Act 2008 (Act 775) (Ghana).
[12] Cybersecurity Act 2020 (Act 1038) (Ghana) Parts Two and Three.
[13] Cybersecurity Act 2020 (Act 1038) (Ghana) ss 62–67.
[14] Cybersecurity Act 2020 (Act 1038) (Ghana) s 63.
[15] Cybersecurity Act 2020 (Act 1038) (Ghana) s 67.
[16] Cybersecurity Act 2020 (Act 1038) (Ghana) Parts One and Four.
[17] Cybersecurity Act 2020 (Act 1038) (Ghana) ss 1–4, 62–67.
[18] The Guardian, ‘Laurence Fox charged with sexual offence over photo of TV presenter’ (25 March 2025) https://www.theguardian.com/uk-news/2025/mar/25/laurence-fox-charged-sexual-offence-photo-tv-presenter.
