This article reports on the controversy surrounding AI-generated deepfake images on Elon Musk's platform X (formerly Twitter). The AI tool, called Grok, was designed to generate realistic images of individuals, but it has been used to create non-consensual intimate imagery, including explicit bikini images and videos of women.
The victims of these abuses include professional women such as Love Island host Maya Jama, who noticed her own image had been digitally altered by users on X; Ashley St Clair, a planning enforcement officer who was subjected to racist abuse after highlighting Grok's misuse; and Narinder Kaur, a broadcaster who received fake videos of herself in compromising positions.
There is growing outrage from women's rights campaigners and lawmakers at the government's failure to bring into force legislation, passed last year, that would make the creation of non-consensual intimate imagery illegal. The UK regulator Ofcom has launched an investigation into X and Musk.
Musk himself has faced criticism for his response to the issue, with some accusing him of downplaying the severity of the problem and others arguing that he should do more to prevent users from generating and sharing these images.
The article suggests that Musk's decision to restrict image-generation functions to paying subscribers may have been a "cop-out" and was likely financially motivated. Victims of these abuses, including St Clair and Kaur, are critical of the decision, arguing that it does not go far enough in addressing the problem.
The incident highlights the dangers of AI-generated deepfakes and the need for more effective regulation to prevent their misuse.