xAI launched Grokipedia in October, after Musk had complained that Wikipedia was biased against conservatives. Reporters soon noted that while many articles appeared to be copied directly from Wikipedia, Grokipedia also claimed that pornography contributed to the AIDS crisis, offered 'ideological justifications' for slavery, and used denigrating terms for transgender people. All of that might be expected for an encyclopedia associated with a chatbot that described itself as 'Mecha Hitler' and was used to flood X with sexualized deepfakes. However, its content now seems to be escaping the Musk ecosystem: the Guardian reports that GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions. The Guardian says ChatGPT did not cite Grokipedia when asked about topics where its inaccuracy has been widely reported, such as the January 6 insurrection or the HIV/AIDS epidemic. Instead, it was cited on more obscure topics, including claims about Sir Richard Evans that the Guardian had previously debunked. (Anthropic's Claude also appears to be citing Grokipedia to answer some queries.)...
Earlier this week, the California Attorney General's office announced that it was investigating xAI over reports that the startup's chatbot, Grok, was being used to create nonconsensual sexual imagery of women and minors. On Friday, the office followed up by sending a cease-and-desist letter to the company, demanding that it take immediate action to stop the production of nonconsensual intimate images and CSAM, or child sexual abuse material. 'Today, I sent xAI a cease-and-desist letter, demanding the company immediately stop the creation and distribution of deepfake, nonconsensual, intimate images and child sexual abuse material,' said California AG Rob Bonta in a press release. 'The creation of this material is illegal. I fully expect xAI to immediately comply. California has zero tolerance for [CSAM].' The AG's office additionally claimed that xAI appeared to be 'facilitating the large-scale production' of nonconsensual nudes, which are being 'used to harass women and girls across the internet.' The agency said it expects xAI to demonstrate within the next five days that it is taking steps to address these issues....
In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, several U.S. senators are asking the companies to provide proof that they have 'robust protections and policies' in place and to explain how they plan to curb the rise of sexualized deepfakes on their platforms. The senators also demanded that the companies preserve all documents and information relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies. The letter comes hours after X said it had updated Grok to prohibit it from making edits of real people in revealing clothing and had restricted image creation and editing via Grok to paying subscribers. (X and xAI are part of the same company.) Citing media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators argued that platforms' guardrails against nonconsensual, sexualized imagery may not be enough. 'We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,' the letter reads....
Elon Musk said Wednesday he is 'not aware of any naked underage images generated by Grok,' hours before the California attorney general opened an investigation into xAI's chatbot over the 'proliferation of nonconsensual sexually explicit material.' Musk's denial comes as pressure mounts from governments worldwide, from the U.K. and Europe to Malaysia and Indonesia, after users on X began asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated that roughly one such image was posted each minute on X; a separate sample gathered from January 5 to January 6 found 6,700 per hour over the 24-hour period. (X and xAI are part of the same company.) Several laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law; it criminalizes knowingly distributing nonconsensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, to crack down on sexually explicit deepfakes....