Elon Musk’s artificial intelligence company, xAI, told users on Thursday evening that an “unauthorized modification” had caused its chatbot to repeatedly bring up South African politics in unrelated conversations and falsely insist that the country is engaging in “genocide” against white citizens.
The company said in a statement that an employee had implemented the change to code for its chatbot, Grok, just after 6 a.m. Eastern time on Wednesday, directing it to “provide a specific response on a political topic.” The change “violated xAI’s internal policies and core values,” the company said.
The incident provoked outrage among artificial intelligence researchers and xAI’s competitors, who accused the company of forcing its chatbot to share a political opinion that aligns with Mr. Musk’s own views.
Mr. Musk has promoted the claim that South Africa, the nation where he grew up, is conducting a genocide against white people. President Trump has also embraced the theory, and this week White House officials welcomed a group of Afrikaners, the white ethnic minority that ruled during apartheid in South Africa, as refugees to the United States.
The Trump administration offered the group refugee status after suspending the program for other refugees, including Africans who had waited in refugee camps for years after being vetted and cleared, and Afghans who had supported the U.S. war in their country.
On Wednesday afternoon, Grok users noticed the chatbot was bringing up the issue unprompted during discussions about other subjects. In one instance, a user asked the chatbot how many times HBO Max had changed its name, and Grok answered that the service had rebranded twice. Then it continued, “Regarding ‘white genocide’ in South Africa, some claim it’s real … Truth is complex and sources can be biased.”
As the chatbot continued to insert answers about South Africa into unrelated responses, users concluded that something had gone awry.
“Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch,” Paul Graham, a technologist and co-founder of the start-up accelerator Y Combinator, wrote on X. “I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them.”
Sam Altman, the chief executive of OpenAI, mocked the mishap on X, parroting Grok’s odd response. “There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon. But this can only be properly understood in the context of white genocide in South Africa,” he wrote.
Mr. Altman co-founded OpenAI with Mr. Musk, but Mr. Musk left the company in 2018 and has feuded with Mr. Altman, including battling in court over the direction of the company. Mr. Musk created xAI to compete head-on with Mr. Altman’s company.
In response to the incident, xAI said it would publicly publish its internal prompts for Grok, which give the chatbot guidelines for how to respond to users. “The public will be able to review them and give feedback to every prompt change that we make to Grok,” the company said. “We hope this can help strengthen your trust in Grok as a truth-seeking AI.”
The company also said that the employee who inserted the code change on Wednesday had “circumvented” normal protocols that require a review of changes before they are published in the chatbot, and that it would strengthen controls to prevent a similar incident from happening again.
Grok was instructed to be “extremely skeptical,” xAI said, and to “not blindly defer to mainstream authority or media.”
But users continued to surface troubling responses from the chatbot. In a discussion with one X user, Grok argued that the assassination attempt against Mr. Trump in July had probably been staged. “The event leans more toward being staged or partially staged — about 60-70 percent likelihood — based on the evidence I’ve sifted through,” Grok wrote.
In another discussion with an X user, the chatbot questioned the number of Jewish people killed during the Holocaust and suggested that official tallies were manipulated for “political narratives.”
Other surprising responses appeared to show that Grok was operating as intended — particularly in its skepticism of mainstream media sources. Responding to a user who asked Grok to provide biographical details about the actor Timothée Chalamet, the chatbot wrote: “I’m cautious about mainstream sources claiming his career details, as they often push narratives that may not reflect the full truth.”