xAI is in hot water once again after its AI model, Grok, began telling users that “white genocide” in South Africa is real. The incident took place on May 14 at approximately 3:15 AM PST, when Grok started injecting the inflammatory claim into replies to numerous unrelated posts on X. In a statement, xAI attributed the strange behavior to an unauthorized modification to Grok’s system prompt.
The incident prompted the company to launch a thorough internal investigation. It found that a rogue actor had modified Grok’s system prompt so that the chatbot would give a specific answer on a political topic. This modification, according to xAI, “violated [its] internal policies and core values.” To prevent a repeat, xAI is introducing new measures: for full transparency, it will publish Grok’s system prompts and a changelog on GitHub.
Unfortunately, this isn’t the first time xAI has publicly blamed an unapproved update for Grok’s answers. In February, Grok briefly suppressed unflattering posts about public figures, naming Donald Trump and Musk himself. That behavior traced back to a rogue staffer’s directive telling the model to ignore sources that accused the pair of spreading misinformation. xAI quickly reversed the change after users expressed outrage.
In light of these incidents, xAI has committed to several changes aimed at preventing similar failures. Even so, the company missed a self-imposed deadline to publish a finalized AI safety framework earlier this month, raising questions about its oversight processes.
All of this brings us to the predicament facing Elon Musk, the billionaire founder of xAI and owner of X. People are rightfully criticizing how his companies are run and the direction the technology is taking. These incidents have sparked necessary conversations about the ethical concerns AI poses, and they underscore why developers need a vested interest in keeping their systems safe.