xAI, the artificial intelligence company founded by Elon Musk, recently missed its self-imposed deadline to publish a finalized AI safety framework. The missed deadline, May 10, was singled out by watchdog group The Midas Project, and it tends to undermine the credibility of the company's stated commitment to safety in AI development. xAI committed to that deadline at the AI Seoul Summit earlier this year.
In February, at the summit, xAI released a draft framework that provided an early snapshot of its views on AI safety and the company's direction. The eight-page document laid out xAI's safety priorities, introducing provisions for benchmarking protocols and factors governing the deployment of AI models. But the draft stopped short of mapping out a coherent strategy for identifying and implementing risk mitigations, a key component of any meaningful safety accountability structure.
At the AI Seoul Summit, xAI signed a document pledging to articulate how it would implement risk mitigations, reinforcing expectations for transparency and accountability. Despite these commitments, xAI’s official channels have not acknowledged the passing of its deadline, leaving observers questioning the company’s dedication to safety protocols in light of rapid advancements in AI technology.
Elon Musk has been one of the most prominent voices sounding alarms over the dangers of unregulated AI development. His warnings have highlighted the need for rigorous safety practices to minimize the risk of unintended consequences. Yet xAI's track record on AI safety has drawn criticism, especially in comparison with its competitors.
xAI is hardly alone here. In recent months, competitors such as Google and OpenAI have rushed safety testing, or at least given that impression. Both companies have come under fire for withholding model safety reports, or in some cases not issuing them at all. Their responses to public safety concerns have been reactive rather than preventive.
Despite its stated ambitions, xAI has not completed its public-facing safety plan, even as public and regulatory scrutiny of AI technologies understandably increases. Stakeholders across the industry aren't letting up: they want assurance that companies are prepared to address the risks associated with their AI systems.
xAI's updated safety policy deadline has come and gone with no communication from the company, leaving many wondering what its next moves will be. As it prepares to go public, xAI is under mounting pressure to show it can meaningfully address AI safety concerns while staying ahead of its competitors.