Google Faces Scrutiny Over Lack of Safety Details in New AI Report

Just last week, Google published the technical report for its Gemini 2.5 Pro AI model, and experts quickly took notice. Many are worried because the report omits critical safety information. Last year, Google introduced its Frontier Safety Framework (FSF), which aims to identify future AI capabilities that could pose severe risks. The framework’s absence from the latest report has led to disappointment and skepticism regarding Google’s commitment to AI safety.

Thomas Woodside, co-founder of the Secure AI Project, voiced his concerns, saying the Gemini 2.5 Pro report falls far short of what’s needed. He noted that excluding the FSF undermines confidence in Google’s promise of timely safety evaluations for its AI models, and that this glaring omission calls the company’s commitment to safety into question.

“This report is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” said Peter Wildeford, a technology expert. He emphasized that the lack of transparency makes it impossible to verify if Google is meeting its public commitments concerning safety.

Because the Gemini 2.5 Pro report never even acknowledges the FSF, observers are left wondering how safe these models are when their safety assessments rest on such questionable standards. Google had previously assured the U.S. government that it would publish safety reports for all significant public AI models “within scope.” Critics say the technology giant has not come close to delivering on that pledge, fueling worries about an industry-wide pattern of decreasing transparency around AI.

Another expert warned, “Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market.”

Google says it subjects its AI models to adversarial red teaming as part of a robust pre-release safety testing protocol, and the company claims it publishes technical reports only once it considers a model to have graduated from the experimental stage. Even so, Google has not released results from its harmful-capability tests since June 2024, and that report covered a model announced back in February of the same year.

Google’s renewed focus on safety and transparency has not convinced Woodside. “I hope this is a promise from Google to start publishing more frequent updates,” he stated. To improve the process, he called for those updates to include the results of evaluations for models that have not yet been publicly deployed, since without proper controls those models could also cause serious harm.

Google has been among the AI labs most vocal in advocating for standardized model reporting, but it is not the only one under fire for failing to provide enough transparency. The industry as a whole has been criticized for lacking a consistent approach to safety documentation and public evaluations.

Last week, Google also unveiled its new Gemini 2.5 Flash model, a smaller and more cost-efficient version, but has yet to release an official report for it. This only deepens the conflicting narrative around Google’s commitment to transparency and safety in its AI development.
