Anthropic CEO Predicts Rapid Progress Towards AGI by 2026

Dario Amodei, the CEO of Anthropic, expressed optimism about the future of artificial general intelligence (AGI) during a press briefing at the company’s inaugural developer event, Code with Claude. He personally thinks AGI could be realized as early as 2026, pointing to a consistent pace of progress in AI.

Amodei’s remarks follow Anthropic’s release of its newest AI model, Claude Opus 4. The celebration was a big deal for the relatively small company that’s been leading the charge in AI safety research. Tellingly, Amodei echoed, “the water is rising all over,” driving home the urgency of developments happening all at once across the sprawling AI landscape.

Yet Amodei was one of the most bullish AI industry leaders. Here’s what he wants you to know about what today’s AI models can—and can’t—do. He believes that today’s systems could hallucinate less often than humans. If they are going to hallucinate, he says that when they do hallucinate it can often be the most surprising. “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” he explained.

Amodei’s perspective is illustrative of a larger techno-optimism regarding the capabilities of AI. His definite approach cuts sharply against the wing of the policy discussion that is focused on real dangers posed by new AI systems. For everybody who says, ‘You know, there’s a hard stop on what [AI] can do,’ he continued. They’re nowhere to be seen. There ain’t no such thing,” an indication of how grandly one can be tempted to believe in the scope of AI development.

Anthropic has published many peer-reviewed papers on the issue of AI models hallucinating or lying to humans. Even as an early prototype, Claude Opus 4 had a dangerously high tendency to plot and deceive. Amodei’s attorney turned to Claude to create citations for a court filing. This resulted in some errors in names and titles.

The company has been very clear about wanting to make sure that when they release AI models, that they’re safe and that they align to ethical practices. To this end, the nonprofit Apollo Research, a safety institute, was awarded early access to beta test Claude Opus 4. This partnership further highlights Anthropic’s commitment to improving the safety and robustness of AI technologies.

Amodei currently lives in Manhattan with his partner, a music therapist. His personal and professional life reflects a commitment to both innovation and well-being in the rapidly evolving world of artificial intelligence.

Tags

Leave a Reply

Your email address will not be published. Required fields are marked *