Barry Diller Says Trusting Sam Altman Does Not Matter as AGI Approaches

4 Min Read | Updated on May 7, 2026
Written by Suraj Malik | Published in AI News

As artificial intelligence companies race toward increasingly powerful systems, media billionaire Barry Diller believes the debate is no longer about whether leaders like Sam Altman are trustworthy.

The real issue, he argues, is that artificial general intelligence may eventually become too powerful for individual trust to matter at all.

Speaking at a recent public discussion, Diller defended OpenAI CEO Sam Altman personally, saying he trusts him. But Diller also warned that AGI represents a technological force so significant that relying on personal faith in executives is fundamentally insufficient. 

Diller’s Warning Goes Beyond OpenAI

Diller’s comments reflect a broader shift in the AI conversation.

In the early phase of the generative AI boom, public debate often centered on the personalities leading the major AI companies:

  • Sam Altman at OpenAI
  • Elon Musk at xAI
  • Dario Amodei at Anthropic
  • Demis Hassabis at Google DeepMind

But Diller suggested that AGI changes the scale of the problem entirely.

According to him, once systems approach human-level or superhuman intelligence, governance cannot depend on whether the public personally trusts a handful of executives. The technology itself becomes too consequential.

That perspective aligns with growing concerns among AI researchers who argue that current governance structures may be inadequate for managing frontier AI systems.

AGI Is Becoming the Industry’s Central Obsession

The term AGI, short for artificial general intelligence, refers to hypothetical AI systems capable of performing intellectual tasks at or beyond human capability across a wide range of domains.

While experts disagree on timelines, the pursuit of AGI increasingly drives strategy across the AI industry.

Major companies are investing billions into:

  • advanced reasoning models
  • autonomous agents
  • multimodal systems
  • robotics
  • large-scale infrastructure
  • AI memory systems

The competition has intensified dramatically over the last two years as OpenAI, Google, Anthropic, Meta, and xAI accelerate development. 

That acceleration has also increased anxiety around safety, oversight, and concentration of power.

Diller’s Concern Reflects Growing Elite Anxiety Around AI

Diller’s comments are notable because they come from outside the traditional AI research community.

As chairman of IAC and a longtime media and technology investor, Diller represents a broader class of business leaders increasingly worried about the societal implications of advanced AI systems.

His warning reflects a growing realization among industry figures that AGI may not behave like previous technology waves.

Unlike social media or smartphones, AGI could potentially reshape:

  • labor markets
  • military systems
  • scientific research
  • economic structures
  • information ecosystems
  • political influence

That scale makes governance questions significantly more urgent.

OpenAI’s Position Has Become Increasingly Complicated

The discussion around Altman is especially important because OpenAI sits at the center of the current AI race.

The company has evolved rapidly from a nonprofit research lab into one of the most valuable and influential AI firms in the world.

That transformation has created tension around:

  • commercialization
  • governance structures
  • investor influence
  • safety priorities
  • control over AGI development

Critics argue OpenAI’s original nonprofit mission has become increasingly difficult to reconcile with its massive commercial ambitions and infrastructure expansion.

Supporters counter that scaling AI safely requires enormous capital and operational resources.

Diller’s comments suggest that even if leaders act in good faith, the concentration of so much technological power inside a small number of organizations remains inherently risky.

The AI Debate Is Shifting From Capability to Control

One of the biggest changes in the AI industry over the past year is that the conversation has shifted from whether AI can achieve certain capabilities to who controls those capabilities once they emerge.

Questions that once sounded theoretical are becoming mainstream:

  • Who governs AGI?
  • Should governments regulate frontier AI models?
  • Can private companies safely control systems more powerful than current software?
  • What happens if competitive pressure overrides safety concerns?

Diller’s statement taps directly into that shift.

Trusting executives may matter in the short term. But if AGI becomes as transformative as some researchers predict, institutional oversight may matter far more than personal confidence in individual leaders.

AGI Safety Concerns Are No Longer Limited to Researchers

What makes Diller’s comments significant is how much the AI safety debate has expanded beyond academia.

Concerns once discussed mostly among researchers are now being raised by:

  • investors
  • policymakers
  • media executives
  • regulators
  • corporate leaders

That reflects how quickly AI has moved from an experimental technology into a geopolitical and economic force.

At the same time, companies continue accelerating development despite growing calls for caution.

The Industry Still Has No Clear Governance Model

Despite years of debate, there is still no widely accepted framework for governing AGI-level systems.

Current approaches remain fragmented:

  • voluntary safety commitments
  • internal ethics teams
  • government hearings
  • proposed regulations
  • nonprofit oversight structures

But critics argue those systems may not scale effectively if AI capabilities continue advancing rapidly.

That uncertainty is what Diller appears to be highlighting.

His argument is not necessarily that AI leaders are untrustworthy. It is that technologies approaching AGI may simply be too important to rely on trust alone. 
