As artificial intelligence companies race toward increasingly powerful systems, media billionaire Barry Diller believes the debate is no longer about whether leaders like Sam Altman are trustworthy.
The real issue, he argues, is that artificial general intelligence may eventually become too powerful for individual trust to matter at all.
Speaking at a recent public discussion, Diller defended OpenAI CEO Sam Altman personally, saying he trusts him. But Diller also warned that AGI represents a technological force so significant that relying on personal faith in executives is fundamentally insufficient.
Diller’s comments reflect a broader shift happening inside the AI conversation.
In the early phase of the generative AI boom, public debate often centered on the personalities leading major AI companies.
But Diller suggested that AGI changes the scale of the problem entirely.
According to him, once systems approach human-level or superhuman intelligence, governance cannot depend on whether the public personally trusts a handful of executives. The technology itself becomes too consequential.
That perspective aligns with growing concerns among AI researchers who argue that current governance structures may be inadequate for managing frontier AI systems.
The term AGI, short for artificial general intelligence, refers to hypothetical AI systems capable of performing intellectual tasks at or beyond human capability across a wide range of domains.
While experts disagree on timelines, the pursuit of AGI increasingly drives strategy across the AI industry.
Major companies are investing billions in the pursuit.
The competition has intensified dramatically over the last two years as OpenAI, Google, Anthropic, Meta, and xAI accelerate development.
That acceleration has also increased anxiety around safety, oversight, and concentration of power.
Diller’s comments are notable because they come from outside the traditional AI research community.
As chairman of IAC and a longtime media and technology investor, Diller represents a broader class of business leaders increasingly worried about the societal implications of advanced AI systems.
His warning reflects a growing realization among industry figures that AGI may not behave like previous technology waves.
Unlike social media or smartphones, AGI could potentially reshape nearly every domain it touches.
That scale makes governance questions significantly more urgent.
The discussion around Altman is especially important because OpenAI sits at the center of the current AI race.
The company has evolved rapidly from a nonprofit research lab into one of the most valuable and influential AI firms in the world.
That transformation has created tension around the company's mission, governance, and commercial direction.
Critics argue OpenAI’s original nonprofit mission has become increasingly difficult to reconcile with its massive commercial ambitions and infrastructure expansion.
Supporters counter that scaling AI safely requires enormous capital and operational resources.
Diller’s comments suggest that even if leaders act in good faith, the concentration of so much technological power inside a small number of organizations remains inherently risky.
One of the biggest changes in the AI industry over the past year is that the conversation is becoming less about whether AI can achieve certain capabilities and more about who controls those capabilities once they emerge.
Questions that once sounded theoretical, such as who controls advanced AI capabilities and under what oversight, are becoming mainstream.
Diller’s statement taps directly into that shift.
Trusting executives may matter in the short term. But if AGI becomes as transformative as some researchers predict, institutional oversight may matter far more than personal confidence in individual leaders.
What makes Diller’s comments significant is how much the AI safety debate has expanded beyond academia.
Concerns once discussed mostly among researchers are now being raised by business leaders, investors, and policymakers.
That reflects how quickly AI has moved from an experimental technology into a geopolitical and economic force.
At the same time, companies continue accelerating development despite growing calls for caution.
Despite years of debate, there is still no widely accepted framework for governing AGI-level systems.
Current approaches to oversight remain fragmented across companies and governments, and critics argue those systems may not scale effectively if AI capabilities continue advancing rapidly.
That uncertainty is what Diller appears to be highlighting.
His argument is not necessarily that AI leaders are untrustworthy. It is that technologies approaching AGI may simply be too important to rely on trust alone.