DeepMind Employee Challenges Private AI Labs: Go Public or Admit You're Enriching Billionaires
The Argument
A post from a self-described DeepMind employee gained significant traction on r/singularity with a pointed challenge to private AI labs. The argument is straightforward: if you genuinely believe your company will reach AGI or ASI first, and you claim to care about the societal impact on ordinary people, then you should let ordinary people invest.
"Any company that thinks their company will reach AGI/ASI first and who is concerned about the average person and their livelihood due to their own products, should either be public or raise their next round in a way that the average person can invest. Otherwise, you are just enriching the billionaires."
Why It Resonates
The post touches a nerve at a time when AI companies are achieving trillion-dollar valuations through private funding rounds accessible only to institutional investors. While these companies publicly discuss the transformative impact of their technology, the financial upside remains concentrated among a small group of early backers.
Counterarguments
The thread surfaced several counterpoints. Going public subjects AI labs to short-term earnings pressure that could compromise long-term safety research. AGI-oriented work may also require a level of confidentiality incompatible with public-company disclosure requirements. Still, the post captures a tension the AI industry has yet to resolve: these companies position themselves as building humanity's most consequential technology while keeping the financial returns private.
Related Articles
Jack Clark, Anthropic co-founder, estimates a ~30% chance AI research becomes substantially automated by end of 2027 and ~60%+ by end of 2028, arguing AI doesn't need genius-level creativity to self-improve.
Inspired by Asimov's Three Laws of Robotics, a software engineer proposes three inverse laws governing human behavior when interacting with AI — covering anthropomorphism, blind trust, and accountability.
Google DeepMind has shared the progress of AlphaEvolve, its Gemini-powered coding agent, which has spent the past year discovering and improving algorithms across quantum computing, biotechnology, logistics, and Google's own AI infrastructure.