Worries About AI’s Risks to Humanity Loom Over the Trial Pitting Musk Against OpenAI’s Leaders
SecurityWeek | Archived May 09, 2026
At the heart of the trial pitting Elon Musk against OpenAI CEO Sam Altman is a moment when they found common cause on an ever more pressing question: how to protect humanity from the risks of artificial intelligence.
That alliance turned sour, and a jury is now charged with settling the ensuing legal dispute between the two Silicon Valley titans.
But the unresolved questions about the dangers of AI have been looming over the federal courthouse in Oakland, California, since the trial began last week. The technology itself is not on trial – the judge has warned lawyers not to get “sidetracked” by questions about its dangers – but witness testimony has touched on concerns around workforce disruptions and the prospect raised by Musk that superhuman AI might one day kill us all.
Musk, the world’s richest person, filed the case accusing his fellow OpenAI co-founder of betraying promises to keep the company as a nonprofit. Altman, in turn, accuses Musk of trying to hobble the ChatGPT maker for the benefit of his own AI company.
One witness, AI pioneer Stuart Russell, said that the “winner take all” power struggle over AI’s future is itself threatening humanity.
Musk’s lawyers brought Russell to the stand as an expert witness at a rate of $5,000 an hour. The University of California, Berkeley computer scientist listed a host of AI dangers, from racial and gender discrimination to job displacement, misinformation and emotional attachments that can send some AI chatbot users into a spiral of psychosis.
“Whichever company develops AGI first would have a very big advantage” and an increasingly big lead over everyone else, Russell told the court, using the initials for artificial general intelligence, a term for advanced AI technology that surpasses humans at many tasks.
A judge’s warning hasn’t kept out talk of AI’s dangers
The trial centers on the 2015 birth of OpenAI as a nonprofit startup primarily funded by Musk.
Both Musk and Altman, who has not yet testified in the trial, have said they wanted OpenAI to safely develop AGI for the benefit of humanity, not for any one person’s gain or under any one person’s control. And each camp alleges it was the other who was trying to control it.
A jury of nine people selected from the San Francisco Bay Area will decide which of them is telling the truth.
Early on, Judge Yvonne Gonzalez Rogers warned lawyers, particularly Musk’s, not to delve into broader AI concerns that go beyond Musk’s claims that OpenAI violated its charitable mission.
“This is not a trial on the safety risks of artificial intelligence. This is not a trial on whether or not AI has damaged humanity,” Gonzalez Rogers told lawyers before jurors arrived at the federal courthouse.
Still, Musk managed to skirt that guidance in his testimony last week. Asked to describe artificial general intelligence, Musk said it is when AI becomes “as smart as any human,” adding that “we are getting close to that point” and that AI will be smarter than any human as soon as next year.
Musk said he has “extreme concerns” about AI and has had them for a long time. He said he wanted a “counterpoint” to Google, which at the time had “all the money, all the computers and all the talent” for AI, with no counterbalance.
“I was concerned AI would be a double-edged sword,” he said.
Musk and OpenAI each say they are working for humanity’s benefit
During his testimony, Musk repeatedly said that he could have founded OpenAI as a for-profit company, just like the other companies he started or took over. “I deliberately chose this,” he said, “for the public good.”
The judge expressed some skepticism. In comments to lawyers last week before the jury came into the room, Gonzalez Rogers pointed out that Musk, “despite these risks, is creating a company that is in the exact same space,” referring to the billionaire’s xAI artificial intelligence company, which launched in 2023 and has since merged with Musk’s rocket company SpaceX.
OpenAI’s side also claims its goals are to benefit the public. OpenAI co-founder and president Greg Brockman, a defendant in Musk’s lawsuit along with Altman and their company, said he thought the technology OpenAI was developing was “transformative” — bigger than corporations, corporate structures and bigger than any one individual. It was, he said, “about humanity as a whole.”
Brockman testified this week that his No. 1 goal was always the “mission” of OpenAI and it was Musk who sought unilateral control over the company.
Brockman recalled a meeting where at first Musk seemed open to the idea of Altman being OpenAI’s CEO. In the end, however, “he said people needed to know he was in charge.”
In addition to damages, Musk is seeking Altman’s ouster from OpenAI’s board. If Musk wins, it could derail OpenAI’s plans for an initial public offering of its shares.
Written by Associated Press