US Military Reaches Deals With 7 Tech Companies to Use Their AI on Classified Systems

SecurityWeek · May 04, 2026


The Pentagon said Friday that it has reached deals with seven tech companies to use their artificial intelligence in its classified computer networks, allowing the military to tap into AI-powered capabilities to help it fight wars. Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX will provide their resources to help “augment warfighter decision-making in complex operational environments,” the Defense Department said.

Notably absent from the list is AI company Anthropic, after its public dispute and legal fight with the Trump administration over the ethics and safety of AI usage in war.

The Defense Department has been rapidly accelerating its use of AI in recent years. The technology can help the military reduce the time it takes to identify and strike targets on the battlefield, while aiding in the organization of weapons maintenance and supply lines, according to a report in March from the Brennan Center for Justice. But AI has already raised concerns that its use could invade Americans’ privacy or allow machines to choose targets on the battlefield. One of the companies contracting with the Pentagon said its agreement required human oversight in certain situations.

Concerns about military use of AI arose during Israel’s war against militants in Gaza and Lebanon, with U.S. tech giants quietly empowering Israel to track targets. But the number of civilians killed also soared, fueling fears that these tools contributed to the deaths of innocent people.

Questions about military use of AI still being worked out

The Pentagon’s latest contracts come at a time of anxiety about the potential for over-reliance on the technology on the battlefield, said Helen Toner, interim executive director at Georgetown University’s Center for Security and Emerging Technology.
“A lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations,” said Toner, a former board member of OpenAI. “AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets.”

But questions about the appropriate levels of human involvement, risk and training are still being worked out, she said. “How do you roll out these tools rapidly for them to be effective and provide strategic advantage,” Toner asked, “while also recognizing that you need to train the operators and make sure they know how to use them and don’t over-trust them?”

Such concerns were raised by Anthropic. The tech company said it wanted assurances in its contract that the military would not use its technology in fully autonomous weapons or in the surveillance of Americans. Defense Secretary Pete Hegseth said the company must allow for any uses the Pentagon deemed lawful.

Anthropic sued after President Donald Trump, a Republican, tried to stop all federal agencies from using the company’s chatbot Claude and Hegseth sought to label the company a supply chain risk, a designation meant to protect against sabotage of national security systems by foreign adversaries.

OpenAI had announced a deal with the Pentagon in March to effectively replace Anthropic’s Claude with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that it was the same agreement it announced in early March. “As we said when we first announced our agreement several months ago, we believe the people defending the United States should have the best tools in the world,” the company said.
One company’s agreement with the Pentagon included language saying there should be human oversight over any missions in which the AI systems act autonomously or semiautonomously, according to a person familiar with the agreement who was not authorized to speak about it publicly. The language also said the AI tools must be used in ways that are consistent with constitutional rights and civil liberties. Those resemble sticking points for Anthropic, though OpenAI has previously said that it secured similar assurances when it made its own deal with the Pentagon.

The Pentagon’s point of view

Emil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would have been irresponsible to rely on only one company, an acknowledgment of the friction with Anthropic. “And when we learned that one partner didn’t really want to work with us in the way we wanted to work with them, we went out and made sure that we had multiple different providers,” Michael said.

Some of the companies, including Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear whether the new agreements significantly altered their government partnerships. Others, such as chipmaker Nvidia and the startup Reflection, are new to such work. Both companies make open-source AI models, in which some key components are publicly accessible for others to build upon, and Michael has described such models as a priority to provide an “American alternative” to China’s rapid development of AI systems.

The Pentagon said Friday that military personnel are already using its AI capabilities through its official platform, GenAI.mil.
“Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days,” the Pentagon said, adding that the military’s growing AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”

In many cases, the military uses artificial intelligence the same way civilians do: to take on rote tasks that would take humans hours or days to complete, said Toner, of Georgetown University. AI can be used to better predict when a helicopter needs maintenance or to figure out how to efficiently move large numbers of troops and large amounts of gear, she said. It can also help determine whether vehicles on a drone’s surveillance feeds are civilian or military.

But people shouldn’t become overly dependent on it. “There’s a phenomenon called automation bias, where people can be prone to assume that machines work better than they actually do,” Toner said.

Written by Associated Press