Pentagon Picks Eight Tech Giants for Classified AI, Anthropic Left Out
AI & Machine Learning
Anthropic is reportedly in talks to raise funds at a valuation near $900 billion, a level that would eclipse recent benchmarks set for rival labs and underscore intense investor interest in frontier AI firms. CNBC ties the discussions to Anthropic’s commercial traction, including reported growth toward $30 billion in annualized revenue and product pushes such as the Mythos preview and Claude Opus updates. If realized, the valuation would reshape competitive dynamics among major AI labs and intensify pressure on cloud providers and customers to pick platform partners, while raising questions about the concentration of talent and capital. The report is attributed to people familiar with the talks and no closed deal has been confirmed, so it should be treated as a developing market story rather than a finalized transaction. Source: CNBC Verified: True
Canonical’s announcement that it will integrate opt-in AI features into Ubuntu (previewed for 26.10) has reignited calls from parts of the Linux community for an “AI kill switch,” reflecting deep unease about defaults, telemetry and control when AI agents are embedded in core OS tooling. Canonical says the AI features will be delivered as removable Snaps and will be opt-in initially, but community debate centers on how defaults, update channels and package provenance could affect privacy and system integrity. The exchange highlights broader tensions between convenience-driven AI integrations and the open-source community’s expectations for transparency and user control, and it may influence how other distributions approach similar features. Source: The Verge Verified: True
Consumer Hardware
General Motors said it will roll out Google’s Gemini assistant via over-the-air updates to eligible 2022-and-newer Cadillac, Chevrolet, Buick and GMC models across the U.S., reaching roughly four million vehicles and marking one of the largest deployments of Gemini in the auto industry to date. The integration aims to deliver more conversational, context-aware in-car assistance for navigation, messaging and media, and GM intends to expand languages and markets over time. The move continues the trend of automakers partnering with major cloud and AI vendors to accelerate in-vehicle experiences while balancing driver safety and data privacy considerations. Source: The Verge Verified: True
Cybersecurity
A critical authentication-bypass vulnerability in cPanel/WHM, tracked as CVE-2026-41940 (CVSS 9.8), was disclosed and confirmed to be exploited in the wild, prompting urgent advisories from vendors, hosting providers and CISA, which added the flaw to its Known Exploited Vulnerabilities catalog. Administrators were warned that successful exploitation can yield root-level control of hosting servers. Affected hosts are prioritizing patching, temporary mitigations (such as blocking management ports or stopping services) and rapid incident response, and the active exploitation status poses immediate operational risk for shared hosting environments and their customers. The incident once again spotlights the threat posed by critical flaws in infrastructure software and the need for rapid, coordinated patching across large provider ecosystems. Source: The Hacker News Verified: True
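As a first triage step before blocking management ports, administrators can check whether a host's cPanel/WHM management interfaces are reachable at all. A minimal sketch, assuming the standard documented cPanel/WHM port defaults; the target hostname below is a placeholder, not from the report:

```python
import socket

# Documented cPanel/WHM management port defaults:
#   2082/2083 cPanel (HTTP/HTTPS), 2086/2087 WHM (HTTP/HTTPS),
#   2095/2096 Webmail (HTTP/HTTPS)
MGMT_PORTS = [2082, 2083, 2086, 2087, 2095, 2096]

def exposed_mgmt_ports(host: str, timeout: float = 0.5) -> list[int]:
    """Return the management ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in MGMT_PORTS:
        try:
            # create_connection raises OSError if the port is closed/filtered
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

# Example (placeholder host): any ports returned here are candidates for
# firewall blocking until the CVE-2026-41940 patch is applied.
# exposed_mgmt_ports("server.example.com")
```

A non-empty result on an unpatched server indicates the management plane is network-reachable and should be restricted (e.g. to admin IPs) at the firewall as an interim mitigation.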
Security researchers from SentinelOne disclosed a newly analyzed malware framework named “Fast16” that targeted engineering and modeling software as early as 2005, manipulating high-precision floating-point computations to introduce subtle sabotage—an approach that predates Stuxnet by roughly five years and suggests earlier state-linked interest in degrading industrial and scientific outputs. The research, reported by CSO, indicates the malware was designed for precision-targeted interference rather than data theft, expanding our historical understanding of cyber sabotage techniques and the long evolution of capabilities aimed at operational disruption. The finding raises fresh questions about attribution, defensive readiness for integrity attacks on scientific/engineering workloads, and the need for hardened validation of computational results in critical sectors. Source: CSO Online Verified: True
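The call for hardened validation of computational results can be illustrated with a simple cross-check: compute the same quantity along two independent code paths and flag any disagreement, on the reasoning that sabotage perturbing one path is unlikely to perturb the other identically. A hypothetical sketch, not drawn from the SentinelOne research, comparing a naive summation against Python's compensated `math.fsum`:

```python
import math

def validated_sum(values, rel_tol=1e-9):
    """Sum `values` via two independent paths and cross-check the results.

    `forward` uses naive left-to-right addition; `compensated` uses
    math.fsum's exact compensated summation. A discrepancy beyond normal
    rounding error flags possibly tampered floating-point computation.
    """
    forward = sum(values)            # path 1: naive accumulation
    compensated = math.fsum(values)  # path 2: compensated summation
    if not math.isclose(forward, compensated, rel_tol=rel_tol, abs_tol=1e-12):
        raise ValueError(
            f"integrity check failed: {forward!r} vs {compensated!r}"
        )
    return compensated
```

Real integrity validation for engineering workloads would use diverse libraries or hardware rather than two paths in one runtime, but the pattern (redundant computation plus tolerance-based comparison) is the same.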
Nearly a year after a ransomware incident at Sandhills Medical Foundation, legal and class-action activity is advancing, illustrating the protracted downstream consequences healthcare breaches impose on providers and patients through litigation, notification duties and ongoing remediation costs. The Security Boulevard report details how plaintiffs and investigators continue to pursue accountability and damages, and it underscores that ransomware fallout often persists long after initial containment and remediation. The case serves as a reminder that incident response must include legal, regulatory and communications planning given the long tail of reputational and financial risk in healthcare. Source: Security Boulevard Verified: True
Enterprise Infrastructure
Major cloud and hyperscale players set a new record for AI-driven capital expenditure, with Google, Amazon, Microsoft and Meta reporting a combined quarterly capex around $130.65 billion largely directed at AI data centers and custom infrastructure, signaling sustained and massive investment in compute capacity. The New York Times coverage frames this spending surge as concentrating raw compute and model training capability with the largest providers, which accelerates AI capability development but also raises concerns about consolidation, access to advanced compute, and competitive dynamics for startups and smaller cloud users. Companies signaled plans to keep investing heavily this year, suggesting the infrastructure buildout for next-generation models will continue to be a dominant strategic and financial theme across the industry. Source: The New York Times Verified: True
Policy & Regulation
The Department of Defense announced agreements with eight major tech firms—SpaceX, OpenAI, Google, Microsoft, Nvidia, AWS, Oracle and Reflection—to use their AI tools on classified networks, a move that expands Pentagon AI partnerships beyond Anthropic after that company balked at contract terms allowing full military use. CNN’s reporting frames the decision as intensifying competition for Pentagon AI work while leaving Anthropic sidelined amid legal and policy disagreements, and it highlights how procurement terms and corporate stances on defense use can materially affect market access. The agreements are likely to accelerate integration of commercial AI capabilities into classified workflows but may also prompt debate about oversight, export controls and the role of private AI labs in military contexts. Source: CNN Verified: True
Negotiations between EU member states and European Parliament negotiators failed to produce a deal on a watered-down compromise version of the bloc’s proposed AI Act, leaving the legislation unresolved and prolonging regulatory uncertainty for AI vendors operating in Europe. Reuters reports the impasse stems from disagreements over scope, safety obligations and enforcement mechanisms, and the stalemate means providers face ambiguity on compliance expectations and timelines. The outcome increases the risk that companies will delay product rollouts or maintain conservative defaults in the EU market until clearer rules emerge, and it underscores the political difficulty of balancing innovation with safeguards across diverse member-state priorities. Source: Reuters Verified: True
Two UK laws—the Children’s Wellbeing & Schools Act and the Crime & Policing Act—received Royal Assent at the end of April, granting the Secretary of State expedited powers to amend the Online Safety Act and UK GDPR in ways that could strengthen protections for children and bring certain AI chatbots and “AI services” under online safety regulation. The legislative changes include potential authority to set age limits for digital consent and new duties to address AI-generated harms, accelerating the UK’s regulatory capacity to respond to risks tied to chatbots and automated services used by young people. Legal advisors and industry observers note the bills move the UK toward more active and flexible regulation of online harms and AI, while creating compliance imperatives for platform and AI service operators in the near term. Source: Two Birds Verified: True