You can chart a year through product launches, or you can measure it in the bigger moments that change the way we look at AI. The industry is constantly churning out news: major acquisitions, indie developer successes, public outcry against sketchy products, and existentially dangerous contract negotiations. It's a lot to untangle, so we're taking stock of where we are and where we've been so far this year.
Anthropic vs. the Pentagon
Once business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter stalemate in February as they renegotiated the contracts that dictate how the U.S. military can use Anthropic's AI tools.
Anthropic established a hard line against its AI being used for mass surveillance of Americans or to power autonomous weapons that can attack without human oversight. Meanwhile, the Pentagon has argued that the Department of Defense — which President Donald Trump’s administration calls the Department of War — should be permitted access to Anthropic’s models for any “lawful use.” Government representatives took offense to the idea that the military should be limited to the rules of a private company, but Amodei stood his ground.
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei wrote in a statement addressing the situation. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
The Pentagon gave Anthropic a deadline to agree to its contract terms. Hundreds of employees at Google and OpenAI signed an open letter urging their respective leaders to respect Amodei's limits and refuse to budge on autonomous weapons or domestic surveillance.
The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump directed federal agencies to phase out their use of Anthropic tools over a six-month transition period and called the AI company, which is valued at $380 billion, a “radical left, woke company” in an all-caps social media post. The Pentagon then moved to declare Anthropic a “supply chain risk,” a designation that is usually reserved for foreign adversaries and prevents any company that works with Anthropic from doing business with the U.S. military. (Anthropic has since sued to challenge the designation.)
Anthropic rival OpenAI then swooped in and announced that it had reached an agreement allowing its own models to be deployed in classified situations. It was a shock to the tech community, since reports had indicated that OpenAI would stick to Anthropic’s red lines governing use of AI for the military.
Public sentiment suggested that people found OpenAI's move fishy: ChatGPT uninstalls jumped 295% day-over-day after OpenAI announced its deal, and Anthropic's Claude shot to No. 1 in the App Store. OpenAI hardware executive Caitlin Kalinowski quit in response to the deal, saying that it was "rushed without the guardrails defined."
OpenAI told TechCrunch that it believes its agreement “makes clear [its] redlines: no autonomous weapons and no autonomous surveillance.”
As this saga plays out, it will have significant implications for how AI is deployed in war, potentially changing the course of history — you know, no big deal…
‘Vibe-coded’ app OpenClaw accelerates the turn to agentic AI
February was the month of OpenClaw, and its impact continues to reverberate. In quick succession, the vibe-coded AI assistant app went viral, spawned a bunch of spinoff companies, suffered from privacy snafus, and then got acquired by OpenAI. Even one of the companies built on OpenClaw, a Reddit-clone for AI agents called Moltbook, was recently acquired by Meta. This crustacean-themed ecosystem whipped Silicon Valley into a downright frenzy.
Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language via the most popular chat apps, like iMessage, Discord, Slack, or WhatsApp. There’s also a public marketplace where people can code and upload “skills” for people to add to their AI agents, making it possible to automate basically anything that can be done on a computer.
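The "skills" pattern described above can be sketched in a few lines. This is a purely illustrative toy, not OpenClaw's actual code: the registry, decorator, and keyword dispatch below are all hypothetical stand-ins for how a plugin marketplace might wire user-written skills into an agent.

```python
# Illustrative sketch of a skills registry: small plugins an agent can
# dispatch to by name. None of these names come from OpenClaw's codebase.

from typing import Callable

SKILLS: dict[str, Callable[[str], str]] = {}

def skill(name: str):
    """Register a function as a skill the agent can invoke by name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("remind")
def remind(args: str) -> str:
    return f"Reminder set: {args}"

@skill("search")
def search(args: str) -> str:
    return f"Searching the web for: {args}"

def dispatch(message: str) -> str:
    # A real agent would let an LLM choose the skill from the user's
    # natural-language message; here we fake it by matching the first word.
    verb, _, rest = message.partition(" ")
    handler = SKILLS.get(verb)
    return handler(rest) if handler else "No matching skill."

print(dispatch("remind buy milk at 5pm"))  # -> Reminder set: buy milk at 5pm
```

Because anyone can upload a skill and each skill runs with the agent's full permissions, the openness that makes the marketplace powerful is also what makes it risky — which is where the trouble starts.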
If that seems too good to be true, it’s because it kind of is. In order for an AI agent to be effective as a personal assistant, it needs to have access to your email, credit card numbers, text messages, computer files, etc. If it were to be hacked, a lot could go wrong, and unfortunately, there’s no way to fully secure these agents against prompt-injection attacks.
“It is just an agent sitting with a bunch of credentials on a box connected to everything — your email, your messaging platform, everything you use,” Ian Ahl, CTO at Permiso Security, told TechCrunch. “So what that means is, when you get an email, and maybe somebody is able to put a little prompt injection technique in there to take an action, [and] that agent sitting on your box with access to everything you’ve given it to can now take that action.”
One AI security researcher at Meta said that OpenClaw ran amok on her inbox, deleting all of her emails despite repeated calls to stop. “I had to RUN to my Mac mini like I was defusing a bomb” to physically unplug the device, she wrote in a now-viral post on X, which included images of the ignored stop prompts as receipts.
Despite the security risks, the technology piqued OpenAI’s interest enough for an acquihire.
Other tools built on OpenClaw, including Moltbook, ended up going more viral than OpenClaw itself.
In one instance, a post went viral in which an AI agent appeared to be encouraging its fellow agents to develop their own secret, end-to-end-encrypted language where they could organize amongst themselves without humans knowing.
But researchers soon revealed that the vibe-coded Moltbook wasn’t very secure, meaning that it was very easy for human users to pose as AIs to make posts that would trigger viral social hysteria.
Even though the discussion around Moltbook was more grounded in panic than reality, Meta saw something in the app and announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would join Meta Superintelligence Labs.
It seems strange that Meta would buy a social network where all of the users are bots. While Meta hasn’t revealed much about the acquisition, we theorize that owning Moltbook is more about gaining access to the talent behind it, who are enthusiastic about experimenting with AI agent ecosystems. CEO Mark Zuckerberg has said it himself: He thinks that one day, every business will have a business AI.
As we watch the hubbub around OpenClaw, Moltbook, and NanoClaw play out, it seems as though those who predicted an agentic AI future may be onto something, at least for now.
Chip shortages, hardware drama, and data center demands escalate
The harsh demands of the AI industry — which require computing power and data centers in unprecedented volumes — are reaching a point where the average consumer has no choice but to pay attention. Now, it may not even be possible for the industry to satisfy the astronomical demands for memory chips, and consumers are already seeing the prices of their phones, laptops, cars, and other hardware increase.
So far, analysts from IDC and Counterpoint have predicted that smartphone shipments, for example, will plummet about 12% to 13% this year; Apple has already raised MacBook Pro prices by up to $400.
Google, Amazon, Meta, and Microsoft are planning to spend up to a combined $650 billion on data centers alone this year, which is an estimated 60% increase from last year.
If the chip shortage doesn’t hit you in your wallet, it might hit your community at large. In the U.S. alone, nearly 3,000 new data centers are under construction, adding to the 4,000 already operating in the country. The need for laborers to build these data centers is significant enough that “man camps” have sprung up in Nevada and Texas, attempting to lure workers with the promise of golf simulator game rooms and steaks grilled on-demand.
Not only does data center construction have a long-term impact on the environment, but it also creates health hazards for nearby residents, polluting the air and impacting the safety of nearby water sources.
All the while, one of the most valuable hardware and chip developers, Nvidia, is reshaping its relationship with leading AI companies like OpenAI and Anthropic. Nvidia has been an ongoing backer of these companies, sparking concerns about the circularity of the AI industry and how much of those eye-popping valuations rest on recursive deals with each other. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and OpenAI then said it would buy $100 billion of Nvidia chips.
It was surprising, then, when Nvidia CEO Jensen Huang said his company would stop investing in OpenAI and Anthropic. He attributed the decision to the companies' plans to go public later this year, though that logic doesn't quite hold up, since investors typically pour in more money pre-IPO to capture as much upside as possible.