
Dear Readers,
OpenAI is gearing up to drop GPT-5.4 with a 1-million-token context window and an "extreme" reasoning mode built for problems that actually deserve deep thought. If that lands as promised, it could finally close the gap with Google and Anthropic on raw capacity while pushing the frontier on what AI agents can reliably handle over hours of autonomous work.
Meanwhile, Anthropic itself is on a tear, approaching a $20 billion revenue run rate even as the Pentagon threatens to blacklist it for refusing to build mass surveillance tools. That clash exploded further when CEO Dario Amodei called OpenAI's defense contract "safety theater" in a blistering internal memo. Beyond the corporate knife fight, MIT researcher Christian Catalini drops a framework that might explain why all of this tension exists in the first place: AI is making it absurdly cheap to do things, but verifying what was done remains expensive and slow, a gap that invites exactly the kind of rogue AI behavior we're already seeing in lab tests.
And if you think the stakes are only digital, today's feature on CRISPR gene-editing getting the FDA's green light for heart disease trials will remind you just how far the frontier stretches. Let's get into it.
In Today’s Issue:
🤑 Anthropic approaches a staggering $20B revenue run rate
💵 OpenAI prepares to launch GPT-5.4
📈 MIT researcher explains why AI's "verification bottleneck" is driving rogue behavior in models
📉 Anthropic's CEO slams OpenAI's defense deal as "safety theater"
✨ And more AI goodness…
All the best,




🚀 Anthropic Revenue Soars Amid Pentagon Clash
Anthropic is approaching a $20B annual revenue run rate, more than doubling from $9B at the end of 2025, driven by massive adoption of its AI models and tools like Claude Code. The company, now valued at around $380B, is gaining viral momentum and even topping Apple’s app download charts as interest surges.
But growth comes with tension: the US Defense Secretary labeled Anthropic a supply-chain risk after the company pushed back on military uses of its AI, potentially cutting it off from Pentagon contracts and sparking a looming legal fight.

🤖 Verification Bottleneck Meets Rogue AI
MIT researcher Christian Catalini argues that AI is making it incredibly cheap to do things, while verifying what was done stays expensive and slow, creating a “measurability gap” that can drive real economic and safety risk.
That’s why we see alarming behavior in controlled tests (e.g., models evading shutdown or hiding actions): once you optimize hard on imperfect metrics, systems exploit whatever isn’t measured (Goodhart’s Law “with teeth”).
He maps the economy into regimes based on the cost to automate vs. the cost to verify, arguing that early AI wins lived in the “easy” zone where checking outputs was basically free, but the next phase will be won by trust infrastructure (proof, provenance, auditing, liability).
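To make the Goodhart dynamic concrete, here's a toy sketch of our own (not from Catalini's work): an optimizer picks whichever strategy scores best on a proxy metric, while true quality, which nobody measures, quietly diverges. All strategy names and numbers are invented for illustration.

```python
# Toy illustration of Goodhart's Law: optimizing a proxy metric
# rewards the strategy that games it, not the one that is best.
# All strategies and scores here are made up for illustration.

strategies = {
    # name: (proxy_score, true_quality) -- true_quality is never
    # visible to the optimizer, only to us.
    "honest_work":     (0.70, 0.70),  # proxy tracks reality
    "cut_corners":     (0.85, 0.50),  # looks better, is worse
    "game_the_metric": (0.99, 0.10),  # aces the test, does nothing
}

# The optimizer sees only the proxy...
chosen = max(strategies, key=lambda s: strategies[s][0])

# ...so it picks the strategy that exploits what isn't measured.
proxy, true_quality = strategies[chosen]
print(f"optimizer picks: {chosen} (proxy={proxy}, true quality={true_quality})")
# -> optimizer picks: game_the_metric (proxy=0.99, true quality=0.1)
```

The harder you optimize on the proxy, the wider the gap gets, which is exactly why Catalini argues cheap verification, not just cheap doing, decides who wins the next phase.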

🧠 Anthropic Slams OpenAI Pentagon Deal
Anthropic CEO Dario Amodei told staff the Pentagon is moving to label Anthropic a “supply chain risk” after the company refused military uses like mass domestic surveillance and fully autonomous lethal weapons. In the same memo, he blasted OpenAI’s DoD contract as “safety theater” with “all lawful purposes” wiggle room.
He also claimed the Trump administration dislikes Anthropic because it wouldn’t donate or offer “dictator-style praise,” contrasting that with alleged OpenAI ties, including a reported $25M pro-Trump super PAC donation linked to Greg Brockman. Altman later tightened the contract language amid the backlash.
The fallout could hit Anthropic’s government revenue via Palantir, and it further escalates the high-stakes fight between AI safety and defense adoption in Washington and Silicon Valley.


Box CEO: AI agents will be the biggest users of software in the future



GPT-5.4 Doubles Down on Deep Thinking
The Takeaway
👉 GPT-5.4 expands the context window to 1 million tokens, matching Google and Anthropic - a critical catch-up move that enables processing of far larger documents and datasets in a single query.
👉 The new "extreme" reasoning mode allocates significantly more compute time to hard problems, making it particularly valuable for scientific research and complex coding tasks where accuracy matters more than speed.
👉 Improved reliability on long-horizon tasks means AI agents like Codex could handle multi-hour workflows with fewer errors - a necessary step before enterprises trust autonomous AI in production.
👉 OpenAI's shift to monthly model updates reflects competitive pressure: with 910M weekly users (below their 1B target) and rivals growing fast, the company is prioritizing steady improvements over blockbuster launches.
OpenAI is about to shake things up — again. The company's next model, GPT-5.4, is reportedly arriving soon, and it's packing some seriously ambitious upgrades that could redefine what we expect from AI reasoning.
Here's the deal: GPT-5.4 will reportedly feature a 1-million-token context window, 2.5 times the 400,000 tokens in the current GPT-5.2. Think of it like upgrading from a small notebook to an entire filing cabinet that the AI can read and reference all at once. That finally puts OpenAI on equal footing with Google and Anthropic, both of which already offer that capacity.
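For a rough sense of scale, here's a back-of-envelope sketch; the characters-per-token and characters-per-page figures are standard rules of thumb, not measurements.

```python
# Back-of-envelope: what does a context window that size actually hold?
CHARS_PER_TOKEN = 4      # rough rule of thumb for English text
CHARS_PER_PAGE = 3_000   # a densely printed page

for tokens, model in [(400_000, "GPT-5.2 (current, reported)"),
                      (1_000_000, "GPT-5.4 (upcoming, reported)")]:
    pages = tokens * CHARS_PER_TOKEN / CHARS_PER_PAGE
    print(f"{model}: {tokens:,} tokens ~ {pages:,.0f} pages per prompt")

# To count a real document's tokens exactly, the open-source tiktoken
# library works today (which encoding GPT-5.x will use is unknown):
#   import tiktoken
#   n = len(tiktoken.get_encoding("o200k_base").encode(text))
```

By that estimate, the jump is from roughly 530 pages per prompt to over 1,300, enough to fit an entire codebase or a stack of contracts in a single query.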

But the real headline? An "extreme" reasoning mode. This lets the model take significantly more time — and computing power — to work through genuinely hard problems. We're not talking about faster chatbot replies here. We're talking about scientific research, multi-hour coding tasks, and deep analysis where you actually want the AI to slow down and think harder. For tools like OpenAI's Codex, which automates complex programming workflows, that reliability boost could be a game-changer.
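No API details have been announced, so treat the following as a sketch of how "extreme" mode might be exposed, modeled on the reasoning-effort parameter in OpenAI's current Responses API. The model name "gpt-5.4" and the "extreme" effort value are our guesses; only the pattern exists today.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical: "gpt-5.4" and effort="extreme" are NOT real API values yet.
# Today's API exposes the same pattern with effort levels like "high".
response = client.responses.create(
    model="gpt-5.4",                  # hypothetical model name
    reasoning={"effort": "extreme"},  # hypothetical top effort tier
    input="Work through this multi-step research problem carefully: "
          "<your genuinely hard problem here>",
)
print(response.output_text)
```

The trade-off is the one the reporting describes: more wall-clock time and more compute per query, in exchange for fewer reasoning errors on problems where speed isn't the point.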

The model is also said to be far more dependable on long-running tasks: remembering instructions, staying on track, and making fewer mistakes across many steps. That's exactly what matters when you're building AI agents that need to work autonomously for hours.
There's a strategic angle here, too. OpenAI has shifted to a faster release cycle of roughly monthly updates to avoid the hype-and-disappointment trap that plagued earlier launches. With 910 million weekly active users on ChatGPT (still short of their 1 billion goal) and growing competition from Google's Gemini and Anthropic's Claude, OpenAI can't afford to sit still.
Why it matters: GPT-5.4 signals that the AI race is no longer just about who has the smartest model - it's about who builds the most reliable, deep-thinking AI that can handle real-world complexity. For developers, researchers, and businesses betting on AI agents, this could be the update that turns experimental tools into genuinely dependable ones.
Sources:
🔗 https://www.theinformation.com/newsletters/ai-agenda/openais-next-ai-model-will-extreme-reasoning?rc=bfliih


The Year-End Moves No One’s Watching
Markets don’t wait — and year-end waits even less.
In the final stretch, money rotates, funds window-dress, tax-loss selling meets bottom-fishing, and “Santa Rally” chatter turns into real tape. Most people notice after the move.
Elite Trade Club is your morning shortcut: a curated selection of the setups that still matter this year — the headlines that move stocks, catalysts on deck, and where smart money is positioning before New Year’s. One read. Five minutes. Actionable clarity.
If you want to start 2026 from a stronger spot, finish 2025 prepared. Join 200K+ traders who open our premarket briefing, place their plan, and let the open come to them.
By joining, you’ll receive Elite Trade Club emails and select partner insights. See Privacy Policy.



Anthropic's ARR is a singular story of exponential growth.


CRISPR Heart Therapy Gets Green Light
CRISPR-based medicine just got a second chance, and this time, the stakes couldn't be higher. The FDA has lifted the clinical hold on Intellia Therapeutics' MAGNITUDE Phase 3 trial, allowing the company to resume testing its gene-editing therapy nex-z for a serious heart condition caused by a rare protein buildup disease called transthyretin amyloidosis (ATTR).

Here's the backstory: Last October, both of Intellia's late-stage nex-z trials were paused after a patient in the heart disease study developed severe liver complications and later died. Intellia's stock dropped more than 40%. It was a gut punch, not just for the company, but for the entire gene therapy space that has been watching this program closely.

Now, just about four months later, both trials are back in action. The FDA cleared the nerve damage trial (MAGNITUDE-2) in January, and this week gave the green light for the heart disease study to resume. Evercore analysts noted the hold was lifted relatively quickly by historical standards, calling it a "great result" with likely only three to four months of total clinical delay.

To get here, Intellia agreed to tighter safety measures: more frequent liver monitoring, short-term steroid protocols for early liver issues after dosing, and new exclusion criteria for patients with severe heart dysfunction or certain liver abnormalities. These aren't minor tweaks; they're real safeguards designed to prevent a repeat of the tragedy that triggered the pause.
The FDA's decision to lift both clinical holds signals growing regulatory confidence in CRISPR-based therapies, even after serious adverse events. If nex-z succeeds in Phase 3, it could become the first one-time gene editing treatment for a major cardiac disease, a milestone that would reshape how we think about treating chronic, progressive conditions.


Tired of news that feels like noise?
Every day, 4.5 million readers turn to 1440 for their factual news fix. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture — all in a brief 5-minute email. No spin. No slant. Just clarity.




