
In Today’s Issue:
🌌 A deep dive into NVIDIA Cosmos with NVIDIA's VP of Research, Ming-Yu Liu
🧪 GPT-5.2 now powers ChatGPT's "Deep Research" feature
🧬 Google DeepMind's Isomorphic Labs unveiled a new engine that more than doubles the accuracy of AlphaFold 3
🎙️ The new Expressive Mode brings ultra-low latency and human-level emotional nuance to 70+ languages
🏛️ Experts argue we have officially moved from "Co-pilot" to "Autopilot."
✨ And more AI goodness…
Dear Readers,
A Chinese video model just passed what many consider the Turing test for AI-generated footage. SeeDance 2.0 produces clips so physically accurate that even seasoned researchers can't tell them apart from reality.
But that's just one thread today: OpenAI supercharges Deep Research with GPT-5.2, ElevenLabs crosses the uncanny valley for voice across 70+ languages, and Isomorphic Labs claims to have leapfrogged AlphaFold 3 in drug design. We also unpack Matt Shumer's sharp argument that February 2026 marked the moment AI shifted from copilot to full autopilot, with models now accelerating their own development.
And don't miss today's video highlight: our editor-in-chief, Kim Isenberg, sat down yesterday with NVIDIA's VP of Research, Ming-Yu Liu, for a deep dive into NVIDIA Cosmos, one of our most exciting interviews yet.
All the best,

Kim Isenberg



💊 Faster, Cheaper Drug Design
Isomorphic Labs unveiled its IsoDDE engine, claiming 2×+ accuracy gains over AlphaFold 3, 2.3× better antibody predictions, and binding-affinity results that beat physics-based gold standards—all at a fraction of the time and cost. The platform generalizes to truly novel biology, discovers cryptic drug pockets from sequence alone, and scales AI-driven design from small molecules to complex biologics, signaling a major leap toward fully in-silico drug discovery. Built with insights from Google DeepMind, IsoDDE aims to unlock hard targets faster and cheaper—often in seconds.
🎙️ ElevenLabs Expressive Mode Expands
ElevenLabs has launched Expressive Mode, scaling emotional nuance across 70+ languages and significantly improving voice delivery in dialects where natural tone previously lagged, including Hindi. The upgrade lets teams deploy conversational agents that sound on-brand, react in real time, and adapt to customer emotion, making interactions feel genuinely helpful instead of transactional. Latency is strikingly low, and the delivery sounds convincingly human. We have crossed the threshold. (A minimal sketch of the usual SDK path is below.)
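For teams who want to experiment, here is a minimal sketch using the standard text-to-speech path in ElevenLabs' Python SDK. One assumption to flag: the announcement doesn't name a model identifier for Expressive Mode, so the current public "eleven_multilingual_v2" model stands in, and the API key and voice id are placeholders you'd replace with your own.

```python
# Minimal sketch of the standard ElevenLabs SDK text-to-speech path.
# ASSUMPTION: no model id for Expressive Mode has been published, so the
# public "eleven_multilingual_v2" model is used as a stand-in here.
from elevenlabs.client import ElevenLabs
from elevenlabs import play

client = ElevenLabs(api_key="YOUR_API_KEY")

# Convert text to speech; the SDK returns the audio as a byte stream.
audio = client.text_to_speech.convert(
    text="Hello! How can I help you today?",
    voice_id="YOUR_VOICE_ID",
    model_id="eleven_multilingual_v2",
    output_format="mp3_44100_128",
)

play(audio)  # local playback; requires ffmpeg to be installed
```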

🔍 GPT-5.2 Supercharges Deep Research
OpenAI announced that Deep Research in ChatGPT is now powered by GPT-5.2, rolling out with major upgrades aimed at power users and professionals. The update enables app integrations and site-specific searches, real-time progress tracking with interrupt capability, and full-screen report views, making research more interactive, transparent, and workflow-friendly.
The announcement drew nearly 1 million views within hours, and the rollout signals OpenAI’s push to turn ChatGPT into a more robust research assistant that plugs directly into users’ tools and delivers structured, monitorable output in real time.


Another xAI co-founder is leaving, and he's departing with the kind of grand pronouncement you currently hear from nearly every major figure in the field:
“We are heading to an age of 100x productivity with the right tools. Recursive self improvement loops likely go live in the next 12mo. It’s time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species.”


Intro to NVIDIA Cosmos: Ming-Yu Liu in conversation with Superintelligence editor-in-chief Kim Isenberg



From Copilot to Full Autopilot
The Takeaway
👉 AI progress has entered a new phase where models can complete complex, multi-step tasks end-to-end with minimal human input.
👉 The February 5, 2026 releases of GPT-5.3 Codex and Claude Opus 4.6 marked a visible leap in autonomy and capability.
👉 AI systems are now contributing to their own development, accelerating the pace of improvement through feedback loops.
👉 Task-length benchmarks show rapid expansion in how long AI can work independently, signaling growing real-world impact across knowledge industries.
In February 2020, the world looked normal until it suddenly wasn’t. Matt Shumer argues we’re in the same “this seems overblown” phase again, except this time the wave is AI, and it’s already hitting knowledge work. He expects significant labor-market impact as early as 2026.
The trigger, he says, is the February 5, 2026 one-two punch of OpenAI’s GPT-5.3 Codex and Anthropic’s Claude Opus 4.6: models that don’t just help with tasks but can run multi-step projects with far less hand-holding. The scary part isn’t any single demo; it’s the feedback loop. OpenAI states Codex helped accelerate its own development, and Anthropic is pushing longer “agentic” work with bigger context windows. The takeoff has already started.

Zoom out and the data agrees: METR tracks how long a real-world task an AI can reliably complete end-to-end, and that horizon has been climbing fast. If the curve keeps bending upward (see the quick extrapolation below), “AI as colleague” becomes “AI as team” sooner than most organizations are ready for. What was long considered a vision of the future is now becoming tangible: AI is changing everything, and 2026 will be the tipping point.
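For intuition on how quickly that curve compounds, here is a rough back-of-envelope sketch in Python. Two assumptions to flag: the ~7-month doubling time is the historical rate METR has reported for the 50%-success task horizon, and the 60-minute starting horizon is purely illustrative.

```python
# Back-of-envelope extrapolation of the METR task-length trend.
# ASSUMPTIONS: the 50%-success task horizon doubles roughly every
# 7 months (METR's reported historical rate); the 60-minute starting
# horizon is illustrative, not a measured value.

def horizon_after(months: float, start_minutes: float = 60.0,
                  doubling_months: float = 7.0) -> float:
    """Extrapolated task horizon in minutes after `months` of progress."""
    return start_minutes * 2 ** (months / doubling_months)

for months in (0, 12, 24, 36):
    hours = horizon_after(months) / 60
    print(f"+{months:>2} months: ~{hours:.1f} hours of autonomous work")
```

Under those assumptions, a one-hour horizon grows to roughly a full workday within two years; that is the compounding behind the “AI as team” framing.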
Why it matters: This is the moment automation shifts from narrow workflows to end-to-end cognition on a screen. The winners won’t be the smartest; they’ll be the fastest to reorganize work around these agents.
Source:
🔗 https://x.com/mattshumer_/status/2021256989876109403


Unlock ChatGPT’s Full Power at Work
ChatGPT is transforming productivity, but most teams miss its true potential. Subscribe to Mindstream for free and access 5 expert-built resources packed with prompts, workflows, and practical strategies for 2025.
Whether you're crafting content, managing projects, or automating work, this kit helps you save time and get better results every week.


SeeDance 2.0 hype reaches new heights
Since the hype surrounding SeeDance 2.0 simply won't die down, we're taking another look today at just how significant this new model really is. It seems a Chinese model has, for the first time, overtaken US dominance in AI, at least in the text-to-video sector. Many are speechless, and rightly so. The Turing test for video models has been passed: for the most part, the clips are indistinguishable from reality.
The physical accuracy reproduced by the model is impressive.
Anime in the real world? No problem!
And even Professor Ethan Mollick's otter test was passed, with the clip generated almost perfectly in one shot.
Last but not least: for now, it's still possible to sidestep IP rights entirely and use copyrighted material. But see for yourself; this Dragon Ball clip could run on Crunchyroll exactly as it is.


Learn how to make AI work for you
AI won’t take your job, but a person using AI might. That’s why 2,000,000+ professionals read The Rundown AI – the free newsletter that keeps you updated on the latest AI news and teaches you how to use it in just 5 minutes a day.










