In Today’s Issue:

🖱️ DeepMind reimagines the mouse pointer

🧠 Sutskever's SSI stake shows frontier valuation pressure

🧬 Isomorphic raises $2.1B for AI drug design

🩺 AI diagnosis research moves toward clinical testing

📊 Most people still have not used AI directly

And more AI goodness…

⚡ The Signal

AI is moving into the operating layer.

DeepMind's AI pointer is a small interface idea with a big implication: the next wave of AI may not wait for prompts in a separate chat box. It will read context where work is already happening. Today's issue follows that shift from screen control to scientific work: frontier labs are still drawing extreme valuations, drug discovery is attracting serious capital, diagnosis research is testing what reasoning models can do, and the usage graph shows how much of the world has not yet touched AI directly.

All the best,

Kim Isenberg

🤝 Tech-CEOs Join Trump China Trip

Trump is bringing more than a dozen US executives to Beijing, including Elon Musk, Tim Cook, Jensen Huang, and Larry Fink, as he prepares to meet Xi Jinping amid tense fights over chips, tariffs, and Iran. Huang's late addition stands out because Nvidia sits at the center of the US-China AI rivalry, making the delegation feel less like a diplomatic entourage and more like a map of American economic pressure points. It may also reflect pressure from Huawei's growing domestic chip push as Nvidia tries to keep selling GPUs into China.

👉 tl;dr: Trump’s China trip is packed with top US business leaders, with Nvidia’s Jensen Huang becoming the most politically charged addition.

🧬 Isomorphic Raises $2.1B for AI Drug Design

Demis Hassabis's (CEO of Google DeepMind) Isomorphic Labs announced a $2.1 billion Series B led by Thrive Capital, with backing from Alphabet, GV, MGX, Temasek, CapitalG, and the UK Sovereign AI Fund. The company says the money will scale its AI drug design engine and move its drug candidate pipeline forward across multiple therapeutic areas.

👉 tl;dr: AI drug discovery is moving from model promise into a capital-heavy race to build pipelines, partnerships, and clinical proof.

🤖 Japan Tests a Human-Free Robot Lab

Interesting Engineering reports that the Institute of Science Tokyo's Robotics Innovation Center is operating a medical-research lab with 10 robots and no on-site human staff! The setup includes the Maholo LabDroid for delicate lab procedures, and the university says it wants to expand toward roughly 2,000 research robots by 2040.

👉 tl;dr: Lab automation is starting to look less like one smart instrument and more like a full robotic research workforce.

Ask AI to convert a messy screen, document, or workflow into an action map before you ask for an answer.

Why it helps: Today's issue is about AI moving closer to context: the pointer sees the screen, diagnosis models reason over case notes, and drug-design systems work inside scientific pipelines. A good prompt should do the same by first naming the objects, constraints, and next actions already in front of you.

Try this: "Look at this page, note, screenshot, or workflow: [paste or upload it]. Identify the key objects, the decision I need to make, the missing context, and the next three actions I can take without leaving this workflow."
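If you reuse this tip often, the template can be wrapped in a small helper. This is a minimal sketch: the function name and structure are illustrative, not part of any product or API — it only builds the prompt string from the tip above, ready to paste into whichever assistant you use.

```python
def action_map_prompt(context: str) -> str:
    """Wrap pasted screen, note, or workflow text in the
    action-map prompt from the tip above. Purely a string
    template; no model call is made here."""
    return (
        f"Look at this page, note, screenshot, or workflow: {context}\n"
        "Identify the key objects, the decision I need to make, "
        "the missing context, and the next three actions I can take "
        "without leaving this workflow."
    )


# Example: feed it a pasted snippet of whatever is on screen.
prompt = action_map_prompt("Q3 invoice spreadsheet, three overdue rows")
print(prompt)
```

The point of the wrapper is the ordering: context first, then a fixed ask for objects, decision, gaps, and next actions, so the model grounds itself before answering.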

🎬 Watch This

Unitree Unveils GD01, a Manned Transformable Mecha

Why it’s worth your time:

Unitree's new GD01 demo is a blunt reminder that the robotics race is not only about lab automation or humanoid assistants. It shows a pilot-carrying machine moving in bipedal form, switching into a quadruped stance, and testing force output in public.

Best bit:

The interesting part is the transition itself: Unitree is treating legged mobility as a configurable platform, not a single fixed body plan.

Watch if you care about:

Robotics / physical AI / China hardware / humanoids / mobility platforms

“We’re going through the single largest infrastructure buildout in human history.”

Anthropic may want the tools layer.

Anthropic is reportedly in advanced talks to buy Stainless, the developer-tools startup used by companies including OpenAI and Google, for at least $300 million, according to The Information via an Investing summary. The deal is still unconfirmed, but the signal is clear enough to watch: frontier labs increasingly want control over the SDK, API, and agent infrastructure around their models, not only the models themselves.

The AI Cursor Arrives

The Takeaway

👉 Google DeepMind published experimental demos of an AI-enabled pointer powered by Gemini.

👉 The prototype is meant to understand what a user is pointing at, not just where the cursor sits on screen.

👉 Google says the idea is already moving into Gemini in Chrome, Googlebook's Magic Pointer, and future Labs concepts.

👉 The bigger shift is that AI interfaces are starting to use gesture, speech, and screen context together instead of asking users to write better prompts.

DeepMind is trying to make the cursor a context sensor. The company says the familiar mouse pointer has barely changed in decades, even though the work around it has become more visual, app-heavy, and fragmented. Its AI pointer experiment uses Gemini to understand what someone is pointing at and why that object matters in the current task.

The design goal is simple but cool: keep the user inside the flow. Instead of copying text, screenshots, tables, or images into a separate chat window, a person could point at a PDF and ask for email-ready bullets, hover over a table and request a chart, or point at a place in a photo and ask for directions. The important move is not the cursor itself. It is the idea that AI should inherit context from the screen!

Google is already tying the concept to real products. DeepMind says the principles are being woven into Chrome, the new Googlebook laptop experience, and future Google Labs concepts, including demos in Google AI Studio. That makes this less like a one-off research toy and more like a public sketch of how Google wants Gemini to live inside everyday computing.

Why it matters: This is important because prompt-writing is still a tax on AI use. If the interface can understand what "this" and "that" mean on screen, more people may use AI without feeling like they are operating a separate tool.

Sources:
🔗 https://deepmind.google/blog/ai-pointer/

The Architecture Behind AI-Native Revenue Automation

In our new white paper, The Architecture Behind AI-Native Revenue Automation, Tabs CTO Deepak Bapat breaks down what it actually takes to apply AI to revenue workflows without breaking the books.

You’ll learn why probabilistic reasoning isn’t enough for finance, how Tabs pairs LLMs with deterministic logic, and why a unified Commercial Graph is the foundation for scalable, audit-ready automation. From contract interpretation to cash application, this paper goes deep on where AI belongs—and where it absolutely doesn’t.

If you’re evaluating AI for billing, collections, or revenue operations, this is the architecture perspective most vendors won’t show you.

The chart: Each dot represents about 3.3 million people, with a May 2026 estimate splitting the world into non-users, free chatbot users, paid AI subscribers, and coding-scaffold users.

The visual: Direct AI use is still a minority behavior: the chart shows roughly 6.47B people as never having used AI directly, about 1.75B as free chatbot users, about 60M paying $20/month for AI, and about 10M using coding scaffolds.

The lesson: AI can feel ubiquitous inside tech, but direct, intentional usage is still early. The interface race matters because most future adoption may come through everyday surfaces, not people opening a chatbot on purpose.

The caveat: The image excludes passive or embedded AI exposure, so it is a snapshot of recognizable direct interaction, not total AI contact.
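To make the chart's proportions concrete, here is a quick arithmetic sketch using the approximate figures read off the visual (these are the chart's rounded estimates, not an official dataset): the four groups sum to roughly 8.29 billion people, and direct AI use covers only about a fifth of them.

```python
# Approximate group sizes from the chart (illustrative, not official data).
groups = {
    "never used AI directly": 6.47e9,
    "free chatbot users": 1.75e9,
    "paid AI subscribers": 60e6,
    "coding-scaffold users": 10e6,
}

total = sum(groups.values())   # ~8.29 billion people
people_per_dot = 3.3e6         # each dot represents ~3.3 million people

for name, n in groups.items():
    share = 100 * n / total
    dots = round(n / people_per_dot)
    print(f"{name}: {share:.1f}% of total (~{dots} dots)")
```

Run this way, the non-user group works out to roughly 78% of the total, free chatbot users to about 21%, and paid subscribers and coding-scaffold users to under 1% combined, which is the "still early" point the lesson above makes.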

AI Diagnosis Moves Toward Clinical Testing

⚡ Bottom line: A Science study tested how a large language model handled physicians' reasoning tasks, including difficult real-world emergency-room cases.

💡 Why it matters: The research points to AI as a second-opinion layer for complex diagnosis, but not as an independent replacement for clinicians.

🔎 What it means: The next research question is not whether AI can suggest a diagnosis in a clean text case. It is how safely it can help inside messy clinical workflows.

Researchers led by Harvard and Beth Israel Deaconess evaluated OpenAI's o1-preview on medical reasoning tasks, including de-identified data from 76 Boston emergency-room patients (yes, a model that is now over a year old). Science News reports that the model was more likely than physicians to include the correct diagnosis, or a very close one, among its possible answers.

The strongest result is also the reason for caution. The model can scan broad patterns and surface rare possibilities that clinicians might not think of quickly, including one dangerous infection case where researchers said it raised suspicion earlier than the human team. But critics note that clinical reasoning is not the same thing as model reasoning: doctors must juggle uncertainty, patient communication, physical exams, incomplete data, and risk.

AI diagnosis is becoming testable infrastructure, but not medical magic. The path forward is clinical trials, careful workflow design, and guardrails that make the model an extension of the physician rather than a hidden decision-maker. Still, this shows how useful reasoning models could already become in clinical practice. And they are just getting better.

Real-World Ads, Simple to Run

With AdQuick, executing Out Of Home campaigns is as easy as running digital ads. Plan, deploy, and measure your real-world advertising effortlessly — so your team can scale campaigns and maximize impact without the headaches.
