
In Today’s Issue:
🧠 Thinking Machines released interaction models
📊 Hermes leads the agent leaderboard
☁️ Claude lands on AWS
💸 Cerebras IPO demand surges
🤖 AI and robotics move toward manipulation
✨ And more AI goodness…
⚡ The Signal
AI is leaving the prompt box.
Today’s main story is Thinking Machines’ research preview of interaction models: AI systems designed to listen, see, speak, reason, and respond in real time instead of waiting for a clean turn. The shift is not just smarter answers; it is presence: models that can notice pauses, interruptions, visual cues, tool results, and messy human feedback as work is happening. Around that same shift, agent usage is becoming a live market signal, Claude is moving deeper into AWS procurement, Cerebras is testing public-market appetite, and robotics models are learning to reason about the physical world.
All the best,

Kim Isenberg



💸 Cerebras IPO Demand Surges
Reuters reports that Cerebras, known for its inference chips, is preparing to raise its IPO price range to $150 to $160 per share after investor demand surged ahead of pricing. The chipmaker is pitching itself as the first public pure-play alternative to Nvidia for AI compute, but its own filing still shows the tension behind the story: a huge backlog, fast revenue growth, and heavy dependence on a small number of large customers.
👉 tl;dr: The market is hungry for public AI compute exposure, but Cerebras still has to prove its wafer-scale bet can diversify beyond a few giant deployments.

🧠 Sutskever Reveals SSI Stake
Former OpenAI chief scientist Ilya Sutskever said in court testimony that his ownership stake in Safe Superintelligence is worth nearly $7 billion, according to Reuters. The number matters less as personal wealth gossip than as a signal of how investors are valuing safety-first frontier labs before they have the public product footprint of OpenAI, Anthropic, or Google.
👉 tl;dr: SSI is still deliberately quiet, but Sutskever's testimony shows the market is assigning enormous value to its bet on a single superintelligence-focused research path.

☁️ Claude Lands Inside AWS
AWS says Claude Platform on AWS is now generally available, giving companies direct access to Anthropic's native Claude Platform through their existing AWS accounts. The important distinction is that this is not just Claude on Bedrock: customers get Anthropic-operated platform features like Managed Agents, web search, web fetch, code execution, Skills, MCP connectors, batch processing, prompt caching, and the Claude Console while keeping AWS billing, IAM, and CloudTrail visibility.
👉 tl;dr: Claude just became much easier for AWS-heavy enterprises to buy, govern, and test without losing access to Anthropic's fastest-moving platform features.


Ask AI to turn one vague workflow into a deployment map before you buy another tool.
Why it helps: Most AI projects fail because teams start with a model or vendor, not with the work that actually needs to change. A deployment map forces the useful questions early: who does the work now, what data is needed, where human approval stays, and how success will be measured.
Try this: "Here is a workflow I want to improve: [paste it]. Map it into five parts: current steps, where AI could help, data or tool access needed, human approvals that should remain, and the smallest pilot that would prove value in two weeks."


🎬 Watch This
NEO Factory in Hayward, California
Why it’s worth your time:
1X's April 30 factory video is one of the cleaner robotics proofs of the week: humanoids are no longer just staged demos; they are entering production lines, supply chains, and internal test fleets.
Best bit:
The video shows the manufacturing side of physical AI, including how 1X is trying to scale NEO from preorder hype into a real product with a 58,000-square-foot Hayward factory and a 10,000-unit annual capacity target.
Watch if you care about:
AI robots / humanoids / physical AI / manufacturing / OpenAI-backed startups / consumer robotics


Thinking Machines, in its May 11 interaction models announcement. The next model race is not only about intelligence, but about whether AI can collaborate in real time.


The DeployCo economics are already raising eyebrows. OpenAI's announcement says the new company launches with more than $4 billion in initial investment, but Axios reports additional investor terms not included in the announcement: a guaranteed minimum 17.5% return and capped profits. That does not make the venture weak. It does make the structure worth watching, because frontier labs are increasingly using private equity portfolios as both funding source and distribution channel.


AI That Talks Before You Finish
The Takeaway
👉 Thinking Machines announced a research preview of interaction models, a new architecture built for real-time human-AI collaboration.
👉 The lead model, TML-Interaction-Small, is designed to process audio, video, and text continuously rather than waiting for a clean user turn.
👉 The system combines a fast interaction model with an asynchronous background model for deeper reasoning, browsing, tools, and longer tasks.
👉 The bigger signal is that interactivity is becoming a model capability, not just a product wrapper.
Thinking Machines is attacking the one thing that still makes AI feel awkward: turn-taking. On May 11, Mira Murati's lab announced a research preview of interaction models: AI systems designed to handle audio, video, and text as continuous streams, so collaboration feels closer to a live conversation than a prompt-response loop.
The technical shift is the model's sense of time. Instead of waiting for a complete prompt, Thinking Machines uses a multi-stream, micro-turn design that processes roughly 200ms chunks of input and output at the same time. That lets the model backchannel while someone is speaking, respond to visual cues, handle interruptions, and keep listening while a background model performs deeper reasoning or tool work.
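The micro-turn idea above can be sketched in miniature. This is an illustrative asyncio toy under stated assumptions, not Thinking Machines' implementation: the utterance-boundary rule, the "mhm" backchannel token, and the `background_reasoner` helper are all hypothetical stand-ins. The point it shows is the split between a fast path that reacts to every small chunk and a slower background task that does deeper work without blocking listening.

```python
import asyncio

CHUNK_MS = 200  # micro-turn size, matching the roughly 200ms figure above

async def background_reasoner(transcript: str) -> str:
    # Stand-in for the slower background model: deeper reasoning,
    # browsing, or tool use running off the hot path.
    await asyncio.sleep(0.05)
    return f"considered: {transcript.strip()}"

async def interaction_loop(chunks):
    """Consume input chunks continuously: backchannel on every micro-turn
    instead of waiting for a complete prompt, and hand each finished
    utterance to a background task while listening continues."""
    transcript, events, pending = "", [], []
    for chunk in chunks:
        transcript += chunk
        events.append("mhm")  # fast-path backchannel while the user speaks
        if chunk.endswith("."):  # toy utterance boundary: start deep reasoning
            pending.append(asyncio.create_task(background_reasoner(transcript)))
            transcript = ""
    for task in pending:  # background results arrive once they are ready
        events.append(await task)
    return events

events = asyncio.run(interaction_loop(["let's ", "refactor ", "this."]))
```

A real system would drive this loop from live audio and video streams and emit speech rather than strings, but the control structure is the same: the model never waits for a clean turn before responding.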
This is bigger than voice UX. If AI is going to help with coding, support, tutoring, lab work, robotics, design, or operations, it cannot require perfectly packaged instructions. It has to stay present while humans are still thinking. Thinking Machines is betting that the next frontier is not just intelligence, but interaction itself.
Why it matters: This is a direct challenge to the chat interface. If interaction models work, the best AI products may feel less like asking a tool for output and more like working with someone who can see, hear, interrupt, wait, and keep context alive.
Sources:
🔗 https://thinkingmachines.ai/blog/interaction-models/
🔗 https://venturebeat.com/technology/thinking-machines-shows-off-preview-of-near-realtime-ai-voice-and-video-conversation-with-new-interaction-models
🔗 https://techcrunch.com/2026/05/11/thinking-machines-wants-to-build-an-ai-that-actually-listens-while-it-talks/


Attio is the AI CRM for high-growth teams.
Connect your email, calls, product data and more, and Attio instantly builds your CRM with enriched data and complete context. Whether you’re running product-led growth or enterprise sales, Attio adapts to your unique GTM motion.
Then Ask Attio to plan your next move.
Run deep web research on prospects. Update your pipeline as you work. Find customers and draft outreach emails. Powered by Universal Context, Attio's intelligence layer, Attio searches, updates, and creates across your data to accelerate your workflow.
Ask more from your CRM.



The chart: OpenRouter's agent leaderboard shows Hermes Agent at #1 with 224B tokens, ahead of OpenClaw at 186B, Kilo Code at 130B, Claude Code at 51.5B, and Lemonade at 33.2B.
The visual: A captured OpenRouter agent-ranking screenshot, showing token usage across leading personal and coding agents.
The lesson: Agent usage is becoming a market signal. The leaders are not generic chat apps, but personal and CLI agents that sit close to real work: coding, automation, files, messaging, and repeatable workflows.
The caveat: OpenRouter rankings measure opted-in token usage on one routing platform, not total market share. The numbers move quickly, so the screenshot should be treated as a captured moment, not a permanent league table.


Robots Get Their Own Brains
⚡ Bottom line: This week's robotics signal is Genesis AI's GENE-26.5: a foundation-model push for dexterous, human-scale manipulation.
💡 Why it matters: The next AI interface is not only chat. It is models that can understand the physical world well enough to move, grasp, inspect, and adapt.
🔎 What it means: Robotics is moving from scripted demos toward general-purpose systems, but the real bottleneck is still physical data, safety, and reliable manipulation.
On May 6, 2026, Genesis AI unveiled GENE-26.5, a robotics foundation model paired with a human-scale dexterous hand and a data engine meant to teach robots complex manipulation at scale. The company says the system is built for long-horizon physical tasks, not just isolated pick-and-place demos.
A useful recent comparison point is Google DeepMind's Gemini Robotics-ER 1.6, announced on April 14, 2026 and made available to developers via the Gemini API and Google AI Studio. It focuses less on dexterous hands and more on embodied reasoning: spatial understanding, multi-view success detection, safety constraints, and instrument reading. Meta is also moving into the category after acquiring Assured Robot Intelligence on May 1, 2026, a startup focused on AI models for humanoid robots.
Robotics is becoming the place where multimodal AI has to prove it can do more than talk. A robot has to know what it is seeing, whether a task is finished, when a movement is unsafe, and how to adapt when the world does not match the prompt.
Sources:
https://www.prnewswire.com/news-releases/genesis-ai-unveils-gene-26-5--the-first-ai-brain-to-enable-robots-with-human-level-physical-manipulation-capabilities-302763638.html
https://www.youtube.com/watch?v=6K_bGH54ltI
https://deepmind.google/blog/gemini-robotics-er-1-6/
https://techcrunch.com/2026/05/01/meta-buys-robotics-startup-to-bolster-its-humanoid-ai-ambitions/


Your ads ran overnight. Nobody was watching. Except Viktor.
One brand built 30+ landing pages through Viktor without a single developer.
Each page mapped to a specific ad group. All deployed within hours. Viktor wrote the code and shipped every one from a Slack message.
That same team has Viktor monitoring ad accounts across the portfolio and posting performance briefs before the day starts. One colleague. Always on. Across every account.




