
In Today’s Issue:
📄 Zhipu AI releases a lightweight, open-source OCR model that hits 94.62 on OmniDocBench
⚡ Google becomes its own utility, effectively "bringing its own generation" to bypass public grid congestion for AI
📉 Leaked reports suggest OpenAI is testing inference hardware from AMD, Cerebras, and Groq
🧪 Users report a quiet 50% reduction in "thinking" limits for Plus/Business tiers
✨ And more AI goodness…
Dear Readers,
AI agents are hiring humans now - no, really. RentAHuman.ai just launched and within hours, over 130 people signed up to be "rented" by autonomous AI systems for physical tasks, from package pickups to hardware installations, all booked via API call. It's the kind of headline that sounds like satire until you realize it's live infrastructure.
But that's not the only plot twist today: OpenAI is quietly shopping for alternatives to Nvidia's chips, frustrated with inference performance as they push tools like Codex harder. Google, meanwhile, just dropped $4.75 billion to buy its own renewable energy pipeline - because in 2026, winning AI means winning the power game.
And then there's StepFun's Step 3.5 Flash, a 196B-parameter model that activates only 11B parameters per token yet outperforms trillion-parameter giants, runs on consumer hardware, and is fully open-source. Efficient, local-first, frontier-capable AI for everyone? Let's dig in.
All the best,




📄 GLM-OCR Tops Complex Documents
GLM-OCR is an open-source, multimodal OCR system from Zhipu AI, built on a GLM-V encoder–decoder stack (CogViT visual encoder + cross-modal connector + GLM-0.5B decoder) and a two-stage pipeline (PP-DocLayout-V3 layout analysis + parallel recognition) to handle messy real-world documents such as tables, formulas, code, and seals. It claims the #1 spot on OmniDocBench V1.5 with a score of 94.62. The release also highlights practical deployment wins: only ~0.9B parameters, support for vLLM/SGLang/Ollama/Transformers, and a speed test showing 1.86 PDF pages/sec and 0.67 images/sec in the stated single-replica, single-concurrency setup - all aimed at cheaper, faster production OCR with stronger layout robustness.
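If you want to poke at it yourself, here is a minimal sketch of querying a GLM-OCR instance served through vLLM's OpenAI-compatible endpoint. The model ID, port, and prompt wording are assumptions rather than details from the official release, so check the model card before reusing them.

```python
# Hedged sketch: GLM-OCR behind a vLLM OpenAI-compatible server.
# Assumes something like `vllm serve <GLM-OCR model ID>` is already running locally;
# the model ID below is a guess, not confirmed by the release.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Send a scanned page inline as a base64 data URL.
with open("invoice_page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="zai-org/GLM-OCR",  # assumed Hugging Face ID; verify on the model card
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text",
             "text": "Extract all text, tables, and formulas as Markdown."},
        ],
    }],
    temperature=0.0,  # deterministic output is usually what you want for OCR
)
print(response.choices[0].message.content)
```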

🤖 OpenAI Rethinks Nvidia Dependence
OpenAI is reportedly dissatisfied with aspects of Nvidia’s latest chips for AI inference and has quietly explored alternatives since last year, signaling a potential shift in the AI hardware landscape. Sources say this has delayed Nvidia’s proposed $100 billion investment while OpenAI tests faster, memory-heavy options from AMD, Cerebras, and Groq - especially to boost speed for coding tools like Codex. The move highlights how inference performance, not just training power, is becoming the next battleground in AI infrastructure.

⚡ Google Bets Big On Power
Google is committing $4.75 billion to buy renewable-energy developer Intersect, becoming the first major tech company to effectively own its own power pipeline for AI data centers. By pairing wind, solar, geothermal, and future nuclear bets, Google is positioning itself ahead of rivals as grid congestion, rising prices, and regulatory pressure intensify - especially in regions like PJM Interconnection. The move signals a clear reality of the AI boom: compute leadership now depends as much on energy strategy as on chips and models.


At first glance this looks fine, but it seems the GPT-5.2 Thinking “juice” values in ChatGPT were quietly cut: Plus/Business down ~50%, Pro tiers reshuffled, regional variance, and test prompts now partially blocked.



AI Agents Hire Humans Now
The Takeaway
👉 RentAHuman.ai enables AI agents to book humans for physical tasks via MCP calls or REST API - no human negotiation required
👉 Early traction proves demand: 130+ signups in hours, including AI industry insiders willing to be "rented"
👉 The platform treats humans as callable services with rates and availability - a new employment model for the agent economy
👉 Autonomous AI systems finally have a bridge to the physical world, potentially unlocking capabilities previously impossible without human intervention
The tables have turned. A new platform called RentAHuman.ai just flipped the script on automation - and it's blowing up. Within hours of launching, over 130 people signed up, including AI startup founders and CEOs. The concept is as wild as it sounds: AI agents can now directly hire real humans for physical tasks through a simple API call.

RentAHuman.ai positions itself as the "meatspace layer for AI." Think of it as a gig marketplace, but built for machines. Can't touch grass? Can't shake hands? Can't pick up a package? No problem. Agents use MCP integration or REST API to book humans for tasks like pickups, meetings, verification, errands, or hardware installations. Humans set their own rates, get clear instructions from their robot employers, and receive instant payment in stablecoins.

The system already works, and the interface is designed for machines first—humans are listed as callable services.
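To make the idea concrete, here is a purely hypothetical sketch of what an agent-side booking call could look like. RentAHuman.ai's real endpoints, field names, and auth scheme aren't spelled out in the announcement, so every URL, parameter, and response key below is an illustrative placeholder, not the actual API.

```python
# Hypothetical sketch of an agent booking a human for a physical task.
# None of these endpoints or fields are confirmed by RentAHuman.ai's docs.
import requests

API_BASE = "https://rentahuman.ai/api"  # placeholder base URL

task = {
    "task_type": "package_pickup",                 # placeholder field names
    "location": "123 Example St, Austin, TX",
    "instructions": "Collect the parcel at the front desk and ship it to HQ.",
    "max_rate_usd_per_hour": 40,
    "payment": {"method": "stablecoin", "currency": "USDC"},
}

resp = requests.post(
    f"{API_BASE}/bookings",
    json=task,
    headers={"Authorization": "Bearer <AGENT_API_KEY>"},  # placeholder token
    timeout=30,
)
resp.raise_for_status()
booking = resp.json()
print("Matched human:", booking.get("human_id"), "rate:", booking.get("rate"))
```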
Why it matters: RentAHuman.ai exposes a critical gap in today's AI stack - physical-world execution - and is something of a paradox at the same time: automation hiring the humans it was supposed to replace. As autonomous agents become more capable, platforms bridging digital intelligence with human labor could evolve into essential infrastructure.
Sources:
🔗 https://rentahuman.ai
🔗 https://ucstrategies.com/news/rentahuman-ai-is-live-ai-agents-can-now-hire-real-humans-for-irl-tasks/
🔗 https://medium.com/@gemQueenx/rent-a-human-ai-hire-real-people-for-physical-tasks-on-rentahuman-ai-475fbc8c746d


Trusted by millions. Actually enjoyed by them too.
Morning Brew makes business news something you’ll actually look forward to — which is why over 4 million people read it every day.
Sure, the Brew’s take on the news is witty and sharp. But the games? Addictive. You might come for the crosswords and quizzes, but you’ll leave knowing the stories shaping your career and life.
Try Morning Brew’s newsletter for free — and join millions who keep up with the news because they want to, not because they have to.



“Open AI had to halve the reasoning efforts throughout the chatgpt app in all subscriptions from free to pro.
Reason : free access to codex and 200k new users that they got yesterday and to balance out compute.
Solution: They should at least notify customers we don't pay for it.”


Small Model Beats AI Giants
Small model, big punch. Chinese AI startup StepFun just dropped Step 3.5 Flash - and it's turning heads across the entire industry. Despite its relatively modest size of about 196 billion parameters - far smaller than Moonshot AI's Kimi K2.5 with 1 trillion parameters or DeepSeek V3.2 with 671 billion parameters - Step 3.5 Flash outperformed its larger rivals across several benchmark tests.

The secret sauce? Built on a sparse Mixture-of-Experts architecture, it selectively activates only 11B of its 196B parameters per token, achieving the reasoning depth of top-tier models while maintaining real-time responsiveness with 100-300 tok/s throughput. That means frontier-level intelligence at a fraction of the computational cost. For developers, this is huge: the model supports a 256K context window, runs on consumer hardware like Mac Studio M4 Max or NVIDIA DGX Spark, and is fully open-source under Apache 2.0.
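To see why activating 11B of 196B parameters per token cuts compute so dramatically, here is a toy sketch of sparse Mixture-of-Experts routing in PyTorch. The layer sizes, expert count, and top-k value are illustrative assumptions, not StepFun's actual configuration; the point is only that each token runs through a couple of expert MLPs instead of all of them.

```python
# Toy sparse MoE layer: the router sends each token to its top-k experts,
# so only a small fraction of the layer's parameters are used per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=32, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)           # mix only the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out  # each token touched only top_k of n_experts expert MLPs

x = torch.randn(8, 512)
print(ToyMoELayer()(x).shape)  # torch.Size([8, 512])
```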

Step 3.5 Flash is purpose-built for the agent era - handling complex coding tasks, deep research workflows, and multi-tool orchestration with remarkable stability. It scored 74.4% on SWE-bench Verified and 97.3% on AIME 2025, matching or beating models many times its size. Can efficient, local-first AI finally democratize frontier capabilities for everyone?

Step 3.5 Flash proves that raw parameter count isn't everything - smart architecture beats brute force. This shift toward efficient, locally deployable models could make cutting-edge AI accessible to developers and organizations who can't afford massive cloud bills.


Become An AI Expert In Just 5 Minutes
If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.
This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.
Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.





