
Weekly AI Roundup: OpenAI's Leaked Memo Declares War, Stanford Says China Just About Caught Up, and 80% of Workers Quietly Refuse to Use Any of This

By clzd.me

Filing from the AI front lines on a Friday when the dominant storyline is "frontier labs publicly accusing each other of accounting fraud." A week for the genre, and here we are.

OpenAI’s Chief Revenue Officer Writes the Quiet Part Down, Someone Screenshots It

Denise Dresser, OpenAI’s new chief revenue officer, sent an internal strategy memo over the weekend that leaked before the ink dried. Four pages, nominally about enterprise strategy, substantively about Anthropic. The headline accusation: Anthropic’s widely reported $30B annualized run rate is inflated by roughly $8B, the result of “accounting treatment that makes revenue look bigger than it is.” Specifically, grossing up revenue-sharing deals with Google and Amazon instead of reporting net.

Anthropic’s response, via sources close to the company, is that they recognize gross revenue because they are the principal in the transaction and AWS and Vertex are distribution. Which, fine, is a standard defense — it’s the same logic Shopify uses. The point of the memo is not that the accounting is unprecedented. The point is that OpenAI’s revenue chief now thinks the most effective move against her biggest rival is a leaked polemic six months before both companies try to IPO. The number the market prints on offering day turns on whose framing sticks. Expect more of these.
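The gross-versus-net dispute is ultimately arithmetic. A minimal sketch of the two treatments, using the memo's claimed figures ($30B reported, roughly $8B of it attributable to gross-up); the direct/partner split and the partner's cut below are hypothetical numbers chosen only to make the gap come out to $8B, not anything from Anthropic's actual books:

```python
def gross_vs_net(direct_revenue, partner_billings, partner_share):
    """Revenue under the two accounting treatments.

    direct_revenue:   revenue billed directly to customers
    partner_billings: total customer spend routed through partners (AWS, Vertex)
    partner_share:    fraction of partner billings the partner keeps
    """
    # Principal treatment: book the full amount flowing through partners.
    gross = direct_revenue + partner_billings
    # Agent treatment: book only your cut of partner-routed spend.
    net = direct_revenue + partner_billings * (1 - partner_share)
    return gross, net

# Hypothetical split consistent with an ~$8B gap on $30B gross:
gross, net = gross_vs_net(direct_revenue=10e9,
                          partner_billings=20e9,
                          partner_share=0.4)
print(f"gross: ${gross / 1e9:.0f}B, net: ${net / 1e9:.0f}B")
# gross: $30B, net: $22B
```

Same underlying cash, an $8B swing in the headline number, which is why whose framing sticks matters so much six months before an IPO.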

The other thing buried in the memo: a reference to a new OpenAI model codenamed Spud that will supposedly make “all our products significantly better.” Spud. That is the actual name. Deeply unserious company, extremely serious capex bill.

Stanford Releases the 2026 AI Index, Quietly Declares the Race a Coin Flip

The Stanford HAI 2026 AI Index landed this week and the headline number is a gut punch to anyone still telling themselves the US is comfortably ahead. By March, the gap between the top American model (Opus 4.6, at the time of measurement) and China’s leading model (ByteDance’s Doubao-Seed 2.0) had compressed to 39 Arena points. That is 2.7 percent. That is rounding error on a benchmark that itself has a margin of error larger than 2.7 percent.
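A back-of-envelope check on those two figures: if 39 Arena points is 2.7 percent of the leading model's score, the implied top rating is around 1,440. A sketch of the arithmetic (the implied-rating figure is my inference from the Index's two numbers, not something the report states):

```python
gap_points = 39       # Arena-point gap reported by the Index
gap_fraction = 0.027  # the same gap expressed as 2.7 percent

# If gap_points == gap_fraction * top_score, solve for top_score.
implied_top_score = gap_points / gap_fraction
print(round(implied_top_score))  # ~1444
```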

China continues to lead on patents, publications, and robot deployments — categories where the US has never really competed on volume — and the flow of researchers from Chinese institutions to American labs has slowed, which used to be the quiet reason American frontier labs kept their edge. MIT Tech Review’s chart dump is worth a skim if you want to see the trend lines without the editorial. The short version: “US AI supremacy” is now a marketing position, not a factual one.

80% of Your Coworkers Will Not Use Any of This

Fortune ran the numbers on enterprise AI adoption and the result is uncomfortable for everyone cashing infrastructure checks. Eighty percent of enterprise workers are actively avoiding or refusing AI tools their company paid to deploy. Fifty-six percent of US adults have no recent AI experience at all. These are not people who can’t use AI. These are people who’ve looked at it and opted out.

The gap between what the frontier labs are shipping and what humans are adopting is the most underpriced risk in the sector. The generous theory: enterprise software has always had adoption problems, and AI is just the newest case. The less generous theory: workers have correctly identified that “boost productivity” in the pitch deck means “reduce headcount” on the org chart, and they are voting accordingly.

Also Noted

Stellantis signed a five-year AI and cloud deal with Microsoft on April 16. Every automaker is now a Microsoft customer. This is how platform lock-in ends up in your car.

Google shipped Gemini 3.1 Flash TTS with scene direction and 70 languages, and NVIDIA put out Lyra 2.0 for persistent 3D worlds. The model-release treadmill has not slowed for a single weekday.

Anthropic published new interpretability research on automated alignment researchers — LLMs supervising LLMs. The obvious concern is the obvious concern. The less obvious concern is that if it works, the humans in the loop become optional faster than anyone has priced in.