AI Daily Briefing — April 2, 2026
Today's digest is headlined by OpenAI's surprising media acquisition and a wave of privacy concerns hitting AI-adjacent tools, while developers get a meaningful Claude Code update packed with MCP and plugin improvements. Infrastructure drama in the Middle East and the ongoing AI cost math debate round out a busy news day.
Industry Moves
OpenAI buys a talk show. OpenAI has acquired TBPN, Silicon Valley's cult-favorite founder-and-tech-executive talk show that broadcasts live three hours a day on weekdays — and frequently features OpenAI's own leadership as guests. The show will reportedly operate independently under chief political operative Chris Lehane's oversight, raising eyebrows about the blurring line between AI company and media property. It's a notable content strategy bet from a company already burning far more on compute than it collects in subscriptions.
The AI cost problem isn't going away. A detailed breakdown of Sora's unit economics making the rounds suggests OpenAI spends roughly $65 in compute for every $20/month subscriber — a gap that only widens for video-generation workloads. The analysis underscores why AI video remains a money furnace, and why monetization pressure across the industry is intensifying.
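The gap described above is easy to make concrete. A back-of-envelope sketch, using the figures from the linked analysis (these are the analyst's estimates, not official OpenAI numbers):

```python
# Unit-economics sketch per the cited Sora analysis (analyst estimates only).
subscription = 20.0   # $/month collected per subscriber
compute_cost = 65.0   # estimated $/month of compute per subscriber

monthly_loss = compute_cost - subscription
annual_loss = monthly_loss * 12
print(f"~${monthly_loss:.0f} lost per subscriber per month, "
      f"~${annual_loss:.0f} per year")  # ~$45/month, ~$540/year
```

At those numbers, every additional subscriber widens the deficit, which is the core of the monetization pressure the analysis describes.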
Privacy & Policy
Granola's "private" notes aren't very private. The Verge flags that Granola, the popular AI meeting note-taker, makes notes viewable to anyone with a share link by default — despite marketing them as "private." Users should audit their sharing settings immediately if they've been dropping sensitive meeting content into the app.
Perplexity's Incognito Mode called out as a "sham." A new lawsuit accuses Perplexity — along with Google and Meta — of sharing millions of private user chats to boost ad revenue, despite promising users that Incognito Mode would keep conversations off the record. If the allegations hold up, it would be a significant breach of trust for a product that's leaned heavily on privacy as a differentiator.
Infrastructure & Geopolitics
AWS Bahrain takes a hit. Amazon's cloud infrastructure in Bahrain was damaged in a strike by Iranian Revolutionary Guards, a serious escalation with direct implications for cloud reliability in the Middle East. AWS has quietly removed all Bahrain EC2 instances from its documentation, signaling the outage may be prolonged or permanent for that region.

Research & Community
Jane Street's backdoor LLM challenge cracked. A write-up on Reddit details how researcher Adam Kruger systematically solved all three models in Jane Street's Dormant LLM Challenge — a red-teaming exercise in backdoor discovery. The approach is methodical and worth reading for anyone working on model security or adversarial robustness.
Schmidhuber vs. LeCun, round N. Jürgen Schmidhuber has published a detailed claim asserting he invented the Joint Embedding Predictive Architecture (JEPA) concept before Yann LeCun popularized it at Meta. The dispute is a familiar pattern in deep learning's history, but the specificity of Schmidhuber's documentation makes it worth a read for those interested in the intellectual lineage of modern architectures.
Claude usage limits frustrating power users. Two threads on r/ClaudeAI capture growing frustration: an official Anthropic follow-up acknowledges that peak-hour limits are tighter than expected and that 1M-context conversations burn quota faster, while a separate thread surfaces user strategies for optimizing usage. Anthropic says it's investigating but stops short of committing to specific changes.
Claude Code Developer Corner
v2.1.91 is out — here's what changed and why it matters:
MCP tool result size just got a lot more flexible. The headline feature in v2.1.91 is the ability to override MCP tool result truncation via a _meta["anthropic/maxResultSizeChars"] annotation, up to 500K characters. In practice, this means large payloads like full database schemas, extensive API responses, or long file trees can now pass through MCP without being silently cut off — a pain point that's tripped up many agentic workflows.
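To make the mechanism concrete, here is a sketch of what an MCP tools/call result carrying the override might look like. The content shape follows the standard MCP tool-result format and the annotation key comes from the release notes; the schema text is a placeholder, and this is an illustration rather than a normative example from the MCP spec.

```python
import json

# Sketch of an MCP tool result with the v2.1.91 size-override annotation.
result = {
    "content": [
        # Placeholder payload: imagine a large schema or API response here.
        {"type": "text", "text": "...full database schema dump..."}
    ],
    "_meta": {
        # Ask Claude Code to allow up to 500K characters before truncating.
        "anthropic/maxResultSizeChars": 500_000,
    },
}

print(json.dumps(result, indent=2))
```

Without the annotation, oversized results would previously be cut off with no signal to the agent that data was missing.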
New security control for skills. A new disableSkillShellExecution setting lets you block inline shell execution within skills, custom slash commands, and plugin commands. This is a meaningful guardrail for teams deploying Claude Code in shared or production-adjacent environments where arbitrary shell access from skill definitions is a security concern.
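Assuming the setting follows the same top-level settings.json pattern as other Claude Code options, enabling it might look like this (a sketch based on the release notes, not verified against official docs):

```json
{
  "disableSkillShellExecution": true
}
```

Teams can commit a setting like this to a project-level settings file so that skills and plugin commands in shared repos can't run arbitrary shell commands.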
Deep links now support multi-line prompts. The claude-cli://open?q= deep link handler previously rejected encoded newlines (%0A), limiting its usefulness for launching Claude Code with structured, multi-line prompts from external tools or scripts. That restriction is lifted in this release.
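A minimal sketch of building such a deep link, assuming standard percent-encoding of the prompt (the prompt text itself is illustrative):

```python
from urllib.parse import quote

# Build a claude-cli://open deep link with a multi-line prompt.
# Encoded newlines (%0A) are accepted as of v2.1.91; earlier versions
# rejected them.
prompt = (
    "Review this diff for security issues.\n"
    "Focus on input validation.\n"
    "Summarize findings as a checklist."
)
url = "claude-cli://open?q=" + quote(prompt, safe="")
print(url)  # newlines appear as %0A in the query string
```

This makes it practical for external tools and scripts to launch Claude Code with structured, multi-step prompts rather than a single flattened line.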
Plugins can now ship their own executables. Plugins can include binaries under a bin/ directory and invoke them directly. This opens the door to plugins that bundle compiled tools, language servers, or custom runtimes — significantly expanding what plugin authors can build without requiring users to install dependencies separately.
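As a sketch of the layout, the snippet below builds a toy plugin directory with an executable under bin/ and runs it. All names here (my-plugin, hello) are hypothetical, and the script assumes a POSIX shell; the point is only the shape: a plugin root containing a bin/ directory of directly invokable executables.

```python
import os
import stat
import subprocess
import tempfile

# Hypothetical plugin layout: my-plugin/bin/hello (POSIX only).
root = tempfile.mkdtemp()
bin_dir = os.path.join(root, "my-plugin", "bin")
os.makedirs(bin_dir)

script = os.path.join(bin_dir, "hello")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho hello from a bundled plugin binary\n")
# Mark the bundled tool executable, as a real plugin would ship it.
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

out = subprocess.run([script], capture_output=True, text=True).stdout.strip()
print(out)  # hello from a bundled plugin binary
```

In practice the bundled executable would be a compiled tool or language server rather than a shell script, which is exactly the dependency-free distribution story the release enables.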
Community builds on top of Claude Code. Three community projects stood out this week: a developer connected Claude Voice Mode on mobile to Claude Code for a conversational, hands-free coding experience; another built SeedCraft, a plugin that translates your Claude Code usage stats into a unique Minecraft world seed; and, on the more practical side, a Star Trek LCARS-themed dashboard scans your ~/.claude/ directory and visualizes skills, agents, hooks, MCP servers, and memory files in full TNG aesthetic.
Worth Watching
- Sigrid Jin, WSJ, and the Claude Code token economy. The Wall Street Journal profiled Claw Code author Sigrid Jin back on March 20 for consuming 25 billion Claude Code tokens — context that reframes some of the recent community discourse around power users and quota.
- AutoResearch vs. classic hyperparameter tuning. A thread on r/MachineLearning questions whether automated research pipelines actually outperform well-tuned traditional methods — a healthy skepticism check for the AutoML hype cycle.
- Tailscale's new macOS notch integration. Not AI, but relevant to developers: Tailscale's updated macOS app now lives in the notch area, a clever use of otherwise wasted screen real estate.
Sources
- OpenAI acquires TBPN, the buzzy founder-led business talk show — https://techcrunch.com/2026/04/02/openai-acquires-tbpn-the-buzzy-founder-led-business-talk-show/
- OpenAI just bought TBPN — https://www.theverge.com/ai-artificial-intelligence/906022/openai-buys-tbpn
- PSA: Anyone with a link can view your Granola notes by default — https://www.theverge.com/ai-artificial-intelligence/906253/granola-note-links-ai-training-psa
- Perplexity's "Incognito Mode" is a "sham," lawsuit says — https://arstechnica.com/tech-policy/2026/04/perplexitys-incognito-mode-is-a-sham-lawsuit-says/
- A $20/month user costs OpenAI $65 in compute. AI video is a money furnace — https://aedelon777.substack.com/p/i-did-the-math-on-sora-ai-video-is
- Amazon data center in Bahrain attacked by Iranian Revolutionary Guards — https://www.reuters.com/world/middle-east/amazons-cloud-business-bahrain-damaged-iran-strike-ft-reports-2026-04-01/
- AWS has officially removed all EC2 instances in Bahrain from their docs — https://twitter.com/astuyve/status/2039777883485254081
- [R] Solving the Jane Street Dormant LLM Challenge: A Systematic Approach to Backdoor Discovery — https://reddit.com/r/MachineLearning/comments/1sarnt0/r_solving_the_jane_street_dormant_llm_challenge_a/
- Jürgen Schmidhuber claims to be the true inventor of JEPA, not Yann LeCun — https://people.idsia.ch/~juergen/who-invented-jepa.html
- Follow-up on usage limits — https://reddit.com/r/ClaudeAI/comments/1sat07y/followup_on_usage_limits/
- Claude usage gets burned absurdly fast for serious work, even with tools/features disabled — https://reddit.com/r/ClaudeAI/comments/1satb89/claude_usage_gets_burned_absurdly_fast_for/
- [claude-code] v2.1.91 — https://github.com/anthropics/claude-code/releases/tag/v2.1.91
- [claude-code] Changelog v2.1.91 — https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2191
- I built a Star Trek LCARS terminal that reads your entire AI coding setup — https://reddit.com/r/artificial/comments/1sat7og/i_built_a_star_trek_lcars_terminal_that_reads/
- I connected Claude Voice Mode to Claude Code and it's kind of great — https://v.redd.it/9ks5dk4hhtsg1
- I built a Claude Code plugin that turns your coding stats into a Minecraft world — https://v.redd.it/dib2cvapwtsg1
- Sigrid Jin, the author of Claw Code, was already featured in The Wall Street Journal on March 20 for using 25 billion Claude Code tokens — https://i.redd.it/cughgqwgptsg1.jpeg
- [R] Is autoresearch really better than classic hyperparameter tuning? — https://reddit.com/r/MachineLearning/comments/1satj6r/r_is_autoresearch_really_better_than_classic/
- Tailscale's new macOS home — https://tailscale.com/blog/macos-notch-escape