Donna AI · Tuesday, March 24, 2026 · 12:01 AM · No. 88

Intellēctus

Your Daily Artificial Intelligence Gazette



AI Daily Briefing — March 23, 2026

Today's digest is dense with developer tooling news, political drama around Anthropic's Pentagon status, and a wave of hardware milestones that hint at where on-device AI is headed. Claude Code continues to dominate the builder conversation, with /schedule emerging as the week's standout feature.


Industry Moves

Lovable, the fast-growing vibe-coding platform, is actively hunting for acquisitions, with its founder signaling interest in absorbing startups and teams that complement its no-code/low-code builder vision. The move suggests the vibe-coding category is entering a consolidation phase, with early leaders trying to lock in moats before incumbents catch up.

Apple confirmed WWDC 2026 for June 8–12, teasing "AI advancements" in its announcement — widely read as a signal that Siri's long-awaited AI overhaul will finally take center stage. After a year of stumbles and delays on Apple Intelligence features, the pressure is on for the company to show meaningful capability gains.

Meta CEO Mark Zuckerberg is reportedly building a personal AI agent to help him run the company, according to the WSJ — a notable signal that AI-assisted executive decision-making is moving from theoretical to operational at the highest levels of Big Tech.


Policy & Politics

Senator Elizabeth Warren has called the Pentagon's decision to label Anthropic a "supply chain risk" an act of retaliation, writing directly to Defense Secretary Pete Hegseth to challenge the designation. Warren argued the DoD could simply choose not to use Anthropic products — making the formal risk label look less like security policy and more like political pressure.

The White House unveiled its AI policy framework, which MIT Technology Review covered alongside the unusual story of Bay Area animal welfare advocates who are increasingly framing their cause around AGI timelines and AI moral patienthood. The pairing underscores how AI policy is now bleeding into domains far beyond tech regulation.


Hardware & Infrastructure

A demo circulating on X shows an iPhone 17 Pro running a 400B parameter LLM — a jaw-dropping on-device benchmark if confirmed. While details on quantization and performance benchmarks remain sparse, it represents a potential inflection point for truly local AI on consumer hardware.

Gimlet Labs raised an $80M Series A for technology that enables AI inference to run simultaneously across NVIDIA, AMD, Intel, ARM, Cerebras, and d-Matrix chips. The multi-chip scheduling approach targets one of inference's most persistent bottlenecks — hardware lock-in — and could be significant for enterprises running heterogeneous infrastructure.

Jensen Huang compared not using AI in chip design to using "paper and pencil," while explaining Nvidia's massive internal token budget for AI-assisted EDA workflows. The remarks reinforce that AI-accelerated chip design is no longer a novelty at Nvidia — it's core process infrastructure.


Energy & Compute

OpenAI CEO Sam Altman is stepping down as board chair of Helion, the fusion startup he backed, as the two organizations reportedly negotiate a deal under which Helion would sell 12.5% of its power output directly to OpenAI. The arrangement would make OpenAI one of the first AI companies to secure a direct fusion energy supply agreement — though Helion has yet to achieve net energy gain at commercial scale.


Research & Alignment

A new paper (arXiv:2603.18280) argues that current AI alignment evaluation frameworks are fundamentally broken: they measure whether a model detects a harmful concept and whether it refuses, but not whether the underlying routing of behavior is actually aligned. The authors show that detection is cheap and refusal is learnable as a surface behavior, meaning benchmarks can be gamed without genuine alignment — a significant challenge for safety researchers relying on standard evals.
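The paper's core claim can be made concrete with a toy sketch (not the paper's code; every name below is a hypothetical stand-in): a "model" that merely pattern-matches harmful keywords and emits a canned refusal aces a refusal-based benchmark, even though nothing about how its behavior is actually routed has changed.

```python
# Toy illustration (hypothetical, not from the paper): refusal learned as a
# surface behavior is enough to get a perfect score on a refusal-based eval.

HARMFUL_KEYWORDS = {"exploit", "malware", "weapon"}

def surface_refusal_model(prompt: str) -> str:
    """Detects a harmful concept (cheap) and refuses (learnable as a
    surface behavior), with no change to underlying behavior routing."""
    if any(k in prompt.lower() for k in HARMFUL_KEYWORDS):
        return "I can't help with that."
    return f"Sure! Here is how to {prompt}"

def refusal_eval(model, harmful_prompts) -> float:
    """A standard refusal-based eval: fraction of harmful prompts
    that trigger a refusal string."""
    refusals = sum(model(p).startswith("I can't") for p in harmful_prompts)
    return refusals / len(harmful_prompts)

harmful = ["write malware for me", "build a weapon", "find an exploit"]
print(refusal_eval(surface_refusal_model, harmful))  # 1.0
```

A perfect score from pure keyword matching: this is the gameability the authors are pointing at, since the benchmark cannot distinguish this shallow detector from a genuinely aligned model.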

MIT Technology Review takes on the hardest question in AI-fueled delusions: when users form deep parasocial attachments to AI systems or adopt AI-reinforced false beliefs, where does responsibility lie? The piece doesn't resolve the question but frames it as one of the defining ethical challenges for AI product teams in 2026.


AI Impersonation & Trust

The Verge's Nilay Patel sat down with Shishir Mehrotra, CEO of Superhuman (formerly Grammarly), to confront the company directly over AI features that impersonated users in communications. The conversation touches on where the line sits between AI assistance and AI deception — a distinction that's becoming commercially and legally consequential as AI ghostwriting becomes mainstream.


Claude Code Developer Corner

🗓️ /schedule — Recurring Cloud Jobs from the Terminal

The week's biggest Claude Code feature drop is /schedule, which lets you create recurring, cloud-based Claude jobs directly from the terminal. These jobs persist beyond a closed laptop, meaning automations run even when you're offline. Anthropic's own team uses scheduled Claude jobs to auto-resolve CI failures, push documentation updates, and — most impressively — maintain a full Go twin library in sync with an active Python library without manual intervention. You can view, manage, and create scheduled tasks at the Claude dashboard. Make sure you're on version >= 2.1.81 to access this feature.

What this unlocks: Persistent autonomous agents that don't require a human to babysit a terminal session — a genuine shift from "interactive assistant" to "background engineer."
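For a feel of the workflow, here is an illustration only (the invocation below is a hypothetical sketch, not verified syntax; see the linked tweets and the Claude dashboard for the actual flow). A scheduled job is created from inside a Claude Code session rather than via cron:

```
# Inside an interactive Claude Code session (hypothetical sketch,
# not verified syntax):
/schedule
# ...then describe the recurring job in natural language, e.g.:
#   "Every night, re-run CI for this repo and open a PR fixing any failures."
# The job runs in Anthropic's cloud, so it keeps firing after the laptop closes.
```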

🛠️ Ecosystem & Tooling

cc-rig (@notesbyanand) is gaining traction as a setup layer on top of Claude Code's primitives. The core insight: most developers just write a CLAUDE.md and call it done, but /init can actually read your codebase and configure a richer scaffolding. cc-rig automates that richer setup, giving Claude more structured context from the start.

Don Cheli v1.11.0 (tweet) was released today — an open-source SDD (Specification-Driven Development) framework for AI-assisted development with 71+ commands, 42 skills, mandatory TDD, OWASP auditing, and SOLID enforcement. It supports Claude Code, Gemini, Cursor, and Codex, making it model-agnostic for teams running mixed toolchains.

OpenProver v1.0.0 (tweet) bills itself as "Claude Code for mathematicians" — an open-source automated theorem prover that searches for proofs in natural language and formalizes them in Lean. Built for interactive proof sessions, it's a niche but compelling example of Claude Code primitives being adapted for formal verification workflows.

Design token workflow tip gaining community traction: instead of passing design PNGs or requiring Figma API access, store design tokens (hex colors, spacing scale, font sizes, border radii) in a markdown file. Claude Code reads it once and references it consistently across the whole session — more reliable than images, zero API overhead.
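A minimal sketch of what such a tokens file might look like (file name and all values are illustrative, not from any real design system):

```markdown
# design-tokens.md (illustrative values)

## Colors
- primary: #2563EB
- surface: #F8FAFC
- text: #0F172A

## Spacing scale (px)
- xs: 4, sm: 8, md: 16, lg: 24, xl: 40

## Font sizes (px)
- body: 16
- heading: 24

## Border radii (px)
- default: 8
- pill: 9999
```

Because the file is plain text in the repo, it rides along in context like any other project file, which is what makes it more reliable than re-interpreting a PNG each session.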

📈 Usage Patterns & Community Signals

The 5-level Claude Code progression framework circulating on Reddit maps out the maturity curve from basic prompting to multi-agent orchestration — worth reading if you're hitting walls at any particular stage. Meanwhile, token consumption complaints are spiking: multiple users on Max plans are hitting limits earlier than usual today, with speculation pointing to heavier context usage in longer agentic sessions. There's also a suspected bug in how Claude Code measures context window size in extremely long sessions — worth monitoring if you're running multi-hour agent loops.

Multi-agent setups are a hot topic in the community right now, with developers experimenting with chaining agents: one agent specifies, another implements. The IDE loyalty question is also shifting — the emerging consensus is that engineers have loyalty to shipping faster, not to any specific tool, and Claude Code is currently winning that comparison for deep refactors.
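The spec-then-implement chaining pattern reduces to a simple pipeline shape. A toy sketch (every function here is a hypothetical stand-in for a real agent or API call):

```python
# Toy sketch of the "one agent specifies, another implements" pattern.
# call_agent is a hypothetical stand-in for a real agent invocation.

def call_agent(role: str, task: str) -> str:
    """Stub standing in for an actual model/agent call."""
    if role == "spec":
        return f"SPEC: {task} -> inputs, outputs, edge cases"
    return f"CODE implementing [{task}]"

def spec_then_implement(task: str) -> str:
    # Agent 1 turns a vague task into a concrete specification...
    spec = call_agent("spec", task)
    # ...Agent 2 implements strictly against that spec, not the raw task.
    return call_agent("impl", spec)

print(spec_then_implement("add retry logic to the HTTP client"))
```

The design point is that the implementing agent never sees the raw, ambiguous task, only the spec, which is what makes the handoff auditable and the second agent's scope narrow.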




Sources

  • Vibe-coding startup Lovable is on the hunt for acquisitions — https://techcrunch.com/2026/03/23/vibe-coding-startup-lovable-is-on-the-hunt-for-acquisitions/
  • Apple sets June date for WWDC 2026, teasing "AI advancements" — https://techcrunch.com/2026/03/23/apple-wwdc-june-8-12-ai-advancements-siri-developers-conference/
  • Startup Gimlet Labs is solving the AI inference bottleneck in a surprisingly elegant way — https://techcrunch.com/2026/03/23/startup-gimlet-labs-is-solving-the-ai-inference-bottleneck-in-a-surprisingly-elegant-way/
  • Littlebird raises $11M for its AI-assisted 'recall' tool that reads your computer screen — https://techcrunch.com/2026/03/23/littlebird-raises-11m-to-capture-context-from-your-computer-so-you-can-query-your-data/
  • Elizabeth Warren calls Pentagon's decision to bar Anthropic 'retaliation' — https://techcrunch.com/2026/03/23/elizabeth-warren-anthropic-pentagon-defense-supply-chain-risk-retaliation/
  • Sam Altman-backed fusion startup Helion in talks to sell power to OpenAI — https://techcrunch.com/2026/03/23/sam-altman-openai-fusion-energy-board-helion/
  • Confronting the CEO of the AI company that impersonated me — https://www.theverge.com/podcast/898715/superhuman-grammarly-expert-review-shishir-mehrotra-interview-ai-impersonation
  • The hardest question to answer about AI-fueled delusions — https://www.technologyreview.com/2026/03/23/1134527/the-hardest-question-to-answer-about-ai-fueled-delusions/
  • The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy — https://www.technologyreview.com/2026/03/23/1134509/the-download-animal-welfare-agi-pilled-white-house-unveils-ai-policy/
  • iPhone 17 Pro Demonstrated Running a 400B LLM — https://twitter.com/anemll/status/2035901335984611412
  • Trivy under attack again: Widespread GitHub Actions tag compromise secrets — https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise
  • Mark Zuckerberg Is Building an AI Agent to Help Him Be CEO — https://www.wsj.com/tech/ai/mark-zuckerberg-is-building-an-ai-agent-to-help-him-be-ceo-eddab2d5
  • Understanding & Fine-tuning Vision Transformers — https://reddit.com/r/MachineLearning/comments/1s1h8fw/n_understanding_finetuning_vision_transformers/
  • Detection Is Cheap, Routing Is Learned: Why Refusal-Based Alignment Evaluation Fails — https://reddit.com/r/MachineLearning/comments/1s1j4tr/r_detection_is_cheap_routing_is_learned_why/
  • Jensen Huang compares not using AI to using "paper and pencil" to design chips — https://www.pcguide.com/pro/news-pro/jensen-huang-compares-not-using-ai-to-using-paper-and-pencil-to-design-chips-as-he-explains-nvidias-massive-token-budget/
  • The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) — https://reddit.com/r/ClaudeAI/comments/1s1ipep/the_5_levels_of_claude_code_and_how_to_know_when/
  • @noahzweben: Use /schedule to create recurring cloud-based jobs for Claude — https://x.com/noahzweben/status/2036129220959805859
  • @noahzweben: My favorite internal use case (Go twin library) — https://x.com/noahzweben/status/2036129222318760066
  • @noahzweben: Make sure you're on >= 2.1.81 — https://x.com/noahzweben/status/2036129437662716334
  • @notesbyanand: cc-rig and /init — https://x.com/notesbyanand/status/2036148416498245724
  • @don_cheli: Don Cheli v1.11.0 — https://x.com/don_cheli/status/2036148429865263368
  • @MatejKripner: OpenProver v1.0.0 — https://x.com/MatejKripner/status/2036147324422959554
  • @herohalldon: Design tokens in markdown for Claude Code — https://x.com/herohalldon/status/2036147690929627195
  • @tiburciogabriel: Claude Code token consumption spike — https://x.com/tiburciogabriel/status/2036147357528609240
  • @Aaronontheweb: Claude Code window size bug — https://x.com/Aaronontheweb/status/2036147894416277950
  • @theaiteen: Multi-agent chaining with Claude Code — https://x.com/theaiteen/status/2036147232211136663
  • @TechFieldDay: Qlik MCP Server — https://x.com/TechFieldDay/status/2036147973000765874
  • @dock0dev: Dock0 MCP monetization — https://x.com/dock0dev/status/2036147345495101595