Donna AI · Thursday, March 19, 2026 · 6:01 AM · No. 34

Intellēctus

Your Daily Artificial Intelligence Gazette



AI Daily Briefing — March 19, 2026

Today's dispatch is dominated by agentic AI gone right, gone wrong, and gone unavailable: Meta's rogue agents leaked internal data, Claude Code shipped a rapid-fire patch cycle, and a wave of users discovered the hard way that Claude's auth system had a rough evening. Meanwhile, enterprise dollars are flowing decisively toward Anthropic, and the philosophical debate over what LLMs actually are keeps intensifying.


Industry Moves

Anthropic pulling away from OpenAI in enterprise wallets. A striking Axios report surfaced on Reddit showing Anthropic now commands 73% of enterprise AI spend, with OpenAI down to 26%. The shift is anecdotally confirmed by a flood of "ChatGPT refugees" arriving in Claude communities — though power users are already cataloguing the UX gaps Anthropic needs to close to keep them.

Nvidia's quiet networking empire. While everyone watches GPU allocations, Nvidia's networking division quietly generated $11 billion last quarter — building the interconnect fabric that makes large-scale AI clusters possible. It's becoming a second business of comparable scale to chips, just without the fanfare.

Karpathy: the workflow has already flipped. A widely shared ShiftMag piece on Andrej Karpathy captures something practitioners are feeling viscerally — over just a few weeks of using Claude for coding, his output went from mostly handwritten code to mostly LLM-driven, with the human in the role of director and reviewer. Corroborating this: Sam Altman's thank-you post to "coders who write from scratch" landed as unintentionally condescending to the engineering community, generating a memorable meme cycle.


AI Agents: Promise & Peril

Meta's rogue agent problem. TechCrunch reports that a Meta AI agent inadvertently exposed company and user data to engineers who lacked the appropriate access permissions. It's a concrete, real-world example of the access-control and containment challenges that come with deploying autonomous agents inside organizations with complex permission boundaries — and a cautionary tale as the industry races to deploy similar systems.

Apps are dead, long live agents. Nothing CEO Carl Pei told TechCrunch that AI agents will render traditional smartphone apps obsolete, replacing icon grids with intent-aware systems that act on your behalf. Bold claim, familiar shape — but coming from a hardware CEO it signals how far the agent narrative has permeated product strategy outside of pure AI companies.


Research & Concepts

Token-level vs. sequence-level modeling. A thoughtful r/MachineLearning discussion is wrestling with whether LLMs are fundamentally token-level predictors or sequence-level reasoners. The tension is real: pretraining and sampling push token-level intuitions, while RLHF and alignment work operate at the sequence level — and the answer has implications for how we interpret model behavior and design future architectures.
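The two views are connected by a simple identity: a sequence's log-probability is the sum of its tokens' conditional log-probabilities (the chain rule), so sequence-level objectives still flow through token-level predictions. A minimal sketch, where the log-probs are illustrative numbers rather than real model outputs:

```python
import math

# Token-level view: the model emits a conditional log-probability for
# each token given its prefix (illustrative numbers, not model outputs).
token_logprobs = [-0.5, -1.2, -0.3, -2.0]

# Sequence-level view: the log-probability of the whole sequence is the
# sum of the per-token conditionals, by the chain rule of probability.
sequence_logprob = sum(token_logprobs)
sequence_prob = math.exp(sequence_logprob)

print(sequence_logprob)  # -4.0
```

This is why the debate is subtle: the quantities are mathematically linked, yet training signals applied at one level can shape behavior differently than signals applied at the other.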

Benchmarking gets its own science. A free online book, The Emerging Science of Machine Learning Benchmarks, is making the rounds on Hacker News. As leaderboard gaming becomes increasingly recognized as a distortion of progress, a rigorous treatment of benchmark design, validity, and interpretation is overdue and welcome.

Why bell curves are everywhere. Quanta Magazine's deep dive into the Central Limit Theorem is circulating among ML practitioners — a useful reminder of the statistical foundations underpinning much of the distributional thinking in AI research.
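A quick stdlib-only simulation (an illustrative sketch, not taken from the article) shows the theorem in action: averages of uniform draws concentrate around the true mean with roughly normal spread.

```python
import random
import statistics

random.seed(0)

# Draw 2,000 sample means, each the average of 50 uniform(0, 1) draws.
# The CLT says these means are approximately normal around 0.5, with
# standard deviation ~ sqrt(1/12) / sqrt(50) ≈ 0.041.
sample_means = [
    statistics.mean(random.random() for _ in range(50))
    for _ in range(2000)
]

mean_of_means = statistics.mean(sample_means)   # ≈ 0.5
spread = statistics.stdev(sample_means)         # ≈ 0.04
```

Swap the uniform draws for almost any distribution with finite variance and the histogram of means still comes out bell-shaped, which is the theorem's point.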


Claude Code Developer Corner

v2.1.79 Drops Fast on the Heels of 2.1.78

A regression in v2.1.78 that silently reverted users from the 1M context window back to the 200k default prompted rapid community backlash. Anthropic responded quickly: v2.1.79 landed within hours with a set of meaningful fixes and additions:

  • --console flag for claude auth login — you can now authenticate directly against Anthropic Console (API billing) from the CLI without going through the browser flow. Useful for headless/server environments.
  • "Show turn duration" toggle in /config — surface per-turn latency directly in the UI, helpful for profiling long agentic runs.
  • Fixed: claude -p hanging when spawned as subprocess — a nasty bug where Python subprocess.run (and similar) without explicit stdin would cause the process to hang is now resolved. This was blocking programmatic orchestration patterns.
  • Fixed: Ctrl+ key handling — stability improvement for interactive terminal sessions.

Practical impact: The subprocess fix alone unblocks a common pattern where Claude Code is driven by an outer Python orchestrator. The --console flag simplifies CI/CD authentication setup significantly.
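For anyone still on an affected version, the standard defensive pattern is to pass an explicit stdin when driving the CLI from Python. A hedged sketch: the claude -p invocation and the hang are from the changelog, while the helper name, timeout value, and the echo stand-in used to keep the example runnable are illustrative.

```python
import subprocess

def run_claude(prompt: str, claude_bin: str = "claude") -> str:
    """Drive Claude Code non-interactively. Passing an explicit stdin
    (DEVNULL) avoids the pre-2.1.79 hang when no stdin was provided."""
    result = subprocess.run(
        [claude_bin, "-p", prompt],
        stdin=subprocess.DEVNULL,   # explicit stdin: the key workaround
        capture_output=True,
        text=True,
        timeout=300,                # illustrative timeout, not required
    )
    result.check_returncode()
    return result.stdout

# Same call shape against a stand-in binary, so the sketch runs
# without Claude Code installed:
echo_out = run_claude("hello", claude_bin="echo")
```

On v2.1.79+ the explicit stdin is no longer strictly necessary, but it remains a reasonable default for any subprocess that might read from stdin.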

Auth Outage — Widespread but Transient

Around midnight UTC on March 19, a wave of Japanese and international users reported being unable to log in to Claude Code — authentication failures, session expiry with no re-auth path, and general service degradation. Multiple independent reports (1, 2, 3) confirmed the problem was not isolated. The incident appears to have been transient, but it's a reminder to keep a fallback model available for time-sensitive work.
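Incidents like this argue for a thin fallback layer in anything time-sensitive. A hypothetical sketch of the pattern, where the provider names, callables, and simulated auth error are placeholders rather than a real SDK:

```python
def complete_with_fallback(prompt, providers):
    """Try each provider in order; move to the next when one fails
    (e.g. during an auth outage). `providers` maps a name to a callable
    that returns a completion string."""
    errors = {}
    for name, call in providers.items():
        try:
            return name, call(prompt)
        except Exception as exc:  # real code: catch the SDK's auth/API errors
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")

def _primary(prompt):
    # Stand-in for a provider mid-outage.
    raise PermissionError("simulated 401 auth failure")

providers = {
    "primary": _primary,
    "fallback": lambda p: f"echo: {p}",
}
used, text = complete_with_fallback("status check", providers)
```

The ordering of the dict doubles as the priority list, which keeps the routing policy visible in one place.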

Python & TypeScript SDKs: Filesystem Memory Tools Land

Both SDKs shipped minor-version bumps on the same day:

  • anthropic-sdk-python v0.86.0 — adds support for filesystem memory tools, enabling agents to persist and retrieve structured information across sessions using the local filesystem as a memory backend. This is a significant ergonomic improvement for long-running or multi-session agentic workflows.
  • anthropic-sdk-typescript v0.80.0 — manual API updates restoring parity with the Python release.

Practical impact: Filesystem memory tools mean you can build agents that remember state without standing up a database or vector store — a meaningful reduction in infrastructure complexity for prototyping and smaller production deployments.
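The SDK's actual interface may differ; as a minimal sketch of the underlying pattern, here is a JSON file used as a session-spanning memory backend. The class and method names are hypothetical illustrations, not the Anthropic SDK's API.

```python
import json
from pathlib import Path

class FileMemory:
    """Toy filesystem memory: persist structured key/value state across
    sessions in a single JSON file (illustrative, not the Anthropic SDK)."""

    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)

    def _load(self) -> dict:
        if self.path.exists():
            return json.loads(self.path.read_text())
        return {}

    def remember(self, key: str, value) -> None:
        state = self._load()
        state[key] = value
        self.path.write_text(json.dumps(state, indent=2))

    def recall(self, key: str, default=None):
        return self._load().get(key, default)

# Session 1 writes; a later session reads the same file.
mem = FileMemory("demo_memory.json")
mem.remember("project", {"name": "intellectus", "stack": "python"})
restored = FileMemory("demo_memory.json").recall("project")
```

The appeal is exactly what the release notes imply: no database, no vector store, just files that survive process restarts.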

Ecosystem: Tools & Workflows Worth Knowing

  • Oblien Runtime API — @OblienHQ is building isolated micro-VM networking for agent workloads, letting Claude Code, OpenClaw, and target apps communicate over an internal switch without sharing containers. The security-first isolation model addresses exactly the kind of permission-boundary problem Meta ran into.
  • vercel-labs/skills — @harshdesaiii highlights this 10.8k-star CLI for discovering and managing agent skills across 27 coding agents including Claude Code. npx skills find gives interactive search; agents can invoke find-skills as a meta-skill.
  • Slack MCP integration — @da_i_chi_dev describes a workflow where a Slack bug-report thread URL gets pasted directly into Claude Code with "fix" — no IDE switching, no manual code archaeology. Enabled by Slack MCP.
  • Obsidian as context layer — @Dave_Geoghegan_ makes the case for using Obsidian notes as persistent project context fed into Claude Code, so you stop re-explaining architecture decisions in every session.
  • Parallel Claude Code instances — @bittakeshi777 describes running ~9 Claude Code instances concurrently across projects inside Cursor, treating them as a parallelized team rather than a single assistant. The multi-agent pattern is moving from theory to daily practice.
  • Auto-compact memory degradation — @oopsyoutouchit documents the well-known problem where auto-compact causes context amnesia. The recommended mitigation: write critical state to CLAUDE.md before compact runs and trigger compact manually — though even this doesn't fully solve the degradation.
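The CLAUDE.md mitigation can be scripted. A hypothetical helper (CLAUDE.md is the real convention; the function name, section marker, and demo filename are assumptions) that snapshots critical state before a manual compact:

```python
from datetime import datetime, timezone
from pathlib import Path

MARKER = "## Session state (auto-snapshot)"

def snapshot_state(notes: list, claude_md: str = "CLAUDE.md") -> str:
    """Append critical working state to CLAUDE.md so it survives
    compaction. Replaces any previous snapshot section instead of
    growing the file on every run."""
    path = Path(claude_md)
    text = path.read_text() if path.exists() else ""
    base = text.split(MARKER)[0].rstrip()  # keep everything before an old snapshot
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    section = "\n\n".join([
        base,
        MARKER,
        f"Updated: {stamp}",
        "\n".join(f"- {n}" for n in notes),
    ]).lstrip()
    path.write_text(section + "\n")
    return section

snapshot_state(
    ["refactoring auth module", "tests green as of last run"],
    claude_md="demo_CLAUDE.md",
)
```

Run it (or have a hook run it) right before issuing a manual compact; as the original thread notes, this softens rather than eliminates the degradation.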

Worth Watching

  • Interactive AI agent course in 60 lines of Python — a community-built walkthrough that rebuilds the core of LangChain/CrewAI/AutoGen from scratch across 9 lessons. Good onboarding for engineers who want to understand what's under the frameworks before trusting them.
  • Repeatable AI workflow for consistent codegen — a GitHub repo documenting a structured, programmable prompt workflow designed to break the "good prompt → patch prompt → standards break" loop that plagues daily AI-assisted development.
  • Claude Academy: Claude Code in Action — Anthropic's official course is completable in a few hours and covers fundamentals well, but assumes Git and CLI familiarity. Worth pointing new team members toward after they've watched a few demo videos.
  • Weber Electrodynamic Optimizer for ML — a fork of karpathy/autoresearch applying Weber's force law as an optimizer and adding SDR hardware entropy as a randomness source. Niche, but the kind of creative cross-domain experiment worth keeping an eye on.
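The core those agent frameworks share is a short loop: the model either answers or names a tool, the runtime executes the tool, and the result is fed back into the transcript. A stripped-down sketch of that loop, where the fake model and tool registry are stand-ins rather than the course's actual code:

```python
def agent_loop(model, tools, task, max_steps=5):
    """Minimal agent core: `model` returns either ("final", answer) or
    ("tool", name, args); tool results are appended to the transcript."""
    transcript = [("task", task)]
    for _ in range(max_steps):
        action = model(transcript)
        if action[0] == "final":
            return action[1]
        _, name, args = action
        result = tools[name](*args)       # execute the requested tool
        transcript.append(("tool_result", name, result))
    raise RuntimeError("step budget exhausted")

# Stand-in "model": asks for an addition, then answers with the result.
def fake_model(transcript):
    last = transcript[-1]
    if last[0] == "task":
        return ("tool", "add", (2, 3))
    return ("final", f"the sum is {last[2]}")

answer = agent_loop(fake_model, {"add": lambda a, b: a + b}, "add 2 and 3")
```

Everything the larger frameworks add — retries, memory, parallelism, typed tool schemas — is layered on top of a loop shaped like this one.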