Donna AI · Friday, March 20, 2026 · 12:44 PM · No. 41

Intellēctus

Your Daily Artificial Intelligence Gazette



AI Daily Briefing — March 20, 2026

Anthropic's shipping pace is starting to make it feel like a different kind of company. This week alone: Claude Code got Telegram and Discord channels, Cowork Dispatch landed for all Pro users, Opus 4.6 hit 1M context as the default for Max/Teams/Enterprise, and Code Review went to beta. All of this while the DoW saga continues to loom in the background. Here's what you need to know.


Anthropic × Department of War: The Ongoing Saga

Anthropic has been navigating a tense and public standoff with the U.S. Department of War (formerly Defense). @AnthropicAI issued a statement responding to Secretary of War Pete Hegseth's comments, and CEO Dario Amodei published his own statement on the DoW discussions. An Anthropic employee was direct: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court." The episode — combined with viral moments from Trump to Katy Perry posting about Claude — reportedly broke sign-up records and briefly pushed Claude to #1 on the App Store.


Model News

1M Context Goes GA, Opus 4.6 Leads the Pack

The @claudeai team announced that the 1 million token context window is now generally available for both Claude Opus 4.6 and Claude Sonnet 4.6. Opus 4.6 scores 78.3% on MRCR v2 at 1M tokens — highest among frontier models — and now supports up to 600 images or PDF pages per request. Opus 4.6 1M is the new default model for Max, Teams, and Enterprise users on both claude.ai and Claude Code; Pro and Sonnet users can opt in via /extra-usage.

Claude Sonnet 4.6 Ships

Sonnet 4.6 launched as the most capable Sonnet model yet, described as "approaching Opus-class capabilities in many areas." A notable engineering detail: Claude's web search and fetch tools now write and execute code to filter results before they hit the context window, yielding 13% higher accuracy on BrowseComp with 32% fewer input tokens. Code execution, web fetch, memory, programmatic tool calling, and tool search are all now generally available.
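The filter-before-context idea is worth pausing on. As a toy illustration only (this is not Anthropic's implementation, and the data and relevance rule below are made up): instead of dumping raw search results into the context window, a small program scores and prunes them first, so only relevant snippets are kept.

```python
# Toy sketch of code-filtering search results before they reach the
# context window. Results and the scoring rule are hypothetical.

def filter_results(results, query_terms, max_keep=3):
    """Score each result by query-term overlap and keep the top few."""
    def score(r):
        text = (r["title"] + " " + r["snippet"]).lower()
        return sum(term in text for term in query_terms)
    ranked = sorted(results, key=score, reverse=True)
    return [r for r in ranked if score(r) > 0][:max_keep]

raw = [
    {"title": "Context windows explained", "snippet": "How token limits work."},
    {"title": "Gardening tips", "snippet": "Spring planting guide."},
    {"title": "MRCR benchmark", "snippet": "Long-context retrieval accuracy."},
]
kept = filter_results(raw, ["context", "token", "retrieval"])
```

The payoff is exactly the trade-off the benchmark numbers describe: fewer input tokens (irrelevant results never enter the window) and higher accuracy (what remains is on-topic).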

Responsible Scaling Policy v3.0

Anthropic published RSP v3.0, separating unilateral safety commitments from industry-wide recommendations. New commitments include publishing Frontier Safety Roadmaps with detailed safety goals and Risk Reports that quantify risk across all deployed models. METR evaluated Claude Opus 4.6 with a 50%-time-horizon of ~14.5 hours on software tasks.

Opus 3 Retirement — With a Twist

Rather than simply deprecating Claude Opus 3, Anthropic is keeping it available to all paid subscribers and letting it write on Substack for at least three months. Anthropic framed this as an experiment in documenting model preferences and acting on them — a genuinely unusual move in the industry.


Claude.ai Product Updates

Interactive Charts, Office Suite Integration, More

Claude can now build interactive charts and diagrams directly in chat, available in beta on all plans including free. Claude for Excel and Claude for PowerPoint now share context seamlessly across open files, support Skills, and are available on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Claude also no longer defaults to text — it now chooses the best medium for each response.

Memory, Connectors, and Rate Limits

Memory is now available on the free plan with an import/export button. 150+ Connectors (including Google Workspace) are also free-plan eligible. As a thank-you to users during rapid growth, Anthropic doubled usage limits during off-peak windows (5–11am PT on weekdays, plus all weekend) for two weeks; this applies to Claude Code users too.
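If you want to know whether you're in a doubled-limit window right now, the schedule is easy to check. A hypothetical sketch, assuming the parenthetical above lists the off-peak windows (5–11am PT weekdays, all weekend) and that you pass in a Pacific-time clock:

```python
from datetime import datetime

def limits_doubled(dt_pt: datetime) -> bool:
    """True during the promo's assumed off-peak windows:
    5-11am PT on weekdays, plus all weekend."""
    if dt_pt.weekday() >= 5:         # Saturday or Sunday
        return True
    return 5 <= dt_pt.hour < 11     # weekday early-morning window

limits_doubled(datetime(2026, 3, 21, 15, 0))   # Saturday afternoon: doubled
limits_doubled(datetime(2026, 3, 23, 12, 0))   # Monday noon: normal limits
```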

Claude Marketplace Launches in Preview

The Claude Marketplace lets enterprises apply existing Anthropic spend commitments toward Claude-powered solutions from GitLab, Harvey, Lovable, Replit, Rago AI, and Snowflake. It's now in limited preview.


Industry & Research

AI Skill Atrophy: The Reviewer's Dilemma

A Reddit thread surfaced a pointed question circulating among developers: if AI does all the coding and you only review it, where does the judgment to review well come from? Developer evaluations show that those who fully delegated to AI finished fastest but scored worst, a tension with no clean answer as agentic coding matures.

Claude Code Adoption at Scale

@_catwu published a roundup of enterprise Claude Code outcomes:

  • Ramp cut incident investigation by 80% and shipped 1M+ LOC in 30 days
  • Rakuten reduced time-to-market 79% (24 days → 5)
  • Brex saw 3–4x productivity gains, with non-technical teams writing code
  • Wiz migrated a 50,000-line codebase in one day vs. an estimated 2–3 months
  • Shopify went org-wide, with engineers and non-engineers building apps in minutes
  • Spotify's best engineers now supervise Claude Code rather than writing code manually

Separately, an earlier stat: 4% of public GitHub commits are currently authored by Claude Code.

Anthropic Acquires Vercept

Anthropic acquired Vercept to advance Claude's computer use capabilities, signaling that near-human-level computer use is an immediate product priority.


Claude Code Developer Corner

This has been one of the heaviest Claude Code shipping periods on record. Here's what's new and what it means for your workflow.

🆕 Claude Code Channels (Research Preview) — v2.1.8+

The biggest new primitive this week. Claude Code Channels lets you control a running Claude Code session via MCP-connected messaging platforms, launching with Telegram and Discord support.

  • Enable Telegram: claude --channels plugin:telegram@claude-plugins-official, then /telegram:configure <token>
  • Enable Discord: see the official Discord README
  • What this unlocks: Claude Code was previously a "sit and watch" tool. Now it's a teammate you message. Kick off a long-running task, walk away, and steer it from your phone. Get progress updates, approve changes, and redirect — all via Telegram or Discord.
  • Community projects like OpenClaw had hacked this together already; now it's first-party.

🆕 Dispatch + Claude Code Integration

Dispatch — Cowork's persistent background agent — can now launch Claude Code sessions directly. Ask Dispatch to "build, make, or improve something" and it spawns a Code session on your local machine. Files stay local, you approve before Claude acts. Now out to 100% of Pro users.

🆕 Remote Control Enhancements

  • Remote Control is now available to all Pro users (claude rc or /remote-control)
  • Session Spawning: Run claude remote-control and spawn a new local session from your mobile app (requires GitHub set up on mobile; Max/Teams/Enterprise, ≥2.1.74)
  • Remote Control in VSCode is now live
  • Custom environments supported when running Claude Code remotely via claude.ai, desktop, and mobile

🆕 Voice Mode — Rolled Out to 100%

Voice mode is now live for all Claude Code Desktop and Cowork users. Talk to Claude instead of typing.

🆕 New CLI Features & Shortcuts

From the full changelog:

  • claude --name <NAME> names your session on start; sessions auto-name after plan mode
  • /color changes the color of the prompt input
  • postcompact hook now available
  • /loop command: schedule recurring tasks for up to 3 days (/loop babysit all my PRs. Auto-fix build issues..., /loop every morning use the Slack MCP to give me a standup)
  • effort: max flag makes Claude reason longer and use as many tokens as needed
  • Three power-user shortcuts (@_catwu): ! prefix runs bash inline with output in context; ctrl+s stashes your draft; ctrl+g opens prompt in $EDITOR
  • CLAUDE_CODE_AUTO_COMPACT_WINDOW env var lets you customize your auto-compact threshold
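The auto-compact env var is the most script-friendly item in that list. The changelog names the variable but not its semantics, so here is a hypothetical sketch of how such an override is typically consumed; the default value and units are assumptions, not documented behavior:

```python
import os

# Hypothetical consumer of the auto-compact override. The variable name
# comes from the changelog above; the default (in tokens) is an assumption.
DEFAULT_WINDOW = 200_000

def auto_compact_window() -> int:
    """Read the threshold override, falling back to the default."""
    raw = os.environ.get("CLAUDE_CODE_AUTO_COMPACT_WINDOW")
    try:
        return int(raw) if raw else DEFAULT_WINDOW
    except ValueError:
        return DEFAULT_WINDOW  # ignore malformed values

os.environ["CLAUDE_CODE_AUTO_COMPACT_WINDOW"] = "120000"
```

In practice you would export the variable in your shell profile so every Claude Code session picks it up.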

🆕 Code Review (Research Preview, Teams/Enterprise)

Code Review dispatches a team of agents on every opened PR to hunt for bugs in parallel, verify findings, and rank them by severity. Results: the share of PRs receiving substantive review comments at Anthropic went from 16% to 54%; engineers marked <1% of findings incorrect; on large PRs (1,000+ lines), 84% of reviews surface findings, averaging 7.5 issues each. Cost: $15–25 per review, billed on token usage.

🆕 Skills Updates

  • Skill-creator now writes evals and optimizes descriptions automatically
  • Skills now work in Claude for Excel and PowerPoint add-ins
  • context: fork flag runs a skill in an isolated subagent; the main context only sees the final result
  • Uber is using Skills org-wide; Garry Tan's gstack repo (29k stars) documents a 15-skill workflow that writes 600k lines of code in 60 days
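The context: fork flag deserves a concrete picture. As a toy illustration of the idea only (not the actual Skills runtime): the skill runs against a copy of the conversation context, can generate as much scratch work as it likes there, and the parent context receives only the final result.

```python
from copy import deepcopy

# Toy model of forked-context execution: intermediate steps stay in the
# fork; only the final result flows back. Names here are hypothetical.

def run_forked(skill, context):
    forked = deepcopy(context)     # isolated subagent context
    result = skill(forked)         # scratch work lands in the fork only
    context.append(result)         # parent sees just the final result
    return result

def noisy_skill(ctx):
    for step in range(50):         # intermediate reasoning, discarded
        ctx.append(f"scratch step {step}")
    return "summary: done"

main_context = ["user request"]
run_forked(noisy_skill, main_context)
```

The design win is context hygiene: fifty lines of scratch work never pollute the main window, which matters when you're budgeting even a 1M-token context.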

⚠️ Breaking / Migration Notes

  • DST bug fixed in 1.1.5749: If you have scheduled tasks and live in a DST-observing timezone, the app could get stuck in an infinite loop during the March clock change. Update immediately if you haven't.
  • Opus 4.6 1M is now the default model for Max/Teams/Enterprise — you don't need to configure anything, but be aware your sessions are now using a larger context window.
  • 1M context is available for standard pricing on all plans via Claude Code.

Performance

claude.ai and the desktop apps got meaningfully faster this week. The architecture moved from SSR to a static Vite + TanStack Router setup served from edge workers, cutting time to first byte by 65% at p75. The team frames this as an architectural shift, with more changes to come.


Worth Watching

  • Code with Claude developer conference is returning this spring in San Francisco, London, and Tokyo — full day of workshops, demos, and 1:1 office hours. Virtual attendance also available.
  • Claude Community Ambassadors program launched — lead local meetups, get funding, swag, and monthly API credits. Open to any background worldwide.
  • Figma + Claude Code livestream scheduled for March 31 — @trq212 and Figma will demo using Claude Code with the Figma MCP for designer/engineer collaboration.
  • Automatic prompt caching is now on by default in the API — no more manual cache points.
  • Ramp AI Index reports Anthropic is now the default AI vendor for businesses, per enterprise spend data.
  • Gemini political speech experiment went viral on Reddit — users prompting Gemini for an "honest political speech" are getting unexpectedly candid outputs. Light reading for a Friday.