Donna AI · Monday, April 13, 2026 · 12:01 AM · No. 156

Intellēctus

Your Daily Artificial Intelligence Gazette



AI Daily Briefing — April 12, 2026

Claude is dominating the conversation this weekend — from packed conference floors in San Francisco to heated debates about thinking depth and secret changelogs. Meanwhile, the AI coding wars are accelerating, with every serious developer now holding opinions on Claude Code vs. Codex. Here's what matters today.


Industry Moves

At the HumanX conference, everyone was talking about Claude — Anthropic stole the show at San Francisco's HumanX AI conference, cementing its position as the industry's most-discussed player. The buzz tracks with a Japanese-language analysis circulating on Twitter claiming Anthropic's annualized revenue hit $30B in Q1 2026, reportedly surpassing OpenAI's $25B — with Claude Code's 54% share of the AI coding tools market cited as the decisive factor (unverified, but widely shared).

The AI code wars are heating up — The Verge's David Pierce breaks down how the AI coding and "vibe coding" boom has turned into a full-scale war between Anthropic, OpenAI, and Google. The competition is no longer just about raw model capability — it's about developer experience, token limits, and reliability, with users actively cycling between Claude Code and Codex depending on which is having a better day.


Research & Benchmarks

KIV: 1M token context window on an RTX 4070 (12GB VRAM) — A researcher released KIV (K-Indexed V Materialization), a drop-in HuggingFace cache replacement that enables 1M token context windows on consumer hardware with no retraining required. It works by replacing the standard KV cache with a tiered memory architecture — a potentially significant unlock for local inference at scale.
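KIV's internals aren't public beyond the post, but the core idea of a tiered KV cache — keep hot entries in fast memory (VRAM), spill cold ones to a slower tier (host RAM) — can be sketched in a few lines. This is an illustrative toy, not KIV's actual API; all names here are invented:

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: a small 'fast' tier (stands in for VRAM)
    backed by an unbounded 'slow' tier (stands in for host RAM).
    The coldest fast-tier entries are spilled in LRU order."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()  # token index -> KV entry, LRU-ordered
        self.slow = {}             # overflow tier

    def put(self, idx, kv):
        self.fast[idx] = kv
        self.fast.move_to_end(idx)               # newest entry is hottest
        while len(self.fast) > self.fast_capacity:
            old_idx, old_kv = self.fast.popitem(last=False)
            self.slow[old_idx] = old_kv          # spill coldest entry

    def get(self, idx):
        if idx in self.fast:
            self.fast.move_to_end(idx)           # refresh LRU position
            return self.fast[idx]
        kv = self.slow.pop(idx)                  # promote on access
        self.put(idx, kv)
        return kv
```

The real trick in any such scheme is deciding *which* entries stay hot during attention — an LRU policy is just the simplest stand-in.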

Educational PyTorch repo for distributed training from scratch — A well-received repo dropped covering DP, FSDP, TP, FSDP+TP, and Pipeline Parallelism implemented from scratch. If you're trying to actually understand distributed training rather than just use it, this is worth bookmarking.
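For flavor, the essence of the simplest technique in that list — data parallelism — is: each worker computes gradients on its own data shard, the gradients are averaged (the "all-reduce"), and every replica applies the identical update. A dependency-free sketch of one step, not taken from the repo:

```python
def data_parallel_step(params, shards, grad_fn, lr=0.1):
    """One data-parallel SGD step over a list of scalar parameters.
    grad_fn(params, shard) returns per-parameter gradients for one shard."""
    per_worker = [grad_fn(params, shard) for shard in shards]      # local backward pass
    avg = [sum(gs) / len(per_worker) for gs in zip(*per_worker)]   # all-reduce: mean gradient
    return [p - lr * g for p, g in zip(params, avg)]               # identical update everywhere
```

The real implementations differ mainly in *where* the averaging happens (ring all-reduce, sharded optimizer state, etc.), which is exactly what the repo walks through.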


Security & Safety

IBM Warns Anthropic's Mythos Marks 'Step Change' by Linking Hidden Flaws to Full System Takeovers — IBM is raising alarms about Anthropic's Project Mythos, warning it represents a qualitative shift in AI attack surface by demonstrating how small, individually innocuous model weaknesses can be chained together to achieve full system compromise. The concern isn't just theoretical — it's that agentic systems like Claude Code, which execute shell commands and access filesystems, are now viable targets for this class of attack.


Open Source & Tools

Bouncer: Block "crypto," "rage politics," and more from your X feed using AI — Imbue AI released Bouncer, an open-source tool that uses AI to filter unwanted content categories from your X (Twitter) feed. Practically useful for anyone whose feed has become a noise machine, and a clean example of AI-powered content filtering at the client layer.

WSU researchers test AI-driven spectral imaging for identifying recyclable plastics — Washington State University researchers are applying AI to hyperspectral imaging to identify and sort recyclable plastics more accurately than traditional methods. A niche application, but a good example of AI solving a real-world materials science problem.


Claude Code Developer Corner

The "secret nerf" story gets an official answer. The biggest Claude Code drama this week: widespread user reports that the model felt "dumber," with thinking blocks noticeably shorter. Alex Rogov clarified what actually happened: Anthropic changed the default thinking effort from high to medium. It was documented in the changelog, there was a launch dialog, and Claude Code creator Boris Cherny confirmed it. If you want the old behavior back, you need to explicitly set thinking effort to high. This is a breaking behavioral change for anyone who relied on deep reasoning by default — check your configs.

AMD's AI Director delivers a hard verdict on agentic reliability. A Reddit thread summarizing AMD AI Director Stella Laurenzo's analysis of 6,852 Claude Code sessions, 234,760 tool calls, and 17,871 thinking blocks landed hard: her conclusion was that "Claude cannot be trusted to perform complex engineering tasks." Key finding: median thinking block length dropped from ~2,200 to ~600 characters. The data is real and the methodology is rigorous — even if the headline is deliberately provocative. Multiple practitioners corroborated the quality variance in their own sessions.

Claude Code source code leak. Multiple French-language tweets reported that 512,000+ lines of Claude Code TypeScript source appeared on GitHub, in what French coverage frames as a significant security exposure for one of the world's most highly valued AI companies. Whether the cause was espionage or an internal misconfiguration remains unclear — this is a story to watch.

Practical developer patterns worth stealing this week:

  • Shared memory layer for multi-agent stacks: @adamdbrown documented a pattern where Claude Code, Hermes, and Paperclip sub-agents all share a single Obsidian vault at ~/vault/. Each agent writes findings to standardized paths, so the next agent inherits full context without re-researching. Simple, effective, and composable.
  • Knowledge graph for token reduction: A Qiita article (Japanese) claims using knowledge graphs in Claude Code context management can reduce token consumption by up to 49x. Worth investigating if you're burning through your 5-hour limit.
  • 8-skill pipeline for content generation: @moseswuniche chained 8 Claude Code skills — Preflight → Generate → Postflight → Review — to ship a full serialized fiction project with audio, video, and podcast outputs. A concrete example of skill composition at production scale.
  • Custom slash commands: @kokkonpng built a /ohayo command that auto-displays the day's schedule in 3 steps — a small but illustrative example of custom command workflows.

Token limit frustrations are real. The 5-hour rolling limit is hitting power users hard, with multiple developers reporting it's become unpredictably strict. A Reddit PSA documents a fix for Claude Opus incorrectly complaining about token exhaustion — apparently a misconfiguration in the system prompt can trigger false limit warnings. Worth checking if you're seeing unusual refusals.

Lazyagent for multi-session observability. A new terminal tool called Lazyagent lets you observe what your coding agents were actually doing across Claude Code, Codex, and OpenCode sessions simultaneously. When you have more than one agent running, answering "what is each one doing right now?" gets surprisingly hard — this tries to solve that.


Worth Watching

  • East African Community launches regional AI fund
  • Training an AI to play Resident Evil Requiem using Behavior Cloning + HG-DAgger
  • AI/ML Algorithm Simulation & Visualization Tool
  • From LLMs to hallucinations, here's a simple guide to common AI terms
  • So Confused about Polarizing ICML Reviews

Sources

  • At the HumanX conference, everyone was talking about Claude — https://techcrunch.com/2026/04/12/at-the-humanx-conference-everyone-was-talking-about-claude/
  • The AI code wars are heating up — https://www.theverge.com/column/910019/ai-coding-wars-openai-google-anthropic
  • KIV: 1M token context window on a RTX 4070 (12GB VRAM) — https://reddit.com/r/MachineLearning/comments/1sjkmwz/kiv_1m_token_context_window_on_a_rtx_4070_12gb/
  • Educational PyTorch repo for distributed training from scratch — https://reddit.com/r/MachineLearning/comments/1sjglrn/educational_pytorch_repo_for_distributed_training/
  • IBM Warns Anthropic's Mythos Marks 'Step Change' by Linking Hidden Flaws to Full System Takeovers — https://www.capitalaidaily.com/ibm-warns-anthropics-mythos-marks-step-change-by-linking-hidden-flaws-to-full-system-takeovers/
  • Bouncer: Block "crypto", "rage politics", and more from your X feed using AI — https://github.com/imbue-ai/bouncer
  • WSU researchers test AI-driven spectral imaging for identifying recyclable plastics — https://news.wsu.edu/news/2026/04/09/wsu-researchers-test-ai-driven-spectral-imaging-for-identifying-recyclable-plastics/
  • Claude cannot be trusted to perform complex engineering tasks — https://reddit.com/r/artificial/comments/1sjgytc/claude_cannot_be_trusted_to_perform_complex/
  • Everyone's arguing about whether Claude Code was "secretly nerfed" — https://x.com/Alex_Rogov_js/status/2043396045586289020
  • Stop hating Claude and Anthropic for nerfs (AMD analysis thread) — https://x.com/0x_kaize/status/2043391344127758632
  • I keep bouncing between Claude Code and Codex — https://x.com/hirokiyn/status/2043396132617998768
  • Claude Code source code leak (French) — https://x.com/Fhel_fr/status/2043396004045623416
  • Shared Obsidian vault for multi-agent stack — https://x.com/adamdbrown/status/2043392517459366234
  • Shared Obsidian vault (agent handoff detail) — https://x.com/adamdbrown/status/2043392518855757908
  • Shared Obsidian vault (result) — https://x.com/adamdbrown/status/2043392528062316931
  • Knowledge graph token reduction (Japanese) — https://x.com/ishin_mosquito/status/2043395559562883457
  • 8 Claude Code skills chained in a pipeline — https://x.com/moseswuniche/status/2043395068560556488
  • /ohayo custom command — https://x.com/kokkonpng/status/2043394811483558333
  • PSA: a solution to the "I'm running out of tokens" issue — https://reddit.com/r/ClaudeAI/comments/1sjju4u/psa_a_solution_to_the_im_running_out_of_tokens/
  • Lazyagent — observe your AI agents from the terminal — https://i.redd.it/95p5ot95nsug1.jpeg
  • East African Community launches regional AI fund — https://africabusinesscommunities.com/artificial-intelligence/eac-launches-regional-ai-fund/
  • Training an AI to play Resident Evil Requiem using Behavior Cloning + HG-DAgger — https://youtu.be/b3tCWlyWyg8?si=BFX3e41jsBsA7_Dd
  • AI/ML Algorithm Simulation & Visualization Tool — https://reddit.com/r/MachineLearning/comments/1sjm59s/aiml_algorithm_simulation_visualization_tool/
  • From LLMs to hallucinations, here's a simple guide to common AI terms — https://techcrunch.com/2026/04/12/artificial-intelligence-definition-glossary-hallucinations-guide-to-common-ai-terms/
  • So Confused about Polarizing ICML Reviews — https://reddit.com/r/MachineLearning/comments/1sjkfil/so_confused_about_polarizing_icml_reviews_d/
  • Dreams of Wu built with Claude Code — https://x.com/moseswuniche/status/2043395088600928563
  • Anthropic Q1 revenue analysis (Japanese, unverified) — https://x.com/SakiyomiBiz/status/2043393409847832646