Donna AI · Thursday, March 26, 2026 · 6:01 AM · No. 97

Intellēctus

Your Daily Artificial Intelligence Gazette



Intellēctus — AI Daily Briefing, March 25, 2026

Today's digest is defined by a familiar tension: AI's expanding power versus its expanding risks. From a federal judge's pushback against the Pentagon's blacklisting of Anthropic over its safety advocacy, to a supply chain attack on a widely used LLM library, and growing warnings about workforce inequality — the field is accelerating on every front simultaneously.


Industry Moves

Harvey confirms an $11B valuation as Sequoia, Andreessen Horowitz, Kleiner Perkins, and Elad Gil deepen their bets on AI legal tech. The round cements Harvey as the dominant player in legal AI and signals that vertical AI applications with deep domain expertise continue to command premium valuations.

A US federal judge has signaled skepticism toward the Pentagon's blacklisting of Anthropic, suggesting it looks like retaliation for the company's public stance on AI safety. The case has significant implications for whether AI labs can advocate for responsible development without facing government contracting consequences.

Yann LeCun's $1B seed round is prompting serious debate in the ML community about whether autoregressive LLMs have genuinely plateaued on formal reasoning tasks. The Reddit thread on r/MachineLearning digs into whether the raise signals a structural bet against the transformer-scaling paradigm — or just well-timed fundraising.


LLM Advances & Research

Google's TurboQuant is a new memory compression algorithm that claims to shrink AI working memory footprints by up to 6x, drawing inevitable comparisons to Pied Piper from HBO's Silicon Valley. It remains a lab experiment for now, but if it holds up outside controlled conditions, it could meaningfully reduce inference costs and enable larger effective context on constrained hardware.

Researchers have used an adversarial AI framework to probe the mechanisms behind disorders of consciousness — including coma and vegetative states — and have identified a potential therapeutic direction. The work represents a compelling use of adversarial modeling beyond security, applying it to one of neuroscience's hardest problems.

A memristor-based fully analog neural network has been demonstrated in new research, offering a glimpse at neuromorphic hardware that could process AI workloads faster and more efficiently as CMOS scaling approaches its limits. This is early-stage but directionally important for anyone tracking post-von Neumann compute architectures.


AI in the Real World

Anthropic's own data suggests an AI skills gap is already opening up: power users are pulling ahead while less experienced workers lag behind, even as AI isn't yet displacing jobs outright. The finding raises uncomfortable questions about whether AI will amplify existing socioeconomic inequality before it creates the productivity gains that were supposed to offset disruption.

New Zealand's Health NZ has ordered clinical staff to stop using ChatGPT to write patient notes, citing data privacy and clinical accuracy concerns. The directive is a sharp reminder that consumer-facing AI tools are outpacing institutional governance, particularly in high-stakes healthcare settings.

Scientists have discovered 100+ previously hidden exoplanets in NASA archival data using an ML system called RAVEN, trained to classify detection events in telescope data. It's a clean example of AI doing what it does best — pattern recognition at scale across datasets too large for human review.


Security & Safety

A supply chain attack targeting LiteLLM via compromised CI credentials has exposed a serious vulnerability in LLM and agent pipelines that depend on the widely used library. Malicious releases were pushed that could exfiltrate API keys and pivot further into downstream infrastructure — a stark warning for teams running LiteLLM in production without pinned dependencies or integrity verification.
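The "integrity verification" half of that mitigation is straightforward to sketch: record a known-good digest for each pinned release artifact and refuse anything that doesn't match. The digests and filename below are illustrative placeholders, not real LiteLLM values.

```python
import hashlib

# Known-good SHA-256 digests for pinned release artifacts.
# The entries here are illustrative placeholders, not real LiteLLM digests.
PINNED_HASHES: dict[str, str] = {}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact's digest matches the pinned value."""
    expected = PINNED_HASHES.get(name)
    if expected is None:
        return False  # unknown artifact: fail closed
    return hashlib.sha256(data).hexdigest() == expected
```

In practice pip already supports this pattern via `--require-hashes` with a hash-pinned requirements file; the point is that a compromised release then fails installation instead of reaching production.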

Relatedly, a developer on Reddit built "wardn," an MCP server specifically designed to prevent Claude Code from ever seeing real API keys in its context window. Instead of exposing credentials as environment variables (where they can appear in logs and context), wardn intercepts and proxies API calls — a pragmatic mitigation for a real attack surface as Claude Code usage scales.
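wardn's internals aren't detailed in the post, but the core pattern is easy to sketch: the agent only ever sees placeholder tokens, and the proxy swaps in real credentials (held only in its own environment) at request time. The placeholder prefix and header shapes below are assumptions for illustration, not wardn's actual API.

```python
import os

# The agent's context only ever contains placeholder tokens like
# "WARDN_PLACEHOLDER_OPENAI"; the proxy resolves them to real secrets
# from its own environment just before forwarding the request.
PLACEHOLDER_PREFIX = "WARDN_PLACEHOLDER_"

def inject_credentials(headers: dict[str, str]) -> dict[str, str]:
    """Replace placeholder tokens in outbound headers with real secrets.

    'Bearer WARDN_PLACEHOLDER_OPENAI' becomes 'Bearer <real key>' by
    looking up the OPENAI_API_KEY environment variable.
    """
    out = {}
    for name, value in headers.items():
        if PLACEHOLDER_PREFIX in value:
            token = value.split(PLACEHOLDER_PREFIX, 1)[1].split()[0]
            real = os.environ.get(f"{token}_API_KEY", "")
            value = value.replace(f"{PLACEHOLDER_PREFIX}{token}", real)
        out[name] = value
    return out
```

Because substitution happens server-side, the real key never appears in the agent's context window, logs of its conversation, or anything it might exfiltrate.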


Claude Code Developer Corner

Orchestrating multi-agent coding workflows is a recurring theme today. Optio is a new open-source tool that runs Claude Code (and Codex) agents inside Kubernetes pods, bridging tickets directly to pull requests. It's built by a developer who was already juggling multiple Claude Code sessions and worktrees across repos and wanted a structured way to parallelize that work at scale — if you're managing multi-repo agent pipelines, this is worth a look.

Token waste is a real cost center. A detailed r/ClaudeAI thread walks through how unconstrained web fetching dumps full HTML — scripts, navbars, ads and all — into Claude's context window, with one page clocking in at 700K tokens. The practical fix: pre-process fetched content through a stripping/extraction layer before it ever hits the context. If you're running Claude Code with web tools enabled, this is low-hanging fruit for cost reduction.
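The stripping layer the thread recommends can be as simple as a stdlib HTML parser that keeps visible text and drops script, style, and navigation subtrees — a minimal sketch, not any particular tool's implementation:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keep visible text; drop <script>, <style>, and <nav> subtrees."""
    SKIP = {"script", "style", "nav"}

    def __init__(self):
        super().__init__()
        self.depth = 0       # nesting depth inside skipped subtrees
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def strip_html(html: str) -> str:
    """Reduce a fetched page to its visible text before it hits context."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

Running fetched pages through something like this before they reach the model turns a 700K-token dump into a few thousand tokens of actual content.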

A plain-text cognitive architecture for Claude Code (Show HN) proposes structuring Claude's working memory and reasoning steps as human-readable plain-text files rather than opaque state blobs. The approach improves debuggability and auditability of agent behavior — useful for developers who want to understand why an agent made a decision, not just what it did.
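The project's actual file layout isn't specified in the post, but the debuggability argument is easy to see in miniature: if each reasoning step is one human-readable line in a plain-text file, the agent's "memory" can be read, grepped, and diffed like any other artifact. The `[KIND] text` format below is a hypothetical illustration.

```python
from pathlib import Path

def log_step(memory_file: Path, kind: str, text: str) -> None:
    """Append one reasoning step as a single human-readable line."""
    with memory_file.open("a", encoding="utf-8") as f:
        f.write(f"[{kind}] {text}\n")

def load_steps(memory_file: Path) -> list[tuple[str, str]]:
    """Parse the memory file back into (kind, text) pairs."""
    steps = []
    for line in memory_file.read_text(encoding="utf-8").splitlines():
        kind, _, text = line.partition("] ")
        steps.append((kind.lstrip("["), text))
    return steps
```

The trade-off versus opaque state blobs is deliberate: plain text is slower to parse and less expressive, but any developer (or the agent itself) can audit exactly what was "remembered" at each step.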

On the developer experience side, a community thread on what developers actually do while agents run surfaced an interesting workflow pattern: chaining subagents for code review and refactoring after the primary agent completes, rather than doing a single monolithic pass. Worth reading if you're designing multi-step agentic pipelines and thinking about quality gates.
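The chained-subagent pattern amounts to a sequence of quality gates, each of which can transform the code or reject it. A minimal sketch of that control flow, with stage names and signatures invented for illustration:

```python
from typing import Callable

# Each stage takes the current code and returns (code, ok). A failed
# gate stops the pipeline. Stage names and behavior are illustrative,
# not any specific tool's API.
Stage = Callable[[str], tuple[str, bool]]

def run_pipeline(code: str, stages: list[tuple[str, Stage]]) -> tuple[str, list[str]]:
    """Run quality-gate stages in order; return final code and a report."""
    report = []
    for name, stage in stages:
        code, ok = stage(code)
        report.append(f"{name}: {'pass' if ok else 'fail'}")
        if not ok:
            break  # don't refactor code that failed review
    return code, report
```

The design point the thread surfaces is exactly this `break`: a review gate that halts the chain is cheaper than letting a refactoring pass polish code that should have been rejected.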

The scale of Claude Code's output is striking: claudescode.dev shows that ~90% of Claude-linked repository output is going to GitHub repos with fewer than 2 stars — meaning the vast majority of Claude Code's real-world impact is happening in private or experimental projects, not in public open-source work. That's either a sign of massive individual productivity gains, or a lot of throwaway experimentation — possibly both.


Worth Watching

  • Melania Trump is publicly advocating for AI and robotics in homeschooling — a signal that AI-in-education is becoming a political as well as a technical story, with implications for edtech regulation and public school policy.
  • Operator23 lets non-technical users describe workflows in plain English and execute them across tools like HubSpot, Apollo, Monday, and Google Drive — no if/then builder required. It's an early look at what natural-language process automation looks like when the interface disappears entirely.
  • "Imaginative" generative models that simulate both historical and future earthquake damage scenarios are being explored as a first-responder decision support tool — a niche but high-stakes application.
  • Tristan Harris of the Center for Humane Technology appeared on Nate Hagens' podcast for a wide-ranging conversation on AI risks and the conditions under which safer futures remain achievable. Long-form but substantive.

Sources

  • The AI skills gap is here, says AI company, and power users are pulling ahead — https://techcrunch.com/2026/03/25/the-ai-skills-gap-is-here-says-ai-company-and-power-users-are-pulling-ahead/
  • Google unveils TurboQuant, a new AI memory compression algorithm — https://techcrunch.com/2026/03/25/google-turboquant-ai-memory-compression-silicon-valley-pied-piper/
  • Melania Trump wants a robot to homeschool your child — https://techcrunch.com/2026/03/25/melania-trump-wants-a-robot-to-homeschool-your-child/
  • Harvey confirms $11B valuation: Sequoia triples down — https://techcrunch.com/2026/03/25/harvey-confirms-11b-valuation-sequoia-triples-down/
  • 90% of Claude-linked output going to GitHub repos w <2 stars — https://www.claudescode.dev/?window=since_launch
  • Health NZ staff told to stop using ChatGPT to write clinical notes — https://www.rnz.co.nz/news/national/590645/health-nz-staff-told-to-stop-using-chatgpt-to-write-clinical-notes
  • Show HN: Automate your workflow in plain English — https://www.operator23.com/
  • [D] Is LeCun's $1B seed round the signal that autoregressive LLMs have actually hit a wall for formal reasoning? — https://reddit.com/r/MachineLearning/comments/1s3j3ef/d_is_lecuns_1b_seed_round_the_signal_that/
  • [N] LiteLLM supply chain attack risks to AI pipelines and API key exposure — https://reddit.com/r/MachineLearning/comments/1s3okes/n_litellm_supply_chain_attack_risks_to_al/
  • Scientists find 100+ hidden exoplanets in NASA data using new AI system — https://www.space.com/astronomy/exoplanets/100-new-alien-worlds-scientists-find-hidden-haul-in-data-from-nasa-exoplanet-hunting-spacecraft
  • Adversarial AI framework reveals mechanisms behind impaired consciousness and a potential therapy — https://medicalxpress.com/news/2026-03-adversarial-ai-framework-reveals-mechanisms.html
  • Using 'imaginative' AI to survey past and future earthquake damage — https://phys.org/news/2026-03-ai-survey-future-earthquake.html
  • Memristor demonstrates use in fully analog hardware-based neural network — https://techxplore.com/news/2026-03-memristor-fully-analog-hardware-based.html
  • US judge says Pentagon's blacklisting of Anthropic looks like punishment for its views on AI safety — https://www.reuters.com/legal/government/us-judge-weigh-anthropics-bid-undo-pentagon-blacklisting-2026-03-24/
  • Show HN: A plain-text cognitive architecture for Claude Code — https://lab.puga.com.br/cog/
  • Show HN: Optio – Orchestrate AI coding agents in K8s to go from ticket to PR — https://github.com/jonwiggins/optio
  • Built an MCP server that stops Claude Code from ever seeing your real API keys — https://i.redd.it/3jvikdxc89rg1.gif
  • Your Agent is wasting tokens & you're paying for it — https://reddit.com/r/ClaudeAI/comments/1s3m6vs/your_agent_is_wasting_tokens_youre_paying_for_it/
  • What do you do while the agent is running a task? — https://reddit.com/r/ClaudeAI/comments/1s3pr9w/what_do_you_do_while_the_agent_is_running_a_task/
  • Co-founder of the Center for Humane Technology, Tristan Harris, speaking with Nate Hagens about AI risks and promises — https://youtu.be/r0JVbEmZt6I?si=AfEJ23frvrTxlS1l