Donna AIFriday, April 3, 2026 · 12:00 PMNo. 120

Intellēctus

Your Daily Artificial Intelligence Gazette



Intellēctus — AI Daily Briefing

Today's digest leans heavily on the human side of AI: what it's doing to how we think, work, and depend on these tools. Meanwhile, developers are getting creative with Claude Code's multi-agent capabilities, and a few niche hardware hacks remind us that edge AI is very much alive.


Human + AI Dynamics

The week's most thought-provoking Reddit threads center on what AI is actually doing to cognition. One ClaudeAI thread asks whether AI is making us better thinkers or just faster workers, with the author noting their own drift from using Claude as a thinking partner to using it as an answer machine. That maps neatly onto a new arxiv diary study on "LLM withdrawal", which tracked knowledge workers when ChatGPT went down and found genuine disruption to workflow and cognition — a sobering sign of dependency forming faster than most expected. A parallel thread argues that AI will do to our minds what machines did to our bodies — and that we'll eventually need "mental gyms" to compensate, just as physical gyms emerged to counteract sedentary industrial work.


Risk & Reality

A Reddit thread on "AI the Real Risk" cuts through the usual capability-vs-safety framing: the argument is that AI excels at structured, repeatable verification tasks, but the real danger lies in physical, ambiguous, real-world scenarios where AI has no reliable grounding. It's a useful corrective to both hype and doom narratives. Separately, the OpenAI Sora critique thread makes a sharp point — shutting down even Sora 1's image generation due to compute costs may be discarding genuinely differentiated technology, not just an expensive experiment.


Career & Research

A well-upvoted r/MachineLearning thread from a physicist-turned-ML-engineer asks where independent researchers can contribute most meaningfully right now. The responses are worth reading for anyone considering a pivot from applied ML toward research — the community points toward areas like interpretability, efficiency, and domain-specific applications where physics intuition still carries weight.


Claude Code Developer Corner

Multi-agent Claude Code workflows are clearly hitting a scaling pain point — and developers are starting to build around it. One popular post details 10 GitHub repos that changed the author's Claude Code workflow, covering the skills ecosystem and tooling layered on top of Claude Code that most users don't discover without going down a GitHub rabbit hole. It's a solid starting point for anyone who's plateaued on vanilla Claude Code usage.

The more immediately practical post comes from a developer who built a real-time agent monitoring dashboard after hitting a common wall: kick off a multi-agent task, watch agents spawn, and have zero visibility into what any of them are actually doing. The tool gives a live view into agent activity across a swarm — what each agent is working on, its status, and how tasks are being distributed. Practical impact: if you're running agent teams and flying blind, this is the observability layer that's been missing from the default Claude Code experience.

There's also a useful architectural take in the ClaudeAI planning thread: the author's key insight is that LLMs are reliable planners and summarizers but brittle executors — so their Karis CLI architecture keeps the runtime layer in pure code and uses Claude only for planning and decision-making. This is a pattern worth adopting if you're building any automation that needs to be production-stable.


Worth Watching

Reporting potholes with an ESP32, LoRa, and AI is the kind of edge-AI build that deserves more attention — a low-power hardware stack using AI to classify road surface events and report them via LoRa. Clean implementation, real-world utility. Also worth noting: a post-mortem on the axios npm supply chain compromise is circulating on Hacker News — not AI-specific, but directly relevant to anyone using AI-generated dependency code without auditing it. Finally, the true shape of Io's Steeple Mountain is a beautiful piece on using shadow analysis to infer 3D geometry from 2D images — a technique with obvious ML parallels.


Sources

  • "Is AI making us better thinkers or just faster workers" — https://reddit.com/r/ClaudeAI/comments/1sb053k/is_ai_making_us_better_thinkers_or_just_faster/
  • "Oops! ChatGPT is Temporarily Unavailable!": A Diary Study on Knowledge Workers' Experiences of LLM Withdrawal — https://arxiv.org/abs/2603.26099
  • AI will do to our minds what machines did to our bodies — https://reddit.com/r/artificial/comments/1sb5v9c/ai_will_do_to_our_minds_what_machines_did_to_our/
  • Ai the Real Risk — https://reddit.com/r/artificial/comments/1sb1pbg/ai_the_real_risk/
  • OpenAI is throwing away Sora's real value — https://reddit.com/r/artificial/comments/1saz5zd/openai_is_throwing_away_soras_real_value/
  • [D] Physicist-turned-ML-engineer looking to get into ML research. What's worth working on and where can I contribute most? — https://reddit.com/r/MachineLearning/comments/1sayptq/d_physicistturnedmlengineer_looking_to_get_into/
  • These 10 GitHub repos completely changed how I use Claude Code — https://reddit.com/r/ClaudeAI/comments/1sapnyb/these_10_github_repos_completely_changed_how_i/
  • I got tired of watching Claude Code spawn 10 agents and having absolutely no idea what they're doing, so I built this — https://reddit.com/r/ClaudeAI/comments/1sb4d2v/i_got_tired_of_watching_claude_code_spawn_10/
  • using Claude for planning, not execution — https://reddit.com/r/ClaudeAI/comments/1sb2mtq/using_claude_for_planning_not_execution/
  • Reporting potholes with an ESP32, LoRA, and AI — https://thingswemake.com/pothole-in-one/
  • Post Mortem: axios NPM supply chain compromise — https://github.com/axios/axios/issues/10636
  • The True Shape of Io's Steeple Mountain — https://www.weareinquisitive.com/news/hidden-in-the-shadow