by Stefan Wolpers | Featured, Agile and Scrum, Agile Transition
Using AI at Work Does Not Mean You Understand It
Many agile practitioners use ChatGPT at work. That does not mean they understand AI well enough to trust their own judgment. The problem is not that agile practitioners ignore AI; it is that many already use it confidently without knowing where their judgment breaks down. The free AI4Agile Foundational Assessment measures precisely this skill gap. (Download your access file below.)
The assessment comprises 40 scenario-based questions. It does not ask for definitions; instead, it puts you into situations that agile coaches, product managers, and Scrum Masters face every week: weak prompting that produces generic output, misleading data analysis, questionable agent output, and organizational pressure to treat AI output as “good enough” and go with it.
Most people who use AI fail not because they lack knowledge, but because they cannot distinguish plausible output from trustworthy judgment. But see for yourself!
Welcome to the Sign-up Page of the ‘AI4Agile Foundational Assessment’
TL; DR: AutoResearch in Your Sleep — Food for Agile Thought #537
Welcome to the 537th edition of the Food for Agile Thought newsletter, shared with 35,652 peers. This week, Andrej Karpathy and Aakash Gupta explore how AI agents are reshaping digital work through autonomous multi-agent workflows and autoresearch loops that run 100 automated improvement cycles overnight. Shifting to team dynamics, Christina Wodtke sees the friction between frontline teams and management as a perspective problem across abstraction levels, while Stephanie Leue warns that polite CPO-CTO misalignment costs far more than the honest conversation both parties avoid. Maarten Dalmijn adds that autonomy without alignment creates silos, not freedom, and Jerry Neumann challenges the entire startup methodology canon, proposing that widely adopted frameworks like Lean Startup become self-defeating the moment everyone uses them.
Next, Sachin Rekhi maps out 15 AI prototyping skills product managers need to shift how teams prioritize roadmaps, and Tim O’Reilly warns the agentic economy still lacks the infrastructure to prevent single-gatekeeper capture. Anthropic researchers Massenkoff, Lyubich, and McCrory find that experienced Claude users tackle harder tasks with higher success rates as usage diversifies. Also, Margaret-Anne Storey identifies cognitive and intent debt as two underappreciated costs of AI-generated code, and Ruben Hassid demonstrates Claude’s new computer use feature for autonomous multi-step Mac workflows.
Lastly, Paweł Huryn documents 74 Claude releases in 52 days, signaling a widening competitive gap. At the same time, Ian Vanagas shares PostHog’s hard-won lessons on when product context beats flashy agent capabilities. Tristan Kromer warns that synthetic personas sharpen interview guides but cannot replace real customer discovery, and Bandan Singh proposes letting direct reports lead 1:1s before managers add their topics. Finally, Allan Kelly believes Agile’s decline stems from the community’s own retreat from in-person learning.
Jira was named after Godzilla and built to track bugs. It became the default agile tool because it satisfied a deeply human desire: controlling work by putting it in boxes with statuses, assignees, and due dates. That system works for humans scanning dashboards. It does not work for autonomous agents that need to reason about patterns across iterations, detect recurring problems, and forecast what is likely to break next. This article argues that the tool on which 62% of agile teams rely is about to be demoted from knowledge authority to execution interface. We need to move from Jira to AI Agents.
TL; DR: POM Starter Pack — Food for Agile Thought #536
Welcome to the 536th edition of the Food for Agile Thought newsletter, shared with 35,661 peers. This week, Anthropic’s 81,000-person study reveals that hope and alarm about AI coexist within the same individual. Alberto Romero channels that tension into eight practical strategies for AI career anxiety, while Allan Kelly warns that today’s AI hype mirrors the 1990s BPR failures. On the product side, Teresa Torres walks teams through measuring real customer impact rather than shipping features, Janna Bastow proposes that fixing bugs and technical debt is the strategy, and the Dotwork team provides a POM starter pack to operationalize Marty Cagan’s Product Operating Model.
Next, David Pereira suggests that product leadership means creating space for product managers to thrive, not being the smartest person in the room. Steve Blank warns that startups older than 2 years are likely running obsolete playbooks in a world reshaped by AI agents and vibe coding. Also, Ruben Dominguez highlights Claude’s 14x revenue jump and proposes that the real productivity gap lies in learning to co-work with AI. Cedric Chin recommends ignoring AI predictions and studying actual field reports instead, while Dave Snowden reminds us that Boyd’s OODA loop was never meant to be a safe iteration cycle.
Then, Jeff Gothelf proposes that storytelling is now the product manager’s key competitive advantage as AI commoditizes standard PM artifacts. Tristan Kromer addresses the lack of memory in AI agents. He proposes building a RAG-based experiment knowledge base to compound learning rather than repeat it. Martin Eriksson adds that AI agents need the same strategic clarity as human teams or organizations will scale confusion at machine speed. Finally, Sharyph explains how Claude Code Skills 2.0 turns Claude into a personalized, testable workflow system, while Deloitte’s 2026 State of AI report finds that only 34% of organizations truly reimagine their business with AI despite rising access.
by Stefan Wolpers | Featured, Agile and Scrum, Agile Transition
TL; DR: AI Thinking Skills for Agile Practitioners
Most agile practitioners use AI to produce outputs more quickly. Few use it to think better. This free download gives you three AI thinking skills (Socratic Explorer, Brutal Critic, Pre-Mortem) that turn Claude into a partner for diagnosing problems, stress-testing plans, and anticipating failures before they happen.