TL; DR: Test-Drive the Questions of the AI 4 Agile Assessment
Help me QA the questions for the AI 4 Agile Certificate by taking three short practice quizzes on the intersection of AI and Agile. In return, if you pass two of the three quizzes, you get a free shot at the real AI 4 Agile Certification (40 questions, 45 minutes). The top performer in this small competition also receives the full AI 4 Agile online course.
Your feedback on the questions matters tremendously and helps shape a better curriculum for all practitioners.
by Stefan Wolpers
TL; DR: Mastering AI 4 Agile with the Best Self-Paced Online Course
The Mastering AI 4 Agile online course launches this week, and I am proud that I avoided another delay. Scope creep happened despite my supposed expertise in preventing exactly that. The course expanded from a simple prompt collection to over 8 hours of video, custom GPTs, and materials that I’ll apparently continue to update indefinitely, as I’m still not satisfied that it’s comprehensive enough. (Also, the field is advancing so rapidly.)
At least the $129 lifetime access means you will benefit from my urge to fight my imposter syndrome with perfectionism and from my inability to call a project “done.” I guess we are in it for the long haul. 🙂
TL; DR: State of AI Report 2025 — Food for Agile Thought #514
Welcome to the 514th edition of the Food for Agile Thought newsletter, shared with 40,403 peers. This week, Nathan Benaich’s State of AI Report 2025 highlights advancements in reasoning, China’s momentum, massive compute, sharper geopolitics, and a pragmatic shift toward reliability and governance. Jenny Wanger offers a hands-on way to surface implicit strategy through learner-mode interviews, drafting, and iterative feedback, while John Cutler tackles product-centricity in non-digital firms, urging a networked operating model that links funding, intent, collaboration, architecture, and outcomes. Nino Paoli notes Citi’s prompt training push while warning that real impact needs ongoing upskilling and integration. Additionally, Maarten Dalmijn highlights trust as the actual bottleneck that must be addressed before AI can effectively amplify execution.
Next, Roman Pichler sharpens stakeholder management with a power-interest focus, trust building, early involvement, and NVC for real conflict resolution. David Pereira draws out Ryan Singer on keeping six-week bets lean through framing, alignment, and timely founder input, while David Shapiro challenges “AI pilot failure” myths, focusing on integration and governance issues. Then, Sangeet Paul Choudary shifts AI agent talk to coordination and standards, and Dave Rooney marks 25 years of XP, calling for TDD, pairing, CD, and lean flow.
Lastly, Casey Newton probes OpenAI’s platform push, weighing integrations against privacy, incentives, and trust. Teresa Torres urges teams to own interview synthesis, then use AI to spot cross-patterns without losing empathy or skill. Sean Goedecke demonstrates that staff engineers can influence politics by aligning momentum and delivering visible wins, and Shane Hastie and Marcos Arribas share culture-at-scale practices from autonomy to right-sized quality. Finally, Anton Zaides advocates ruthless meeting hygiene to protect deep engineering flow.
The Agile world is splitting into two camps: Those convinced AI will automate practitioners out of existence, and those dismissing it as another crypto-level fad. Both are wrong. The evidence reveals something far more interesting and urgent: Principles written in 2001, before anyone imagined GPT-Whatever, align remarkably well with the most transformative technology of recent years. This is not a coincidence. I believe it is proof that human-centric values transcend technological disruption; it is the Agile AI Manifesto.
And coming back to the two camps, here is what both miss: The biggest threat is not that AI replaces Agile practitioners. It is that AI reveals what many organizations have long suspected: They never needed Agile practitioners. They needed someone to manage Jira.
If your value proposition is running ceremonies (I deliberately do not refer to them as “events”), maintaining Product Backlogs, and generating burndown charts, AI reveals you were doing work the organization could have automated a decade ago. The separation is between practitioners who do real Agile work and those who perform Agile theater. AI is an expertise detector.
TL; DR: No AI-Disruption — Food for Agile Thought #513
Welcome to the 513th edition of the Food for Agile Thought newsletter, shared with 40,441 peers. This week, Martha Gimbel, Molly Kinder, Joshua Kendall, and Maddie Lee report that there has been no economy-wide AI disruption of the labor market since 2022 and call for better usage data. Maarten Dalmijn warns that AI-accelerated shipping bloats products and urges subtraction-minded PMs. Brian Balfour and Lauryn Motamedi rethink SaaS pricing by leveraging system-level levers and providing customer education. Also, Ethan Mollick shows near-expert AI agents shifting tasks under expert oversight, and Naval Ravikant advocates for iterative simplification and clear ownership.
Next, Chad McAllister interviews Rich Mironov on product leadership that speaks revenue, merchandises wins, cuts waste, and mentors for pragmatic team design. At the same time, Jana Paulech cautions against endless discovery and advocates simple, hypothesis-led research tied to business goals. Edward Zitron argues the generative AI boom is a fragile, Nvidia-dependent bubble. Leah Tharin spotlights 2025 benchmarks where activation speed drives retention, and OpenAI unveils GDPval, expert-graded tasks showing frontier models nearing expert quality.
Lastly, Jing Hu reports research showing that AIs favor AI-written content by 60 to 95 percent, urging audits of AI gatekeepers and strategic polishing without compromising human judgment. Seth Godin frames AI as infrastructure, shifting value to ambition, taste, and community. John Cutler, on the other hand, warns against comforting narratives, urging leaders to co-author cause-and-effect stories and surface risks early. Finally, Barry O’Reilly rejects maturity models, favoring outcome metrics, experiments, coaching, and DORA-like measures.
AI is tremendously helpful in the hands of a skilled operator. It can accelerate research, generate insights, and support better decision-making. But here’s what the AI evangelists won’t tell you: it can be equally damaging when fundamental AI risks are ignored.
The main risk is a gradual transfer of product strategy from business leaders to technical systems—often without anyone deciding this should happen. Teams add “AI” and often report more output, not more learning. That pattern is consistent with long-standing human-factors findings: under time pressure, people over-trust automated cues and under-practice independent verification, which proves especially dangerous when the automation is probabilistic rather than deterministic (Parasuraman & Riley, 1997; see all sources listed below). That’s not a model failure first; it’s a system and decision-making failure that AI accelerates.
This article extends the lessons on “AI Risks” in the AI 4 Agile online course; see below. The source research was supported by Gemini 2.5 Pro.