TL;DR: Sam Altman — Food for Agile Thought #539
Welcome to the 539th edition of the Food for Agile Thought newsletter, shared with 35,623 peers. This week, Ronan Farrow and Andrew Marantz investigate Sam Altman’s leadership at OpenAI, exposing a pattern of deception and safety trade-offs, while Ajeya Cotra predicts AI research parity could arrive by 2028, and Annie Vella reports from a retreat where nobody had AI figured out yet. On the product side, Stephanie Leue questions why leaders borrow other people’s styles instead of finding their own, and Janna Bastow warns that backlog-driven roadmaps create an illusion of progress. Also, Maarten Dalmijn proposes that fewer rules and better guardrails unlock real team performance.
Next, Packy McCormick warns against lazily comparing AI labs to Amazon, since these companies face direct competitors running nearly identical strategies. Tomasz Tunguz offers a 2×2 matrix that helps leaders sort AI strategy work by demand ceiling and loop type, and Itamar Gilad explores why product discovery keeps breaking down internally and proposes that teams time-box it but never skip it. On the technical side, Zvi Mowshowitz covers Anthropic’s Claude Mythos model, which found critical security vulnerabilities across every major OS and browser. Paweł Huryn breaks down Claude Code pricing after Anthropic cut third-party access, and Gergely Orosz hosts Kent Beck and Martin Fowler as they trace parallels between past tech disruptions and today’s AI shift, including why TDD matters more now than ever.
Lastly, John Cutler suggests that AI won’t fix broken product funnels but will amplify whatever dynamics already exist, and Jeff Gothelf warns that SAFe’s rigid ceremonies become catastrophic because AI development demands rapid hypothesis testing. Yoni Rechtman envisions AI-native companies organizing around four archetypes instead of traditional roles, and Kyle Poyar identifies five storytelling archetypes that cut through AI-generated noise. Finally, Jenny Wanger reminds us that offshore team frustrations run both ways and that trust, not process documentation, is the actual fix.
From Scrum to a Product Operating Model: Would This Course Help You?
Your organization is adopting POM, and nobody tells you how to make the transition work from the trenches. I'm considering building a short, practical course that helps practitioners navigate this transition without repeating the mistakes of the past.
However, before I invest the time, I want to know: Is this a real problem for you? Also, would your organization pay for a solution?
👉 Join the Survey — Just 2 Minutes: From Scrum to a Product Operating Model: Would This Course Help You?
Did you miss the previous Food for Agile Thought issue 538?
🗞 Shall I notify you about articles like this one? Awesome! You can sign up here for the ‘Food for Agile Thought’ newsletter and join 35,000-plus subscribers.
🎓 Join Stefan in one of his upcoming Professional Scrum training classes!
🏆 The Tip of the Week: Sam Altman
Ronan Farrow and Andrew Marantz (via The New Yorker): Sam Altman May Control Our Future—Can He Be Trusted?
Ronan Farrow and Andrew Marantz investigate Sam Altman's leadership of OpenAI, documenting internal memos and interviews that reveal a pattern of deception and the systematic prioritization of commercial expansion over safety commitments.
🎯 Product
Janna Bastow (via ProdPad): The Danger of Bottom-Up Roadmaps
Janna Bastow proposes that bottom-up roadmaps, built from backlogs instead of strategy, create an illusion of progress while the product drifts without direction, and suggests incrementally reconnecting roadmap items to objectives.
Stephanie Leue: The hidden mental load of leading a product org
Stephanie Leue explores why product leaders carry unnecessary mental weight by borrowing other people's leadership styles instead of identifying and leading from their own unique strengths and building accountability around them.
Packy McCormick: Bad Analogies: Not Every Money-Burning Company is Amazon
Packy McCormick warns against lazily comparing every money-losing company to Amazon, noting that AI labs face fierce direct competition with similar strategies, unlike Amazon or Uber, which burned cash to achieve strategic dominance.
Tomasz Tunguz: The AI Problem Matrix
Tomasz Tunguz proposes a 2x2 matrix for AI strategy, sorting work by demand ceiling (infinite vs. finite) and loop type (closed vs. open), helping leaders identify where AI creates economic engines versus mere efficiency gains.
John Cutler: Demand Mix, Shaping, and AI as (Dys)function Multiplier
John Cutler suggests that AI won't fix broken product development funnels. Instead, it amplifies existing dynamics: accelerating learning in healthy systems and intensifying chaos, negotiation, and overload in dysfunctional ones.
🧠 Artificial Intelligence
Ajeya Cotra: Six milestones for AI automation
Ajeya Cotra defines three AI automation milestones per sector: adequacy (machines barely function on their own), parity (AI contributes more than humans), and supremacy (humans become deadweight), predicting that AI research parity could arrive by 2028.
Annie Vella: Finding Comfort in the Uncertainty
Annie Vella shares takeaways from a retreat on the future of software development: nobody has AI figured out yet, cognitive load is spiking, and the engineer's role is being redefined without a clear destination.
Zvi Mowshowitz: Mythos Quest
Zvi Mowshowitz covers Anthropic's Claude Mythos, a model that discovered critical security vulnerabilities in every major OS and browser, and Project Glasswing, their initiative to help cybersecurity companies patch the world's software before chaos hits.
Paweł Huryn: Claude Code Pricing: Subscriptions vs API, Token Visibility, and the Models That Actually Work
Paweł Huryn breaks down Claude Code pricing after Anthropic cut third-party tool access, compares subscriptions (15-30x cheaper than the API), ranks models by real agentic performance, and shares an open-source token-tracking dashboard.
Yoni Rechtman: There will only be four jobs: Slop Cannons, SREs, Hot People, and Adults
Yoni Rechtman believes AI-native companies will organize around four archetypes instead of traditional roles: slop cannons (high-velocity generalists), SREs (system stabilizers), adults (judgment providers), and hot people (the human interface layer).
🖥 💯 🇬🇧 AI4Agile BootCamp #7, May 28 to June 25, 2026
The job market’s shifting. Agile roles are under pressure. AI tools are everywhere. But here’s the truth: the Agile pros who learn how to work with AI, not against it, will be the ones leading the next wave of high-impact teams.
So, become the one whom professional recruiters call first for “AI‑powered Agile.” Be among the first to master practical AI applications for Scrum Masters, Agile Coaches, Product Owners, Product Managers, and Project Managers.
The class is in English. 🇬🇧
Learn more: 🖥 💯 🇬🇧 AI4Agile BootCamp #7, May 28 to June 25, 2026.
Customer Voice: “The AI for Agilists course is an absolute essential for anyone working in the field! If you want to keep up with the organizations and teams you support, this course will equip you with not only knowledge of how to leverage AI for your work as an Agilist but will also give you endless tips & tricks to get better results and outcomes. I thoroughly enjoyed the course content, structure, and activities. Working in teams to apply what we learned was the best part, as it led to great insights for how I could apply what I was learning. After the first day on the course, I already walked away with many things I could apply at work. I highly recommend this course to anyone looking to better understand AI in general, but more specifically, how to leverage AI for Agility.” (Lauren Tuffs, Change Leader | Business Agility.)
➿ Agile & Leadership
Maarten Dalmijn: Blessed With Constraints: Stop Railroading Your Teams and Start With Sandboxing
Maarten Dalmijn contrasts "railroading" (too many internal constraints killing agency) with "sandboxing" (enabling constraints such as vision and strategy), suggesting that organizations need fewer rules and better guardrails to unlock team performance.
Jeff Gothelf: SAFe Was Bad for Agility. For AI, It's Catastrophic.
Jeff Gothelf proposes that SAFe's rigid PI planning, Release Trains, and lack of continuous discovery were already problematic for agility but become catastrophic for AI development, which demands rapid hypothesis testing and real-time course correction.
Gergely Orosz: Cycles of disruption in the tech industry: with software pioneers Kent Beck & Martin Fowler
Gergely Orosz hosts Kent Beck and Martin Fowler as they discuss how past tech disruptions like Agile parallel today's AI shift, including snake-oil risks, the "re-soloing" of programming, and why TDD matters more than ever.
📯 Stop Telling Professionals How to Do Their Job — Commander’s Intent at Work
Most micromanagement is not a control problem; it is a clarity failure in disguise. This article introduces Commander's Intent: a five-part briefing model that replaces prescriptive instructions with shared purpose, hard constraints, and room to adapt.
Bonus: As a Claude user, you can download the Commander's Intent Skill.
Learn more: Stop Telling Professionals How to Do Their Job — Commander’s Intent at Work.
🛠 Concepts, Practices, Tools & Measuring
Itamar Gilad: 5 Ways Product Discovery Breaks Down (Part 2)
Itamar Gilad explores three internal reasons product discovery breaks down: lack of prioritization, missing infrastructure, and under-validation of ideas, proposing that teams should time-box discovery but rarely skip it, even under AI-fueled delivery pressure.
Kyle Poyar: You need better storytelling
Kyle Poyar identifies five storytelling archetypes for 2026: Advocate, Analyst, Teacher, Provocateur, and Builder. With AI flooding channels with mediocre content, only distinctive, archetype-aligned storytelling cuts through the noise.
Jenny Wanger: Your Offshore Team Is Probably as Frustrated as You Are
Jenny Wanger proposes that offshore team frustrations are mutual, not cultural: both sides feel stuck in a black box, and the fix is building trust through shared context, consistent interaction, and treating contractors like partners.
📅 Scrum Training & Event Schedule
You can secure your seat for Scrum training classes, workshops, and meetups directly by following the corresponding link in the table below:
| Date | Class and Language | City | Price |
|---|---|---|---|
| 💯 🇩🇪 May 19-20, 2026 | Guaranteed: Professional Scrum Product Owner Training (PSPO I; German; Live Virtual Class) | Live Virtual Class | €1,299 incl. 19% VAT (If applicable.) |
| 💯 🇬🇧 May 28 to June 25, 2026 | Guaranteed: AI4Agile BootCamp #7 (English; Live Virtual Cohort) | Live Virtual Cohort | €499 incl. 19% VAT (If applicable.) |
| 🖥 💯 🇬🇧 June 1, 2026 | GUARANTEED: Claude Cowork: Stop Prompting. Start Delegating. (English; Self-paced Online Course) | Self-Paced Online Course | $129 incl. 19% VAT (If applicable.) |
| 🇩🇪 June 30-July 1, 2026 | Professional Scrum Product Owner Training (PSPO I; German; Live Virtual Class) | Live Virtual Class | €1,299 incl. 19% VAT (If applicable.) |
| 🖥 💯 🇬🇧 July 1, 2026 | GUARANTEED: AI 4 Agile Course v3 — Master AI for Agile Practitioners (English; Self-paced Online Course) | Self-Paced Online Course | $149 incl. 19% VAT (If applicable.) |
See all upcoming classes here.
You can book your seat for the training directly by following the corresponding links to the ticket shop. If your organization requires a different procurement process, please contact Berlin Product People GmbH directly.
📺 Join 6,000-plus Agile Peers on YouTube
Now available on the Age-of-Product YouTube channel, for example, to learn about useful AI practices:
- Stop Writing Prompts. Let AI Do It for You — Hack #01, AI4Agile Online Course v2.
- Socratic Prompting — Hack #10, AI4Agile Online Course v2.
- Check Your AI’s Plan Before — Hack #7, AI4Agile Online Course v2.
- From Product Requirements to Experiments to Learnings — Supported by Generative AI.
- Never Accept an LLM’s First Offer — Improve GenAI’s Usefulness w/ Feedback Loops and Challenges.
✋ Do Not Miss Out: Learn more about AI Practices — Join the 20,000-plus Strong ‘Hands-on Agile’ Slack Community
I invite you to join the “Hands-on Agile” Slack Community and enjoy the benefits of a fast-growing, vibrant community of agile practitioners from around the world.
If you would like to join, all you have to do now is provide your credentials via this Google form, and I will sign you up. By the way, it’s free.
Help your team learn how AI Intensifies Work by pointing them to the free Scrum Anti-Patterns Guide: