#20 | Sunday reads for EMs
My favourite reads of the week to make your Sunday a little more inspiring.
👋 Hey, it’s Stephane. This is a new series in which every Sunday I share with you my favourite reads of the week. To accelerate your growth see: 50 Notion Templates | The EM’s Field Guide | CodeCrafters | Get Hired as an EM | 1:1 Coaching
What are the 2025 benchmarks for the key DORA metrics?
tl;dr: Only 16% of teams deploy on demand, while 24% deploy less than monthly. 56% of teams need days to recover from a failed deployment, creating a vicious cycle in which each painful deployment discourages the next one. If your team sits in the lower half of any DORA metric, AI may accelerate problems faster than it delivers value, since these tools compound existing inefficiencies.
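The benchmark buckets above can be sketched as a toy classifier. The tier names and cutoffs below are illustrative simplifications, not the official DORA bands:

```python
# Toy DORA-style classifier: thresholds are illustrative only,
# not the official DORA performance bands.

def deploy_tier(deploys_per_month: float) -> str:
    """Bucket a team by deployment frequency."""
    if deploys_per_month >= 30:   # roughly on-demand / daily
        return "elite"
    if deploys_per_month >= 4:    # at least weekly
        return "high"
    if deploys_per_month >= 1:    # at least monthly
        return "medium"
    return "low"                  # less than monthly, like 24% of teams

def recovery_tier(hours_to_recover: float) -> str:
    """Bucket a team by time to recover a failed deployment."""
    if hours_to_recover <= 1:
        return "elite"
    if hours_to_recover <= 24:
        return "high"
    return "low"                  # days to recover, like 56% of teams

team = {"deploys_per_month": 0.5, "recovery_hours": 72}
print(deploy_tier(team["deploys_per_month"]))   # low
print(recovery_tier(team["recovery_hours"]))    # low
```

The point of the article is the interaction: a team that lands in "low" on recovery tends to drift toward "low" on frequency too.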
Advice for New Principal Tech ICs (i.e., Notes to Myself)
tl;dr: The work that made you successful (coding) becomes a side task, while your real job is making everyone else more effective through vision, sponsorship, and connecting dots. The most actionable insights include the "owner/sponsor/consultant" framework for managing multiple projects, the principle that "if you can't explain why this needs a principal, you're working on the wrong thing", and the advice to actively remove yourself from the critical path once you've established yourself.
Councils of agents
tl;dr: Create multiple LLM agents representing different roles (Principal Engineer, Security, QA, etc.) in Claude Code to simulate multi-stakeholder discussions asynchronously before bringing ideas to real teams. You can test proposals against a "technical council" or "executive council" and iterate through one or two meetings' worth of feedback without distracting your team. The author (a CTO) reports that his proposals are more nuanced and better researched by the time he finally connects with humans synchronously.
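A minimal sketch of the council pattern, assuming a generic `ask(role_prompt, proposal)` callable rather than any specific Claude Code feature. The roles follow the article; the stub and function names are my illustration, and in practice `ask` would be a real LLM call:

```python
# Sketch of a multi-role "council" review loop.
# `ask` is any LLM call; here it's stubbed so the example runs offline.

COUNCIL = {
    "Principal Engineer": "Critique architecture and long-term maintainability.",
    "Security": "Flag security and compliance risks.",
    "QA": "Identify testability gaps and likely failure modes.",
}

def stub_ask(role_prompt: str, proposal: str) -> str:
    # Replace with a real LLM request in practice.
    return f"({role_prompt.split('.')[0]}) feedback on: {proposal[:40]}"

def run_council(proposal: str, ask=stub_ask, rounds: int = 2) -> list[str]:
    """Collect feedback from every role across a few simulated meetings."""
    feedback = []
    for _ in range(rounds):
        for role, prompt in COUNCIL.items():
            feedback.append(f"{role}: {ask(prompt, proposal)}")
    return feedback

notes = run_council("Migrate the billing service to event sourcing.")
print(len(notes))  # 6 -> 3 roles x 2 rounds
```

Feeding each round's objections back into the proposal before the next round is what produces the "1-2 meetings of iteration" effect the author describes.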
AI is Making Us Work More
tl;dr: AI tools create psychological pressure to work constantly because the machine never tires, transforming “can work” into “should work” and making rest feel like wasted potential. Every idle moment becomes a missed opportunity, leading to the paradox that tools meant to free us are accelerating 996-style work culture in Silicon Valley. AI adoption should not be framed as a pure productivity win but as a cultural challenge requiring explicit boundaries.
Developers are choosing older AI models — and the data explain why
tl;dr: Production data from millions of coding sessions shows that developers are treating model upgrades as alternatives rather than replacements. Sonnet 4.0 usage jumped from 23% to 37% while 4.5 declined, revealing task-based specialization instead of “always use newest”. Sonnet 4.5 does 27% more reasoning but 21% fewer tool calls, making it slower but better for multi-file refactoring, while 4.0’s deterministic speed makes it the “safe default” for API generation and structured edits.
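The task-based specialization described above amounts to a routing table. A hedged sketch (the model names follow the article; the task categories and the mapping itself are my illustration):

```python
# Illustrative task-to-model router, based on the article's observation
# that newer models are alternatives, not replacements.

ROUTES = {
    "multi_file_refactor": "sonnet-4.5",  # more reasoning, fewer tool calls
    "api_generation": "sonnet-4.0",       # fast, deterministic "safe default"
    "structured_edit": "sonnet-4.0",
}

def pick_model(task_type: str, default: str = "sonnet-4.0") -> str:
    """Choose a model per task rather than always using the newest."""
    return ROUTES.get(task_type, default)

print(pick_model("multi_file_refactor"))  # sonnet-4.5
print(pick_model("quick_fix"))            # sonnet-4.0
```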
Most popular from last Sunday
Measuring Engineering Productivity
If you enjoy articles like these, you might also like some of my most popular posts:
What did you read recently that you would like to share?