Product · Delivery

Cutting the release cycle from 6 weeks to 2

Slow shipping was hurting morale and our competitive position. One quarter of process changes tripled our release frequency without adding headcount.

Release frequency: tripled
Cycle time: 6 weeks → 2 weeks
Time to see results: 1 quarter
Team morale: improved

The team was shipping every 6 weeks in a good month. Sometimes 8. Stakeholders were frustrated. Engineers were demoralised by the gap between when something was "done" and when users could actually use it. And we were consistently too slow to act on what we were learning post-launch.

This wasn't a resources problem. The team was capable. Something in the process was creating drag.

I spent two weeks shadowing the process rather than prescribing solutions. I sat in on sprint planning, spec reviews, and handoffs. I talked to engineers, designers, and QA about where they felt stuck.

Three bottlenecks emerged:

01
Specs were being written too late. Engineers were starting to estimate work before the spec was finalised, leading to re-scoping mid-sprint that rippled through the timeline.
02
Review cycles had no time-box. Stakeholder feedback rounds had no defined endpoint, so they stayed open until someone escalated. Average: 8 business days per round.
03
QA was sequential, not parallel. Testing only began after engineering signed off, so defects discovered late blew the timeline instead of being triaged and reprioritised within the sprint.
Sprint process — before & after
[Image: Old process — 6-week cycle]

[Image: New process — 2-week cycle]

Left: the old 6-week cycle with sequential QA and open-ended review loops. Right: the new 2-week cycle with parallel QA and time-boxed stakeholder windows.

"We didn't need more people. We needed cleaner handoffs and shorter feedback loops."

01
Introduced a spec-ready gate. Work doesn't enter sprint planning until the spec has been reviewed and signed off by the engineering lead and design. This pushed spec work earlier and reduced mid-sprint pivots by ~70%.
02
Time-boxed stakeholder reviews to 3 business days. Feedback received after the window is logged for the next iteration, not incorporated into the current one. This was the hardest change to socialise — and the one that made the biggest difference.
03
Shifted QA to run in parallel. QA begins testing completed components as they're merged, not waiting for full feature completion. This surfaced defects earlier and reduced end-of-sprint crunch significantly.

I introduced the changes incrementally over 6 weeks so we could isolate what was working. The spec-ready gate went in first, followed by the time-boxed reviews, then the QA process change. Each change had a defined owner and a retrospective checkpoint after the first two sprints.

The hardest part wasn't the process. It was getting stakeholders comfortable with the idea that feedback arriving late would wait for the next cycle. Framing it as "your feedback will always be heard — just in the right sprint" helped significantly.

Release frequency

From roughly every 6 weeks to every 2 weeks within one quarter.

Mid-sprint re-scoping: −70%

Measured by sprint completion rate across 6 sprints.

Team satisfaction

Engineering NPS improved by 18 points at the next survey.

Process problems are usually people problems in disguise — but not in the way you'd expect. The team wasn't dysfunctional. The reviews weren't malicious. Everyone was operating rationally within a system that had bad incentives baked in.

The fix wasn't to work harder or add more meetings. It was to change the defaults: what needs to be true before work starts, and what happens when feedback arrives outside the window. Small structural changes with clear ownership made most of the difference.
