Toolkit

Impact Scan

Net Positive Sprint Kit - Part 7 of 9

Sprint Review

Use this tool

  • At the end of a sprint, during the retrospective
  • Before shipping a major release
  • As part of a regular technical health review

You'll get

  • A structured way to spot high-impact decisions or missed opportunities
  • Quick wins to add into the next sprint
  • A record of recurring issues to address over time
Setup

Getting started

Set aside 10-15 minutes in your retro or review session. Look at recently completed work, run through the relevant prompts from the list below, and capture one or two improvements to bring into the next sprint.

The prompts

Prompt list

  • Real-world behaviours: Could this feature influence a positive action outside the product (e.g. reduced travel, shared resources, energy-saving habits)?
  • Experience design: Can the same outcome be achieved with fewer steps or interactions? Are we adding features users didn't ask for?
  • Interface and front-end: Are we loading assets or scripts before they are needed? Are we using heavy frameworks where lighter options would work?
  • Data and APIs: Are we making unnecessary or duplicate API calls? Could we cache results?
  • Libraries and packages: Are we carrying unused or bloated dependencies? Could a smaller library or built-in function do the same job?
  • AI and inference: Are we calling external AI services more than necessary? Are outputs being cached where appropriate? Have we considered whether a lighter-weight model or a non-AI approach would meet the same need?
  • Infrastructure: Are environments running when they are not needed? Could we scale down or switch off in low-usage periods?
  • Default behaviours: Do our defaults add unnecessary load (e.g. high-res images for all users)? Could we offer a low-impact default instead?
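
The "Data and APIs" prompt (could we cache results?) can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: `fetch` is a placeholder for whatever function actually makes the network or AI-service call, and the TTL is an example value your team would choose.

```python
import time

def cached(ttl_seconds, fetch):
    """Wrap a fetch function so repeat calls within the TTL reuse the stored result."""
    cache = {}  # key -> (timestamp, value)

    def wrapper(key):
        now = time.monotonic()
        if key in cache:
            stored_at, value = cache[key]
            if now - stored_at < ttl_seconds:
                return value  # cache hit: no repeat call made
        value = fetch(key)  # cache miss or expired: call through once
        cache[key] = (now, value)
        return value

    return wrapper
```

The same pattern applies to the "AI and inference" prompt: wrapping an expensive model call this way means identical requests within the TTL window cost nothing extra.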
Guidance

Tips

  • Focus on one or two changes per sprint rather than trying to fix everything at once.
  • Use Impact Tags to flag follow-up work if you cannot act immediately.
  • Share both the positives and the issues so the team can see the benefits of the changes.
  • A page that is technically lighter but rarely visited has less real-world impact than a heavy page with high traffic. Pair performance data with usage data for a clearer picture.
  • Check for trade-offs: consider any knock-on effects for accessibility, usability, or security.
In practice

Example

During a retro, a team found that a new dashboard was requesting fresh data every five seconds. Switching to event-driven updates cut API calls substantially and reduced server load.
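
The change the team made can be sketched as a simple publish/subscribe feed. This is an illustrative sketch, not the team's actual code: instead of every client polling on a timer, subscribers are notified only when the underlying data actually changes.

```python
class DataFeed:
    """Event-driven updates: notify subscribers only on real changes."""

    def __init__(self):
        self._subscribers = []
        self._value = None

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, value):
        if value == self._value:
            return  # unchanged: no notifications, no wasted calls
        self._value = value
        for callback in self._subscribers:
            callback(value)
```

With a five-second poll, clients make 720 requests an hour whether or not anything changed; with an event-driven feed, traffic scales with the rate of actual change.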

During a retro, a team reviewed a new returns flow and realised the default option was "refund and reorder" rather than "exchange." Changing the default to exchange first reduced return shipping and repackaging, cutting emissions and operational cost.

Output

A small set of actionable changes for the next sprint. Visibility of recurring patterns that may need bigger fixes.

Measurement & Validation

Track the ratio of tagged backlog items resolved to new ones raised, as a signal of whether the team is keeping up with impact debt. Pair performance metrics (load time, bundle size, API call volume) with actual usage data where available. For behaviour design features, define a proxy metric at the point of build so you can measure whether the feature is having its intended effect.
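
One way the resolved-to-raised ratio could be tracked is sketched below. The record shape (`resolved`/`raised` counts per sprint) is an assumption for illustration; a ratio at or above 1.0 suggests the team is keeping up with impact debt.

```python
def impact_debt_trend(sprints):
    """For each sprint record {'resolved': n, 'raised': m},
    return the resolved:raised ratio (None if nothing new was raised)."""
    trend = []
    for sprint in sprints:
        raised = sprint["raised"]
        ratio = sprint["resolved"] / raised if raised else None
        trend.append(ratio)
    return trend
```

Plotting this trend over several sprints makes recurring impact debt visible at a glance.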

Want help running a net positive sprint?

We facilitate net positive sprints for teams who want to embed sustainability into their digital delivery.