
By Michele Ide-Smith

Measuring Impact: open space at Agile Iceland

I recently spoke at and attended Agile Iceland in Reykjavik, a fun, friendly and inspiring conference hosted by the capable crew at Kolibri. It’s a one-day event, but about a third of the schedule, in the afternoon, is devoted to an open space. If you’ve been to a BarCamp, the format is very similar. The open space sessions are entirely participant-driven, and they provided an opportunity to discuss ideas and topics that came up in the speakers’ talks and workshops earlier in the day.

A grid is put up on the wall and participants come forward with session ideas written on post-its, introduce their topics and post them on the grid. Here’s Dadi from Kolibri introducing the open space and explaining a couple of rules:

Kicking off the open space

The board soon filled up and once we’d figured out what interested us most, we all trotted off to different sessions.

Open space session grid

The second session I went to was titled ‘Measuring Impact’. I found it really helpful, so I thought I’d write up a quick summary.

Some inspirations for this session from talks earlier in the day were:

  • Mary Poppendieck’s keynote about the Lean Mindset and impact-driven development
  • Rich Smith’s talk about culture at Etsy and their principle “If it moves, graph it!”

Pétur, who proposed the session, wanted to find out more about measuring impact. He had worked on a project where they measured a lot of things and collected a lot of data, but didn’t know which metrics were important. Consequently, they stopped looking at the data. Sound familiar? Well, it did to me.

The group agreed that the most important question to start with is why. Too often people try to measure everything, without thinking about what the right metrics might be. To be sure you’re measuring the right things, you need to think about what impact you are trying to achieve. A good technique for understanding what you need to measure, and why, is impact mapping. I’ve not used this technique before, but there’s an example explained here.
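I haven’t built an impact map in anger, but to make the idea a bit more concrete, here’s a rough sketch in Python of the goal → actors → impacts → deliverables structure an impact map captures. The goal, actor and deliverable names are entirely made up:

```python
# Rough sketch of an impact map as nested data. The goal, actor, impact
# and deliverable names are made-up examples, not from a real project.
impact_map = {
    "goal": "Reduce cost per customer contact by 20% this year",
    "actors": {
        "existing customers": {
            "resolve billing queries without phoning support": [
                "self-service billing portal",
                "usage breakdown emails",
            ],
        },
    },
}

def trace(imap):
    """Yield (deliverable, impact, actor) so each piece of work can be
    traced back to the impact, and ultimately the goal, it serves."""
    for actor, impacts in imap["actors"].items():
        for impact, deliverables in impacts.items():
            for deliverable in deliverables:
                yield deliverable, impact, actor

for deliverable, impact, actor in trace(impact_map):
    print(f"{deliverable} -> helps {actor} to {impact}")
```

The useful property is the traceability: anything you can’t walk back to an impact and a goal is a candidate for not building, and not measuring.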

In the case study that inspired the session, a telco wanted to allow customers to self-serve to reduce costs. They also wanted to improve customer experience. I talked a little about my experience of measuring the cost of a transaction, sometimes called the ‘cost to serve’, when I worked in local government. I explained how we used standard transaction costs for customer contact by phone, face-to-face and web to work out the reduction in overall transaction cost. I also showed the GDS service performance dashboards, which I think are a great example of monitoring the impact of service design changes.
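The figures below are placeholders rather than the standard costs we actually used, but this sketch shows the shape of the calculation: apply a standard cost per channel to transaction volumes before and after a change, and compare the totals:

```python
# Illustrative 'cost to serve' calculation. Per-channel costs and volumes
# are made-up placeholders, not real standard figures.
STANDARD_COST = {"face_to_face": 8.50, "phone": 3.00, "web": 0.15}  # £ per transaction

def total_cost(volumes):
    """Total transaction cost for a dict of {channel: number_of_transactions}."""
    return sum(STANDARD_COST[channel] * n for channel, n in volumes.items())

before = {"face_to_face": 2_000, "phone": 10_000, "web": 3_000}
after = {"face_to_face": 1_200, "phone": 6_000, "web": 7_800}  # same demand, shifted to web

saving = total_cost(before) - total_cost(after)
print(f"Reduction in overall transaction cost: £{saving:,.2f}")
```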

We talked a little about the different contexts in which a customer might be driven to use self-service over mediated service channels. Someone gave an example of a petrol station in Iceland that is entirely self-service but is rated as one of the best petrol stations. We also discussed the fact that your customer might be forced to use the product, so it’s important to measure customer satisfaction. One measure that can be used to track satisfaction is the Net Promoter Score®, where customers are asked how likely they would be to recommend the product or service to someone else.

Net promoter score

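The calculation behind the score is simple: respondents answering 9 or 10 are promoters, 0 to 6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A quick sketch:

```python
def net_promoter_score(scores):
    """NPS from 0-10 'how likely are you to recommend us?' answers:
    % promoters (9-10) minus % detractors (0-6), giving -100 to 100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Eight hypothetical survey responses: 4 promoters, 2 detractors, 2 passives.
print(net_promoter_score([10, 9, 9, 10, 8, 7, 6, 3]))  # 25.0
```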
Although product teams can stress over getting the right metrics, we agreed that you don’t need a perfect measure to start with; it’s important to iterate and learn as you go.

Mary Poppendieck advised that it’s helpful to show progress by making metrics visible. It’s a great motivator for teams working on a product.

A couple of people gave examples of when measuring impact had worked for them.

One person described how, when launching a new system, he analysed the number of transactions and monitored errors via the logs. He called customers up if he saw errors during their sessions to find out more about what had happened. Although this seemed a little creepy at first, the customers were more than happy to talk about their experiences using the website.
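I don’t know the details of his setup, but the approach is easy to sketch: group error-level log entries by session and flag the sessions worth a follow-up call. Assuming a simple session=… level=… log format:

```python
# Minimal sketch (not the speaker's actual setup) of flagging sessions
# that logged errors, so someone can follow up with the customer.
import re
from collections import defaultdict

LINE = re.compile(r"session=(?P<session>\S+) level=(?P<level>\S+)")  # assumed format

def sessions_with_errors(log_lines):
    """Return {session_id: error_count} for sessions that logged errors."""
    errors = defaultdict(int)
    for line in log_lines:
        match = LINE.search(line)
        if match and match.group("level") == "ERROR":
            errors[match.group("session")] += 1
    return dict(errors)

sample = [
    "2013-06-01T10:02 session=ab12 level=INFO msg=checkout_started",
    "2013-06-01T10:03 session=ab12 level=ERROR msg=payment_failed",
    "2013-06-01T10:05 session=cd34 level=INFO msg=checkout_complete",
]
print(sessions_with_errors(sample))  # {'ab12': 1} -> worth a follow-up call
```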

Sometimes you can end up measuring what Eric Ries, author of The Lean Startup, calls ‘vanity metrics’, rather than metrics that show something meaningful about how customers use your product. Someone described how their company saw usage going up but then noticed their sales were going down. They carried out some usability testing and found that the registration process was far too complex and hard to use, so they simplified it and sales started to increase again. In this example, focusing on usage alone was unhelpful.
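A simple guard against this is to never look at a usage number in isolation: pair it with a metric tied to the outcome you actually care about, such as conversion or sales. With illustrative numbers:

```python
# Why usage alone can be a vanity metric: traffic rises while the
# metrics that matter fall. All numbers here are illustrative.
weekly = [
    # (visitors, completed_registrations, sales)
    (10_000, 900, 450),
    (12_000, 850, 420),
    (15_000, 700, 330),  # traffic up, but conversion and sales falling
]

for visitors, registrations, sales in weekly:
    conversion = registrations / visitors * 100
    print(f"visitors={visitors:>6}  registration conversion={conversion:4.1f}%  sales={sales}")
```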

Although we only scratched the surface, this was a useful discussion and I learned something new. In summary, the key takeaways were:

  • Start by asking why, to find the right metrics
  • Create an impact map, to understand what to measure and why
  • Iterate the measurements, and improve as you learn
  • Make measurements visible, to motivate the team