One of the biggest accidental benefits of our innovation lab seems to be that people have started asking “Why?”
Why can't we just get on and do things? And why do we need to test something – isn't it quicker just to get out and pilot something?
Really, this is just a necessary control that stops us jumping straight into madcap ideas and having them backfire because no one has given them enough thought.
How people envisage innovation works – a one stage model
What actually happens when you rush to implement new ideas
This is why testing and piloting have been instrumental in each iteration of our innovation pipeline – they give us a ready answer for when someone says ‘prove it’ or ‘show me the value in...’
While at surface level they both seem similar, a test and a pilot are nearly polar opposites in the way they go about things...
Tests tend to focus explicitly on the building blocks of a new service. They are time-limited, closed off experiments that help us evaluate the component in isolation, without any of the noise that ‘real life’ might generate. Not that this real noise isn’t relevant – it just muddles things early on.
Light, fast and ‘dirty’ tests come with relatively low risk, so we can afford to do lots of them – indeed we can even fail lots of them without worrying too much. Much better to fail quick, fail cheap, right? More often than not it’s not the idea that fails, just the way we’ve chosen to implement it. Tests are also a great way of identifying weaknesses in your method, or in the data you’re trying to collect. They might even highlight a new problem – e.g. engagement issues – that you were otherwise unaware of and that could have caused significant delays in a pilot.
It’s important to document a test, so we prep every concept with a test framework that outlines the aims/objectives, the methodology and the data we’re trying to collect. Data requirements are small and focus solely on the effects you're trying to observe. Try not to build in measures that cross over into another idea you might have.
Tests can be iterative and exploratory – encouraging colleagues or customers to find a way past a particular problem, or even identify the problems in the first place.
When you’ve finished testing you should be able to pull together all the information and produce a service offer document or something similar that outlines your idea in depth, with any changes as a result of your testing included.
Pilots evaluate the whole, assembled service and usually take place over a protracted time-frame so you can spot the interactions you might’ve missed in testing stages. This is ‘adding the noise’ back in to see if your idea holds up.
Because of the resourcing, duration, risks, costs and difficulty in mobilizing – you don’t really want to fail pilots. Better to fail a pilot than have a rubbish service implemented, granted. Then again, much better to drop an idea as a result of a couple of failed tests too…
A whole swathe of measures will likely be drawn up for data-hungry pilots, which drastically limits your ability to adapt and change the way the pilot is run… or if you do, you can’t trust your before/after measures anymore.
Pilots are the only way you can test your idea out in real-life situations, so are probably important or whatever... but don’t get caught in the trap of fetishizing how many pilots you have on the go at any one time. Calving more unwieldy pilots into existence is not a badge of honour, it’s a badge of not valuing your own time.
Finally, pilots should never be implemented or scaled into the business without being evaluated. If you're not going to let the idea fail, there's no point piloting it in the first place.
So, having read all the above – would you focus on a test driven model, or a pilot driven model? How about a metaphor to help you decide…
Test Driven Design
Adam wants to know if his fort can withstand an enemy invasion. Adam’s not wearing a dress – it’s a kilt, he’s Scottish.
His fort needs a little work, so Adam thinks about all the possible options and what ramifications they’d have on ‘invasion-proofness’. Does he make it 10ft taller or 50ft taller, use traditional bricks or spherical bricks, fill the moat with fire or water? Adam decides to build little towers to prototype each element.
Bigger is better, a water-moat is easier to maintain than a fire-moat and in hindsight spherical bricks were probably never going to work. The results of these tests are evaluated and Adam builds his fort accordingly. He still doesn’t actually know whether it’ll hold up to an invasion – he’d probably get his mates round for a faux attack, see how everything holds up. Maybe he’ll even let them in for a beer afterwards...
Pilot Driven Design
Dave wants to know if his fort is invasion-proof too, but, cunning fox that he is, he's spotted a funding opportunity that might help him make his fort an ass-kicking machine. He successfully negotiates a tender with his local NHS clinical commissioning group, on the premise that he can demonstrate the fort improves the wellbeing and psychological resilience of his local community.
Dave embarks on weeks of interviews with his local community, collecting baseline data and identifying individual needs. Soon enough he’s had to change his designs and make a number of compromises all around.
Unexpectedly, accessibility was an issue so he had it converted into a bungalow. People also asked for columns, and they seemed to show an improvement in wellbeing scores so were incorporated. An invasion was due, so Dave quickly spent the rest of the money on anti-aircraft guns to deter an onslaught.
Unfortunately, most of the preparation time was spent on delivering the NHS targets, and the objective of ‘building an impenetrable fort’ sort of fell by the wayside. When the invasion came, it did so on foot, immune to the weaponry. As it happens, the quality and craftsmanship of the pillars disarmed the invaders, who went inside to drink tea rather than pillage. Aside from a few stress-related deaths, the local community were also showing higher wellbeing scores.
Confused? Dave is too…
I appreciate the metaphor is pretty nuanced, so here’s the take-home message: