As you may have read, I believe that pilots – that is, the limited deployment of web 2.0 tools – can be useful. Fundamentally, my case was that pilots can be useful when deploying tools related to strong ties (e.g. wikis) while they are not necessary for tools that connect weak ties (e.g. micro-blogging). There’s an important caveat to this proposition: measure value (even soft value), not ROI. Use pilots to identify additional opportunities, raise awareness, and build the foundations for a wider deployment. In short, pilots are not good at measuring ROI because of luck (or the absence of it).
Issue 1: causation and correlation
It is easy to believe that a miracle pilot with 100% adoption, a 15% reduction in org-wide email, and a 25% increase in staff satisfaction was entirely due to your ingenuity and perseverance. After all, we are much more inclined to attribute success to our own skills and failure to circumstance. In psychology, this is called the self-serving bias. The worst part about this bias is that it is almost impossible to detect if you're working by yourself or with a group of like-minded colleagues.
Most web 2.0 tools don’t lead directly to more revenue or any other hard outcome. They help people do things more quickly and easily, which usually frees more people to focus on their priorities, which in turn can lead to better outcomes if people have prioritized the right things. The individual cases of “Paul met Dominique through their company’s profile search, and within a month started a new and incredibly profitable business because of their mutual love of cheese and cats” are just that – individual cases, and they appear less frequently than the person who spends a lot of time on social media with no direct benefit to the organization. You just don’t hear about the non-users.
What to do?
The most compelling business case for web 2.0 tools will come initially through narratives about how they changed how someone worked, or how much more efficiently and easily they were able to do something. This is also why design is so important in web 2.0, as good design can lead to better behaviors and bad design can ruin a lot of good intentions. But more on design another day.
As it relates to a pilot, set realistic expectations from the start for the value-add of the tools and processes you are testing, not ROI as it is traditionally measured. Making a claim about a pilot’s impact on top-level organizational priorities is a dangerous path to walk because it is almost impossible to prove directly. Also, surround yourself with people outside the web 2.0 world who can offer diversity of thought, experience, and insight into your work.
Issue 2: probability of success
One of the biggest dangers of running a pilot is the chance of success for the wrong reasons. When you pilot, as the saying goes, you fail small and win big. That’s a great strategy for investing in the stock market or playing poker, and it can be great business sense. It is less useful for trying to scale web 2.0 technologies, because you have no context: the probability of a successful pilot is about the same as that of a failed one, if not lower, but you are more likely to attribute success to great strategy and hard work than to mere luck, and you would be wrong.
As you look to take your “lessons learned” from either the success or failure, you don’t really know at the scale of a pilot whether you have the right lessons – and that’s dangerous. So how do you reconcile this?
What to do?
There are a few things you (or someone on your team) can do to mitigate the risks of overconfidence:
1) Follow the rules of statistical significance. This will help you determine whether your results (especially around adoption, marketing strategies, and email outcomes) could be explained by chance alone.
2) Learn the rules of confidence modelling (what forecasters call calibration). If you think weather forecasters are bad at what they do (e.g. you’ve been caught in the rain without an umbrella), you’d be partially right. However, they’re more accurate than almost any other type of forecaster, because instead of saying “it will rain today”, they say “there is a 30% chance of rain today”, and it rains just about 30% of the times they say that. Not bad. This concept is related to, but not quite the same as, statistical significance.
3) Use a control group and a placebo group. This sounds like a lot of work, but it will help you immensely in learning the right lessons. If you plan to run a pilot, also track a similar group that simply isn’t using the tool (the control), and another group that shares the pilot’s aims, e.g. reducing email, but pursues them without the tool (the placebo). You want to isolate what the tool implementation actually did – not what you think it did (because it’s what you wanted it to do).
4) Look for results beyond your intended consequences. It sounds obvious but is often overlooked: the analysis of your pilot should not only be a “progress toward goals” dashboard, it should also include a section about unintended results. Some common indicators might be “connectedness to mission,” “net promoter scores” around likelihood to evangelize/share the tool with others, and others I’m clearly not thinking of (so share them in the comments!).
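To make point 1 concrete, here is a minimal sketch of a two-proportion z-test in Python, using only the standard library. All the numbers are invented for illustration: imagine 30 of 40 pilot participants cut their email volume versus 20 of 40 in a comparable non-pilot group, and you want to know whether that gap could plausibly be chance.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no real difference
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical pilot (30/40) vs. comparison group (20/40)
z, p = two_proportion_z_test(30, 40, 20, 40)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests the gap is unlikely to be chance
```

With small pilot groups the test will often fail to reach significance, and that itself is the lesson: the pilot was too small to support the claim.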
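The weather-forecaster idea in point 2 is easy to check on your own predictions: bucket them by the probability you stated, then compare each bucket to how often the event actually happened. A small sketch, with an invented forecast log:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, outcome) pairs by stated probability
    and report the observed frequency of the event in each bucket."""
    buckets = defaultdict(list)
    for stated_p, happened in forecasts:
        buckets[stated_p].append(happened)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# Hypothetical log of predictions (1 = the event happened, 0 = it didn't)
log = [(0.3, 0), (0.3, 0), (0.3, 1), (0.3, 0), (0.3, 0),
       (0.3, 1), (0.3, 0), (0.3, 0), (0.3, 1), (0.3, 0),
       (0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1), (0.7, 1)]
print(calibration_table(log))
```

A well-calibrated forecaster’s observed frequencies sit close to the stated ones; large gaps mean your confidence numbers aren’t trustworthy yet.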
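The control group in point 3 pays off at analysis time because it lets you subtract out changes that would have happened anyway. One common way to do that subtraction, not named in the post but a natural fit, is a difference-in-differences calculation; the numbers below are invented.

```python
def diff_in_diff(pilot_before, pilot_after, control_before, control_after):
    """Change in the pilot group minus the change in the control group:
    the part of the movement the tool plausibly accounts for."""
    return (pilot_after - pilot_before) - (control_after - control_before)

# Hypothetical weekly email counts per person, before and after the pilot
effect = diff_in_diff(pilot_before=100, pilot_after=80,
                      control_before=100, control_after=95)
print(effect)  # -15: of the pilot's 20-message drop, 5 would likely have happened anyway
</test>
```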
What other suggestions do you have about running pilots and measuring their impact?