Evaluation in L&D (part two)

This is the second in a series of three posts about evaluating learning and development interventions. In the first part, I looked at the utility (or not) of Kirkpatrick's four levels. In this post, I'll examine Jack Phillips' fifth level, return on investment (ROI), and the unrealised benefits of piloting interventions.

Written by Owen Ferguson
Published 09 November 2015

The fifth level

Jack Phillips is the king of ROI. An over-simplified way of looking at his contribution is to consider it just a fifth level added onto Kirkpatrick's original model. I think the way Phillips' work has been positioned is a mistake, because its real value is that he has done the most thorough examination of methods for uncovering the bottom-line impact of a learning intervention. However, a health warning should come with following his approaches.

The common refrain I hear in discussions about ROI is that it's time-consuming and hard to do. Now, I don't agree that it's difficult. With clear thinking upfront, and a small amount of numerical and methodological literacy, getting an acceptable ROI figure is relatively straightforward. What is true is that it's often much more work than it should be. This isn't normally because of any failing on the part of L&D; it's actually a management information failure. In most of the organisations I've worked with, getting hold of the basic management information required has been a slog, and getting it in a format that's easy to analyse has been even harder [1].
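The arithmetic itself really is straightforward. As a minimal sketch (the figures are hypothetical, and the hard part in practice is isolating and converting the benefits, not the sums), the standard ROI and benefit-cost ratio calculations look like this:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """ROI: net programme benefits as a percentage of fully loaded costs."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """BCR: monetary benefits returned per unit of cost."""
    return benefits / costs

# Hypothetical programme: 80,000 fully loaded cost,
# 240,000 of isolated, monetised benefits over one year.
print(roi_percent(240_000, 80_000))         # 200.0 -> a 200% ROI
print(benefit_cost_ratio(240_000, 80_000))  # 3.0  -> a 3:1 ratio
```

The formulas take seconds; it's assembling credible benefit and cost figures from management information that eats the time.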

When to calculate ROI

With all that said, it is possible to plan ahead and put in place all you need to get a decent ROI figure. Before doing so, it's worth bearing in mind that Phillips himself suggests that you only do the work to calculate ROI for learning interventions when they meet the following criteria:

  • they affect a large target population, having a significant impact on the organisation as a whole
  • they are strategically important
  • they have a long life cycle
  • they are expensive in terms of monetary value or time
  • they are high profile and are of interest to senior management

Isolating the impact and 'pilots'

Phillips recommends isolating the effects of a learning intervention from other environmental influences. Before even attempting to do this, ask yourself whether you should bother doing it at all. Often, learning and development solutions come as part of a package of measures that impact on performance outcomes. It could be that you need to evaluate the whole package - the whole performance solution - rather than just the 'training' part.

In fact, isolating the impact of a learning and/or performance intervention is easy: you use a suitable control group. We almost, but not quite, do this whenever we pilot an intervention, but we need to be more robust in ensuring the pilot isn't just a waste of everyone's time.

Running a leadership development programme? Measure the performance indicators of leaders who have gone through the programme against those of a comparable group who haven't. You don't have a comparable group because everyone is being put through the programme? Why on earth are you putting your entire leadership team through a programme when you don't know how effective it is? Because of time pressure? Because there's a belief that it will be effective? Because it worked in a different organisation? None of these are valid, for the simple reason that, whatever the pedigree of the programme, in your organisation it could result in poorer performance - and what would the cost of that be? Only once you know that the intervention is effective should you roll it out more widely.

What about a new performance management system? Surely you should provide training to all managers on that? Well, why? Why not provide some of your managers with the training you believe will result in better performance outcomes and leave the rest on the existing system? It's worked fine for the last X number of years, so another six months isn't going to make a big difference. If you find that the new system results in no difference in performance outcomes compared to the control, or even worse performance, then you know you should stick with what you've got.

The key thing to keep in mind about using a control is that you shouldn't be comparing what you do against nothing. You should be comparing the new approach with what you do at the moment. Radically changing your induction? You won't compare that against not having an induction at all, but against your current induction.

To be honest, this part shouldn't be thought about as evaluation - it's a core part of the design of the learning intervention. Testing learning interventions and content isn't something we tend to do in L&D very much, perhaps because of the humanities bias in our community. But it should be.

Meanwhile, in the real world ...

I've been challenged on this point many times in the past. The push-back is that it's just not realistic to hold things up while you 'waste time' gathering evidence, or making sure something works. Sometimes, the business leaders just expect things to happen on short timescales.

Of course there are times when moving ahead without testing something properly is the right course. It's really a matter of risk judgement. We need to ask ourselves, based on what we know, how likely it is that our learning intervention will cost the organisation money by having a negative impact on performance. The problem is that human beings are poor at judging that kind of risk, which is why we've invented piloting, control groups and evaluation methods.

It comes down to this: if we can't persuade our internal customers that pushing ahead on blind faith alone is as poor a strategy for learning interventions as it is for business in general, something is up with our level of influence.

[1] I'd note here that it's easy to get good performance information for sales departments and call centres, less easy for other departments.

In the final part of this series, I'll look at what I consider to be the most rounded method for evaluating learning interventions: Robert Brinkerhoff's Success Case Method.

About the author

Owen Ferguson

Product and Technology Director
A self-confessed nerd, Owen is passionate about taking an evidence-led approach to developing digital products that solve real-world problems. He is also a regular on our weekly podcast.
