Evaluation of Training

Summary and Conclusions


I discuss below five possible ways to evaluate the impact of developmental training. None of these can prove that the training has a quantifiable impact on the bottom line of the organisation: too many variables affect it, and it is impossible to disentangle them. However, the methods can give you useful qualitative information about the impact of the training and ways to increase it.


All the methods take time and energy. You have to weigh the costs of each method against the benefits the organisation will gain from it to decide on the right one.


Purpose of the Training


Companies invest in training to achieve a purpose. The training is successful if you can see that it contributes to this purpose in a cost-effective way. You need to be clear about what the purpose of the training was before you can evaluate the training against it.

One way to clarify the purpose is to find people in the organisation who are already doing what you want more people to be doing. Suppose the company wants to use the training to develop a coaching culture. Who are the people who are already good coaches? What do they do that makes them good? The purpose of the training might be to have more people like them.


Training is more than running courses


Truly effective training has at least four parts:

  1. The right people go on the right courses (or other activities) for them and the organisation, and are willing to learn.
  2. The course or other activity works. Participants learn the new ideas, practise the new behaviour, develop the new attitudes and want to apply the learning when they return to work.
  3. The learners apply the new learning at work, find it works better than what they did before and become more eager to continue learning new things.
  4. The new behaviour directly leads to improved organisational performance (profitability, for example).




For training to work, in the sense that it has an impact on the company’s effectiveness, all the above steps have to be right. You could have excellent, well-run courses that fail to have any impact if you train the wrong people in the wrong things. They will also fail if people are discouraged from applying their learning, either directly (by ridicule) or indirectly (by lack of interest or support), when they return to work. Applying the learning has to directly improve organisational performance: you could have people learning new skills and applying them and still fail to make an impact if using the skills makes no difference to organisational performance.


This means that a narrow focus on the effectiveness of the training courses or activities will not help you. You might decide that the training courses are no good because they have no impact. Then, when you change to another provider, there is still no impact, because the real issue (for instance) is that the organisation is so lean there is no time to try out new things.


Approaches to evaluation


  1. Purely pragmatic

Let us assume the purpose of the training is to create a coaching culture. You could simply count the number of managers who coach their people well. You would need to be connected enough to hear about good coaches and good coaching experiences; if not, you could set up some process for doing this.


Involve the people who want to know about the effectiveness of the training in creating an informal monitoring process. If this shows that, after the training, you have more and better coaches and more and better coaching, then you have some qualitative evidence of the benefits that the key people will believe.


Then you would simply need some idea of the costs. Managers are used to making judgements on the basis of partial and subjective information. What might come out of it would be something like:


“We have spent $500,000 over three years on the training. Our monitoring has shown that we now have 300 managers who coach their staff actively and well; when we started we had 30. Our own personal experience of coaching convinces us of its personal and business value. This looks like a good investment.”




This is a very broad-brush approach. It does not tell you how to improve what you are doing now, or how it is working. It is simple and cheap and may be “good enough”. I helped to set up an internal career counselling service in ICI. After much agonising about evaluating it, we decided that if people said the service was helpful that was good enough. If your investment is considerable then you may want to go further.


  2. Appreciative Evaluation

Clearly the total training system (i.e. before the courses, the courses themselves and after the courses) is already having some good impact. If you can find out what this is, you will know what the impact of the training is. If you find out what supports or encourages this impact, you can spread the good practices and get more from the training. When people have thought about the benefits, they will also be willing to tell you about their wishes or dreams for the total training system. These will help you make it even better.


People would tell their stories about the use they have made of the training, what helped them use it successfully (before, on or after the course), their most important learning from the course, their ideas for where the learning could help the organisation and their wishes for the training in future. The best way to tell stories is in conversation with another person. People could interview each other, or you or others could conduct the interviews.


As interviewers learn as much as those interviewed, it might be best for people to interview each other. This would require a small amount of additional training. They would then distil the information to discover common patterns and ideas. You could do this locally to account for cultural differences. You could do the interviews in the normal course of day-to-day work or in a workshop format.


The results of the interviews would be a mine of information about what the courses are delivering, what helps the courses be effective and have an impact afterwards, and ideas for making the whole system even better. The personal experiences and examples would make the information vital and sharp.


Enquiry creates what you enquire into, so asking people about what they have gained and what has helped will increase their energy and commitment to their training.


Your evaluation this time might be: “We have spent $500,000 on the training and we have many stories to show that people at all levels are listening to each other better, and that communications and support are improving. There are also stories that show direct financial benefits.


Just one story showed that we have gained a piece of work as a result of improved collaboration that will make us more than the entire cost of the training this year. This justifies our investment.


We have also learned how managers are helping their staff use the training and therefore how we can get even more benefit from it. They are also keen to do so. These more than justify the extra cost of the evaluation.”




This approach is also pragmatic. It focuses on what works and assumes that if you find out what works you can arrange to have more of it. In the end, the decision to go ahead with the training will require an act of faith: you do not have proof that the training will go on delivering the results. If you want to be more objective, you can try an objective scientific approach.


  3. Objective Scientific Evaluation

This is just an outline of some approaches. There are probably many others.

The essence of the scientific method is to create a hypothesis that something is true and then to design and run some experiments, or measure something, to confirm or refute it. If the results of the experiments or measurements are what you predict from the hypothesis, this tends to confirm that the hypothesis is indeed true, so you design a new experiment to test it further. If the results refute it, you refine the hypothesis and try again.


The initial hypothesis you wish to test by evaluating the training is, I think: “Our investment in the training produces financial benefits that justify its costs.” This is not a testable hypothesis, as “justify” is a purely qualitative word. If you are going to use a scientific method of evaluation, you will need to refine it to something like: “Our investment in the training will produce financial benefits over the next five years that are ten times the cost of the training (at constant dollars).”


You will now need to measure the impact of the training. Objective evaluation assumes that the more detached the observer is, the more accurate the information will be. So you should not rely on self-reports or stories, but on observation or measurement.


You could measure the increase in value added per person in a team and see how it correlates with the investment in training per team member. If an investment of $x per person produces added value per team member of $12x, you would think the investment is justified and could put a figure on it.
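As a sketch of this calculation (all figures are hypothetical, following the $x example above, and do not come from any real organisation), the return multiple per team could be computed like this:

```python
# Hypothetical per-person training investment and observed increase in
# added value, for a few teams. None of these numbers are real data.
TEAMS = {
    "Team 1": {"investment": 1_000, "added_value": 12_000},
    "Team 2": {"investment": 1_500, "added_value": 15_000},
    "Team 3": {"investment": 800, "added_value": 6_400},
}

TARGET_MULTIPLE = 10  # the refined hypothesis: benefits at least 10x the cost


def return_multiple(investment: float, added_value: float) -> float:
    """Added value earned per dollar invested in training."""
    return added_value / investment


for name, f in TEAMS.items():
    multiple = return_multiple(f["investment"], f["added_value"])
    verdict = "meets" if multiple >= TARGET_MULTIPLE else "misses"
    print(f"{name}: {multiple:.1f}x ({verdict} the {TARGET_MULTIPLE}x target)")
```

In practice you would need many teams and time periods before trusting any correlation between the two columns.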


Another approach is to define the competencies, skills and knowledge required for each job and measure people’s performance against them before the training. Then run the training and measure their performance again, and work out the added value people create as a result of their new, better performance.


Or you could ask participants’ managers to report examples of people demonstrating the skills they had been taught on the courses, together with an estimate of the added value of their enhanced contribution. The managers would need an accurate assessment of the skills shown before the course to know that the extra skills were the result of the training.


If the combined added values after five years are more than ten times the cost of the training, then the hypothesis is supported and the training is justified.


The evaluation would be: “The training cost $500,000 in constant dollars and after five years produced a return of $6,000,000, therefore our investment was justified.”
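The refined hypothesis reduces to a single comparison. A minimal sketch, using only the illustrative figures from the text:

```python
def hypothesis_holds(training_cost: float, five_year_benefit: float,
                     target_multiple: float = 10) -> bool:
    """True if the benefit is at least target_multiple times the cost
    (both in constant dollars, as the refined hypothesis requires)."""
    return five_year_benefit >= target_multiple * training_cost


# Illustrative figures from the text: $500,000 cost, $6,000,000 return.
print(hypothesis_holds(500_000, 6_000_000))  # a 12x return clears the 10x bar
```

The hard part, of course, is not this comparison but producing a defensible benefit figure to feed into it.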




This approach is very bureaucratic. It will give you quantitative information, but even this is shaky because it is based on subjective estimates. It looks a credible and defensible process but is very expensive in time and effort, and it does not directly give useful qualitative information.


There are problems with the methods too. How do you objectively measure added value and performance, and how do you show that improved bottom-line performance flows from the training? The improvement could have many causes; you may not even know what some of them are. Variables outside a team’s control influence its performance. You cannot easily prove this way that improved performance comes from the training.


  4. One Organisation Development approach

Organisation development is a planned approach to managing change. You could apply it to evaluating the impact of training. The people involved work together to understand the issues and plan what to do.




  1. A client manager has a need. Who is/are the clients for the evaluation work? How will he/she/they judge whether it is a success?
  2. Data is collected from those who know what is going on. This could be stories, interviews, observations, performance feedback or financial figures.
  3. The data is fed back and discussed openly with the people who provided it. This could be in one group or several; they represent all the interests involved, and the process enables everybody to validate the data.
  4. They decide together what the information means. This is the evaluation.
  5. The group (or groups) decide the action they will take to make improvements. There will be actions they can take immediately and some that will take time and planning.
  6. People in the organisation act. You have to do something to improve.
  7. Review the outcome and how you got there. This helps you learn how to make improvements next time.

One way to do this (there will be many ways):


  a) Establish who the client group is that requires the result of the evaluation.
  b) Set up three representative but independent teams.
  c) Have team (A) investigate the aims of the training.
  d) Have team (B) investigate the outcomes of the training.
  e) Have team (C) investigate the costs and financial benefits of the training.
  f) Each team uses the general methods in the process above to collect valid information from people who know about the areas they are investigating. They check their findings by feeding the results back to the people they consulted and modify them as necessary. (You may have to give people some training in this method of working.) You could use large groups, workshops or representative groups if lots of people are involved.
  g) The teams then share their validated information and conclusions with each other and with the client group, supported by one or two of the people they consulted.


  • Team A might say, for example: “The aims of the training were to create a coaching culture and, as a result, to improve our internal communications, get more and better ideas and improve our service to customers. Our customers will rate our customer service more highly, we will grow our business by 5% and we will increase our profits by 10% (or equivalent).”
  • Team B might say, for example: “We have found evidence that managers are doing more coaching and that the coaching is being valued. The best people are staying in the organisation longer and we are improving our ability to recruit good staff. Internal communications are improving. Before the training 60% of our customers thought we provided a good or excellent service; now it is 75%. We have discovered examples of improved cooperation between departments.”
  • Team C might say, for example: “We estimate the training cost $500,000 this year: $100,000 in fees to the provider and $400,000 in salary costs. Last year the board predicted we would make $100m profit, and we made $100m. However, things happened that we could not predict, such as the major fire in plant Y, as well as some unexpected bonuses such as a favourable exchange rate. The board estimates the net downturn from these to be $12m to $20m. Since we met the forecast despite this, our best guess for the added value from the training is $12m to $20m.”
  h) The groups explain their conclusions and the reasons behind them to the client group and each other, with lots of listening. People clarify by discussion what the information means to them and identify what needs to be done to help the training deliver even more benefits.
  i) The client group decides if the investment is justified and whether to continue it.
  j) Form small temporary teams to progress the issues identified in (h) above. These could be at any stage of the training system.
  k) Act to make improvements.
  l) Review what has happened, the benefits and what you have learned by working this way.
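Team C’s inference can be made explicit with simple arithmetic (all figures are the hypothetical ones from the Team C example, not real data):

```python
# Hypothetical figures from the Team C example.
forecast_profit = 100_000_000   # the board's prediction for the year
actual_profit = 100_000_000     # what the company actually made
setback_low, setback_high = 12_000_000, 20_000_000  # net cost of fire etc.

# Profit met the forecast despite the setbacks, so something of roughly the
# setbacks' size must have offset them; the training is the candidate.
implied_low = actual_profit - forecast_profit + setback_low
implied_high = actual_profit - forecast_profit + setback_high
print(f"Implied added value: ${implied_low:,} to ${implied_high:,}")
# Implied added value: $12,000,000 to $20,000,000
```

Note the leap in this reasoning: the offset could equally have come from anything else that went unexpectedly well that year, which is exactly the attribution problem the text describes.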


This is a rigorous approach that uses the knowledge, judgement and skills of the people in the organisation to gather valid data and make the best possible decision. Even using this method, it is still very difficult to prove conclusively a causal link between training and financial results. There are too many things, other than training, that impact on these results. Even being this rigorous, you end up with a “best guess”.


This approach will give you a mixture of subjective and objective evidence about the impact of the training and help the organisation to make an informed choice about what to do.


  5. Project-based evaluation


This would require a different way to deliver the training, which you may not wish or be able to adopt: training would be offered to meet a specific business need. Suppose you have a sales team that has low morale and is not meeting its sales targets. You could put a cost to this: the team’s salary and other costs are $x p.a., the contribution it makes is $2x p.a., and most other sales teams in the organisation contribute $4x p.a.


You run training in coaching and influencing skills, a team-building course and some management training for the sales manager, at a total cost of $0.1x. One year after the training the team’s contribution is $3x p.a. and morale has improved. This looks like a good investment that justifies the training.


Even here, there could be a flaw in the logic. Suppose that in that year trading conditions were particularly good and that the contributions of most of the sales teams in the organisation rose to $4.5x p.a. Can we now be sure that it was the training that made the difference for the team in question? How much difference did it make?
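One hedged way to allow for the market-wide uplift is a simple difference-in-differences estimate, using the hypothetical $x multiples from the text. It assumes the team would otherwise have improved in line with its peers, which is itself unprovable:

```python
# Contributions in multiples of $x (the team's annual salary and other costs).
team_before, team_after = 2.0, 3.0    # the trained, struggling team
peers_before, peers_after = 4.0, 4.5  # typical untrained sales teams

market_effect = peers_after - peers_before   # uplift from trading conditions
raw_gain = team_after - team_before          # the team's observed improvement
training_effect = raw_gain - market_effect   # gain attributable to training

training_cost = 0.1
print(f"Estimated training effect: {training_effect:.1f}x "
      f"against a cost of {training_cost}x")
```

On these figures the training still looks worthwhile, but the estimate rests entirely on the assumption that the peer teams are a fair comparison.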




There are fewer variables in this method and the training is more focused. Even so, it will not always be possible to prove a causal link between the training and bottom-line results, though you would be able to say that training was the probable cause in many cases. It would require a major rethink about how you deliver training.


Finally, I hope this helps. I have tried to be objective and be aware of my biases and not let them influence these thoughts. The exercise has been most instructive. I now understand why most organisations do not evaluate soft skills training. I also appreciate even more those people who are enthusiastic about developing people in organisations.

If you would like help using this idea, or have any comments or questions please contact me. Thanks, Nick