For any L&D program, how can HR ensure the outcomes are truly favourable? One industry expert explains the four levels for proper evaluation
It is not enough to know which metrics to use; you must know how to use them. This is especially true in a field such as L&D, where evaluation can take many different forms.
HRD chatted with James Siew, regional consulting director at Cegos Asia Pacific, about how HR could link evaluation with the appropriate metrics within any company’s training and coaching framework.
Siew suggested using the Kirkpatrick Model, which breaks evaluation into four distinct levels.
1) Basic reactions
Level one evaluates the reaction that people have to the training provided, Siew said.
“Say for example you are engaging XYZ Training. Depending on how you develop the questionnaire – it could be on a one to five points scale where five means strongly agree for example – what you do is you run a cumulative average.”
“For this particular company, if the trainer did well, the course is useful, the materials are good, etc, you would expect an average of about 4.7.”
HR can then decide whether or not to retain the provider, he added. If the average suddenly drops, for example, the quality is no longer satisfactory and HR can make the decision to switch training providers.
“This is something you can do independently of the performance management system because it’s an operational KPI,” Siew said. “You can do an overall average of training provided for the entire year, slice it by individual vendors and make a decision based on that.”
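As a sketch of this level-one KPI, the survey averaging Siew describes could look like the following. The vendor names and scores are hypothetical, not figures from the article.

```python
# Level-one evaluation sketch: average 1-5 survey scores, sliced by vendor.
# All vendor names and responses below are illustrative assumptions.
from collections import defaultdict

def vendor_averages(responses):
    """responses: list of (vendor, score) pairs on a 1-5 agreement scale."""
    by_vendor = defaultdict(list)
    for vendor, score in responses:
        by_vendor[vendor].append(score)
    # Cumulative average per vendor across all courses in the period
    return {v: sum(s) / len(s) for v, s in by_vendor.items()}

responses = [
    ("XYZ Training", 5), ("XYZ Training", 4), ("XYZ Training", 5),
    ("ABC Learning", 3), ("ABC Learning", 4),
]
print(vendor_averages(responses))
```

The same per-vendor slicing can be run over a full year of responses to support the keep-or-switch decision Siew mentions.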
2) Acquisition and knowledge
The second level is more complex, Siew warned, mostly because each training course gives different results which can’t be easily compared.
“If I do this with an engineering course on failure mode and effect analysis and I lump it together with coaching skills for managers, that is not giving me like-for-like,” he said.
He recommended looking at each course individually and providing a knowledge assessment both before and after training.
“Before the course, the average mark may be 25% knowledge and then after the course it jumps to 85%,” Siew gave as an example. “So you know you’ve made a 60 point increase. That measures the acquisition of knowledge from a collective basis.”
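The pre/post calculation Siew describes is a simple difference of group averages. A minimal sketch, using hypothetical test marks chosen to match his 25%-to-85% example:

```python
# Level-two evaluation sketch: percentage-point gain between average
# pre-course and post-course assessment marks. Marks are illustrative.
def knowledge_gain(pre_scores, post_scores):
    """Return the point increase between average pre and post marks."""
    pre_avg = sum(pre_scores) / len(pre_scores)
    post_avg = sum(post_scores) / len(post_scores)
    return post_avg - pre_avg

pre = [20, 25, 30]   # pre-course marks, %
post = [80, 85, 90]  # post-course marks, %
print(knowledge_gain(pre, post))  # 60.0 point increase
```

Because the comparison is within one course, it avoids the like-for-like problem of pooling unrelated courses.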
3) Change and behaviour
The third level is all about evaluating the acquisition and usage of new skills, Siew said.
“You can apply skills assessments from a point of view of change in behaviour. If I can do it, that means I’ve demonstrated the skill,” he told HRD. “If you are competent and you can follow all the steps to run the process from start to finish, then you’ll be certified.”
Sales is one role that really benefits from this type of L&D evaluation, he said. For example, you can assess the capabilities of staff in aspects such as diagnostic questioning techniques to see if they’re actively engaging customers instead of just selling products.
Due to the more intangible aspects of this type of training, methods such as role playing and assessment through observation are used. Siew also recommended using parts of the employee engagement survey as metrics to find a pattern over time.
“For example, last year our engagement score may have been 70. We did this thing, we provided training for people, we equipped managers to give more coaching, etc, and lo and behold this year the engagement score jumped from 70 to 75.”
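Tracking engagement scores for a pattern over time, as Siew suggests, amounts to comparing year-on-year changes. A small sketch, with hypothetical years and scores (the article only gives the 70-to-75 jump):

```python
# Level-three evaluation sketch: year-over-year engagement-score changes.
# Years and scores below are illustrative assumptions.
def engagement_trend(scores_by_year):
    """Return the change in score for each year relative to the previous one."""
    years = sorted(scores_by_year)
    return {y: scores_by_year[y] - scores_by_year[prev]
            for prev, y in zip(years, years[1:])}

scores = {2021: 68, 2022: 70, 2023: 75}
print(engagement_trend(scores))  # {2022: 2, 2023: 5}
```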
4) Return on investment
The fourth level of evaluation, return on investment (ROI), is the most complex and requires a combination of the other three levels, Siew said. “It’s multivariable. It’s not two dimensional so you have to make that correlation.”
“You can’t apply ROI on everything. It’s very hard to measure and in fact it’s a lot of work.”
This is because ROI can only be used with a few strategic initiatives, Siew said. In fact, during his 23 years of working in L&D, he had used a level-four evaluation only once: in a course he ran for 12 operations managers, to see how they applied negotiation skills in securing better rates with vendors.
“We mapped it and we worked with finance to make sure there was a combination of both behaviour based and financial metrics. For the amount we invested in training 12 key leaders in the region, we gained a net 200 times ROI.”
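The net ROI multiple Siew cites is a straightforward ratio of net financial benefit to training spend. The article gives no dollar amounts, so the figures below are purely hypothetical, chosen only to reproduce a 200x result:

```python
# Level-four evaluation sketch: net ROI as a multiple of training investment.
# Both figures are hypothetical; the article reports only the 200x outcome.
def roi_multiple(net_benefit, investment):
    """Net benefit (benefit minus cost) expressed as a multiple of cost."""
    return net_benefit / investment

investment = 50_000       # assumed cost of training 12 managers
vendor_savings = 10_050_000  # assumed savings from better vendor rates
print(roi_multiple(vendor_savings - investment, investment))  # 200.0
```

The hard part in practice, as Siew notes, is the mapping work with finance to attribute savings to the training rather than the arithmetic itself.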
Related stories:
Avoid analysis paralysis with these three L&D metrics
Data analytics – is your HR team missing out?
Four steps to maximise the value of HR analytics