Making psychometric assessment work for you

Psychometric assessment has proven itself time and again over the last 100 years, but industry pioneer Scott Ruhfus says organisations will need to rediscover how to make it work effectively in a new, tech-driven age

Scott Ruhfus has watched psychometric assessment evolve in phases over the 30 years since he left a career heading the HR function at a series of multinationals to embrace what was then an emerging science.

That’s why today, as the managing director of Saville Consulting Asia Pacific, Ruhfus is watching the latest innovations in psychometric testing with caution. With such a heavy emphasis on technology and process efficiency, he says, organisations risk losing their way and getting psychometrics wrong.

“HR departments are correctly looking to improve process efficiency, and are increasingly looking to applicant tracking systems to control process flow,” Ruhfus explains. “However, these systems don’t do any of the heavy lifting that psychometrics do, and it can become a case of the technology tail wagging the dog to make it all work.”

The problem with pursuing process efficiency alone is that it can prioritise the wrong things. In trying to save on costs, HR can end up entrenching selection errors.

“Sometimes, there is confusion about what assessment methods to use at what phase of the selection process,” Ruhfus explains. “In graduate selection, for example, aptitude tests are commonly used as an early screen. Over-reliance on this method can mean you lose candidates with other outstanding qualities like building relationships, influencing outcomes, and driving results.” Ruhfus says the new breed of behavioural-style questionnaires, such as Wave™ and its screening counterpart Work Strengths, makes it more sensible to combine aptitude and behavioural assessments at the beginning of the screening process.

What to use
Cognitive ability testing, otherwise known as aptitude testing, has long been the jewel in the crown of psychometric science, because decades of research show that test scores correlate well with actual on-the-job performance. While it works best for more complex jobs – such as managerial and professional roles – it remains predictive for lower-level jobs as well.

This is backed by the most recent Saville Consulting research on behavioural-style assessment (which measures propensities towards certain kinds of behaviour and styles of interacting with others). Together, aptitude and behavioural assessments do the ‘heavy lifting’ in quality assessment. “Both have demonstrated pulling power and are cheap and efficient,” Ruhfus says.

There are new and growing trends, too. Situational judgement tests, for example, ask candidates to respond to tailored scenarios and are increasingly being used strategically to project organisational branding in front-end screening. “For example, they can include things like a full video portrayal of a situation, which can be very lifelike. It’s an interesting task for a candidate to do, and it has respectable validity, though not as high as aptitude and behavioural tests.”

Then there’s gamification, which turns more standard testing into animated problems and scenarios in a way that is said to appeal more to Gen Y candidates. While it’s fashionable and holds potential, Ruhfus says HR professionals should be cautious. “I haven’t seen any compelling evidence for their validity to date. The literature kind of says, ‘Good idea, keep working on this, it makes sense’, but I don’t think we’ve actually cracked it yet.”

Some gamified products have even gone “backwards” in time with “quite trivial sorts of questions” that are not job relevant, Ruhfus says, which may be an HR “backlash waiting to happen”. He says to ask for proof. “They may be a new, sexy and fun way of testing, but where is the job relevance in them?”

When to use it
Because testing is cheap relative to the return on investment it delivers, Ruhfus says it is best deployed as early as possible. “We have now made tests extremely accessible at a fraction of the cost. They should really be used more up front, and they should be testing more people. The more people you can survey, the better the return on investment and outcome for a company.” Ruhfus trains HR practitioners to evaluate tests and to understand the financial impact they can make if used properly.

In practice, psychometrics are not always used this way. Ruhfus points to the continued reliance of many organisations on interviewing at the front end. “Interviewing is very time-consuming because you usually need busy people to do it, so it is actually a significant cost internally to organisations.” Often, this means an hour or more of management time is wasted when, “in many cases, a candidate could already have told you that they are not for you through more efficient testing”.

An added bonus is that the best tests are ‘blind’ to common biases that work against organisational performance. “They are relatively blind to things like gender, ethnicity and age. Many companies are moving away from more traditional approaches because they are not getting enough good people. Testing now has the potential to turn that on its head. Testing is being used to increase diversity and inclusion and doesn’t have the sort of unconscious biases we all have as managers,” he says.
 
CAN CANDIDATES ‘GAME’ A PSYCHOMETRIC TEST?
Influencing the results of a psychometric test – through outright lying or ‘impression managing’ – is a possibility. So is getting a smart cousin to complete an aptitude test for you, particularly in an age when tests are performed online. Scott Ruhfus says there are ways to mitigate these risks.

1) Supervised retesting: A shortlist of candidates can be retested under supervised conditions. Saville, for example, offers parallel versions of the same test so that supervised and unsupervised results can be compared.

2) Get professional: Recent developments in personality assessment have found better ways to combat impression management, homing in on the areas where distortion is likely to have occurred. This allows practitioners to target probing follow-up questions at those areas.

For those who think interviewing will yield a more truthful gauge of a candidate, think again. “Research shows 97% of candidates lie during an interview, or in some way distort the truth,” Ruhfus says.

How to use it
The effectiveness of psychometric testing comes down to achieving the highest possible ‘validity’ – the extent to which results correlate with the outcome HR wants, which ultimately means scores that predict job performance. Heavy reliance on résumé culling, for example, has low validity: it is far less effective than psychometric testing at predicting job performance.
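To make ‘validity’ concrete, here is a minimal illustrative sketch in Python – using placeholder numbers, not Saville or Ruhfus data – of how a validity coefficient is typically estimated: as the correlation between candidates’ test scores and a later measure of their job performance.

import numpy as np

# Placeholder figures for illustration only; a real validation study would use
# a much larger sample of actual hires.
test_scores = np.array([55, 62, 48, 71, 66, 59, 44, 80])               # selection-test scores
job_performance = np.array([3.1, 3.4, 2.8, 4.2, 3.9, 3.0, 2.5, 4.5])   # later performance ratings

# Criterion-related validity: the correlation between test score and performance.
validity = np.corrcoef(test_scores, job_performance)[0, 1]
print(f"Estimated validity coefficient: {validity:.2f}")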

Confusion can arise when test results conflict with what a manager believes based on an interview, Ruhfus says. For example, a manager may claim the results of an aptitude test are ‘wrong’ based on their interview impression, which is heavily influenced by social rather than cognitive factors. HR practitioners sometimes don’t know how to address this apparent quandary. “The best kind of interview done by the best kind of interviewer makes at least as many mistakes as a test, and the truth is, you don’t actually get to see the job performance of the people that you end up rejecting,” he says. Everyone thinks they are a good interviewer, but the research is unequivocal. Ruhfus says aptitude testing combined with a good interview delivers an incremental improvement in validity over interviewing alone.

Organisations can also go wrong before they even start if they fail to do a proper ‘job analysis’, to set the right parameters for testing.

Competency frameworks are important to organisations, and help to underpin change initiatives. However, Ruhfus says they are often implemented without an eye to assessment. He recommends organisations consult an experienced specialist before developing in-house competencies and assessment criteria.

The future
Technology will play an increasing role in testing in future – as long as it is used effectively. “I think psychometrics will be intimately connected with technology, because it is a way to deliver testing efficiently and in an attractive way. Down the track, things like gamification could well have legs.”

Ruhfus has already introduced a range of virtual assessment centre exercises – Virtual AC™ – into the assessment mix.

And while Ruhfus expects continued development in cutting-edge areas such as iris identification technology to stamp out cheats – and perhaps, one day, even the exploration of genetic testing – HR managers will still need to know when to show a human touch.

“In the end, you want to attract the good candidates to you instead of your competitors, and you have to impress them as much as they need to impress you. To do that, you need to spend quality time with the most suitable candidates. The question really is, what is the optimal point at which to do that, and how can we get those good candidates in front of a hiring manager sooner rather than later?”
 
ESSENTIAL STATS
30% – Increase in revenue achievable by managing talent against validated behavioural fit equations

40% – Typical cost of recruitment error as a percentage of annual salary

50% – The increase in validity achieved by moving from a test with a validity of 0.4 to one of 0.6. Because the return on investment from testing is linear in validity, the potential impact on organisational productivity is huge (see the illustrative sketch below)

Source: Saville Consulting
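The 50% figure above is simple arithmetic (0.6 is 50% higher than 0.4), and the ‘linear in validity’ claim reflects standard selection-utility reasoning. Below is a minimal sketch, assuming the Brogden-Cronbach-Gleser utility model and placeholder figures rather than Saville’s own numbers, of why the gain scales in step with validity.

# Illustrative sketch of the Brogden-Cronbach-Gleser selection-utility model,
# one common rationale for ROI being linear in validity. All inputs are
# assumed placeholder values, not Saville data.

def selection_utility(validity, sd_performance_dollars, mean_selectee_z,
                      n_hires, n_applicants, cost_per_applicant):
    # Annual dollar gain from better hires, minus the cost of testing applicants.
    gain = n_hires * validity * sd_performance_dollars * mean_selectee_z
    cost = n_applicants * cost_per_applicant
    return gain - cost

for r in (0.4, 0.6):
    u = selection_utility(validity=r, sd_performance_dollars=20_000,
                          mean_selectee_z=1.0, n_hires=10,
                          n_applicants=200, cost_per_applicant=50)
    print(f"validity {r}: estimated annual utility ${u:,.0f}")

# The gain term is proportional to validity, so moving from 0.4 to 0.6
# (a 50% relative increase) raises the gain term by 50%.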

SAVILLE CONSULTING ASIA PACIFIC
Saville Consulting Asia Pacific, part of the Willis Towers Watson Group, represents world-leading technology and excellence in the research and development of talent assessments. Our world-class psychometric suite, coupled with our passion for enhancing organisational productivity, creates solutions that break new ground in the industry and deliver innovation across the talent cycle.

