The failure rate at the end of trial periods has hovered around 20% for a long time, despite HR departments becoming ever more professional. On closer examination, this surprising persistence has a simple explanation: recruitment errors remain a reality. Put simply, the techniques used to predict professional behaviour and success do not always fit the company's objectives.
Persisting with unsuitable methods is no longer inevitable. New techniques are emerging: Big Data, predictive recruitment and matching. They promise more reliable ways to rank candidates and thus to overcome this madness. But should we put blind trust in the machines (and in the people selling them)?
The new assessment systems inspired by Big Data offer numerous solutions. To help those wanting to improve their practices, here are some key questions for checking the relevance of the tools on offer.
Has the model been validated on a relevant sample?
The size of the tested sample is obviously an essential criterion. But has the model also been tested on a subsample comparable to the characteristics of the specific company? A model can be predictive in general yet far less relevant for a particular job, company or labour pool. It is therefore important to check whether the general rule can be transposed to the organisation in question.
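The gap between overall validity and subgroup validity is easy to demonstrate. The sketch below uses synthetic data (all numbers and variable names are illustrative, not from the article): a screening score that genuinely predicts performance in the general pool can be uninformative for one specific population, which is exactly what a subsample check would reveal.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Illustrative synthetic data: half the sample is the general labour pool
# (group 0), half is a hypothetical specific company (group 1).
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)

# Performance tracks the score in the general pool, but is unrelated to it
# in the specific company.
noise = rng.normal(0, 1, n)
performance = np.where(group == 0, 0.7 * score + 0.5 * noise, noise)

def validity(x, y):
    """Pearson correlation: the usual criterion-validity coefficient."""
    return np.corrcoef(x, y)[0, 1]

overall = validity(score, performance)
subgroup = validity(score[group == 1], performance[group == 1])
print(f"overall validity:  {overall:.2f}")   # clearly positive
print(f"subgroup validity: {subgroup:.2f}")  # close to zero
```

A vendor reporting only the first number would look convincing; only the second tells the buying organisation whether the rule transposes to its own context.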
Has the model been validated with relevant statistical techniques?
The psychometric qualities of a test are the hardest part for non-specialists to evaluate. How can you assess the quality of the analysis or the statistical techniques if you are not an expert yourself? The simplest check is whether the model was established by researchers and the results published in scientific journals. A scientific journal is peer-reviewed: the articles it publishes have been anonymously evaluated by a scientific committee made up of expert researchers in the field.
Does the model used describe conditions that are essential for performance without being short-sighted?
To make an individual characteristic a recruitment criterion, you must be sure not only that those who possess it will succeed, but also that those who lack it have no chance of succeeding. In other words, the criterion must be both sufficient, guaranteeing the performance of some, and necessary, explaining the impossibility of success for the others.
Does the model offered really explain what makes the difference for success?
Big Data excels at surfacing relationships between variables. But neither the quantity of data nor the sophistication of the statistics is enough on its own. What meaning should be given to a link between eye colour and the mention of a qualification, or between geographical origin and salary? Do such links describe characteristics which really affect performance? Making sense of the data is primarily a theoretical exercise; the figures can only validate models once those models are put to the test. The model on offer must therefore explain, and not merely describe, the links between individual characteristics and performance. That is the knowledge which can significantly improve recruitment and assessment processes.
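The distinction between describing and explaining can be made concrete. In this minimal sketch (entirely synthetic; the "confounder" and variable names are hypothetical, not from the article), a CV signal appears strongly correlated with later performance, yet the link disappears once a third variable driving both is controlled for, showing that the raw correlation explained nothing.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Hypothetical confounder (say, access to a selective school) that drives
# both the visible CV signal and later performance.
schooling = rng.normal(0, 1, n)
cv_signal = schooling + rng.normal(0, 1, n)
performance = schooling + rng.normal(0, 1, n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def residual(y, x):
    """Residuals of y after removing a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw = corr(cv_signal, performance)                # looks predictive
partial = corr(residual(cv_signal, schooling),    # control for the
               residual(performance, schooling))  # confounder: link vanishes
print(f"raw correlation: {raw:.2f}, after controlling: {partial:.2f}")
```

A purely descriptive model would select on `cv_signal`; an explanatory one would identify `schooling` as the real driver and treat the CV signal as a proxy at best.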
Are the rules of selection unexpected or do they confirm pre-existing common sense?
HR professionals should be wary if a psychometric model produces no surprising information: no new concepts, no unexpected relationships between individual characteristics and performance. In such cases, there is a high risk that the model merely encodes existing stereotypes.
The way the data is used can have a significant effect on employees' futures, but also on the performance of organisations. The question of how selection phases are assessed and managed with these techniques must therefore be posed in scientific, ethical and political terms. What matters, in the end, is not the tool itself but its construction principles, and then the spirit and concrete practice of its use.
Dominique Duquesnoy, Development Director, PerformanSe & Jean Pralong ‘New Careers’ Chairholder, NEOMA Business School