36% of permanent contracts are dissolved within their first year… Once we grasp the scale of the financial impact, not counting the indirect costs (employee morale, integration and training time, productivity losses, etc.), it is easy to understand why many organisations are looking to reduce their margin of error in recruitment.
The fact of the matter is that recruitment mistakes will remain a real issue as long as the evaluation methods and techniques used to predict professional behavioural patterns and success do not fully match up with organisations’ goals. It is therefore perfectly understandable for companies to be asking themselves how they can raise the efficiency of the recruitment process.
The questions organisations are asking about their recruitment methods centre on how to speed up and scale the digitalisation of testing, and the analysis of the resulting data, in order to develop selection standards.
And what if we leave selection to big data?
It is convenient for CEOs or managers to think of their companies as armies of workers who are all highly-skilled, motivated, effective and, above all, chasing the same goals. Their cutting-edge thinking tells them that they should bet on what “works”, i.e. endlessly reproducing and replicating the same profiles of the individuals that have made their company successful to date.
Predictive recruitment: criteria, individual characteristics and statistical links
Let’s begin by defining predictivity. Predictive analysis encompasses a wide range of data-based knowledge extraction techniques that analyse current and past events to produce predictive hypotheses for future developments.
What these models actually do is pick up on the relationships between numerous different factors, which enables us to evaluate performance with a view to directing decision-making.
Experts agree with the conclusion of an article published in the Harvard Business Review in April 2014: “in the case of recruitment, algorithms fare better than intuition”.
Selection criteria ought to explain why some candidates succeed better than others, or more closely match the skills a given organisation requires, and therefore how candidates should be ranked. Such criteria could be validated by bringing to light significant statistical links between two families of data: performance indicators (income, turnover, goal achievement rate, length of service, etc.) and individual characteristics (personality, motivation, aptitude, etc.). In an ideal world, predictivity could be applied to recruitment.
However, it is worth noting that not all evaluation methods have the same predictive validity. We therefore cannot make recruitment decisions on the basis of personality or cognitive aptitude alone. Schmidt & Hunter (1998), Smith (2005), Pilbeam (2006) and a number of other researchers have shown that the predictive validity coefficient of any single evaluation method is typically low, but that it increases significantly when several evaluation methods are used in conjunction. For example, combining personality and aptitude measures yields a real predictive benefit, especially for complex job profiles.
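The incremental gain from combining methods can be illustrated with the standard multiple-correlation formula for two predictors. The validity figures below are illustrative placeholders in the range this literature reports, not values taken from any specific study:

```python
from math import sqrt

# Illustrative predictive-validity coefficients (correlations with job
# performance) for two methods used separately, plus the correlation
# between the two measures themselves. Hypothetical values.
r_aptitude = 0.51      # aptitude test alone
r_personality = 0.31   # personality measure alone
r_between = 0.10       # intercorrelation of the two measures

# Multiple correlation R when both predictors are combined:
# R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2)
r_squared = (r_aptitude**2 + r_personality**2
             - 2 * r_aptitude * r_personality * r_between) / (1 - r_between**2)
r_combined = sqrt(r_squared)

print(round(r_combined, 2))  # exceeds either method used on its own
```

With these inputs the combined coefficient comes out around 0.57, noticeably above the 0.51 of the stronger method alone, which is the pattern the research cited above describes.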
In the real world, there are a number of potential dangers
The exercise should not be limited to identifying individual characteristics that correlate with performance. Further reflection is an absolute must if we want to fully grasp how significant, relevant and robust these statistical links really are. Just observing phenomena isn’t enough; we also need to analyse their causes.
The fact of the matter is that a standard’s legitimacy rests on its basis in fact. What value can we assign to a standard whose final diagnosis hinges on the results of statistical verification alone? If that were the case, instrumentation would become an end in itself: an uncontestable basis for decision-making that unites the entire spectrum of opinion. Should we attach any significance to a correlation between eye colour and A-level results, for example?
Similarly, we must also question any interindividual data analyses. Certain characteristics directly explain differences in performance. Some individuals stand out by virtue of their positive behaviour, and this genuinely does create value: they are capable of generating relevant (albeit inventive or unexpected) ideas and of adapting their practices to the constraints produced by changes that benefit the organisation. These differences relate in large part to productive capacities, which ought to be taken into consideration.
Cognitive characteristics (aptitude, cognitive schemas), which determine candidates’ context-based problem-solving capabilities, clearly fall under productive capacities. They play a role in all professional tasks and activities and promote efficiency.
Another example would be technical skills as well as knowledge gained and then converted into skills that are useful for the organisation, such as competency using certain techniques or tools. These are the sorts of things that ought to be identified and established as selection criteria.
Is it better to operate within the norm or do you need to be different if you want to create value?
Just because an individual matches the norm does not mean they will automatically generate value; all the match does is validate the beliefs, stereotypes and criteria governing the company culture. That is why it can be very tempting to use big data as a clone factory, reducing the company’s self-understanding to the value of the norm. Stéphanie Denis, vice president of A Compétence Egale, frames this question better than anyone. Moreover, is this norm, which is essentially a construct of retrospective data, suited to the company’s requirements in terms of organisation, implementing beneficial changes to adapt to its market, keeping pace with new developments and preparing for the future? The key challenge of instrumentation and data analysis is identifying which variables are useful for performance and, among those, which are genuine productive capacities and which are merely a reflection of the norm.
Some food for thought to avoid inundating yourself with certainties
Are your selection criteria based purely on data or on the theoretical interpretation of links between data?
What value can we assign to a standard whose final diagnosis hinges on the results of statistical verification alone? A statistical link does not indicate causality. Data-processing software is impervious to ridicule, even when it reports a correlation between performance and owning a Rolex among people of a certain status.
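The Rolex point can be made concrete with a partial correlation: a raw correlation between two variables can look impressive yet all but vanish once a confounder is controlled for. Everything below is a constructed toy example; here the hypothetical confounder is seniority, which drives both variables:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy data: seniority drives both luxury-watch spend and performance.
seniority = list(range(20))
watch_spend = [s + (s % 4) for s in seniority]      # tracks seniority
performance = [2 * s + (s % 3) for s in seniority]  # also tracks seniority

r_raw = pearson(watch_spend, performance)  # looks impressively strong

# Partial correlation controlling for seniority:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
r_ws = pearson(watch_spend, seniority)
r_ps = pearson(performance, seniority)
r_partial = (r_raw - r_ws * r_ps) / ((1 - r_ws**2) * (1 - r_ps**2)) ** 0.5

print(round(r_raw, 2), round(r_partial, 2))
```

The raw coefficient is close to 1, while the partial coefficient is close to 0: the watch predicts nothing once seniority is accounted for, which is exactly the distinction a purely statistical standard fails to make.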
Do selection criteria come with simplistic logic that seeks to classify everything and everyone?
The magic power of data gives us the option of assigning each candidate a binary quality label (“good” or “bad”) without considering the scale and complexity that vary from one candidate to another, and without even adopting principles for validating hypotheses.
Are selection criteria unanticipated or do they validate the pre-existing consensus?
This seems like a good point to reflect on conclusions. Do they question the organisation of the resources contributing to performance, while highlighting new criteria? Or do they reaffirm the same old clichés circulating within the organisation, like a mirror of the current status quo?
Debate should be encouraged in order to arrive at a reasonable estimate of the actual potential required to hold a position of responsibility or to follow typical operational standards.
Tools and data manipulation involve endless layers and can become formidable instruments of quasi-divine power over the fate of those employed within the company. This is why the final objective of the evaluation and management process must be discussed in both its ethical and political dimensions. The power of the norm comes down to what directors and managers do with their power and how they wield it. They must be careful not to develop tunnel vision with respect to the norm, and should avoid falling for the tool-based logic proposed by big data.
When all is said and done, it isn’t so much the instrument but rather the construction principles, the thinking behind it and the practical reality of its use that really matters. Only levers like this will be effective when it comes to humanised management within organisations. As we bring the discussion to a close, we remind you that the formal interview is still one of the best recruitment methods out there.