Organisations often ask us: “How do you know for sure that your network has the best talent in it?” So, for the first time in my life, I thought it would be a good idea to write a blog post about something inherently scientific. Let’s start by taking that question apart piece by piece. This post deals with how we “know” things, the second part will deal with the degree of “certainty”, and the last part will focus on “top talent”.
Part 1. Belief versus knowledge: how do we find our talent facts?
At the Nova top talent network we like to think of ourselves as fairly easy-going, with some wildly creative processes. It may therefore come as a surprise to some of you that, when it comes to science, we are remarkably scrupulous.
To understand our scientific process, and without diving too deep into philosophical epistemology, we have to distinguish between knowing and believing. Whereas a belief may be wrong, knowledge must be demonstrably right. This is why at Nova we take measurements and make sure they are done in the most objective manner possible. Technology assists us along the way, precisely to avoid the observational errors that could harm our reputation as talent experts.
Mechanical or clinical measurements: which one is better?
In November 2013 the Journal of Applied Psychology published a meta-analysis of decision science that found a substantial difference in validity between measuring things “clinically” and measuring them “mechanically”. To make these terms easy to relate to, you can think of them as the difference between intuitive (subjective) and calculated (objective) decision-making. Here is an example showing how straightforward this is:
Imagine buying a new family car, and that you care most about its safety features. Instinctively you might feel that a brand such as Volvo stands out in safety and that a brand like Toyota has had a lot of product recalls in the past. If you decide to aim for a Volvo, you have made a belief-based decision and risk losing out on potentially superior products from Toyota. If you were to take a more factual approach, you would instead collect the computerised impact scores from crash tests, look at the number of airbags and the relative failure-frequency reports, and combine them into a total score.
To sum up: the research tells us that the car scoring best on that combined scale is the one most likely to prevent severe injury to you or your family in the case of an accident. The difficult part of this comparison is that the intuitive decision often makes us feel safer (belief), whereas the calculated decision probably is safer (knowledge).
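To make the mechanical side concrete, here is a minimal sketch of such a combination. The metrics, weights, and values are invented purely for illustration; they are not real crash-test data.

```python
# Illustrative only: the metrics, weights, and values are invented for this example.
def safety_score(crash_test, airbags, recall_rate):
    """Combine normalised safety metrics into one mechanical score between 0 and 1."""
    crash_norm = crash_test / 5            # crash-test rating on a 0-5 scale
    airbag_norm = min(airbags, 10) / 10    # capped at 10 airbags
    recall_norm = 1 - min(recall_rate, 1)  # relative failure frequency; lower is better
    # Fixed weights, applied in exactly the same way to every car (the "mechanical" part)
    return 0.5 * crash_norm + 0.2 * airbag_norm + 0.3 * recall_norm

print(safety_score(crash_test=4.8, airbags=8, recall_rate=0.02))   # car A
print(safety_score(crash_test=4.5, airbags=10, recall_rate=0.10))  # car B
```

Because the formula is fixed, every car is judged on exactly the same evidence, which is what makes the combination “mechanical”.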
The main reason for this difference lies in the nature of these measurements. In mechanical decision-making, an algorithm or formula is applied in exactly the same way for each decision. When making decisions clinically, we use our own judgment, intuition, or insight. You can imagine that the latter method brings in a lot of noise and unwanted information; moreover, it is heavily affected by recent events, our personal preferences, our mood, or even the weather. We call this noise a “bias”: an unwanted effect that arises when we take in irrelevant information and, as a result, no longer measure the thing itself directly.
Compensating for subjective biases
Our model is designed to eliminate these problems and falls into an area called decision-support systems. This is really important for us because the quality of our talent is crucial, and since it is generally hard to get into Nova, we do not want to make mistakes.
Firstly, the format of our talent application process (including the video interview), while dynamic, is identical across all of the independent questions. This matters because even a slight alteration in the questions or in the answering time makes it very difficult to compare answers accurately. On top of that, every question or segment measures one small, singular part of something important (such as emotional intelligence). In the car analogy, we would ask questions such as: how many airbags does it have? What is the average braking distance from a hundred kilometres per hour? Ultimately all these small questions combine into a consistent measurement of safety, as in the sketch below.
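To give a feel for that structure, here is a hypothetical blueprint in code. The facets, items, and scores are made up for illustration and are not our actual questionnaire.

```python
# Hypothetical blueprint: every applicant answers the same items, each tied to one facet.
BLUEPRINT = {
    "emotional_intelligence": ["recognising_emotions", "handling_conflict"],
    "analytical_ability": ["pattern_question", "estimation_question"],
}

def facet_scores(item_scores: dict[str, float]) -> dict[str, float]:
    """Average the item scores belonging to each facet (all items on a 1-5 scale)."""
    return {
        facet: sum(item_scores[item] for item in items) / len(items)
        for facet, items in BLUEPRINT.items()
    }

print(facet_scores({
    "recognising_emotions": 4, "handling_conflict": 5,
    "pattern_question": 3, "estimation_question": 4,
}))
```

The point is that every applicant answers exactly the same items, so the facet scores stay comparable from one candidate to the next.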
Secondly, a large part of our review model is based on validated mechanical tests that directly produce a score for us and therefore require no judgment calls at all. For the rest, Nova has built a proprietary system that requires our well-trained team of psychology majors to make very precise observations (we will talk more about this in part II). We do this by only ever asking our reviewers to record information they have clear evidence for, and to compare it with the exemplary information our system provides for each score (anchor points). We have also developed so-called “fences” between different scores: clearly defined guidelines and cutoffs that help our reviewers make very granular decisions and avoid being influenced by irrelevant information. A rough sketch of the idea follows below.
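As a rough sketch of how anchors and fences could look in code (the anchor texts, evidence rules, and cutoffs below are invented for illustration):

```python
# Hypothetical anchors and fences: each score band has an anchor description,
# and fences are hard cutoffs on the evidence a reviewer has recorded.
ANCHORS = {
    1: "No concrete evidence of the behaviour in the answer.",
    3: "One clear, specific example of the behaviour.",
    5: "Several specific examples, including one under pressure.",
}

def score_with_fences(evidence_count: int, includes_pressure_example: bool) -> int:
    """Map recorded evidence to a score band using fixed cutoffs (fences)."""
    if evidence_count == 0:
        return 1
    if evidence_count >= 2 and includes_pressure_example:
        return 5
    return 3

print(score_with_fences(evidence_count=2, includes_pressure_example=True), ANCHORS[5])
```

Because the cutoffs are fixed in advance, two reviewers looking at the same evidence should land on the same score.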
Lastly, we combine all the points from the observations that we, our system, and our tests have made and push them through our final scoring algorithm. This is the stage where the computer really assists us, because it forces us to take all of the available information into account and calculates a final score from it, as in the sketch below. Human reviewers, by contrast, typically do not use all of the information, or forget parts of it.
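Sketched very roughly (the inputs and weights are again hypothetical, not our actual algorithm):

```python
# Hypothetical final combination: mechanical test scores and reviewer observations
# are merged with fixed weights, so no piece of information can be quietly dropped.
def final_score(test_scores: list[float], reviewer_scores: list[float]) -> float:
    """All inputs on a 1-5 scale; fixed weights applied identically to every candidate."""
    mechanical = sum(test_scores) / len(test_scores)
    observed = sum(reviewer_scores) / len(reviewer_scores)
    return round(0.6 * mechanical + 0.4 * observed, 2)

print(final_score(test_scores=[4.2, 3.8, 4.5], reviewer_scores=[4.0, 5.0]))
```

The weights never change between candidates, so every piece of evidence counts the same way every time.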
In part III of this blog we will talk in more detail about all the elements that go into our talent indicator algorithm. For now, all you need to know is that we estimate how likely it is that a particular talent will score in the top productivity segment of society at large. In other words: our scoring reflects what we know about the talent, and what we know stems from a meticulous process of gathering and comparing evidence. Or, as they would put it in a court of law: by the end of our process we have enough evidence to find our candidates guilty of being top talent.