When research informs us that a practice is not viable, we would expect organizations to avoid relying on it. Yet the one-on-one unstructured interview remains the most widely used method for selecting new hires from pools of candidates. Since research has shown that the practice has virtually no validity or reliability, something must be prompting its continued use.
Being subject to numerous cognitive biases is part of the human condition. Kahneman won a Nobel Prize for research, conducted with Tversky, that identified roughly 100 such biases and developed an explanation of how we think. One of their most useful insights differentiated between fast (System 1) and slow (System 2) thinking.
System 1 is activated automatically; it is quick to find explanations for what is observed and to make snap decisions.
System 2 is calculative and is activated when System 1 is puzzled, senses contradictions, or encounters evidence that contradicts held beliefs. In his book Blink, Malcolm Gladwell also pointed out our propensity to make snap decisions based on first impressions (a characteristic of System 1 being in charge). Several common biases also work against making good decisions:
- Overconfidence that one “knows” something (most common among experienced people);
- Tendency to filter evidence in a manner that results in being overly receptive to information that confirms existing beliefs or what one wishes were true;
- Overreliance on samples that are too small to be statistically sound;
- Oversimplification of complex issues, resulting in simplistic decisions;
- Susceptibility to letting physical characteristics influence judgments, eroding decision quality.
I was hired for my first job out of undergraduate school into a high-potential program managed by a highly respected organization, based on a one-on-one unstructured interview. I was not well suited to the organization. To this day I am convinced that my success in being selected over many candidates with advanced degrees from more prestigious schools was due to the fact that the interviewer and I had both served as paratroopers in elite airborne divisions. How that discovery occurred I do not remember, but since the discussion was unstructured, chance occurrences were made possible. Since the role I was selected for did not require the combat skills I had learned in the service, I believe the criteria that resulted in my hiring were inappropriate. The fact that being hired subsequently gave me excellent training in all aspects of business, which I have used throughout my career, makes it difficult for me to be too critical of the process that led to that selection. But my PhD training in behavioral science has made me a believer in sound research and has equipped me to override impulses and intuition when research findings so dictate.
Using flawed processes can also produce consequences for an organization more serious than low-quality selection decisions. Another bias we are subject to is preferring people whom we like and who are like us. This can result in inappropriate decisions and in statistically significant adverse impact on protected classes, opening the door to employment bias litigation. Globalization has also made talent more mobile, producing culturally diverse workforces and candidate pools. This diversity can create another problem. If one candidate has been socialized in a culture that encourages humility while another has been taught to aggressively market his or her qualifications, two equally qualified candidates can appear unequal when unstructured one-on-one interviews are used. Interviewers are of course influenced by how a candidate answers questions and presents him- or herself, and this can degrade the quality of a selection decision. Trompenaars prescribes 3Rs for dealing with cultural diversity:
- Recognize that cultural differences are present;
- Respect the right of people to hold different beliefs and values;
- Reconcile the issues those differences raise.
Kahneman cites research showing that interviewers with abundant sources of seemingly relevant evidence generally do no better than a straightforward algorithm at making final decisions. For example, an algorithm for selecting new students for a university that relies only on high school grade point average and a relevant aptitude test usually outperforms interviewers armed with a wider variety of information. If the university is trying to predict whether the person will contribute as an alum, knowing about family circumstances may be relevant. And if the school aspires to produce graduates who will be active in the community, then high school activities may be relevant. But if the goal is to predict who will successfully complete their studies and do well, that additional information is a contaminant that lessens decision quality. Surprisingly, an algorithm using spring rainfall, summer rainfall, and summer temperatures predicted the future quality and price of Bordeaux wine better than a panel of experts did. Medical algorithms have produced more reliable diagnoses than trained specialists. None of this argues for a "machine replacing human" approach, but rather for a blending of expert opinion with a structured decision process used consistently.
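To make the contrast concrete, here is a minimal sketch (in Python) of the kind of simple selection algorithm Kahneman describes: a fixed, equally weighted combination of two predictors, applied identically to every candidate. The weights, the cutoff, and the sample applicants are hypothetical, chosen only for illustration.

```python
# A minimal sketch of a simple selection algorithm: a fixed, equally
# weighted combination of two predictors applied the same way to every
# candidate. The 0.5/0.5 weights and the 0.75 cutoff are hypothetical.

def admission_score(gpa: float, aptitude: float) -> float:
    """Combine high school GPA (0-4 scale) and an aptitude test
    percentile (0-100) into one score on a common 0-1 scale."""
    return 0.5 * (gpa / 4.0) + 0.5 * (aptitude / 100.0)

def admit(candidates: list[dict], cutoff: float = 0.75) -> list[dict]:
    """Apply the identical formula and cutoff to every candidate."""
    return [c for c in candidates
            if admission_score(c["gpa"], c["aptitude"]) >= cutoff]

applicants = [
    {"name": "A", "gpa": 3.8, "aptitude": 90},
    {"name": "B", "gpa": 3.2, "aptitude": 95},
    {"name": "C", "gpa": 2.9, "aptitude": 60},
]
print([c["name"] for c in admit(applicants)])  # -> ['A', 'B']
```

The point is not the particular weights but that the same inputs are combined the same way every time, which is precisely the consistency that ad hoc human judgment lacks.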
It has been shown that judges' parole decisions are affected by how long it has been since their last meal and by the cases that preceded the current one. This illustrates how inconsistent human decisions can be. When people are left free to decide without common factors and factor weights, the reliability of their decisions suffers, since physiological and contextual variation over time produces inconsistent standards.
It is a "both-and" rather than an "either-or" approach that will produce the best decisions made by people. When an algorithm can be developed from data, using AI and machine learning tools, its consistent use will produce more reliable decisions over time. But the validity of a decision is generally maximized by using both data and human judgment. Panel interviews with structured questions can help moderate individual bias and ensure that every candidate responds to the same questions posed in the same manner. After panel members have individually rated candidates, they can meet to compare their ratings and discuss what led them to their conclusions.
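As a sketch of how that structured-panel step might be supported in practice, the Python fragment below records each panelist's ratings on the same questions and computes, per candidate, an overall score plus the spread across panelists. The question count, the 1-5 scale, and all names and numbers are assumptions for illustration only.

```python
from statistics import mean, stdev

# Hypothetical structured-panel data: every panelist rates every
# candidate on the same three questions using the same 1-5 scale.
ratings = {
    "Candidate X": {"panelist_1": [4, 5, 3],
                    "panelist_2": [4, 4, 4],
                    "panelist_3": [2, 5, 3]},
    "Candidate Y": {"panelist_1": [3, 3, 4],
                    "panelist_2": [5, 3, 4],
                    "panelist_3": [4, 3, 4]},
}

for candidate, by_panelist in ratings.items():
    panelist_means = [mean(scores) for scores in by_panelist.values()]
    overall = mean(panelist_means)
    spread = stdev(panelist_means)  # a large spread flags disagreement worth discussing
    print(f"{candidate}: overall {overall:.2f}, panelist spread {spread:.2f}")
```

Surfacing the per-panelist spread before the group meets keeps the discussion anchored to recorded evidence rather than to whichever impression is voiced first.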