All of the attention being given to data analytics, artificial intelligence, machine learning and other emerging technologies is largely warranted. But their usefulness is often overrated. When a situation is unique, there is no “big data.” When innovation requires inventing new outcomes, historical data is of little use. Although technology is very often better than people at analyzing data, it cannot help if there is no data. Technology can help to answer questions, but it cannot decide what questions to ask.
This is not a Luddite condemnation of new technology. I marvel at what is being developed that can help us do analysis we either cannot do or cannot do very well. In graduate school I was forced to do Gauss-Jordan pivots manually to solve matrix algebra problems. They took up reams of paper, and if you made one mistake you were dead. I also had to use the simplex algorithm to find an optimal strategy for a hypothetical admiral deploying an entire fleet. Just formulating the objective function and the constraint equations came close to blowing a cognitive fuse. I got why I was being subjected to this torture… it made me understand what was going on inside software models, and it illuminated relationships that would not have been apparent from just looking at the model’s output. MBA candidates do not warm easily to being shown just how limited their cognitive capacity is. But it is better to know one’s limitations.
So, since these technology tools can do some things better than humans, is everyone going to be replaced by computers? No, but a lot of people are going to have to cede activities making up some or even all of their jobs to technological tools. That means lifelong learning is mandated… to ensure that what people do to earn a living is still of value. People need to capitalize on their strengths and work around their weaknesses in order to remain viable. Storytelling and deciding what questions to ask are uniquely human strengths, as is inventing that which has never been. These are things data analytics can help with, but not initiate. This means a partnership between people and technology is the optimal approach. Years ago, research into socio-technical systems theory told us that technology and those who use it must be compatible for positive outcomes to result. A modern version of people-technology integration is blending inductive pattern recognition with deductive, hypothesis-driven discovery. Algorithms and heuristics both need to be used as tools.
An example of how human judgment and technology can complement each other is an analysis of what drives employee pay rates. A multiple regression model grinds through data sets looking for correlations between an employee’s pay rate and factors such as education, experience, performance ratings, the grade assigned to the job and the pay rate at entry/hire. The model calculates how well the chosen factors explain pay rate differences. If R-squared is .8 (80% of the variance in rates is explained by the factors used), the analyst may conclude that things are working the way they should. However, if a second analysis is done using age, race and gender as explanatory factors and the correlation is significant, the news is not all good. In a discrimination case involving a large banking organization, the statistician was told that an analysis had shown females were paid equivalently to males in the same roles. The statistician took the analysis a step further and, using reverse regression, found that the females were more qualified, so pay rate parity was not evidence that discrimination was absent. In these cases, asking the right questions was as important as calculating correlation and causation in discovering the extent of discrimination.
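To make the regression mechanics concrete, here is a minimal sketch of the kind of analysis described above, using entirely synthetic data and illustrative coefficients (none of the numbers come from the case discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical employee data (synthetic): years of experience,
# years of education, performance rating, and pay rate at hire.
experience = rng.uniform(0, 20, n)
education = rng.integers(12, 21, n).astype(float)
performance = rng.integers(1, 6, n).astype(float)
entry_pay = rng.normal(50_000, 8_000, n)

# Simulated pay driven by the legitimate factors plus noise.
pay = (1_200 * experience + 900 * education
       + 1_500 * performance + 0.6 * entry_pay
       + rng.normal(0, 4_000, n))

# Ordinary least squares: pay ~ experience + education + performance + entry_pay
X = np.column_stack([np.ones(n), experience, education, performance, entry_pay])
beta, *_ = np.linalg.lstsq(X, pay, rcond=None)

# R-squared: share of pay variance explained by the chosen factors.
residuals = pay - X @ beta
r_squared = 1 - residuals.var() / pay.var()
print(f"R-squared: {r_squared:.2f}")
```

A high R-squared here only says the chosen factors explain pay variance; as the article notes, the telling second step is rerunning the same model with age, race and gender as predictors and asking why any of those coefficients are significant.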
Given today’s realities, it is apparent that humans need to do their part to become competent partners with technology. This means acquiring an understanding of quantitative methods, no matter what one’s occupation is. An adequate knowledge of statistics, regression analysis, hypothesis formulation and testing, causal path analysis, predictive analytics, formal logic and other tools is needed to take advantage of the technology that is available and developing. Going back to school may not require attending proms or competing with others for a high class standing. Online education has progressed rapidly, and a wealth of knowledge is available free or affordably. People’s crowded lives may have to be restructured to replenish knowledge and skill sets and create new ones. It may feel like being on a treadmill… running at high speed just to stay in the same place. But that beats what happens if one stops running on a treadmill that will not stop.
Kahneman and Tversky, pioneers of behavioral economics, identified System 1 and System 2 thinking. System 1 is fast but buggy; System 2 is slower and more deliberate. People are affected by cognitive bias, and that recognition may lead problem-solvers to prefer the “perfectly rational” models created by data scientists. But aren’t data scientists human? Won’t their biases influence the models they create? Confirmation bias is the tendency to more readily seek and accept data that supports what one believes, or wants to believe. If you only look for confirming data, that is what you will find. There is also a bias that leads people to substitute a simpler rule of thumb when facing a highly complex problem… the urgent need to find an answer may result in finding one that is not optimal. Oliver Wendell Holmes once said,
“I would give nothing for simplicity this side of complexity, but everything the other side of complexity.”
Simplistic answers are worse than uncertainty.
We humans love to make predictions… we just are not very good at it. Evidence suggests we would improve decision quality by experimenting more… making limited tests rather than big bets. Rather than “just knowing customers would love a product like this,” why not “let’s test reactions to some of the proposed features and benefits and use the information we get to make our expectations more realistic”? And forcing an algorithm to predict the future when the data used are not adequate to do so is a fool’s game.
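The “limited test rather than big bet” idea can be sketched as a small simulated experiment. Everything here is hypothetical: the acceptance rates, sample sizes and variant names are invented for illustration, and a standard two-proportion z-statistic stands in for whatever test a real team would choose:

```python
import math
import random

random.seed(42)

def run_trial(true_rate, n):
    """Simulate n customer reactions with a given true acceptance rate."""
    return sum(random.random() < true_rate for _ in range(n))

n = 2_000                              # customers shown each variant
a_successes = run_trial(0.30, n)       # variant A: current design (assumed rate)
b_successes = run_trial(0.38, n)       # variant B: proposed feature (assumed rate)

p_a, p_b = a_successes / n, b_successes / n

# Two-proportion z-statistic on the observed difference.
pooled = (a_successes + b_successes) / (2 * n)
se = math.sqrt(2 * pooled * (1 - pooled) / n)
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# A |z| well above ~1.96 suggests the observed difference is
# unlikely to be noise; a small z says "don't bet the launch yet."
```

The point is not the particular statistic but the posture: a cheap, bounded test generates evidence before a big bet is placed, instead of relying on a prediction no one can verify.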
Both artificial intelligence and human intelligence are necessary but not sufficient to give us the answers we need; each, as already pointed out, has its limitations. They are both valuable and critical elements of sound decision-making. We must recognize that they should be complementary, not competitive.