Following Trends: A Good Idea?

Professionals and managers are exposed to an avalanche of information about what is going on in their field of practice. Some of it is derived from discussions with others, some from practitioner journals and some from ideas presented at conferences. Continuous environmental scanning is a prudent strategy, particularly in the kind of dynamic environment that exists today. But data and information must be tested to see whether they constitute relevant evidence that should influence decision-making. Having been a consultant, practitioner and teacher in the compensation management field for several decades, I have seen a pattern that suggests many practitioners attempt to identify and follow trends. Trends are practices that a lot of others seem to be adopting. There have been periods when a number of “new” approaches seemed to have been granted the status of “best new shiny thing.” The publications are suddenly full of testimonials that broad-banding or competency-based pay will lessen or eliminate the difficulties associated with rewarding people.

When does something become a trend? When every account one sees in print extols the virtues of an approach and suggests that adopting it presents no challenges, it seems rational to follow the herd. The two practices just mentioned dominated the literature for a time, which was understandable since 100% of the reported adoptions were deemed successes (recent research on salary structures showed that only a minuscule percentage of participating organizations use broad-banding today).

Yet upon reflection one has to ask: who would write an article describing a failure they were responsible for? In a perfect world, those who tried something that did not work would communicate the outcome, informing the profession that success was not guaranteed. Admittedly this may not be the best way to advance one’s standing in the field, but it does provide useful information. Learning what does not work, or what works only in certain contexts, can be every bit as valuable as learning what does work. This severe bias in the literature exists in the academic world as well. Journal editors are unlikely to accept papers about studies in which the results did not support the hypothesis being tested, despite the fact that valuable intelligence could be gleaned from them.

There is also a bias against publishing the 50th article concluding that a practice like “paying for performance” has a positive impact on motivation and performance. The “yes, I already knew that” reaction makes the information uninteresting, and once a practice has accumulated an adequate amount of supporting research, further documentation seems unwarranted. So editors are always looking for something that seems new. But this distorts the apparent probabilities that different practices will succeed or fail. Ten articles in the same year claiming massive success with broad-banding pay systems seem to elevate the practice to the status of “best new shiny thing.” But if thirty failures went undocumented, this is not good intelligence. And certainly a sample consisting of ten successes is hardly compelling, given the number of organizations in operation at any time. People tend to accept samples that are too small to be statistically valid, and this bias is the source of a bandwagon effect. The claim of newness is often exaggerated to increase the appeal of a practice. I have uncovered at least two previous lives for the “new” broad-banding approach popularized a decade ago. Since few study history rigorously, it is often possible to simulate newness by changing the name of something.
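
To make that arithmetic concrete, here is a minimal sketch in Python using hypothetical figures (ten published successes and thirty assumed unreported failures, numbers invented for illustration). It shows both how little a ten-for-ten record proves on its own and how the picture changes when the missing failures are counted.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical figures for illustration only.
published_successes = 10    # every published account reports a success
unreported_failures = 30    # adoptions that failed and were never written up

reported_rate = 1.0
actual_rate = published_successes / (published_successes + unreported_failures)
low, high = wilson_interval(published_successes, published_successes)

print(f"Success rate implied by the literature: {reported_rate:.0%}")
print(f"Success rate if unreported failures are counted: {actual_rate:.0%}")
print(f"95% interval around ten-for-ten, ignoring missing failures: {low:.0%} to {high:.0%}")
```

Even taken at face value, ten-for-ten is statistically consistent with a true success rate well below 100%; once the undocumented failures are added back, the apparent sure thing becomes a one-in-four proposition.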

When a practitioner is determining whether a specific practice reported as a success at a respected organization should be emulated, there are often critical items of information missing, such as a detailed description of the context within which the practice was successful. Although there may be some information about the organization, it is very rare for enough knowledge about the culture, the external environment and the internal realities to be available for someone to make a reasoned assessment of the similarity of the two contexts. The common assertion that “our organization is unique” is in fact almost always true… no two organizations are identical. So why would a practitioner make a decision influenced by what happened at another organization (or ten organizations) that functioned in contexts that were at least to some degree different? In addition to assessing context similarity, there must also be careful scrutiny of how the adopter defined and measured success. Did the growth rate increase? Did profits soar? Did employee engagement rise? Did unwanted turnover decline? And by how much? Once the success measures are calculated, did the improvement warrant the resources invested in making the change? Research studies and benchmarking processes have been two of the most widely used tools for practitioners attempting to predict the probability of success when adopting a new strategy or program. But using them is fraught with peril if they are not done well.
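
To illustrate the last of those questions, here is a minimal sketch with entirely hypothetical figures: a program credited with reducing unwanted turnover is worth emulating only if the savings it generates exceed what it costs to design, implement and administer.

```python
# All figures are hypothetical and serve only to illustrate the calculation.
employees = 2_000
baseline_turnover = 0.12        # 12% unwanted turnover before the change
new_turnover = 0.10             # 10% after the change
cost_per_departure = 25_000     # recruiting, onboarding, lost productivity
implementation_cost = 400_000   # design, systems, communication
annual_admin_cost = 150_000     # ongoing administration

departures_avoided = employees * (baseline_turnover - new_turnover)
annual_savings = departures_avoided * cost_per_departure
first_year_net = annual_savings - implementation_cost - annual_admin_cost

print(f"Departures avoided per year: {departures_avoided:.0f}")
print(f"Annual savings: ${annual_savings:,.0f}")
print(f"First-year net benefit: ${first_year_net:,.0f}")
```

With these made-up numbers the change pays for itself in the first year, but a smaller turnover improvement or a higher implementation cost could just as easily turn the net benefit negative, which is exactly why the magnitude of the improvement matters, not just its direction.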

The prevalent reporting of an increasing use of workforce analytics warrants the trend label. It seems obvious that relevant evidence will always be valuable when making decisions. Technology advances in AI and machine learning have created tools that enable organizations to use data on what has happened to improve their ability to predict what will happen. But lest adopters assume that analytical tools have great predictive power, it would be prudent for them to acknowledge that the future may not be like the present or the past. If that is the case, the data used may be inappropriate for deciding what will happen if a practice is adopted. There is a similar danger in using one’s past experience as a guide for going forward. Experienced people, even those universally proclaimed to be experts, may have a knowledge base that is less relevant today and in the future than it was when it was acquired and applied. And when a knowledge base is used to create an algorithm, the quality of that tool will depend on the continued relevance of the knowledge used to create it. Since algorithms prescribe decisions based on rigid logic, they will not ask “is this still going to work today?” Having just published a second edition of my first book, I was startled at how much had changed in five years. Classics can be useful, but generally for understanding fundamental principles rather than for discovering how a particular program would work today.
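
Here is a minimal sketch of the danger of relying on historical data when conditions have shifted, again with invented numbers: a simple trend model fitted to several stable quarters looks fine in back-testing, yet its error balloons the moment the environment changes and the historical pattern stops holding.

```python
from statistics import mean

# Hypothetical quarterly unwanted-turnover rates (%), for illustration only.
history = [8.0, 8.2, 8.1, 8.3, 8.2, 8.4, 8.3, 8.5]   # stable regime the model learns from
future = [10.9, 11.4, 11.8, 12.3]                     # reality after a shift (e.g., a tight labor market)

def fit_trend(ys):
    """Ordinary least-squares line through (0, ys[0]), (1, ys[1]), ..."""
    xs = range(len(ys))
    x_bar, y_bar = mean(xs), mean(ys)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum((x - x_bar) ** 2 for x in xs)
    return slope, y_bar - slope * x_bar

slope, intercept = fit_trend(history)
predictions = [slope * (len(history) + i) + intercept for i in range(len(future))]
errors = [abs(p - a) for p, a in zip(predictions, future)]

print("Predicted:", [round(p, 1) for p in predictions])
print("Actual:   ", future)
print("Mean absolute error:", round(mean(errors), 1), "percentage points")
```

The particular model is beside the point; any tool calibrated on yesterday's conditions quietly carries yesterday's assumptions forward, and no amount of computational sophistication fixes data that no longer describe the world being predicted.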

Thankfully, the principles of evidence-based management (EBM) are increasingly being used to improve decision quality. EBM prescribes the use of all relevant evidence to inform decisions. But meeting the relevance test is devilishly difficult. And interpreting the body of evidence can require a decision-maker to “grade” the quality of the different sources, especially when they seem to disagree. Making a good decision has not gotten any easier, despite the proliferation of data, information and new technology. All these sources can increase decision quality if properly interpreted and applied.

Jumping on the bandwagon when a “new” approach dominates the literature, without a rigorous process of examining the evidence, can lead one down the foot-worn path… which can be the wrong path for a particular organization at a particular point in time.