Evidence-Based Human Resource Management

Practitioner: “That may work in theory, but will it work in practice?”

Researcher: “That may have worked in practice, but will it work in theory?”

Bystander: “Shouldn’t it work in both?”

The turnover rate among critical skill personnel is unacceptably high and the HR Director believes it is due to pay levels that are not competitive. The CFO resists increasing pay. How should HR support the attribution of high turnover to low pay?

Two articles recently read by the HR Director contradict each other. One claims that conscientiousness has more impact on performance than intelligence… the other claims the opposite. How should the relative importance of these personal characteristics be weighted in the selection process when the articles do not provide a clear answer?

A popular book claims that rewarding people with money will reduce the intrinsic motivation they derive from their work. A review of numerous field research studies seems to firmly establish that rewards can motivate performance, as long as they are linked to performance. How should the HR Director reconcile these conflicting claims?

Human resource management practitioners must make critical decisions relating to how their organizations manage their workforces. There is a substantial body of research that is relevant to making workforce management decisions. These research findings have been based on both laboratory and field studies and can be used to predict with greater accuracy how effective alternative strategies are likely to be. This body of research can be a valuable tool for practitioners, informing their decisions and increasing the probability that their decisions will have a positive impact on the organization.

Denise Rousseau made “evidence-based management” the theme of her 2006 presidency of the Academy of Management[1] and Gary Latham did the same during his 2009 presidency of the Society for Industrial and Organizational Psychology (SIOP). Both challenged the academic community to do research that is relevant to the issues being faced by practitioners and to do it in a manner that enables practitioners to understand its relevance and to apply it to the issues they face. The Conference Board did a study of how human resource management has evolved – its theme was that leading-edge HR must be evidence-based.[2]

Yet despite these initiatives, the level of understanding of what research has found is inadequate in the practitioner community. In a study of over 1,000 HR practitioners, Rynes found that they believed things to be true that were not supported by research.[3] Her study consisted of a 35-question true-false test about what research has found. The median score was 20, which indicates that practitioner beliefs are not well aligned with research findings (guessing would produce an average score of 17.5). Some of the misconceptions discovered in the Rynes study can negatively impact the quality of decisions relating to employee selection, development, and motivation. Among the questions practitioners scored poorly on were:

  • Q: “Companies that screen job applicants for values have higher performance than those that screen for intelligence.”
    A: false. 16% answered correctly.
  • Q: “On average, asking employees to participate in decision-making is more effective in improving organizational performance than setting goals.”
    A: false. 18% answered correctly.
  • Q: “On average, conscientiousness is a better predictor of job performance than is intelligence.”
    A: false. 18% answered correctly.
  • Q: “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions.”
    A: false. 35% answered correctly.
  • Q: “Being very intelligent is actually a disadvantage for performing well on a low-skilled job.”
    A: false. 42% answered correctly.
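The guessing baseline cited above is simple binomial arithmetic. The sketch below (Python, purely illustrative) computes the expected score under random guessing on 35 true-false items and the chance of reaching the observed median of 20 by luck alone:

```python
from math import comb

N_QUESTIONS = 35   # true-false items in the Rynes test
P_GUESS = 0.5      # chance of a correct answer when guessing

# Expected score under pure guessing: n * p
expected = N_QUESTIONS * P_GUESS

# Probability of scoring 20 or better by guessing alone (binomial upper tail)
p_at_least_20 = sum(
    comb(N_QUESTIONS, k) for k in range(20, N_QUESTIONS + 1)
) / 2 ** N_QUESTIONS

print(expected)                    # 17.5
print(round(p_at_least_20, 2))     # roughly a one-in-four chance
```

In other words, a median of 20 is only modestly better than the score a coin flip would produce, which is what makes the result troubling.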

In 2010 World at Work conducted a survey that attempted to measure how well its members understood what research has found on key issues relating to rewards management.[4] A random sample was used and over 600 members (12% of those solicited) responded. Ten true-false questions were asked, and participant scores were much higher than those in the Rynes study. Five of the questions were answered correctly by 70-80% of respondents and three by 80-92%; only two questions had very low scores (38% and 29%). The questions were focused on performance and rewards management and the respondents were rewards practitioners. This suggests that the knowledge level of specialists about research findings in their area of expertise may be higher than that of the HR generalists in the Rynes study, who were asked about a broader range of HR issues. But since scores on seven of the ten questions were below 80% there is still considerable cause for concern.

When interpreting the results of these two studies it should be kept in mind that those responding were very likely the most knowledgeable. A potential respondent who discovers that he or she has no clue as to the answers to the first several questions is more apt not to respond. Unless responses were mandatory (e.g., when professors assign the task to students), this differential nonresponse will push scores higher than they would have been if everyone sent the materials had responded. Because of the manner in which the two studies were conducted it was also possible for respondents to search for the correct answers. The most diligent are the most likely to respond, and they are also the most likely to take the time to search for evidence that will inform their answers.

What Do Practitioners Need To Know & How Do They Get To Know It?

It is easy to be critical of practitioners if they do not know what the research evidence suggests is true. After all, if they are making decisions on critical issues related to workforce management shouldn’t they be aware of what research could tell them? Shouldn’t decision-makers utilize all relevant evidence when they attempt to fashion strategies and programs that will be effective? The answer is clearly “yes.” But often management decisions are complex, obscuring where evidence regarding alternative strategies might be found. For example, if HR is at odds with the head of IT about whether “innovative” work can be measured and evaluated in a formal performance appraisal there is no obvious source for identifying and applying relevant evidence.

There are two types of obstacles that get in the way of research being incorporated into practice:

  1. research results may conflict with deeply held beliefs
  2. in order for research to be used, practitioners need to be aware that the research findings exist, understand how the evidence is relevant to their decisions and know how to apply it appropriately

And as the two aforementioned studies show, there is a large gap between science and practice.[5]

Existing beliefs do count. Everyone is prone to cognitive distortions of reality.[6] We more readily notice and accept information that is consistent with our beliefs. We discount evidence that our instincts tell us is not true. We sometimes don’t care about what is rationally true if it violates what we think is “right.” For example, the finding that intelligence is a better predictor of performance than conscientiousness can be at odds with what we think should be the case… that hard work and dedication should lead to success. After all, everyone knows of intelligent people who never get around to doing anything, or at least the things that lead to meeting organizational objectives. And we see people overcome limitations through extraordinary effort. Part of the reason we are prone to rejecting some findings is that we believe they should be different.

If one digs deeper into the research on the predictors of performance, it will be found that when both intelligence and conscientiousness are used to predict performance the chances of being right rise substantially. But if the focus is only on comparing the two predictors, it turns out intelligence is more predictive and is in a way a precondition for high levels of performance. So when practitioners read about research findings they must be sure to get the whole story. But the world is full of sound (and word) bites, and a study producing surprising results often tells a very limited story – that “intelligence had the highest correlation with performance” – leaving out the more helpful finding that “intelligence and conscientiousness together produced an even higher correlation.” And claims that get people’s attention tend to get published, read and talked about. This can lead authors trying to write the next best seller to strain for data that produce surprising results, thereby biasing their search for information relevant to the issue being addressed.
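The point about combined predictors can be illustrated with a small regression sketch (Python with NumPy; the data and weights are entirely synthetic, invented here for illustration). In-sample, a model using both predictors can never have a lower R² than either predictor alone:

```python
import numpy as np

# Synthetic data: weights chosen so intelligence matters more, mirroring the
# finding discussed above (all numbers are invented, not from the studies)
rng = np.random.default_rng(42)
n = 500
intelligence = rng.normal(size=n)
conscientiousness = rng.normal(size=n)
performance = (0.5 * intelligence + 0.3 * conscientiousness
               + rng.normal(scale=0.8, size=n))

def r_squared(X, y):
    """In-sample R^2 of an ordinary least squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_iq = r_squared(intelligence, performance)
r2_con = r_squared(conscientiousness, performance)
r2_both = r_squared(np.column_stack([intelligence, conscientiousness]),
                    performance)

print(f"intelligence alone:      {r2_iq:.3f}")
print(f"conscientiousness alone: {r2_con:.3f}")
print(f"both predictors:         {r2_both:.3f}")
```

Reporting only the single strongest predictor, as a sound bite does, discards exactly the incremental validity that matters to a selection decision.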

Several realities contribute to the gap between research and practice:

  1. practitioners don’t read the journals where most research findings are published
  2. the research is often not clearly related to the issues practitioners face
  3. the research tends to be highly theoretical, with little guidance as to how and where it can be applied
  4. the research is presented in a form that is not easily accessible to untrained practitioners
  5. there is limited communication between the research and practitioner communities

So when practitioners are criticized for not utilizing the available evidence it must be understood that there are obstacles in their way, some created by them and others created by the way research is done.

Part of the disconnect between the two communities is attributable to what HR practitioners read.[7] What they read is for the most part not where rigorous research is published. Researchers are primarily in academia or in consulting organizations. Academics have a rewards structure that positively values publications in “A journals” (Journal of Applied Psychology, Academy of Management Journal and the like). Publishing in these journals requires that research be focused on theory development and, often, that it be heavily based on quantitative analysis. Academic promotions are based on contributing to these publications, and often no credit is given for trying to reach practitioners by contributing to the publications they read. In some cases academics are even censured for dissipating their energies. Cascio suggests that researchers focus on the creation of knowledge rather than its diffusion,[8] which is understandable if all of the incentives available to them motivate that focus.

Practitioners most often rely on books and articles in practice-oriented publications, although they are exposed to studies like the World at Work survey described earlier. Unfortunately, the popular literature is littered with fads and claims that have little or no support in the form of responsible research findings. For example, a recent best-seller claims that research shows:

  • that you cannot motivate someone else… they must motivate themselves
  • that rewarding performance extrinsically diminishes their intrinsic rewards derived from doing the work, which can decrease their performance[9]

These contentions can lead rewards practitioners to make decisions that are clearly wrong, but that is evident only if they know the claims were based on lab research that should not be generalized to actual work settings. Those trained in research methodology are taught that for research to be valid it must be based on studies that are internally valid (designed in accordance with established protocols) and externally valid (the findings must be generalizable to the setting in which they are to be applied). The book just cited was based on research conducted in settings unlike those found in organizations and therefore should not be relied upon to make decisions in real work settings.

Certainly popular books can be helpful and illuminating. Latham’s book Becoming The Evidence-Based Manager[10] applies sound research findings to management issues, but it was written by a distinguished researcher and academic. Gladwell’s Outliers contains a chapter on fatal Korean Air accidents caused by cultural impediments to crew members challenging the captain’s decisions. Identifying the nature of the problem (a strongly hierarchical culture) and offering training seemed to reduce recurrences. This insight can increase sensitivity to how cultural influences may affect behavior and how issues might be dealt with. So deriving lessons from books and articles in professional publications can be helpful to decision makers. But there is often no rigorous oversight exercised to be sure the claims made in books are warranted, opening the door to widespread acceptance of questionable claims. Best-seller lists often create fads that later prove to be ill-founded. But the “wisdom” provided in a book everyone seems to be reading is hard to resist or to accept cautiously. Rejecting its claims is even harder when one’s CEO cites the book.

Practitioners also often rely on “benchmarking,” which consists of evaluating the practices of successful organizations. This certainly can be a valuable source of guidance, although caution should be exercised for several reasons.[11] First, emulating others precludes gaining a competitive advantage… at best it only allows the emulator to come up to par. Second, the information available on what other organizations do is almost always incomplete. At best it enables the emulator to know what others say they did and what the results were, without indicating why the results were what they were. Finally, in order for what worked “there” to work the same way “here” it is necessary that the two contexts are very much alike. This is a problem even when an organization compares itself to other organizations of similar size in the same industry. Culture and internal realities have a major impact on what works in an organization. Rarely will the benchmarking organization have adequate information available to be able to identify cultural differences and internal forces and to assess their magnitude and their impact.

On the other hand, benchmarking can be extremely valuable. For example, compensation surveys are an invaluable tool for determining an organization’s competitive position. Knowing what types of rewards programs are being used by competitors and the compensation levels that prevail in the relevant labor market enables rewards practitioners to make more confident recommendations to management. Attempting to operate in the dark is likely to result in over- or under-compensating employees and/or compensating them in an inappropriate manner. Unfortunately the consequences are discoverable only after bad things have happened (valued employees have exited or workforce costs have become non-competitive). World at Work conducts an annual survey of salary budgets that is the standard for making budget decisions at thousands of firms.[12] This type of study makes it possible for organizations to compare their actions to those of competitors, which would not otherwise be possible. Asking competitors how big their pay adjustments will be next year would produce little in the way of responses; surveys conducted by third parties overcome this obstacle.

Information about what made strategies successful or unsuccessful in other organizations can be available through professional association conferences, as well as articles in professional publications. There is, however, a strong built-in bias in these presentations and in the literature: very few people will publicly detail their failures. As a result, the available articles on the adoption of a particular strategy are apt to be positive… the failures go unpublished. This is one of the causes of the frequent fad outbreaks, most of which fade away after organizations attempt to implement the strategies and make them work in contexts unlike those in which they succeeded.

What Researchers Research & How

There is a great deal of criticism aimed at what researchers choose to study and how. And lately much of this criticism has come from distinguished scholars who do groundbreaking research and teach in the top business schools.[13] One criticism is that much of the research has no relationship to the pressing issues being faced by the practitioner community. In addition, the research addresses very narrow issues. And, as previously discussed, when it is reported out it appears in journals that are not read by practitioners. Even when practitioners attempt to access these journals, they find the results presented in a way that is inaccessible to those without formal training in research methodology.

To be fair to researchers, they are required to adhere to strict guidelines as to how research must be conducted and reported. And journal editors are not tasked with ensuring that people lacking advanced training in quantitative analysis can access the information. It would severely compromise the usefulness of research if researchers were to deviate from accepted research methodologies. The principles guiding sound research are relatively well defined and understood in the research community. In order to be credible, a research design must be internally valid… it must measure what is intended. And the findings must be reliable, meaning repeated trials would produce similar results. But in order to be of value outside the context within which it was conducted, research must be externally valid, or generalizable. And journal editors pay far less attention to generalizability than is necessary if practical application is considered an important objective.

The guidelines established for researchers also demand transparency of methodology, so others can attempt to replicate it. These strict guidelines often result in numerous studies of the same issue, although done somewhat differently. Multiple studies can sometimes be combined into what are termed “meta-analyses.” Since the samples available in individual studies are often limited, the ability to combine findings from multiple studies makes it possible to strengthen confidence in the findings. And repeated studies that confirm a hypothesis add further credibility.
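As a concrete illustration of how a meta-analysis pools results, the sketch below (Python; the per-study correlations and sample sizes are hypothetical, invented for illustration) applies the textbook fixed-effect approach: transform each correlation with Fisher's z, weight by inverse variance, and back-transform the pooled value.

```python
import math

# Hypothetical per-study results: (correlation, sample size) --
# these numbers are invented for illustration, not from actual studies
studies = [(0.30, 120), (0.22, 85), (0.35, 200), (0.18, 60)]

def pooled_correlation(studies):
    """Fixed-effect pooling of correlations via Fisher's z transform."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher z of the correlation
        w = n - 3           # inverse of var(z) = 1/(n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

print(round(pooled_correlation(studies), 3))
```

Larger studies get proportionally more weight, which is why the pooled estimate is more trustworthy than any single small-sample result.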

One of the criticisms of research by outsiders is that it often seems to be guided by the principle that “if you can’t count it you can’t measure it.” This perception is due to the quantitative emphasis and the seemingly overwhelming amount of statistical data included in the findings. There certainly is a risk of making whatever can be counted the focus of measurement, whether or not it is really important (merely convenient). Principles of sound research methodology help to control the urge to ignore factors that must be measured qualitatively (i.e., subjectively). An increasingly used type of research is the “systematic review.” Within strict guidelines it is possible to interpret what has traditionally been termed “anecdotal evidence” and to make it a part of these reviews. The philosophy underpinning this approach is that all relevant information should be discovered and integrated to produce findings based on the complete body of evidence. However, it is important to use information appropriately: subjective opinions, even if they represent a consensus among a large number of people, must be treated as subjective opinions and must not be irresponsibly combined with objective data. When researchers publish in journals they are required to discuss the limitations of their studies, which helps to make clear the potential threats to the validity of the findings.

So when researchers are asked to provide relevant and accessible findings to practitioners it should be understood that the integrity of that research cannot be compromised by failing to apply sound methodology in order to better serve the practitioner community. No one is well served by compromised research.

How Can The Research – Practice Gap Be Narrowed?

Increasingly, professional associations and consulting organizations are attempting to act as intermediaries, identifying research that is relevant to practitioners and translating the findings so they are accessible.[15] The SHRM Foundation has for the last few years been commissioning qualified people to aggregate the research available on topics relating to the issues commonly faced by practitioners[16] and to write summary reports in a form that is both accessible and usable. World at Work has done research on topics of interest to its membership, often in partnership with consulting organizations and/or academics. The Society for Industrial and Organizational Psychology is initiating projects to bring research findings to practitioners in an accessible form. The Academy of Management attempts to reach practitioners with its Academy of Management Perspectives. In addition, several professional associations have developed certification programs and courses that prepare people for the certification examinations. There has been criticism that these courses inadequately present relevant research, but it is difficult to balance theory and practice when the available time is limited. And despite Kurt Lewin’s observation that “there is nothing more practical than a good theory,” practitioners will often show little patience for attempting to understand theory, being more interested in being told what they should do to address their problems (and to pass the examination).

The consulting community can play a liaison role between the research and practitioner communities. Consultants tend to hold more advanced degrees and to have access to multiple organizations. Consulting firms are often willing to invest in research if it makes economic sense and/or results in valued intellectual property that provides a competitive edge relative to other consulting firms. But much of the work done by consultants is focused on application, with contributions to theory a secondary concern. Their clients are rarely willing to underwrite the development of theory. Although consultants publish articles and white papers, these products are generally intended to promote consulting services. One of the biggest contributions made by consultants is their ability to gather confidential data, to aggregate and analyze it, and then to report it out in a manner that retains each organization’s confidentiality. Surveys relating to practice are a big business and serve a real need.

Professional associations, universities and consulting firms can add to the research-practice dialogue by holding joint meetings designed to enable free exchange between academics and practitioners on matters of current interest. The Center for Effective Organizations, a research/consulting entity at the University of Southern California, has held sessions for years for just that purpose. Most major professional associations have conferences, but a review of the audience discloses many practitioners and few researchers. Inviting researchers to these conferences could increase their participation, but a case must be made that doing so would in some way benefit them. Many universities give faculty credit for “public service” activities, and contributing to the knowledge of the practitioner community could certainly be viewed as such a service if deans and tenure committees are willing to treat it that way.

But even if efforts are made to inform practitioners about the research evidence available, they must be able to use it. There has been an increase in the percentage of HR practitioners with MBAs, MSHRs and other advanced degrees. Yet many educators believe that these graduates do not know what they need to know in order to deal with the issues they are going to be facing. For several years I have served as a faculty member in DePaul University’s MBA and MSHR degree programs, and based on my experience I believe there should be more required coursework preparing students to understand and even conduct research. Courses that include quantitative analysis techniques would better prepare students/practitioners to understand research publications and to do more analysis on their own. The reality, however, is that most courses already struggle to cover the material assigned to them, and unless the curriculum is expanded there often is not enough instruction time to equip students with the desired level of knowledge. Since the costs of degree programs have become difficult for individuals or organizations to afford, expanding curricula would only exacerbate the problem by increasing the money and time required. And regrettably many HR practitioners have not taken enough undergraduate courses in statistics and quantitative analysis to prepare them for a fast-track introduction to research methodology when it is included in coursework.

One of the most effective ways of focusing research on issues being faced by practitioners is for practitioners to take the initiative to commission studies and/or seek advice from the research community. Organizations can form partnerships with universities, consultancies and associations and communicate their needs for evidence relating to the issues they face. This allows the practitioner to ask what the aggregated body of research has to say about a subject of interest. When available research findings do not address the issue adequately, the organization can support a custom-designed study. If an extensive amount of research is needed the organization might put one or more university faculty members on retainer. The national research laboratories in the U.S. make extensive use of “joint appointments,” which allow researchers to split their time between the organization and a university with which the lab has a relationship. A number of organizations staff internal Organizational Development functions with PhDs and research support personnel. By developing a close relationship with the OD function HR can focus research efforts on the issues it currently faces. In some cases the OD function resides within HR, making coordination even easier. The bottom line is that practitioners can have an influence on what is researched and how.


It is increasingly argued that HR needs to be reshaped into a decision science, much as Accounting was reshaped into Finance and Sales into Marketing.[17] In order to measure the effectiveness of strategies and programs better metrics are needed, and investments in people must be justified in business terms. But decision sciences use scientific methods and practitioners must be able to understand and to use these methods if practice is to align better with science. A bridge must be built between the research community and the practitioner community. And the traffic across that bridge must flow both ways. Practitioners must carry their needs to researchers so they can consider conducting research that has relevance. Researchers must make their findings more accessible to practitioners. And practitioners must make the effort to understand what the research says and how to apply it. All parties… researchers, practitioners, professional associations, consultants, educational institutions… can contribute to closing the research – practice gap. But it is necessary to recognize the challenges and to mutually seek constructive solutions, rather than blaming those believed to be delinquent in fulfilling their mission. Both evidence-based practice and practice-based evidence are required. It takes an entire community to raise the quality of practice.

[1] Rousseau, D., Academy Of Management Review, Vol. 31, No 2 (2006).
[2] Gibbons, J. & Woock, C., “Evidence-Based Human Resources,” The Conference Board Research Report E-0015-07-RR (2008).
[3] Rynes, S., Colbert, A. & Brown, K., “HR Professionals’ Beliefs About Effective HR Practices,” Human Resource Management, 41, 149-174 (2002).
[4] “The Connection Between Academic Research and Total Rewards Professionals,” World at Work, 2010
[5] Cascio, W. & Greene, R., “The Employee – Organizational Relationship and the Scholar – Practitioner Divide,” in The Employee – Organization Relationship, L. Shore et al (Eds), Routledge, New York, 2012.
[6] Hastie, R. & Dawes, R., Rational Choice In An Uncertain World, Sage, Thousand Oaks, CA, 2010.
[7] Rynes, S., “The Very Separate Worlds Of Academic And Practitioner Periodicals In HR Management,” Academy of Management Journal, Vol. 50, No. 5 (2007).
[8] Cascio, W., “Evidence-Based Management and The Marketplace For Ideas,” Academy of Management Journal, Vol. 50, No. 5 (2007).
[9] Latham, G., “Observations Concerning Pathways for Doing Useful Research,” in S. Mohrman & E. Lawler III (eds), Useful Research: Advancing Theory and Practice, Berrett-Koehler, San Francisco, 2011.
[10]Latham, G., Becoming The Evidence-Based Manager, Nicholas Brealey, Boston, 2009.
[11] Greene, R., “Can We Discover What Will Work Through Benchmarking,” World at Work Journal, 10, 2008.
[12] World at Work Salary Budget Survey (annual), World at Work, Scottsdale, AZ
[13] S. Mohrman & E. Lawler III, Useful Research: Advancing Theory and Practice, Berrett-Koehler, San Francisco, 2011.
[14] Briner, R., Denyer, D. & Rousseau, D., “Evidence-Based Management: Concept Cleanup Time?,” Academy of Management Perspectives, 23(4), 19-32 (2009).
[15] W. Cascio, “Professional Associations: Supporting Useful Research,” in S. Mohrman & E. Lawler III, Useful Research: Advancing Theory and Practice, Berrett-Koehler, San Francisco, 2011.
[16] SHRM Foundation’s Effective Practice Guidelines, SHRM Foundation, Alexandria, VA
[17] J. Boudreau & P. Ramstad, Beyond HR: The New Science Of Human Capital, Harvard Business Press, Boston, 2007.