After a recent analyst update I received the following question from one of our members:
"Claude, I am trying to find some good basic source material/research in developing a bench or index quality scale for managers made up of a multitude of measures and ideally including leading and lagging indicators. Is there some specific work/research/articles you can point me to in this area? I have a firm grasp on the link to strategy, and of course have a plethora of indicators."
This is an interesting question, and one that warrants further discussion as we contemplate measurement and metrics in the HR arena. When it comes to leading indicators, how do we work with them and how do we combine them? How do they contribute to developing a manager quality index?
It is important to know what the leading and lagging indicators are within the framework of your organization. Leading and lagging are somewhat relative concepts if you think in terms of cause and effect: the effect of one cause becomes the cause of the next effect in the chain. In the context of a managerial quality index, leading indicators would include behaviors, assessments, ratings and other drivers of performance. On the lagging side you would have outcomes, performance measures, results, achievements and so on.
Leading and lagging indicators are, by definition, not available at the same time. At any given time, a lagging indicator is the effect of where the leading measures stood at some earlier point. If you are measuring performance today, it may reflect behavior from three months ago.
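To make that time offset concrete, here is a minimal sketch in Python. The series, the three-month lag and the column names are all hypothetical; the point is simply that today's lagging result should be lined up against leading measures taken earlier, not against today's.

```python
import pandas as pd

# Hypothetical monthly data for one store: a leading behavior score
# (say, from pulse surveys) and a lagging performance result.
df = pd.DataFrame({
    "behavior_score": [3.1, 3.4, 3.2, 3.8, 4.0, 3.9, 4.2, 4.1],
    "performance":    [72, 70, 74, 73, 78, 81, 80, 85],
})

# Shift the leading series by three periods so that this month's
# performance lines up with behavior measured three months earlier.
df["behavior_3mo_ago"] = df["behavior_score"].shift(3)

# Rows without a three-month-old reading are ignored by corr().
print(df[["performance", "behavior_3mo_ago"]].corr())
```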
To delve deeper into the development of a manager quality index, let's take a look at a specific example using a large retailer as our focus. The retailer has over 100 outlets, and the accepted wisdom within the company is that the quality of the store manager is the most important driver of store performance. How do you define and measure that quality?
In this particular example, a wide variety of measures were available at the store level: sales, sales per square foot, profitability, inventory shrinkage, etc. Some measures were also available at the manager level, including various ratings and measures derived from the performance management process.
To begin, we needed to work upstream and define an outcome index. It is important to define your lagging indicators first and then work back to identify the drivers of that index, your leading indicators of performance.

The basic challenge here is one we come across frequently: how do you get all the data together in the same place, at the same time, for all managers? Multiple databases have different owners, sit in different places within the organization and live in different systems. It is becoming more and more important to integrate data in a dynamic format, and although a lot of progress has been made on the analytics front, there are still challenges in pulling data together. Sometimes we want the data itself to reveal the pattern: in this case we did not have a definition of manager quality, and we wanted to develop an index of manager quality from the data. Other times the approach requires you to define what you are looking for quite precisely so that it can be pulled together. The challenge is to collect the kind of data that also allows for exploration.
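As an illustration of that integration step, here is a minimal sketch in Python using pandas. The three source tables, the manager_id key and every value are hypothetical; the point is getting store, HR and survey measures into one row per manager.

```python
import pandas as pd

# Illustrative extracts from three separate systems, keyed on a shared
# manager identifier; every table, column and value is hypothetical.
store_kpis = pd.DataFrame({
    "manager_id": [101, 102, 103],
    "sales_per_sqft": [412.0, 388.5, 455.2],
    "shrinkage_pct": [1.8, 2.4, 1.2],
})
hr_ratings = pd.DataFrame({
    "manager_id": [101, 102, 103],
    "perf_rating": [4, 3, 5],
})
survey_scores = pd.DataFrame({
    "manager_id": [101, 102, 103],
    "team_engagement": [3.9, 3.2, 4.4],
})

# One row per manager, with leading and lagging measures side by side.
managers = (store_kpis
            .merge(hr_ratings, on="manager_id")
            .merge(survey_scores, on="manager_id"))
print(managers)
```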
Our answer to this particular problem was to use an employee survey as the database on which to build the solution. The survey had both outcome and manager behavior measures, so the employee data contained both leading and lagging measures at the same time, in the same place, for all managers. What we wanted to do was use the employee survey to explore the viability of defining a manager quality index. The next step, having done that, would be to start linking the index to external measures.
Then we had to define a leadership outcome index. This was done by making a list of outcome items from the survey and investigating whether those items 'hang together,' that is, whether they are statistically consistent enough to form a single reliable scale. If the items do hang together, you can combine them into a leadership outcome index.
In this particular example, because we were using a large employee survey, one of the survey's weaknesses became a strength. The survey questions had accumulated over time, and it was not the most focused tool: it had over 100 items, seven of which were identified as managerial outcome items. The idea was not to focus on behavior but on impact. We determined that if you had a strong manager, you would observe an impact on those seven items.
We discovered that by adding the scores on these seven items we had a reliable index of managerial performance. That became our manager outcome index.
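The update does not name the statistic behind 'hang together,' but the standard check for whether summed items form a reliable scale is internal-consistency reliability, commonly Cronbach's alpha. A minimal sketch on hypothetical 1-5 survey ratings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 1-5 ratings: eight respondents on seven outcome items,
# built around a shared base score so the items correlate.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(8, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(8, 7)), 1, 5)

print(f"alpha = {cronbach_alpha(scores):.2f}")
print("summed outcome index per respondent:", scores.sum(axis=1))
```

If alpha is acceptably high (0.7 or above is a common rule of thumb), summing the items, as described above, is a defensible way to build the outcome index.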
In the next step, we identified the drivers of the leadership outcome index. We made a list of behavior items from the survey that could be tied directly to what a manager does or does not do, as opposed to aspects of the environment over which a manager has no control. Then we identified the items that best predicted the leadership outcome index and investigated the structure of those drivers.
We started off with 42 items that could be deemed behavioral. The kinds of survey responses that best reflected managerial behavior included items such as 'I trust my manager to support me' and 'I trust my manager in dealing with work issues.' Each item had to meet a double criterion: it had to describe manager behavior, and it had to predict, or at least relate to, the outcome index. We then factor analyzed the most useful items to determine what was actually being measured. In this case, four underlying themes emerged, and they defined what quality meant: trustworthiness, good supervision, good coaching and fostering teamwork.
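The update does not specify the procedure used, but the two-stage logic (screen items against the outcome index, then factor analyze the survivors) can be sketched on synthetic data with scikit-learn; every number and name below is illustrative, not the retailer's actual data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Synthetic stand-in for the survey: 200 respondents, 42 behavioral
# items driven by 4 latent themes plus noise (all values illustrative).
n_resp, n_items, n_themes = 200, 42, 4
loadings = rng.normal(size=(n_items, n_themes))
latent = rng.normal(size=(n_resp, n_themes))
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n_resp, n_items))

# Stand-in for the summed outcome index described above.
outcome = latent.sum(axis=1) + rng.normal(scale=0.5, size=n_resp)

# Stage 1: keep only items that relate to the outcome index.
corrs = np.array([np.corrcoef(items[:, j], outcome)[0, 1]
                  for j in range(n_items)])
keep = np.abs(corrs) > 0.2
print(f"{keep.sum()} of {n_items} items pass the screen")

# Stage 2: factor analyze the survivors to surface underlying themes.
fa = FactorAnalysis(n_components=n_themes, random_state=0)
fa.fit(items[:, keep])
print("factor loadings shape:", fa.components_.shape)
```

In practice, the number of factors is itself a finding rather than an input; here four components are requested only because the synthetic data were built with four themes.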
One weakness of this particular approach is that all of the measures came from the same employee survey. This is known as the percept-percept problem: when everything is measured with the same instrument, items tend to correlate simply because the same tool was used. That inflates the correlations, although it won't necessarily explain away the factor structure; it is still something we need to look beyond. Having used the survey as our sandbox to explore the linkages and define the models, we were then ready to go outside and look at how things linked to external measures.
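That 'go outside' step might look like the following minimal sketch, again on hypothetical data: the survey-derived index on one side, store results from a separate operational system on the other, so the resulting correlation is not a percept-percept artifact.

```python
import pandas as pd

# Survey-derived manager quality index (one instrument) alongside store
# results from a separate operational system (a second instrument);
# all identifiers and values are hypothetical.
index_by_manager = pd.DataFrame({
    "manager_id": [101, 102, 103, 104, 105],
    "quality_index": [27, 21, 33, 25, 30],
})
store_results = pd.DataFrame({
    "manager_id": [101, 102, 103, 104, 105],
    "sales_growth_pct": [4.1, 1.2, 6.8, 2.9, 5.0],
})

linked = index_by_manager.merge(store_results, on="manager_id")

# Because the two measures come from different instruments, this
# correlation is not inflated by common-method effects.
print(linked["quality_index"].corr(linked["sales_growth_pct"]))
```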
This example shows us two key challenges in terms of HR metrics and the data-driven approach to human resources. The first challenge lies in getting all the right data together in the same place at the same time. It is important to note that the solution is bound by the available data. In this case, if we had had to go out and find different measures it might have been a different task. Here we used the employee survey as a way of getting at more measures quickly. The survey was somewhat unfocused and asked a lot of questions, which is normally a weakness but in this case it provided more opportunities to explore. The second key challenge is making sense of the data and seeing patterns and underlying structure. We have access to a lot of information but what does it mean and how do we pull it together? I think that challenge will remain even as technology allows us to extract and integrate data quickly and easily.
Over the course of our analyst updates we will continue to address the issues that are important to our members. Should you have any recommendations regarding subject matter for HR.com's HR Metrics and Measurement analyst updates, please email me at cbalthazard@HR.com.