People Analytics 101: Say Hello to Objectivity
Tracing the history of human resources’ attempts to achieve objectivity through data is an exercise that begins with Frederick Taylor in the late 1800s. Fast forward to 2016, and people analytics tops the list of trends for yet another year while continuing to be a challenge for most organizations. The same could be said of many other fields within human resources, but in analytics especially, we are still hovering rather far from our destination.
Data has historically been used for compliance and rudimentary processes. The time has come to move beyond traditional reporting and use analytics to provide insights on performance management, staffing and collaboration. Let’s start with performance management. With every passing cycle, the clamor for greater objectivity only gets louder. How do we use data to figure out whether someone is doing well? While it may be easy to measure deliverables in entry-level roles, it certainly gets harder as we move up the responsibility chain. We also do very little to account for chance when measuring outcomes. Let’s take a quick look at three aspects that will bring greater objectivity to our performance measurement systems.
Performance management is increasingly shifting its focus from outcome to process. For a while now we have been looking at how a job is done instead of just the output. However, we still consider too few objectives: research shows that we omit nearly half of the objectives that people consider relevant to their task. Moreover, the more uncertain the environment, the less control an employee has over outcomes, and yet most performance systems do not account for this. One way to overcome this is to identify the fundamental drivers of value. Take an example from the sports world. If you were to measure performance in hockey, a good start would be to measure shots taken rather than goals scored; shots strip out more of the chance and are therefore a better predictor. We could take it a step further and measure possession: teams that keep the puck are more likely to shoot, and hence more likely to score. The aim is to move steadily from outcome measurements toward process measurements, taking the data to the problem and knowing where in the process to start adding assessments.
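To make the hockey example concrete, here is a minimal Python sketch. Everything in it is invented for illustration (200 hypothetical teams, a roughly 9% shot-conversion rate); the point it demonstrates is that a process measure (shots) predicts next season’s goals better than the outcome measure (past goals) does, because goals carry more chance noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams = 200

# Latent "process" skill: how many shots a team generates per season.
shot_rate = rng.normal(1800, 200, n_teams)

def season(rng, shot_rate):
    # Shots vary a little around the team's true rate; goals add
    # binomial (chance) noise on top via a ~9% conversion rate.
    shots = rng.poisson(shot_rate)
    goals = rng.binomial(shots, 0.09)
    return shots, goals

shots1, goals1 = season(rng, shot_rate)
shots2, goals2 = season(rng, shot_rate)

# Which season-1 stat better predicts season-2 goals?
print("corr(goals1, goals2):", np.corrcoef(goals1, goals2)[0, 1])
print("corr(shots1, goals2):", np.corrcoef(shots1, goals2)[0, 1])
# Shots (the process measure) carry less chance noise than goals
# (the outcome), so they predict future scoring more reliably.
```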
Secondly, adjust for biases. Human beings are prone to biases, and three in particular should be kept in mind when looking at data. The first is the narrative-seeking bias. Our brains are wired to link inputs together and drop facts that do not fit the story, so we tend to look at data through a lens that supports our preferred, predetermined outcomes. This is especially dangerous when we are measuring performance. Outcome bias and hindsight bias add to the danger. We tend to believe that good things happen to those who work hard and bad things to those who do not, and so we judge decisions and people by outcomes rather than by process. In addition, once we have seen something occur, we believe we anticipated that it would happen. In our decision-making, we do not account for chance. Taken together, these biases keep the data from giving us a clear picture.
Lastly, I want to talk about three statistical terms. Few concepts are as important to understanding the world as regression to the mean. It tells us that any time you sample based on extreme values of one attribute, any other attribute not perfectly correlated with it will tend to sit closer to its mean value. Consider the 43 high-performing firms of the early 1980s that Peters and Waterman cited in their book In Search of Excellence. When analyzed five years later, those 43 organizations had spread fairly evenly around the mean. Performance in one period is only imperfectly correlated with performance in the next, and hence we observe regression to the mean.
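A small simulation makes the point. The firms and numbers below are hypothetical; the only assumption is that performance in each period is a stable "skill" component plus period-specific luck, so the two periods are imperfectly correlated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms = 1000

# Hypothetical firm performance: stable skill plus period-specific luck.
skill = rng.normal(0, 1, n_firms)
perf_1980 = skill + rng.normal(0, 1, n_firms)
perf_1985 = skill + rng.normal(0, 1, n_firms)

# Select the 43 most "excellent" firms on early-1980s performance.
top = np.argsort(perf_1980)[-43:]

print("top firms, period 1 mean:", perf_1980[top].mean())  # far above average
print("top firms, period 2 mean:", perf_1985[top].mean())  # roughly halfway back
print("all firms,  period 2 mean:", perf_1985.mean())      # ~0
```

Because we selected on an extreme mix of skill and luck, only the skill persists; the luck averages out, and the group drifts back toward the mean.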
Secondly, I want to caution against small sample sizes. Small samples produce greater variation from the expected value, yet we are apt to draw too strong a conclusion from them. The central limit theorem is a good starting point for understanding this better. The message: always get more data, and be careful about the inferences you draw.
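Here is a quick illustration, assuming a hypothetical survey in which 60% of the population genuinely holds a favorable view. Watch how wildly small samples can miss that number:

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose 60% of a population would rate a program favorably.
# Compare how far the observed favorability strays from 60%
# when we survey small vs. large samples, many times over.
p_true, trials = 0.60, 10_000
for n in (10, 50, 500):
    sample_means = rng.binomial(n, p_true, trials) / n
    print(f"n={n:4d}  min={sample_means.min():.2f}  "
          f"max={sample_means.max():.2f}  sd={sample_means.std():.3f}")
# The spread of the sample mean falls like 1/sqrt(n), which is why
# tiny samples so often produce "extreme" results.
```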
Signal independence is the last term I want to mention. The notion comes from the wisdom of crowds, which holds that the average of a large number of forecasts reliably outperforms the average individual forecast. Idiosyncratic errors offset each other, provided the opinions are independent. It is important to keep the signals independent and push their correlation close to zero.
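A rough sketch of why independence matters, with invented numbers (25 judges forecasting a quantity whose true value is 100, and a shared error component whose weight sets the correlation between judges):

```python
import numpy as np

rng = np.random.default_rng(3)
truth, n_judges, trials = 100.0, 25, 5000

def crowd_error(rho):
    # Each judge's error mixes a common component (weight rho)
    # with an idiosyncratic one; average the judges, measure error.
    common = rng.normal(0, 1, (trials, 1))
    own = rng.normal(0, 1, (trials, n_judges))
    errors = np.sqrt(rho) * common + np.sqrt(1 - rho) * own
    forecasts = truth + 10 * errors
    return np.abs(forecasts.mean(axis=1) - truth).mean()

for rho in (0.0, 0.3, 0.7):
    print(f"correlation {rho}: mean crowd error {crowd_error(rho):.2f}")
# With independent signals (rho=0) the idiosyncratic errors cancel;
# as correlation rises, averaging the crowd buys less and less.
```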
These concepts are simple, doable, and a step in the right direction toward making better people decisions using data.
Ankita Poddar
HRBP and blogger at HRpartnerstory
Ankita Poddar is an HR professional based out of India. Ankita’s experience as an HRBP gives her the opportunity to interact closely with business leaders, to innovate, and to run programs in the field of employee engagement, with a focus on rewards & recognition, communication, performance management, incentive schemes, ESAT surveys and more. Ankita holds a Post-Graduate Diploma in Management with a focus on Human Resources and a Bachelor of Engineering in Mechanical Engineering.