Data-Driven HR 

Creating Value with HR Metrics and HR Analytics



Many organizations wrestle with HR questions, even as they sit atop a treasure trove of valuable information collected by their HR information systems and other business software. This book, a bestseller in the Netherlands in its original Dutch version, shows how you can answer those questions and create value using HR metrics and HR analytics.

What others say about the book

Irma Doze & Toine Al

If you have questions, are looking for a keynote speaker or want to organize a webinar, don't hesitate to contact us!
  • Toine Al

    Toine Al is an HR management specialist and the author of three other books about HR, a topic that he has been writing about since 2001. From 2001 to 2006, Toine was editor-in-chief of the Dutch HR management trade journal IntermediarPW, where he developed a deep understanding of a wide variety of subject areas within the HR domain. He holds degrees in communication science and law.

  • Irma Doze

    Irma Doze is a specialist in HR analytics and enterprise intelligence—data management, research, analytics, and reporting. She is the founder and director of AnalitiQs, a consultancy that helps organizations transform their data into business value. From 2007 to 2012, Irma headed the department of market, business, and customer intelligence at TomTom. Her academic studies were in business economics and quality management.


Updates

By Irma Doze 07 Mar, 2020
Our brain needs a comparison to 'evaluate' a number, to establish whether a result is good or bad. Should we be happy about spending an average of 1,200 euros to recruit an employee? We don't know. That is why we love benchmarking.

But benchmarking has its limits. The main disadvantage is the risk of comparing apples and oranges. Of course, if you and your team are top performers, the results will be motivating. If that is not the case, the results should motivate you to learn and improve. In practice, however, this situation often leads to defensive arguments about why the benchmark is incorrect, and often rightly so. I therefore recommend never using benchmark data as 'targets' in your internal management reports. The best benchmark is your own, personalized goal.

So, should you abandon benchmarks completely? Not at all. A regular benchlearning exercise will help you learn from others and improve. Don't benchmark against 'the market', though; define exactly who you want to watch and why. And because no two organizations are alike, adopting best practices without further scrutiny is no guarantee of success. Only further analysis will reliably reveal the success factors at play within your own organization.
By Irma Doze 25 Feb, 2020
Do you have the guts to use artificial intelligence to separate the good from the bad in your recruitment process or in talent management? It's quite scary, but without that data we definitely won't do better. The results were clear when the American marketing company Dialog Direct used algorithms to recruit new employees: the organization saved 75% on interview time and retention increased by 7 percentage points, saving millions of dollars. The French glass producer Saint-Gobain (with 180,000 employees worldwide) also reaped the benefits of algorithms: with their help, it spotted various internal talents that would otherwise have stayed out of the picture.

However beautiful those examples may sound, algorithms have stirred much discussion lately. How honest and fair are they? What if candidates are incorrectly rejected or promoted based solely on data? We read more and more often that algorithms are unreliable. Research shows that HR professionals want to use HR analytics, but at the same time have serious reservations. Are algorithms actually so much better than we humans? Or worse?

The wrong decision

I understand if you find it difficult to trust data. And rightly so, because an algorithm is never 100% reliable. But neither are our brains. On the contrary: ask a group of 25 people how likely it is that two of them share a birthday. They will estimate that this chance is very small, but in reality it is almost 60%. In the field of statistics, our intuition often fails us, as Nobel Prize winner Daniel Kahneman demonstrates in his book "Thinking, Fast and Slow". When making a decision, we are partly guided by prejudices. Suppose an applicant has a C on his diploma. You know that a list of grades does not say much about someone's talent as an employee, but your brain nevertheless records it as something negative.
Whether you like it or not, the C haunts you during the job interview. Your brain automatically searches for confirmation: you see what you expect to see, and you ignore all signals that contradict that feeling. An algorithm would not have that problem. Data does not suffer from self-overestimation or emotions. Data is neutral. It is the combination with human action that makes technology good or bad. There are two characteristics to take into account with an algorithm: what you don't put in won't come out, and what you put in will come out.

Let's start with what you put in. Suppose you are looking for a new programmer and you have an algorithm search for the right candidate. You don't consider age and gender relevant, so you don't include those variables. What do you put in? You are looking for talent and you want to know how good the candidate is at the work. That is why you have the algorithm analyse pieces of code that the candidate has written. Even though this exercise tells you nothing about the candidate's gender, age, or diplomas, you know one thing for sure: you have a programming talent! So you hire him. But after a while it turns out that this colleague does not fit into the team at all. The algorithm has not taken this into account, because what you do not put in won't come out. You should therefore have included that variable (match with the team). The analysis process thus starts with an important piece of human action: ensuring that the system starts with the right variables. Sit around the table together and brainstorm freely about all the variables that could be important; think broadly and creatively, as there can be hundreds. Then it is the data's turn: based on statistics, the analysis determines which variables have the most impact on what you want to predict.

The past predicts the future

At the end of the ride, human action comes into play again.
Even with the data that 'comes out', you run into a problem. Algorithms always base their predictions on data from the past. That old data was generated by people, and people are prejudiced. Take the programmer in question. Perhaps women have a different programming style than men, and perhaps you employed more men than women in the past. Then it becomes a self-fulfilling prophecy: the programming style the data evaluates as 'good' is mainly based on the style of men. That means the data unknowingly discriminates by gender. The data builds on human choices from the past.

Fine-tuning

What should we do with that knowledge? Consider the autopilot of an aircraft, which is all algorithms. In principle, the pilot has to trust it blindly, but if his intuition says the instruments are broken, he will have to take the controls himself. We will have to do the same. It is therefore important not to automate fully right away, but to keep checking yourself. Be critical. Analyse the data and test the algorithms for integrity. Evaluate the results, and also review the candidates who did not pass the algorithm. If you find that the data still unconsciously discriminates, find out why; then you can adjust the algorithm so that it no longer occurs in the future. Through frequent use of algorithms and analyses, we can fine-tune them further and further, making them even better, fairer, and more reliable. Algorithms already select far more fairly than the human brain, and we are acutely aware of what little discrimination remains. We evaluate, analyse, check, and test, something that is often not even done with human decisions. Previously, we were unconsciously incompetent. Now we are at least conscious, and most of the time also competent.
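The birthday estimate quoted above is easy to check. A minimal sketch using the standard birthday-problem calculation (365 equally likely birthdays, leap years ignored; the group size of 25 comes from the post):

```python
def shared_birthday_probability(group_size: int) -> float:
    """Probability that at least two people in the group share a birthday."""
    # Multiply the chances that each successive person avoids
    # all birthdays already taken, then take the complement.
    p_all_distinct = 1.0
    for i in range(group_size):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(shared_birthday_probability(25), 3))  # 0.569, i.e. almost 60%
```

With 25 people the probability is about 57%, which matches the "almost 60%" intuition check in the post, and it crosses 50% at just 23 people.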
Originally published on CHRO.nl (Dutch version)
By Irma Doze 23 Feb, 2020
Reports often contain aggregated data: data that has been merged in order to create summary numbers. Examples are metrics that summarize individual data, such as the average age of the employees of an organization or one of its departments. Individual, raw data, by contrast, is usually data about individuals, such as a list of employees who have called in sick, all individual responses to a survey, or a list of all visitors to a website. Analytics requires individual data. The more individual your data is, the more possibilities there are for analyzing it: you can choose how to aggregate the data and so draw more conclusions from it. Another reason is that with individual data you simply have more observations and can draw conclusions, such as correlations between variables, faster. Another important reason to use 'raw' data instead of metrics, however, is the possibility of aggregation bias: what is true for a group is not always true for an individual, so the use of aggregated data can lead to erroneous conclusions. Therefore, while you use aggregated data in reports, to perform analyses it is always preferable to have individual data if possible.
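The aggregation bias mentioned above can be made concrete with a small, hypothetical example (department names and head counts are invented for illustration): within each department separately, trained employees quit less often than untrained ones, yet the aggregated report-level numbers suggest the opposite, because the trained employees happen to be concentrated in the high-turnover department.

```python
# Invented head counts: (number_who_quit, group_size) per department,
# split by whether employees followed a training program.
data = {
    "Sales": {"trained": (40, 80), "untrained": (12, 20)},
    "IT":    {"trained": (2, 20),  "untrained": (16, 80)},
}

def rate(quit_count, total):
    return quit_count / total

# Within each department, trained employees quit LESS often.
for dept, groups in data.items():
    t, u = groups["trained"], groups["untrained"]
    print(f"{dept}: trained {rate(*t):.0%} vs untrained {rate(*u):.0%}")

# But the aggregated numbers point the other way.
trained_quit = sum(data[d]["trained"][0] for d in data)       # 42
trained_total = sum(data[d]["trained"][1] for d in data)      # 100
untrained_quit = sum(data[d]["untrained"][0] for d in data)   # 28
untrained_total = sum(data[d]["untrained"][1] for d in data)  # 100
print(f"overall: trained {trained_quit / trained_total:.0%} "
      f"vs untrained {untrained_quit / untrained_total:.0%}")
```

In each department the trained group does better (50% vs 60% in Sales, 10% vs 20% in IT), but overall the trained group looks worse (42% vs 28%). An analysis run only on the aggregated metric would conclude training increases turnover. This reversal is known as Simpson's paradox, and it is exactly why analyses should start from individual data.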