How good are you really at performance ratings?
Rating performance is an integral part of many performance reviews and assessment systems. Many organisations believe that with enough training and time, anyone can produce reliable performance ratings. Worse still, we assume that our people “should know” how to rate performance and service, even if they have never received any training. Unfortunately, the research shows that we are all remarkably poor at rating anyone. This means that most of the data ranking performance in our organisations is fundamentally flawed. Our mistaken confidence in these performance and competency ratings leads us to use them to justify who gets promoted, who gets training and who gets a bonus. These decisions rest on the assumption that the ratings accurately reflect the person being rated.
Fifteen years of in-depth research has demonstrated that we are all particularly unreliable at rating the performance of others. This inability is called the “Idiosyncratic Rater Effect”, and it stems from our own unique idiosyncrasies and context. For example, our ratings of “potential” are shaped by our personal definition of the concept, our own sense of potential, our intent when rating (to grow the employee or to correct their behaviour), and our relationship with them (approval or disapproval). The Idiosyncratic Rater Effect is persistent. Regardless of the training we receive, research shows that over 60% of our rating is a reflection of our own experience of the world, not of the person we are rating. This means that when I rate you (on anything), my rating reveals more about me than it does about you.
Despite the extensive data on the Idiosyncratic Rater Effect in academic journals, many businesses remain largely unaware of it (or simply ignore it).
Can you really be objective and keep politics, competition and money out of your score?
Watching the immensely popular British Channel 4 TV programme “Come Dine with Me” you may have realised that the scoring is not always fair or even remotely realistic. The show gets four amateur chefs to compete for a £1,000 cash prize by hosting a dinner party for the other contestants. Each competitor then rates the host’s performance using a 1-10 score. A delightfully sarcastic and dry commentary is added by comedian Dave Lamb.
Watching the diners rate each other is fascinating. One may score the host a 6, saying they did a fantastic job, while another will rate the evening a 3, claiming the main course was a disappointment. A third will rate the evening a 7, sharing that they really liked the host as a person but found their food to be “pub fare”. Some score strategically, hoping that lower scores for their competitors will improve their own chance of winning the £1,000. Some score based on a specific ingredient or a particular aspect they liked or disliked, which dramatically skews the result. Sound familiar? The show highlights some of the most common rating flaws.
Why we are so terrible at performance ratings:
- It is hard to rate a performance under pressure (performance score due today)
- It is difficult to remember what happened six months ago (or longer); with hectic schedules, it is often hard to remember even last month
- The most recent behaviour/project tends to be freshest in our minds (good or bad)
- We tend to pay more attention to “issues” than to successes (squeaky wheels get more attention)
- Objectivity is hard. What does 4 out of 5 really mean?
- Does my 4/5 mean the same as your 4/5? What is the standard? Are we scoring based on the same information, or does one of us have extra data?
- Our emotions colour our ratings. We rate those we like higher than those we don’t
Poor performance ratings persist because we continue to believe that we can rate others effectively.