The Performance Evaluation Report Card (PERC) Methodology Explained

Because multiple factors are used to evaluate performance, multiple scaled scores are calculated. These are then combined into a single score using a logical weighting scheme: for example, 20% could be based upon one factor, 30% upon another, and so on, totaling 100%.
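The weighting scheme above can be sketched in a few lines. This is a minimal illustration, not the actual PERC formula; the factor names, scores, and weights are invented for the example.

```python
# Hypothetical sketch: combining several scaled factor scores (each 0-10)
# into one overall score using pre-defined weights that total 100%.
# Factor names and weights are illustrative, not taken from the source.

def weighted_score(scores: dict, weights: dict) -> float:
    """Return the weighted average of per-factor scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(scores[f] * w for f, w in weights.items())

scores = {"delinquency": 6.5, "foreclosure": 8.0, "reporting": 7.25}
weights = {"delinquency": 0.3, "foreclosure": 0.5, "reporting": 0.2}

print(round(weighted_score(scores, weights), 2))  # 7.4
```

Keeping the combination to a plain weighted average is what makes the final score easy to explain to managers: each factor's contribution is simply its score times its stated weight.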

Creating this pre-defined weighting can be a computational challenge similar to fitting a regression model, but with a critical difference: the goal is to incorporate as many useful factors as possible. A system with ten factors, each sourced or calculated somewhat differently but every one having some explanatory effect on performance, is far more useful and robust than a system with only one or two indicators. History has shown that poor performers tend to score poorly in most performance indicator categories; because of this, high correlation among scores is a positive outcome, meaning that several factors are relevant to performance.

This relatively simple scoring system may not satisfy some mathematicians or financial analysts, but they are not the target audience. Managers and executives who would resist or distrust a complex formula can readily understand an average and how far they are above or below it. At its core, the scoring system is a communication tool—a report card. It is for operational use, not bookkeeping purposes.

Even so, whatever scoring system is derived, it is critical that the system be viewed as sufficiently predictive. This can be demonstrated using historical data: recalculate performance rating scores for prior months or years for members of different peer groups. Then, since both the scores and the subsequent outcomes are known, it should be easy to determine whether the system is functioning correctly; if not, weightings may have to be adjusted, additional factors developed, or some indicators dropped altogether.
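The backtest described above amounts to checking that worse scores line up with worse known outcomes. The following is a hedged sketch of that check using a rank correlation; the score and outcome values are invented for illustration, and a real validation would use the organization's own historical data.

```python
# Sketch of the backtest: recompute ratings for a prior period, then check
# that they correlate with known later outcomes. All numbers are invented.

def rank(values):
    """Simple ranks (1 = smallest); assumes no ties for brevity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for position, i in enumerate(order, start=1):
        ranks[i] = position
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation (no-ties formula)."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

past_scores    = [8.9, 7.2, 5.1, 3.4, 2.0]  # recalculated historical ratings
later_defaults = [0.5, 1.1, 2.3, 4.0, 6.2]  # known outcomes (% defaults)

# A strong negative correlation means higher-rated units went on to have
# fewer defaults, i.e. the scoring system was predictive for this period.
print(spearman(past_scores, later_defaults))  # -1.0
```

If the correlation came out weak, that would be the signal to adjust weightings or revisit the individual factors, exactly as the text suggests.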

During the life of the project (and as part of the next PERC phase to be discussed), it is common for performance rating systems to be updated based upon newly available data or simply better information. When this happens it is useful to provide both the old and new rating systems for a period of time to facilitate comparison and transition.

Example: (From Jim Boswell on the Ginnie Mae project)

As the scoring system was being designed, there was a discussion about whether the output should be letter grades, percentiles, deciles, quartiles, or some other indicator. In practice, it mattered little. A single numeric score from 0 to 10 was selected, calculated to two decimal places.

Far more important to the success of the project was the score’s presentation, explanation, and consistency. The reports received by individual managers contained not just the score, but also individual components, indicators, and trendlines in both graphical and tabular formats, as well as discussion about the process used and the high level goals of the project.

The Feedback and Learning Phase

The final phase to PERC is to make the results of the performance rating system visible to a wide audience that includes executive management, any appropriate team of analysts, and the individual managers responsible for the performance of the units being rated. The last audience (unit managers) is the most important, as it is ultimately their efforts that will drive change across the broader population.

As managers are provided with their performance ratings (or report card), along with a detailed explanation of how their scores are generated using population norms and standards, some confusion or consternation is likely to follow. Managers of units rated poorly will almost certainly question the scoring system and its individual factors, their own ability to effect change, or possibly the entire PERC process itself. This is another point where executive sponsorship and prior management involvement or knowledge is critical.

It should be made clear that the goal of the analytical team is not perfect precision, but rather a fair evaluation. Data quality is never perfect and volumes are never ideal, but for most organizations in today’s world this should not be a particular problem. Since negative performance scores are not based upon single variables but collections of factors and a unit’s evaluation is based upon peer group comparisons, it is difficult to argue against the report card (especially if prior evidence shows the relationship of problems with the indicators used). 

Of course, any potential data issues should be investigated and the system improved if necessary, but at this point the focus is not on the analysts, but the manager. How can performance be improved? What are others doing differently? What specific conditions exist? What obstacles can be removed? What processes can be streamlined? What problems can be resolved?

Initially, PERC and its performance ratings might focus executive management's attention on the poor performers in the population. This is where future problems are most likely to occur and where improvements can be made most rapidly, and thus it is the best use of scarce time and resources. However, top-performing units also represent opportunities to showcase individual management efforts and to spread information throughout the organization about which "best practices" are actually working and which other units may want to try.

Performance ratings should be provided at scheduled intervals (e.g., monthly, quarterly), depending upon cost effectiveness and reporting issues. This carries with it another opportunity, allowing executive management and the analytical team to view performance trend lines, not just for individual managers, but for the population as a whole. If poor performers are improving their operations over time, then this will raise the averages for the entire peer group.

This means that a unit that rated just above average a few months (or reporting cycles) ago, but made no additional improvements, could very likely find itself moving down in the rankings. If such a unit wants to arrest this decline, then it, too, will have to improve its operations. Thus the performance of the average of the peer group is also indirectly improved, and those better scores force the top performers to work harder to maintain their advantage.
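The ranking effect described above can be made concrete with percentile ranks: a unit whose score stays flat slips in the standings when its peers improve. The scores below are invented for illustration.

```python
# Illustrative sketch of the peer-group ranking effect. A unit scoring a
# flat 6.0 drops from above-average to below-average purely because the
# rest of its peer group improved. All numbers are hypothetical.

def percentile_rank(score, peer_scores):
    """Fraction of peers scoring at or below the given score."""
    return sum(s <= score for s in peer_scores) / len(peer_scores)

unit_score   = 6.0
peers_before = [4.0, 5.0, 5.5, 6.5, 7.0]   # earlier reporting cycle
peers_after  = [5.5, 6.2, 6.4, 6.8, 7.3]   # poor performers improved

print(percentile_rank(unit_score, peers_before))  # 0.6 (above average)
print(percentile_rank(unit_score, peers_after))   # 0.2 (now below average)
```

Because every unit is rated against its peer group rather than a fixed bar, standing still is effectively falling behind, which is the pressure mechanism the text describes.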

Example: (From Jim Boswell on the Ginnie Mae project)

The initial data used to develop the scoring system was based on securities accounting data from issuers, a set that included information on dollar amounts and loan delinquency, but only at a high level. Later in the project, a new source of data became available, at the individual loan level.

Loan-level data contained information such as where and when the loan was made and its interest rate, all of which are indicators of whether a loan is likely to become delinquent or go to foreclosure. By combining the specific portfolio distribution of an issuer with higher-level averages from across the program, customized portfolio averages were created that helped to further distinguish economic from institutional performance.
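The "customized portfolio average" idea can be sketched as follows: weight the program-wide delinquency rate for each loan segment by the issuer's own portfolio mix, giving the rate the issuer's book would show if it merely matched the program norm. The segment labels and rates below are hypothetical, not Ginnie Mae data.

```python
# Sketch of a customized portfolio average. Program-wide delinquency rates
# per segment (hypothetical region/vintage buckets) are weighted by an
# issuer's own portfolio mix to form an expected rate for that issuer.

program_delinquency = {            # program-wide average rate per segment
    ("south", "2019"): 0.040,
    ("south", "2020"): 0.025,
    ("west",  "2019"): 0.030,
}

issuer_mix = {                     # share of the issuer's loans per segment
    ("south", "2019"): 0.50,
    ("south", "2020"): 0.25,
    ("west",  "2019"): 0.25,
}

expected = sum(issuer_mix[seg] * program_delinquency[seg] for seg in issuer_mix)
actual = 0.022                     # issuer's observed delinquency rate

# Actual below the customized expectation suggests institutional strength
# beyond what the economics of this portfolio mix would predict.
print(actual < expected)  # True
```

This separation is the point of the technique: an issuer with a riskier mix gets a higher expected rate, so only performance relative to that expectation reflects on the institution itself.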

Comparison of PERC with Other Improvement Methodologies


Disclosure: No positions.
