The Performance Evaluation Report Card (PERC) Methodology Explained

How PERC Works

PERC can be viewed as having three phases in its application: (1) an Endorsement & Evaluation Phase; (2) an Analytical & Scoring Phase; and (3) a Feedback & Learning Phase. Each is discussed below in turn.

The Endorsement and Evaluation Phase

The objective behind PERC is to improve performance throughout an organization, using both internal and external organizational indicators in a feedback loop that provides a way to monitor progress toward goals over time. In the Endorsement and Evaluation Phase of PERC, an organization must decide what it wants to do or become, and then how best to establish a monitoring system that supports and facilitates movement toward those organizational goals and aspirations.

From the outset, a program implementing PERC should have explicit executive endorsement. This can be challenging if executive management has never seen the PERC methodology in practice and so cannot yet see the process or benefits, which at the outset are entirely conceptual. In theory, however, data-driven analysis can always generate new insights, and these insights can be used to help sell the program.

Although lower-level management may find it tempting to initiate PERC without executive endorsement, in practice the lack of higher-level endorsement makes it more difficult to develop performance indicators that are accepted by all, and resistance is encountered throughout the organization. As a result, one of the most critical elements of PERC, feedback and improvement, is compromised at best, or simply does not happen at all.

Assuming executive-level endorsement, the next step in this phase (preferably with the help and participation of a team of working-unit managers) is to develop an understanding of the quantitative characteristics, also called variables, indicators, or factors, that can be used to identify the best and worst performing units in a population. A typical analytical approach in this phase is to segment working units into appropriate comparative peer groups and then perform statistical analysis on the available variables to determine which factors are likely to be useful as performance indicators.
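As an illustration only (the data, column names, and volume cutoff below are hypothetical, not prescribed by PERC), the segmentation and summary-statistics step might be sketched in Python as follows:

    # A minimal sketch of peer-group segmentation, assuming unit-level
    # data in a pandas DataFrame; every value and cutoff here is
    # invented for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "unit":             ["A", "B", "C", "D", "E", "F"],
        "loan_count":       [1200, 950, 880_000, 1_100_000, 1500, 920_000],
        "delinquency_rate": [0.021, 0.034, 0.018, 0.026, 0.041, 0.022],
    })

    # Segment units into comparable peer groups by output volume so that
    # small and large portfolios are never measured against each other.
    df["peer_group"] = pd.cut(df["loan_count"],
                              bins=[0, 10_000, float("inf")],
                              labels=["small", "large"])

    # Per-group summary statistics show whether an indicator separates
    # the best and worst performers within each peer group.
    print(df.groupby("peer_group", observed=True)["delinquency_rate"]
            .agg(["mean", "std", "min", "max"]))

Within each peer group, the indicators whose distributions most cleanly separate strong units from weak ones are the candidates worth carrying forward into the scoring phase.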

Generally, ratios or weighted averages are used so that performance can be measured consistently across different output volumes. Even so, it is still critical to maintain valid peer groups; otherwise the results will be dominated by biases, correlations, and outliers, and the statistical distributions may become meaningless. For example, it does not make sense to compare the loan performance of banks with thousands of loans against those with millions of loans--not just because their internal operations may differ, but because their variances around expected norms are likely to differ as well.
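To make the point about ratios concrete, here is a tiny comparison with invented numbers:

    # Raw delinquent-loan counts are incomparable across portfolio
    # sizes; the delinquency ratio puts every unit on the same scale.
    portfolios = {
        "small_bank": {"delinquent": 30,     "total": 1_000},
        "large_bank": {"delinquent": 18_000, "total": 1_000_000},
    }
    for name, p in portfolios.items():
        print(f"{name}: ratio = {p['delinquent'] / p['total']:.1%}")
    # small_bank: ratio = 3.0%
    # large_bank: ratio = 1.8%

By raw count the large bank looks far worse; by ratio it is the better performer. Even then, as noted above, the two belong in different peer groups, because variance around the expected norm is likely to differ with volume as well.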

Although initial analysis assumes data availability and quality, there may be gaps that need to be addressed. Simple technical fixes can become huge obstacles when organizational boundaries have to be overcome--for example, not everyone has an incentive to deliver or share their highest quality data. Again, this is one place where executive endorsement can play an important role.

The end result of this phase of PERC will be the identification of several factors that have the potential to measure performance and predict positive or negative outcomes. While a few of these factors may be “inherited” and, some might argue, beyond a specific organization’s control, most should in fact be indicative of, or the direct result of, unit effectiveness and work performance.

Examples of Analytical Issues with Performance Indicators

Jim Boswell (while working with Ginnie Mae):

Every MBS loan portfolio monitored by Ginnie Mae has a market value, and when that market value falls below zero, a mortgage bank (or issuer) has an incentive to default and deliver its portfolio to Ginnie Mae (HUD) and, by proxy, the US taxpayer. Because Ginnie Mae was not yet collecting quarterly balance sheet and income data as the Savings & Loan crisis began, something else had to be done quickly—issuers had already started defaulting.

Using monthly accounting information reported on pooled securities, Jim quickly showed that high loan delinquency rates, when compared across peer groups, were a better leading indicator of whether an issuer would default than even the financial data. At first, Ginnie Mae was skeptical, but it quickly became convinced when other issuers that had been identified as troubled began defaulting, too. This inevitably led to the development of an “expert” scoring system (the next phase of PERC, discussed below).

Tristan Yates (while working at UUNET):

Early in the company’s life, UUNET secured large contracts to build out the online networks that would later become America Online and the Microsoft Network, two of the largest internet networks in the world. Network data was widely available, and Yates, as a side project to his primary job, created reports for management using PERC-type logic that showed gaps in network and service quality across several cities. But without executive endorsement, the managers responsible for the build-out questioned the indicators, the data quality, and the usefulness of the reports. History later showed that the quality problems did exist, but without proactive effort the costs to correct them were very high.

Later, these lessons were incorporated into a subsequent project to address the accelerating growth of support costs. Yates received an explicit endorsement from the Vice President of Technical Support and worked with individual managers to identify metrics relevant to their business areas. Workflow adjustments were made to improve the quality and consistency of the data, and, with the goal and process clear and the incentives aligned, managers embraced the new sources of information and used them to improve the performance of their individual units.

The Analytical and Scoring Phase

Once potential performance indicators have been identified, the next phase of PERC is to develop a performance rating system for the different peer groups or organizational units that need to be monitored. Let’s follow how the factors identified above can be combined into a single score that expresses performance.

To start, each performance indicator has its own average, standard deviation, minimum, and maximum within its peer group of organizational units, and every organizational unit has a raw value for each performance indicator. The raw values need to be converted into a scaled score, and there are many ways to do that; one approach, based on statistical variance, is explained here. Suppose the average of an indicator is 40 and the standard deviation is 5. A raw score of 50 (two standard deviations above the mean) might then be given a scaled score of 10, whereas any raw score under 30 (two standard deviations below the mean) might be given a scaled score of 0 (zero). Then, depending on one’s objective, raw scores between 30 and 50 can be scaled using a linear, standard deviation-based, or other logical approach.
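As a minimal sketch of the linear variant of this rule, using the illustrative numbers above (a mean of 40, a standard deviation of 5, and a 0-to-10 scale; the function name and parameters are ours, not part of PERC):

    # Linear scaling between mean - 2*sd (floor, scaled score 0) and
    # mean + 2*sd (ceiling, scaled score 10), per the example above.
    def scale_score(raw, mean=40.0, sd=5.0, span=2.0, top=10.0):
        lower = mean - span * sd   # 30 in the example
        upper = mean + span * sd   # 50 in the example
        if raw <= lower:
            return 0.0
        if raw >= upper:
            return top
        return top * (raw - lower) / (upper - lower)

    print(scale_score(50))   # 10.0 -- two standard deviations above the mean
    print(scale_score(40))   # 5.0  -- exactly at the mean
    print(scale_score(28))   # 0.0  -- below the floor

For an indicator where lower raw values are better, such as a delinquency rate, the scale would simply be inverted. Once every indicator has a scaled score, the scores can then be combined, for example as a weighted average, into the single unit-level score this phase aims to produce; the weights, like the scaling rule itself, are a design choice for the implementing organization.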


