Sunday, June 14, 2009

Performance Management Analysis

Continuing our exercise in developing a performance management framework, a summary of the necessary steps for a successful program follows:

1. Report performance metric data on a pre-defined schedule.
2. Analyze the data for troubling trends or missed targets, and operationally research the root cause of any problems.
3. Provide corrective action for metrics where the target was missed or the data is trending in the wrong direction.
4. Repeat process for next reporting period.

In previous posts I highlighted the importance of reporting performance metrics in a consistent and well-defined manner. In this post I'll cover the second part of a successful performance management program: analyzing the data from performance measures. A truly operational program must go beyond simple data reporting. Let's take response time from an Emergency Medical Services agency as an example. Is it enough to simply record and report response times? How do you know that what you're reporting is considered a "good" average response time? What percent should be below a specified time? Hard to tell without actually looking at the numbers. The data must be analyzed in the context of identifying troubling trends and researching the root cause of potential problems. All too often, data is reported to meet some external obligation and nobody even bothers to look at it! While complex data analysis is something of a science, there are many cases where any public sector employee can make good use of performance data with no training whatsoever. But to be most effective, it's necessary to record the findings of any current analysis for use in future situations. Raw data alone is useless to an organization, which makes a narrative of performance measure analysis (and eventually corrective action) a requirement for any successful performance management program.
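To put some numbers behind the questions above, here's a minimal sketch in Python of computing an average response time and the percentage of responses under a specified threshold. The response times and the 8-minute threshold are purely hypothetical, invented for illustration; a spreadsheet formula would do the same job.

```python
# Hypothetical EMS response times for one month, in minutes.
response_times = [6.2, 7.5, 5.9, 8.4, 6.8, 7.1, 9.0, 6.5, 7.8, 6.0]

target_minutes = 8.0  # illustrative threshold, not an official standard

average = sum(response_times) / len(response_times)
pct_under_target = 100 * sum(1 for t in response_times if t <= target_minutes) / len(response_times)

print(f"Average response time: {average:.1f} minutes")
print(f"Responses at or under {target_minutes} minutes: {pct_under_target:.0f}%")
```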

First, don't focus on targets or absolute numbers when analyzing performance data. In most cases, analyzing trends over time is far more useful, and simple graphing exercises in a spreadsheet can highlight even modest improvements (or declines) in performance. This not only gives the layperson in government a powerful data tool, but also provides a baseline of data from which to operate. By focusing on trends, every agency can feel as if it's operating from its current baseline, no matter how poor that baseline may be. What's more important: that incident response times in Emergency Services are improving over time, or that they're hitting some arbitrary target? By focusing on graphing trends over time, most agencies will have both a powerful tool to evaluate their activities and a starting point for a performance management program, without enduring criticism over missed targets. This isn't to say that looking at performance data against specific targets isn't useful, particularly when proposing new initiatives within an agency (especially those with a budgetary impact). Hard targets may be necessary to justify the initiative's or project's cost and can be used in declaring the initiative a success.
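To make the trend idea concrete, here's a minimal sketch of the kind of "graphing exercise" described above, written in Python with matplotlib (any spreadsheet chart would work just as well). The monthly figures are invented; the point is only that a simple line chart makes a gradual improvement visible even when no single month looks dramatic.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly average response times (minutes) over a year.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
avg_response = [8.1, 8.0, 7.9, 8.2, 7.8, 7.6, 7.7, 7.5, 7.4, 7.5, 7.3, 7.2]

# A plain line chart highlights the slow downward trend at a glance,
# without any reference to a hard target.
plt.plot(months, avg_response, marker="o")
plt.ylabel("Average response time (minutes)")
plt.title("EMS average response time by month")
plt.show()
```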

Second, all performance data should be analyzed within the context of whether a significant change is due to real improvement in programs and outcomes, or whether there is some other lurking variable. In my own experience, improvements and challenges are often the result of poor data collection processes or errors, not of material changes in performance. Always rule out data anomalies first when a particular data point is out of the ordinary (many times these will be easy to spot simply by graphing the data). In our Emergency Services example, if we're looking at average response times, a few wild outliers could bring up the overall average significantly. What if the outliers were due to a faulty response report or some change in staffing that results in data not being reported properly? Government services are fairly stable over time, and one would expect the data to reflect this.
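Here's a quick sketch, with made-up numbers, of how much a couple of wild outliers can pull up an average, and why comparing the mean against the median is a cheap sanity check before concluding that performance actually changed.

```python
from statistics import mean, median

# Hypothetical response times (minutes); the last two are bad records,
# e.g. a faulty response report where the clock was never stopped.
clean = [6.5, 7.0, 6.8, 7.2, 6.9, 7.1, 6.7, 7.3]
with_outliers = clean + [45.0, 60.0]

print(f"Mean without outliers:    {mean(clean):.1f} min")
print(f"Mean with two outliers:   {mean(with_outliers):.1f} min")
print(f"Median with two outliers: {median(with_outliers):.1f} min")

# A large gap between the mean and the median hints that a few data points,
# not a real change in performance, are driving the number.
```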

Finally, assuming that there is some real difference in trends and that data issues are not the underlying reason, find out operationally exactly what the reason is. This is the most difficult part and often entails getting into the "weeds" of the organization. Operations analysts make a living out of improving processes in an organization, but managers with relevant knowledge should be able to get behind the numbers just as easily to understand what's happening. In these cases, several different metrics may be used to understand what's affecting trends. Going back to our response time example, let's assume we measure both the total number of EMS responses in a month and the average response time. If some disaster in a given month caused responses to spike, and we consistently measure this number, it may be a telling data point in explaining why average response times increased: the added call volume puts extra stress on staff. Using multiple measures can help substantiate explanations for certain trends in the analysis phase.
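As a rough illustration of using a second measure to explain a trend in the first, the sketch below lines up monthly call volume against average response time. Both series are invented; the only point is that putting the two metrics side by side makes it easy to see a volume spike coincide with a jump in response time.

```python
# Hypothetical monthly figures: total EMS responses and average response time.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
total_responses = [410, 395, 402, 388, 640, 415]   # May spike, e.g. a local disaster
avg_response_min = [7.1, 7.0, 7.2, 7.0, 9.4, 7.3]

mean_volume = sum(total_responses) / len(total_responses)

print(f"{'Month':<6}{'Responses':>10}{'Avg (min)':>11}")
for m, n, t in zip(months, total_responses, avg_response_min):
    # Flag months with call volume well above the period average.
    flag = "  <-- volume spike" if n > 1.3 * mean_volume else ""
    print(f"{m:<6}{n:>10}{t:>11.1f}{flag}")
```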

These are just a few examples of good analysis techniques, but there are many ways to slay this dragon. I reiterate, however, the importance of recording any and all analysis that is done, so that a running narrative exists to inform future analysis of why the data is trending the way it is. We'll get into the corrective action phase in the next post, but reporting a relevant set of performance metrics and then analyzing the data are two huge steps toward an effective performance management system.
