Tuesday, December 15, 2009

Performance Management in Africa

I recently came across a seedling of performance management in Africa, of all places: the Africa Public Service Performance Monitoring and Evaluation Forum. Their only presence at the moment is on Facebook, but I encourage anyone interested to follow them online and help inform the discussion. It would be interesting if Africa became a hotbed of performance management discussion.

Wednesday, November 4, 2009

Software as a Service (SaaS) for Performance Management - Part 2

This is the second of a two-part post about Software as a Service (SaaS) for performance management. In the first part I provided an introduction to SaaS, defined what it is and touched on some of its applications to performance management. In this second part I'll detail the various benefits of SaaS including but not limited to cost-effectiveness, reliability, and ease of deployment.

SaaS Benefits

Here are the benefits of SaaS for Performance Management:


• Cost-Effective and Affordable

• Secure and Reliable

• Easy to Deploy

• Easy to Learn

• Low-Risk Investment

• Simple and Scalable

• Controls Costs


Cost-Effective and Affordable

How would a busy agency support a sophisticated technology used to manage its performance? It can rely on a SaaS vendor to maintain the technology. SaaS is the easiest way for government agencies that use technology to manage performance to get up and running quickly: all they need is a Web browser. For a low subscription cost, workers can enter, monitor, and report on performance metrics, and the public has Web-based access to reports. The agency has no server costs and no backups to maintain locally.

Secure and Reliable

Because SaaS vendors serve many clients, they invest more resources than a typical government agency in equipment and expertise to maintain reliability and security. Many SaaS providers leverage co-location datacenters, which offer highly secure, redundant hosting environments and off-site back-up systems. These are often “greener” solutions through the use of server virtualization and multi-tenant applications. Most SaaS providers offer service level agreements with information about privacy and data security, including: the physical security of servers and data, redundancy, back-up services, network security, and the location of the servers (hosting the application from at least two sites prevents loss of service in case of a fire, power outage, loss of Internet connectivity or other issues that typically cause a disruption in service).

Easy to Deploy

By leveraging cloud computing, SaaS applications can be turned on as needed because there is no client-side infrastructure to buy, install, configure, or maintain. When users log in they see what they need based on their role in the system, and the administrator simply adds new users by creating a new account.

Easy to Learn

SaaS helps government centralize information while distributing control. Instead of manually gathering metrics in separate formats, the information is entered in a consistent manner into a central repository. Workers and the public have role-based access to appropriate data and reports. They are empowered at the proper level with the needed data.

Low-Risk Investment

Because SaaS-based solutions deliver service online and share a common code base, setup and implementation for an agency are minimal. An agency can essentially test the service without a significant outlay of capital or time.

Simple and Scalable

The SaaS application can be rolled out to a few users or departments at a time; once others are ready, they can easily be added to the system. And because the SaaS provider maintains a single code base, improvements and fixes are made available to all customers, not just the ones that reported the issue.

Controls Costs

The greatest benefit for customers is the ability to control costs. The estimated price for an installed enterprise solution for a large government agency is more than half a million dollars, with at least a year of implementation time, plus annual line items for support agreements, dedicated full-time support staff, and hardware. With SaaS, the customer can start small and have a solution up and running in a matter of weeks. In addition, there is a predictable amount to pay each year (usually billed monthly).
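As a back-of-the-envelope comparison, here is a minimal sketch in Python. Every figure is invented for illustration, not a quote from any vendor:

```python
# Hypothetical five-year cost comparison, in dollars.
years = 5
enterprise = 500_000 + years * (50_000 + 90_000)  # license + support + staff
saas       = years * 12 * 3_000                   # monthly subscription

print(f"Installed enterprise: ${enterprise:,}")   # $1,200,000
print(f"SaaS subscription:    ${saas:,}")         # $180,000
```

The exact numbers will vary wildly by jurisdiction; the structural point is that SaaS converts a large up-front capital outlay into a predictable operating expense.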

The Real Opportunity Is Change

Transforming today’s challenges into opportunities requires smart leadership and strategic use of technology to save time and money while capturing the performance metrics that allow an organization to improve over time. A reliable software-as-a-service provider that specializes in and understands the unique needs of the performance management environment can help government produce better results today at a fraction of the cost of enterprise software. Software costs should never be a deterrent to developing your jurisdiction's performance management program.


Tuesday, October 27, 2009

Software as a Service (SaaS) for Performance Management - Part 1

This is the first of a two-part post about Software as a Service (SaaS) for performance management. In the first part I'll provide an introduction to SaaS, define what it is and touch on some of its applications to performance management. In the second part I'll detail the various benefits of SaaS including but not limited to cost-effectiveness, reliability, and ease of deployment.


The race is on for Government to focus on results, but the race is long and difficult because the path is fraught with challenges. New technologies promise to transform performance management processes, a great opportunity indeed, but often at a high cost and with a long implementation timeframe. SaaS solutions offer a cost-effective path to simpler performance management implementation.


Municipalities large and small need to discover Software as a Service (SaaS), referred to by some as applications that live in “the cloud” (also Web-based, online, or on-demand), as a way to save time and money without sacrificing the basic tenets of performance management: Results, Relevancy, Transparency, Timeliness, and Accuracy. SaaS also brings applications under a single platform, which makes performance management programs easier to implement and improves the ability to benchmark across jurisdictions. It is a technology trend that promises a simpler solution for implementing performance management programs.

SaaS Defined

SaaS, on-demand, and cloud computing have become common terms in the technology world. In one way or another, they refer to the growing trend of software vendors providing their applications over the Web as a service, rather than as a set of code to install on a local server or desktop computer. Some actually provide both solutions. With SaaS, customers tap into one code base that is refined and enhanced (producing better results for the users) based on feedback from all users. The whole is indeed greater than the sum of its parts. Organizations and/or individuals subscribe to the service and access it using a computer, laptop or even their mobile phone. These applications are known as on-demand, Cloud-based, Web-based, or online. Cloud computing is used by many SaaS providers and refers to massive server farms that host applications online for many customers. Cloud computing enables flexible processing power and storage capacity to scale up or down, depending on actual usage.


Almost everyone who uses the Internet today has used SaaS. Email programs such as Gmail, Hotmail, and Yahoo! Mail are examples of SaaS. What is really nice is that there is nothing to download or install: users simply create an account and log in through a Web browser. Popular social networking sites such as LinkedIn, Facebook, and Twitter also operate on a SaaS model.


SaaS as a Government Performance Management Tool

SaaS is ideal for governments seeking technology solutions to help them implement performance management programs. Because these programs often encompass many disparate agencies and divisions that can geographically span hundreds or thousands of miles, it is useful to have a centralized, common technology platform that many users can share. Even when some users are outside of your network, SaaS allows them to access your performance management data and application from anywhere. SaaS offers government users both simplicity and cost-effectiveness.

Wednesday, September 16, 2009

Updates on Recent and Upcoming Performance Management Events

It's been a little while, but I have a few upcoming posts on performance management. In the meantime, here are a few upcoming events that are of relevance in the government performance management realm:

The Community Indicators Consortium (CIC) holds its international conference in Bellevue, Washington this year on Oct 1-2. CIC is a great group, and Ben Warner runs an excellent blog on Community Indicators for anyone interested in this area. Community indicators are very similar to performance metrics, approached from a slightly more citizen-centric standpoint.

The Association of Government Accountants (or is it Accountability?) holds its Performance Management Conference on Nov 5-6 in Seattle, Washington. I plan to be there along with all the other party people involved in performance management.

If anyone made it to the ICMA conference this week, feel free to chime in with some comments. I missed it and am curious how prominently performance was discussed.

Finally, the Performance Management Commission released its Performance Management Framework for public review and, as expected, it's too long! Who's going to read this thing when it's more than 30 pages? It's too bad, because there are a lot of great concepts in it. For anyone who doesn't have the time to read such a document, I still advocate my own framework.

Sunday, August 9, 2009

The Future of Performance Management

I was at an AGA breakfast a few months ago and the point was raised that people in government had been talking about performance management for "some time", with few results. It got me to thinking about whether there is a future in performance management and what it might look like. A couple of publications I recently read struck me as two different thoughts on where we're headed in this field, one being an "academic" piece on performance management and the other more grounded in practical approaches. The first was a point/counterpoint look at the history of performance management (largely using examples from New York City) and how it might inform better practices in municipal management around the country. The counterpoint in the article focuses on some of the failures at the Federal level in creating a useful performance measurement framework. The authors, Dennis Smith and Beryl Radin, have a largely academic debate which at times gets muddled in teasing out the nuances of performance "management" vs. "measurement". But generally speaking it's a good article with a few highlights that I took from it:
  1. Cities like New York took decades to arrive at an effective performance management program, so we shouldn't be too hard on cities that don't get it right away.
  2. In the case of New York, moving from input/output-focused metrics to outcome-based measures seems to be the turning point at which measuring performance becomes effective.
  3. Local efforts seem to have been more effective than national efforts in this space.
There's a lot more to the article and it must be purchased from JPAM, but it's an excellent piece.

The other is a series published by the Urban Institute called "Legislating for Results", which offers a framework not only for performance management within individual agencies, but also a broader plan for managing an entire jurisdiction. What I liked about the series is that it's a practical, step-by-step guide to getting results from government (in contrast to more academic approaches in other pieces). One of the nuances that Urban employs in the series is using the term "information" synonymously with metrics/measures. In fact, what we're looking for from performance measures is nothing more than information, and the Urban plan emphasizes providing quality information to government decision makers in the hope of improving outcomes. My main problem with the piece is that it has many moving parts, incorporating budgeting, communicating with the media, etc. into the framework. As a whole the series is a little too much, but I see most of the value in the three pieces on getting the right information, getting quality information, and using that information for planning purposes.

From the two pieces, my takeaway is that there are clearly failures in performance management, but also some successes. Additionally, even though some academics have been talking about performance measures and management for "some time", the approaches are under constant revision and, dare I say, improving. The important thing is to continue to push jurisdictions to use performance measures, and to help those with existing performance management programs constantly improve them.

Monday, August 3, 2009

Where Are the Performance Metrics in the Recovery Act?

About a month and a half ago, OMB released the Federal reporting requirements for the American Recovery and Reinvestment Act. It's taken me a little while to look through the requirements, and I had read that the focus was on job creation, but I was still a little surprised at the lack of Federal interest in other performance metrics. It pretty much boiled down to "# of jobs created" and a fairly standard set of financial data points detailing how the funds were spent. The Federal government has always been a laggard in requiring its own agencies to adopt strong performance management practices, but it didn't seem out of the question that in doling out hundreds of billions of dollars it would put together a few other metrics to make jurisdictions more accountable. At the very least it would have gotten towns, cities, and counties in the habit of reporting performance data, and possibly started some of them down the road to a more substantial performance management program. Let's hope that future rounds of government reporting requirements contain a more diverse set of measures, or at least let jurisdictions name their own. It's one step toward a more ingrained culture of government performance management.

Saturday, July 18, 2009

Using Performance Metrics to Manage

In the final installment on a performance management framework, we'll look at using performance metrics and analysis to effectively manage agencies and their programs through remediation and corrective action. We've already covered the first and second steps, and will focus on the final two in this post (all four are listed just below):


1. Report performance metric data on pre-defined schedule.
2. Analyze data for troubling trends or missed targets. Operationally research root cause(s) of problems.
3. Provide corrective action for metrics where target was missed or data is trending in wrong direction.
4. Repeat process for next reporting period.

Assuming that reliable metrics have been gathered and reported, and that data trends have been analyzed, an agency should have a good idea about where it stands operationally. The question then becomes how best to use the new information. For example, if I'm an FEMS agency that knows my emergency response times are trending in the wrong direction, and I know that the problem lies somewhere in my call center, what's the next step? (I'll answer this in a minute.) Given the diversity of agency missions that exist within any government, it would be impossible to give specific guidelines on how to fix troubling trends. For the purpose of our framework, however, the important thing is that the information is used to formulate some plan of action, and that the plan of action is clear, has timelines, and is documented for future consideration. Maintaining documentation of attempted corrective actions can be particularly helpful when there are several options for remediation. Each option can be tried over a given reporting period and performance data can be tracked. If there is some improvement in the numbers, the corrective action was likely effective; if there is little or no improvement according to the data, then another option on the list may be your best bet. The important thing in documenting the remediation is not to spin your wheels by proposing the same corrective action repeatedly and expecting a different outcome with each successive attempt.
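What might that documentation look like in practice? As a minimal sketch (the structure and field names here are my own invention, not part of any published framework), a corrective-action log could be as simple as:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    """One documented remediation attempt for a single metric."""
    metric: str               # e.g., "Average EMS response time"
    action: str               # e.g., "Additional call-center training"
    started: date             # reporting period when the action began
    review_by: date           # when the data will be re-checked
    outcome: str = "pending"  # later set to "improved" or "no change"

log = []
log.append(CorrectiveAction(
    metric="Average EMS response time",
    action="Additional call-center training",
    started=date(2009, 7, 1),
    review_by=date(2009, 9, 30),
))

def already_failed(metric, action):
    """Guard against re-proposing a remediation the data already rejected."""
    return any(a.metric == metric and a.action == action
               and a.outcome == "no change" for a in log)
```

A shared spreadsheet with the same columns would serve just as well; the point is that every attempted action is recorded with a review date and an outcome, so nobody re-proposes a fix the data has already rejected.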

Going back to our emergency response example in which we assume that the call center has been identified as the source for deteriorating response times, there may be multiple options to improve performance, including additional training, process re-engineering, etc. There may not be an obvious "best" remedial option, but the important thing is to pick one and continue to track response times. If additional training was implemented but the trend is not reversed in response times, then lack of training can be eliminated as both the cause of the problem as well as a corrective action. Continue the cycle of capturing and reporting the metrics, but with a different corrective action this time. Perhaps the response process is streamlined or adjusted and overall times improve. We then have some indication that our proposed solution had a positive effect on the operations that we are tracking. Through trial and error in the corrective action process, while concurrently continuing to track and report data, any agency can improve effectiveness in its operations.

The important takeaway from this exercise is that in order to demonstrate marked improvement in any public sector operation or program, all steps in the framework that we've outlined here (and in past postings) must be followed. Tracking and reporting metrics without proposing and documenting remediation in trouble spots won't bring about the change in negative outcomes that most agencies are seeking. The feedback loop of track-report-remediate-repeat is the fundamental process behind our performance management framework, and is essential in solving government inefficiencies.

Tuesday, June 30, 2009

Dispatches from the Personal Democracy Forum

As part of a Google fellowship that I received on behalf of Public Performance Systems, I spent the last two days at the Personal Democracy Forum up in New York. I’ll finish up our discussion on a Performance Management Framework in the next few days, but I first wanted to report on a number of fascinating highlights and initiatives from the conference, which is meant to be a confluence of government, politics, and technology. Perhaps the most notable observation is that many people who were involved in using technology to bring Obama to the masses and to victory have transitioned into developing tools for governing, now that the election is over. It was a well-attended conference and there’s clearly a lot of interest in this space.

Several initiatives were launched or revised at the conference, at the Federal level from the likes of Vivek Kundra as well as at the city level from Mayor Bloomberg in NYC. New York will launch its Big Apps competition in the Fall to encourage developers to come up with interesting utilities that help the city provide its vast data sets to the public. Additionally, Kundra announced revisions to the data.gov website and highlighted updates including usaspending.gov and the new dashboard at it.usaspending.gov. The latter is a slick application and looks nice, although I’ve always contended that there is no shortage of dashboarding software out there; the truly difficult part in presenting information is improving data quality and feedback. Nevertheless, these were all great initiatives.

Another interesting set of initiatives is under way at blog.ostp.gov and mixedink.com/opengov, both meant to foster discussion of policy ideas among citizens. While these sites are limited to IT policy discussions, the question was raised whether we're moving in this direction for more general policy formulation (think policy formed through a wiki by citizens). This may be a little far-fetched, but it was well received by the technology community, which is clearly excited about playing a role in government initiatives that affect the use of data in government transparency.

While it wasn’t discussed as much from the stage, government accountability was a major theme, and I had discussions with several people about what exactly it means. Surprisingly few of them had much experience with performance management, but after I made my case several agreed that some level of gathering metrics to assess performance would be useful. Many were more focused on taking various disparate public data feeds and turning them into something useful. My own interest still lies in assisting governments in building those data sets to begin with.

It was a great experience and I hope to attend again next year. For a replay of some of the activities, check out personaldemocracy.com.

Sunday, June 14, 2009

Performance Management Analysis

Continuing our exercise in developing a performance management framework, a summary of the necessary steps for a successful program follows:

1. Report performance metric data on pre-defined schedule.
2. Analyze data for troubling trends or missed targets. Operationally research root cause of problems.
3. Provide corrective action for metrics where target was missed or data is trending in wrong direction.
4. Repeat process for next reporting period.

In previous posts I highlighted the importance of reporting performance metrics in a consistent and well-defined manner. In this post I'll cover the second part of a successful performance management program, analyzing the data from performance measures. A truly operational program must go beyond simple data reporting. Let's take response time from an Emergency Medical Services agency as an example. Is it enough to simply record and report response times? How do you know that what you're reporting is considered a "good" average response time? What percent should be below a specified time? Hard to tell without actually looking at the numbers. The data must be analyzed in the context of identifying troubling trends and researching the root cause of potential problems. All too often, data is reported to meet some external obligation and nobody even bothers to look at it! While complex data analysis is something of a science, there are many cases where any public sector employee can make good use of performance data with no training whatsoever. But to be most effective, it's necessary to record the findings of any current analysis for use in future situations. Raw data alone is useless to an organization, which makes a narrative of performance measures analysis (and eventually corrective action) a requirement for any successful performance management program.

First, don't focus on targets or absolute numbers when analyzing performance data. In most cases, analyzing trends over time is far more useful, and simple graphing exercises in a spreadsheet can highlight even modest improvements (or declines) in performance. This not only gives the layperson in government a powerful data tool, but also provides a baseline of data from which to operate. By focusing on trends, all agencies can feel as if they're operating from their current baseline, no matter how poor it may be. What's more important: that incident response times in Emergency Services are improving over time, or that they're hitting some arbitrary target? By graphing trends over time, most agencies will have both a powerful tool to evaluate their activities and a starting point for a performance management program without enduring criticism over missed targets. This isn't to say that looking at performance data against specific targets isn't useful, particularly when proposing new initiatives within an agency (especially those with budgetary impact). Hard targets may be necessary to justify an initiative's or project's cost and can be used in declaring the initiative a success.
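Returning to the graphing exercise: it really is this simple. Here's a minimal sketch in Python, with invented numbers standing in for an agency's monthly data:

```python
# Monthly average EMS response times in minutes (invented for illustration).
response_times = [8.2, 8.4, 8.1, 8.6, 8.9, 9.1, 9.0, 9.4]

def moving_average(data, window=3):
    """Smooth month-to-month noise so the underlying trend stands out."""
    return [sum(data[i:i + window]) / window
            for i in range(len(data) - window + 1)]

trend = moving_average(response_times)
print("3-month moving average:", [round(t, 2) for t in trend])
print("Trend vs. baseline:",
      "worsening" if trend[-1] > trend[0] else "improving or flat")
```

The same few lines that a spreadsheet chart does graphically: smooth out the noise and ask whether the latest period is better or worse than the baseline.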

Second, all performance data should be analyzed within the context of whether a significant change is due to real improvement in programs and outcomes, or to some other lurking variable. In my own experience, improvements and challenges are often the result of poor data collection processes or errors, and not of material changes in performance. Always rule out data anomalies first when a particular data point is out of the ordinary (graphing the data often makes these easy to spot). In our Emergency Services example, if we're looking at average response times, a few wild outliers could raise the overall average significantly. What if the outliers were due to a faulty response report or some change in staffing that results in data not being reported properly? Government services are fairly stable over time, and one would expect the data to reflect this.
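A quick illustration of how much a couple of bad records can distort an average (the numbers are invented; imagine two responses logged in seconds but read as minutes):

```python
import statistics

# Nine typical responses plus two faulty records (seconds read as minutes).
times = [8.1, 8.3, 7.9, 8.2, 8.4, 8.0, 8.3, 8.1, 8.2, 480.0, 495.0]

print(f"Mean:   {statistics.mean(times):6.1f} min")    # dragged to ~95 minutes
print(f"Median: {statistics.median(times):6.1f} min")  # stays near 8 minutes
```

Comparing the mean against the median, or simply plotting the points, flags the anomaly before anyone concludes that performance collapsed.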

Finally, assuming that there is some real difference in trends and that data issues are not the underlying reason, find out operationally exactly what the reason is. This is the most difficult part and often entails getting into the "weeds" of the organization. Operations analysts make a living out of improving processes in an organization, but managers with relevant knowledge of an organization should be able to get behind the numbers just as easily to understand what's happening. In these cases, several different metrics may be used together to understand what's affecting trends. Going back to our response time example, let's assume we measure both the total number of EMS responses in a month and the average response time. If some disaster in a given month caused responses to rise, and we consistently measure this number, it may be a telling data point in explaining why average response times increased: the added stress on staff. Using multiple measures can help substantiate explanations for certain trends in the analysis phase.
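To make the multiple-measures idea concrete, here is a small sketch, again with invented data, that checks whether monthly call volume moves together with average response time:

```python
# Invented monthly data; the spike in calls represents a hypothetical disaster.
calls     = [900, 950, 920, 1400, 1350, 980]
avg_times = [8.1, 8.2, 8.0, 9.6, 9.4, 8.3]  # minutes

def pearson(x, y):
    """Pearson correlation: +1 means the two measures rise and fall together."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Call volume vs. response time: r = {pearson(calls, avg_times):.2f}")
```

A correlation near 1 doesn't prove causation, but it supports the narrative that the volume spike, not a process failure, drove response times up.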

These are just a few examples of good analysis techniques, but there are many ways to slay this dragon. I reiterate, however, the importance of recording any and all analysis that is done in order to create a running narrative to be used in future analysis of why data is trending the way it is. We'll get into the corrective action phase in the next post, but reporting a relevant set of performance metrics followed by analyzing the data are two huge steps towards an effective performance management system.

Sunday, May 31, 2009

Performance Management Reporting

In my previous post I outlined a four-step process for a successful performance management program. To recap:

1. Report performance metric data on pre-defined schedule.
2. Analyze data for troubling trends or missed targets. Operationally research root cause of problems.
3. Provide corrective action for metrics where target was missed or data is trending in wrong direction.
4. Repeat process for next reporting period.

I'll focus on the first step in this blog post as part of the overall attempt to develop a performance management lifecycle outline. Most of the setup work in a successful performance management program will be in this area. Each metric should have a well-defined set of counting rules, a methodology for collecting data, and a reporting period. Other attributes such as priority, stakeholders, etc. may also be important, but the core is in the definition, methodology, and reporting period. Depending on the type of agency, there are a number of pre-defined definitions and counting rules (for an example, see this previous post), so there is no need to re-invent the wheel if those measures are agreeable. Data systems will vary by jurisdiction, and the methodology will depend on each jurisdiction's ability to generate performance data.
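As a sketch of what capturing those core attributes might look like (the record structure and field names are illustrative assumptions, not drawn from any standard), each metric could be registered along these lines:

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """The minimum a metric needs before reporting can begin."""
    name: str              # e.g., "Average EMS response time"
    counting_rules: str    # exactly what is (and is not) counted
    methodology: str       # how and where the data is collected
    reporting_period: str  # "monthly", "quarterly", "semi-annual", ...

ems_response = MetricDefinition(
    name="Average EMS response time",
    counting_rules="Dispatch to on-scene arrival, priority-1 calls only",
    methodology="Pulled from computer-aided dispatch timestamps",
    reporting_period="monthly",
)
```

Writing the counting rules down this explicitly is what makes the numbers comparable from one reporting period, and one jurisdiction, to the next.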

Once the difficult part of defining the metric and its data is complete, requiring managers to report their data on a regular time frame is essential to a successful program. The time frame should be regular, and the required metrics should not change often. Getting managers to buy into the program will depend on the level of effort and predictability in each reporting period. If they are responsible for a large number of metrics, then a less frequent reporting period is useful (annual or semi-annual). The trade-off is slower feedback when metrics take a turn for the worse or when new initiatives are launched. For fewer measures, more frequent reporting (monthly or quarterly) is helpful in root-cause analysis and less burdensome as well. The important thing to remember is that reporting performance data takes time and resources, and managers will grow resentful of heavy, frequent reporting requirements, particularly if the benefits are not apparent.

After data is reported it will often need some "scrubbing" for any errors prior to undergoing step #2 above, trend analysis. We'll look at the specifics of that in a future post, but the important takeaway here is to make performance reporting well-defined and as simple as possible for relevant managers. This will help ensure that the agency has the performance metrics necessary to make data-driven decisions.
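As for the "scrubbing" step, it can start as simply as rejecting records that fail the metric's own counting rules. A hypothetical sketch (the validity range and values are invented):

```python
def scrub(records, low=0.0, high=120.0):
    """Split reported response times into usable values and suspect ones."""
    clean, suspect = [], []
    for value in records:
        (clean if low <= value <= high else suspect).append(value)
    return clean, suspect

reported = [8.2, 7.9, -1.0, 8.4, 480.0, 8.1]  # two entries fail the rules
clean, suspect = scrub(reported)
print("usable:", clean)          # [8.2, 7.9, 8.4, 8.1]
print("needs review:", suspect)  # [-1.0, 480.0]
```

Suspect values go back to the reporting manager for correction rather than silently into the trend analysis.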

Sunday, May 17, 2009

A Public Sector Performance Management Methodology

Performance management and measurement have taken on a number of different meanings with regard to application in the public sector. In some cases it's regarded strictly as data reporting and in others it takes on a more qualitative form. It may be useful to start a dialogue on coming up with an actionable, consolidated set of objectives and practices to better define what is meant by government performance management. Future posts will break down how to achieve each point in the outline I present here, along with an attempt at a comprehensive methodology for designing a performance management system. The hope is that the system not only provides data, but also a practical management tool for government leaders. First, a list of the stakeholders and their interests in a performance management system:

1. Government Administrators - Information and tools to help manage the day-to-day operations of their jurisdiction as well as to inform policy formulation. The ability to communicate operational data to the public.
2. Public Interests - Actively engaged citizens interested in keeping track of the services that their government is providing.
3. Academic Interests - Research groups hoping to harness public data for academic studies used in policy formulation.

The question then becomes how to organize a program that will meet the interests of all three stakeholders. Part of the current difficulty in getting governments to report performance data has been that guidelines have largely been written by external groups for the sake of providing data to a third party (i.e., academics, research orgs, etc.), with few clear, tangible benefits for the governments providing the data. There is always the promise of benchmarked data, the ability to compare metrics across jurisdictions, and so on, but the governments providing the data are interested in more immediate benefits. I propose a simple system that is standard practice in the private sector but seems to have only recently crept into the public sector:

1. Report performance metric data on pre-defined schedule.
2. Analyze data for troubling trends or missed targets. Operationally research root cause of problems.
3. Provide corrective action for metrics where target was missed or data is trending in wrong direction.
4. Repeat process for next reporting period.

And in its simplicity, the above process satisfies all three stakeholders. The government has a running narrative of operational data and the policies and actions it is undertaking for improvement. The public has access both to the data on services that it needs and to information on government policies. And assuming the jurisdiction picked a standardized set of metrics, academic groups will have access to data for research purposes. Everybody wins!
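To show how little machinery the loop requires, here is a toy sketch of the four steps (the metric names, data, and threshold are all invented):

```python
def flag_troubling_trends(history, threshold=0.05):
    """Step 2: flag any metric whose latest value worsened by > 5%."""
    return [name for name, values in history.items()
            if len(values) >= 2 and values[-1] > values[-2] * (1 + threshold)]

# Step 1: data reported on schedule (here, two periods; higher = worse).
history = {
    "avg_response_time_min": [8.2, 9.1],
    "pothole_backlog": [120, 118],
}

# Step 3: each flagged metric gets a documented corrective action.
for metric in flag_troubling_trends(history):
    print(f"{metric}: investigate root cause, assign corrective action")

# Step 4: append next period's numbers to history and run it all again.
```

The point isn't the code; it's that steps 2 through 4 are mechanical once step 1 delivers consistent data.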

The above is a simplification of a system that I will flesh out further in future posts, but the idea is to plant the seed of thought. I've looked at various websites and have yet to find this sort of methodology being advertised on government sites and it would be interesting to see it in practice (though I in no way take credit for this as an original idea. It's basic root-cause analysis. The hope is to find tools relevant to the public sector to implement said analysis). As always, I welcome feedback on this concept, and look forward to providing more detail.

Monday, April 27, 2009

The "Fear" of Performance Management

In a discussion with a colleague recently, the topic of why more governments don't have an active performance management program came up. I will admit that the discussion was more speculative than scientific, but we generally agreed that many jurisdictions and agencies likely don't implement performance management programs out of fear, both of what they might find and of how the reported data might be used against them (the remainder of non-practitioners likely have no idea what it means!). There are stories of early CompStat meetings in New York City in which supervisors were skewered based on crime stats in their area (although later accounts suggested a softening in tone), and perhaps this is what government managers reference when thinking about reasons not to pursue performance management programs. But anyone focused on that aspect is ignoring the second part of that story, which is the potentially positive impact of programs like CompStat.

New York City has one of the lowest violent crime rates and the lowest property crime rate of large cities in the US. This is in stark contrast to the late 80s, when it had one of the highest. The rates dropped precipitously throughout the 1990s, around the same time that CompStat came into being. While this relationship may very well be spurious, I imagine that the new management style in the police department had at least SOME effect on crime outcomes. I use CompStat as an isolated case of one agency's efforts, but it would be an interesting exercise to look at cities and states with advanced performance management programs (Atlanta, Albuquerque, and North Carolina spring to mind) and analyze the short- and long-term impacts of those programs, both socially and politically. Such an analysis could help cities that "fear" these programs by demonstrating their long-term value. Certainly, better data and analysis will surface some painful short-term realizations of inefficiency, but several case studies on improved outcomes might put those fears to rest.

Tuesday, April 21, 2009

Citizen Awareness of Government Performance Measures

One aspect of the performance reporting cycle that I don't see discussed much is the role of the citizen in holding governments accountable for tax dollars. There are often waves of citizen anger over an isolated project or issue that becomes a lightning rod for criticism of government waste, but few groups seem to focus on the need for evaluating the performance of governments as a whole. A quick Google search for "citizens for government accountability" did not seem to yield much in the way of citizens interested in government performance metrics (but did reveal a lot of anger about other issues!). There are some organizations that I have mentioned in previous posts with an interest in this area, but they are more academic than citizen-focused (I forgot to mention in previous posts the Performance Institute, which offers a tremendous array of workshops and forums on government performance).

Ultimately I believe that the success or failure of performance management in the public sector will be attributable to citizen engagement in the topic. Many people have grown used to easily finding crime statistics for a given neighborhood and would certainly notice if the resource were taken away. It will be interesting to see whether that translates into attention to whether the trash is collected, whether equipment at the local park is fully operational, or whether the number of potholes in the streets is going down and not up. A notable paper by the Urban Institute addressed this point back in 2000, but it would be nice to see an update in this area. Most experts will admit that performance measurement has improved a great deal in the last 10 years, but the real question might be whether the average citizen cares, and if not, what the impact on this area of government will be in the future.

(As an aside, I'm simultaneously writing this post and watching a special on Federal attempts to clean up the Chesapeake and other waterways. It led me to the EPAStat quarterly report, which unfortunately doesn't tell me if the fish or crabs are coming back to the estuary, but does tell me how many Chesapeake Bay Significant Discharge Permits were issued. With uninformative data like this, it's no wonder it's hard to engage people in performance management.)

Sunday, April 19, 2009

Will New Federal CIO, CTO Change Performance?

There are several new faces in DC and it will be interesting to see what their impact on Federal performance management will be. The previous announcement of Vivek Kundra as the nation's CIO was followed by this week's announcement that Aneesh Chopra will be the nation's CTO and Jeffrey Zients the nation's Chief Performance Officer. This is a high-powered team of professionals who have worked in city government, state government, and the private sector and have been tasked with working together on bringing accountability back to the Federal government. My hope is that they will take a performance-metric approach to accountability and ensure that Federal agencies have comprehensive performance plans that can be used to analyze effectiveness. I will post articles and comments on their progress moving forward.

Friday, April 3, 2009

Benchmarking Within a State

I mentioned in a previous post the attempts by ASCA to come up with benchmarking measures within the corrections industry. There are several other groups attempting to benchmark data points within an industry (see HSRI Core Indicators for one example), but a couple of states have focused on an intra-state approach: benchmarking across all government areas, but within the state. North Carolina and Florida have both embarked on projects to define metrics across jurisdictions in their states for the purpose of benchmarking against each other over time. It will be interesting to see if there is any consolidation among the varying groups attempting to come up with common performance metrics, be they states, industries, or professional organizations/commissions (such as the Performance Management Commission). While it may make sense to take relevant measures from each group, it may not be feasible to incorporate all of them for fear of drowning agencies in performance measurement reporting.

Sunday, March 29, 2009

Comparison of Federal Performance Plans: HHS vs. DOJ

Browsing through several different Federal agency performance reports online, I found some clear leaders in terms of quality of measures and others that face real challenges. In municipal performance reporting, police departments have often led the way in statistical performance data, largely because crime information is well-defined and among the most closely monitored by citizens. Social service agencies typically lag behind their public safety counterparts with respect to municipal performance metrics. At the Federal level, it appears that the opposite is true.

The HHS Administration for Children and Families website has the agency's performance reports going back to 2000. A look at the HHS ACF 2008 performance report shows some decent measures (many are outcome-oriented, a break from the output-focused measures of many agencies) that have been created by ACF, as well as what looks like honest reporting of those measures. I assume honest reporting based on the fact that the agency has met fewer of its targets over the last several years. The measures appear well thought out, and given the honest reporting of missed targets, the data appears to be reliable. Additionally, the reports have gotten shorter each year since 2000. That's probably a good thing, both for the public actually reading the document and for the agency focusing more intensely on a narrow set of measures and goals. If you can't explain a measure on a cocktail napkin, it's less likely to be reported and recorded accurately over time.

On the other side of the spectrum is the DOJ Performance and Accountability Report for 2008. The report is a combination of performance and financial data and comes in just under a whopping 300 pages. The parts that focus on performance metrics are lacking in a number of ways. Most are output-oriented and don't inform management or the public as to the effectiveness of the agency. (One of the measures, "Terrorist Acts Committed by Foreign Nationals Against U.S. Interests", is zero in most years, with 2001 the notable exception, and would almost certainly be known without being included in the performance report. The measure is neither informative nor a helpful management tool.) While municipal law enforcement agencies have made great strides in performance reporting, the Federal-level agencies seem to be just getting their feet under them. One reason may be the difficulty of attributing crime and arrest rates to a Federal agency whose jurisdiction is the entire country. Most responsibilities of the agency are shared with state and local jurisdictions, yet the DOJ has little or no control over those agencies. Regardless, there are almost certainly some more worthwhile performance measures that the DOJ could come up with. If not, how can its effectiveness truly be measured? A good start would be to narrow the report into something more digestible that might actually be used in agency management.

Wednesday, March 25, 2009

State Correctional Group Leading the Way In Performance Measurement

An excellent example of a national group attempting to standardize measures across an industry is the Association of State Correctional Administrators. They have an ambitious project, the Performance Based Measures system, that seeks to standardize measures within corrections through strict definitions and counting rules for events such as assaults, accidents, health-related incidents and many others. Information on the project can be found here.

I was fortunate to attend a training session for the program and I have to say that it was really impressive. The counting rules and definitions manual is thorough and fairly clear (and as thick as a dictionary) and can be found here. They even have a Web-based tool for jurisdictions to enter their data. The biggest problem I could see with their efforts was the general lack of sophistication of a number of state and local corrections agencies. Several of the jurisdictions I talked to simply didn't have the data to fill out the requirements of the PBMS system.

Still, this could be a great starting point for other areas of government such as health care, human services, parks and recreation, etc. to set up their own set of performance measures. It will not be easy, though. The PBMS system was started in 2001 and took more than 6 years to finalize and get a production application running.

Good newsletters and sources of information

If you haven't already, sign up for the free ICMA newsletter on performance management here.

And check in periodically to see what's going on with the Performance Management Commission's efforts at: http://pmcommission.org/

Intro to Measures Matter

This blog is meant as a public discussion forum for anyone interested in sharing their thoughts on the topic of government performance metrics and management. Increased public oversight has called for greater use of metrics and data in evaluating the effectiveness of government agencies. The trend thus far has been a ground-up approach to the use of performance data, as programs like CompStat in New York City, CitiStat in Baltimore and CapStat in Washington DC have been successful in incorporating performance data into their operations. States and the Federal government have been slower to adopt performance management systems, but the call for greater accountability of tax expenditures in the recent Federal stimulus package creates an excellent opportunity for advances in this area.

In addition, national initiatives to standardize metrics across municipalities such as the ICMA Center For Performance Measurement and the National Performance Management Advisory Commission have gained some momentum. The idea of having a standard set of well-defined measures for all public entities would provide the ability to benchmark across jurisdictions.

The struggle still lies in changing government culture and convincing decision makers that this industry is no different from the private sector. While performance can't be rolled into the same ideas of expenses and profits, there are ways to objectively evaluate effectiveness in the public sector through a well-thought-out performance management plan. Public managers can evaluate programs and determine corrective action based on metrics and data, not just conjecture and qualitative assessments.

I hope that you will join me in improving the discussion surrounding this area and will share this blog with your friends. Please feel free to post comments and if you think you would like to contribute to the blog feel free to contact me.