Evaluating organizational effectiveness

Tyler Cowen writes,

The US funds more science research than any other country — about $35 billion per year on the NIH and $8 billion per year on the NSF. How exactly do these institutions work? How have they changed over time and have these changes been for good or bad? Based on what we now know, how might we better structure the NIH and NSF? What experiments should we run or what kind of studies should we perform?

This is the first in a long and varied list of areas he thinks are worthy of further study. One more example:

Indonesia is a large, populous middle-income country. It faces no major near-term security threats. It has a small manufacturing base and no major non-commodity export sectors. What is the best non-bureaucratic 10 page economic development briefing document and set of prescriptions that one could write for Indonesia’s president? For Indonesia, substitute Philippines, Chile, or Morocco.

Many of the topics in Tyler’s list involve attempts to improve or evaluate organizational effectiveness. I would say that in evaluating an organization, one should look for the common flaws listed below, and give high marks to organizations that are able to avoid these pitfalls.

1. A good mission statement will serve to narrow the purpose of an organization. It will remind everyone what the organization will not attempt to do. In badly-run organizations, the scope of the organization is unclear.

2. The organization should have a formal planning process. About once a year, or once every other year, the organization should evaluate past performance and set future goals. Middle management as well as top management should be involved in this planning process, in order to try to achieve alignment between strategic goals and departmental activities. In badly-run organizations, departments run on auto-pilot without any strategic direction.

3. Borrowing terminology from Morrisey et al., the planning process should include Key Results Areas and Indicators of Performance. For example, a city could have a Key Results Area of reducing traffic congestion, and an Indicator of Performance that is the number of workers able to commute during rush hour in less than 30 minutes. Middle managers strongly resist KRAs and IOPs. Instead, they prefer to be measured on the basis of activities: how many traffic lights they installed, or how many potholes they filled. A grant-making organization that measures how many grants get approved, rather than anything related to the results of those grants, is operating on auto-pilot. In badly-run organizations, departments do not articulate meaningful KRAs and IOPs.

4. Organizations need to periodically adjust their incentive systems. Top management wants maximum effort with minimum outlays. Employees and other recipients of funds want the opposite. Over time, the compensation system degrades, due to changes in organizational goals and due to recipients learning how to game the system. Badly-run organizations leave ineffective compensation systems in place.

5. Some departments or projects falter. Can the floundering projects or departments be put back on track at a reasonable cost? If not, then they probably should be shut down. Badly-run organizations are unwilling or unable to identify and deal with low-achieving activities.

6. Organizations need periodic adaptation, including restructuring. The environment changes–think of the effect of new computer and communications technologies on many areas. Badly-run organizations fail to adapt to changes.
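The distinction in point 3 between outcome indicators and activity metrics can be sketched as a toy data model. This is purely illustrative: the class and field names are my own invention, not terminology from Morrisey et al., and a real performance-management system would be far richer.

```python
from dataclasses import dataclass


@dataclass
class Indicator:
    """A single Indicator of Performance attached to a Key Results Area."""
    name: str
    measures_outcome: bool  # True = a result (e.g. commute times);
                            # False = an activity count (e.g. lights installed)


@dataclass
class KeyResultsArea:
    """A goal plus the indicators used to judge progress toward it."""
    goal: str
    indicators: list  # list of Indicator

    def outcome_indicators(self):
        # A KRA with no outcome indicators is the "auto-pilot" failure
        # mode described in point 3: activity is measured, results are not.
        return [i for i in self.indicators if i.measures_outcome]


# The traffic-congestion example from the text:
kra = KeyResultsArea(
    goal="Reduce traffic congestion",
    indicators=[
        Indicator("workers commuting under 30 minutes at rush hour", measures_outcome=True),
        Indicator("traffic lights installed", measures_outcome=False),
        Indicator("potholes filled", measures_outcome=False),
    ],
)
```

On this toy model, an evaluator could flag any department whose KRAs contain only activity counts (an empty `outcome_indicators()` list) as running on auto-pilot.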

My guess is that you could use this framework to evaluate many of the institutions mentioned in Tyler’s list. But in the case of government agencies or non-profits, will such evaluation make a difference?

7 thoughts on “Evaluating organizational effectiveness”

  1. A lot of your prescriptions are some version of “Saying No” well.

    #1, 5 for sure.

    #3, 4 partly.

  2. I have worked in DoD my entire life, mostly in operational force headquarters at the Corps and echelon-above-Corps level. Measuring government performance in the defense sector isn’t helpful, both because of the lack of meaningful metrics for evaluating performance and because of the way organizational resourcing incentives are structured (organizations are penalized for not spending all of the resources allocated to them each year).

    That said, I have observed an inverse relationship between the size of a headquarters element and its proximity to combat operations; headquarters grow exponentially larger the farther they are removed from the kill chain. I used to believe this was due to the complexity of the tasks performed. Instead, I now realize it is bureaucracy run amok.

    (Read General James Mattis’s book, Call Sign Chaos, to understand his thinking on headquarters size and its impact on operational decision making.)

  3. Sadly, all of those pitfalls definitely characterize my organization at the top, with things getting slightly better the further down you go, due not to better design or management, but to the automatic consequences of having fewer people work on more specialized missions with progressively narrower scope.

    There are regular planning processes, assessments, reorganization proposals, budget reallocations, etc., but the trouble is that these have all become burdensome, check-the-box, hollow rituals that amount to nothing more substantial than a blessing, via ‘authoritative sanctification’, of either the status quo or what the top folks want (often for their own personal reasons, different from those corresponding to organizational efficiency and effectiveness).

    Design is just hopeless in cases like these, and one can only rely on creative destruction when there is variance and competition, or else on getting lucky to have a leader (funded and empowered) with almost eccentric characteristics of being internally and obsessively driven to demand or achieve excellence with the determination of Ahab going after his whale.

    If one looks back at the rare impressive successes in Communist systems that weren’t just the result of stealing success from elsewhere, one always finds such a rare (thus indispensable) individual at the heart of the effort and project.

    These people seem to have a psychological quirk that makes them less sensitive to the ordinary temptations of bureaucratic corruption: time-serving apathy and maximizing one’s personal career prospects. They really start to identify with and attach their pride to the project itself, occasionally to a fault, as with Moby-Dick or The Bridge on the River Kwai, or throwing too much money and time into hobbies.

    These scarce bits of human capital constitute the real wealth of nations. If an organization doesn’t have one near the top, then absent adequate substitutes for motivation, the organization will tumble into the pitfalls.

  4. Mercatus used to do an annual evaluation of federal agencies’ Government Performance and Results Act (GPRA) reports; GPRA is essentially a codified version of items 1-3, 5, and 6 above. That was useful, because GPRA has been a colossal failure: agencies game the reports and change metrics regularly to obfuscate their shortcomings. Their performance plans are full of buzzwords and do not narrow missions in the least. The federal pay system is of course one of the greatest scandals ever, pure autopilot with no attempt at all to manage payroll. And I completely concur with the observations above on metastasis in headquarters. In research, as in defense and everything else the feds do, the ratio of overhead to line spending is the most sad and pathetic in the world. Trump and Mulvaney were supposed to take on this problem, but if there has been any progress, it has been minuscule. A radically pragmatic overhaul is in order, something like what Clinton did by eliminating 300,000 federal positions. Props to Obama too for a string of years with no across-the-board pay raises.

  5. Periodic program review and goal setting by concerned outsiders also is an idea. To pick a somewhat extreme example, NASA is regularly assessed — and stressed and incensed — by OMB, GAO, CRS, and other organizations. It’s also kicked in various directions, with varying utility, every time the White House gets a new occupant.

    There are also every-ten-year “decadal surveys” by NOAA, USGS, NAS, and other organizations to suggest to NASA what the most rewarding areas of possible future research might be at various funding levels, what sort of instruments and studies might be instituted to pursue that research, a handful of nominal missions, etc. These are taken quite seriously.

    Granted, NASA doesn’t operate with the sort of personnel stability and constant focus of, say, the Department of Motor Vehicles, and this sort of overview might be overkill for most government agencies. But the notion of outside review seems a good one.

  6. To get accountability from government, you likely need to do away with most elected officials and legislators. Government agencies simply have too many masters to serve to do much of anything well. Without a leader to demand performance, and to shield the organization from lawsuits and legislative investigations and interference, most government agencies need extensive CYA work and FTEs just to carve out a small amount of space to try to fulfill their mandates.
