Way back at the letter C I suggested that Richard Florida’s The Rise of the Creative Class was maybe the second most influential book on arts planning and policy of the past forty years. Time now to consider the first.
In 1993, newly elected President Bill Clinton gave his Vice President Al Gore the wonkiest of tasks, the “National Partnership for Reinventing Government” (plus ça change…), which was meant to take the ideas of Osborne and Gaebler’s Reinventing Government and put them into practice. Mr. Oakeshott had only recently died.
It takes a while for things to make their way north, so when I went on leave to work for the government of Saskatchewan in 1998, the idea was just beginning to take hold. But boy did it take hold; my bosses were on a mission (this was an NDP government, but the mostly practical kind you tend to get when they are actually in office). So, my policy unit was tasked with going out to the senior people in line departments and saying, “You have to have a strategic plan now. It has to state your mission and the key outcomes you want to achieve. And you’ll need metrics: quantitative measures of your progress toward those outcomes.” No more muddling through. We were met with some puzzlement: what are the performance metrics supposed to be for the Department of Labour? Or the Department of Justice? What are measurable outcomes?
What did Osborne and Gaebler actually say? Their two big ideas are not completely terrible, at least some of the time.
First, the focus of the organization ought to be the reason it was set up in the first place. What is it meant to accomplish? This means focusing not on the inputs to the process, but on the outputs. The measure of success of your education system is not how much money is spent or how many employees it has; it is whether children get a good education in a positive environment. Don’t look at expenditures in health care as a measure of excellence (on those grounds the US has far and away the greatest health care system in the world); look at people’s health (where the US is … not the greatest). (As an aside, this serves as a warning to arts people who insist on touting how much of GDP is spent in the arts as a measure of something important. It isn’t.)
Second, if on-the-ground management is meant to seek good outcomes, then allow managers some freedom to figure out how best to do that without excessively burdening them with rules (you’ll need some rules, of course). Evaluate management on how well they do at achieving outcomes, not on how big their budgets are.
But it starts to get out of hand. A checklist from the book:
What gets measured gets done
If you don’t measure results, you can’t tell success from failure
If you can’t see success, you can’t reward it
If you can’t reward success, you’re probably rewarding failure
If you can’t see success, you can’t learn from it
If you can’t recognize failure, you can’t correct it
If you can demonstrate results, you can win public support.
Metrics, then. What could go wrong?
One problem is that separating inputs from outputs is not always that easy. Consider your local symphony orchestra. They play concerts. Are their concerts the “output”, or an “input” into some higher goal, and if so, what is that higher goal? (And if the “higher goal” is some sort of measure of social change, then all kinds of new errors in thinking enter the picture). Your museum exhibits objects, with a permanent collection and some temporary shows. Is that the “output”? Orchestras and museums and publishers just do what they do: concerts and exhibitions and books. So how do we “measure” that? It can’t just be the number of concerts, or objects exhibited, or books published.
Unfortunately, some board members will get it into their heads that you need to measure something: how can you evaluate management without performance metrics? And so we get a second problem: bad metrics.
“What gets measured gets done”: sure. If you tell people their performance at work is going to be evaluated according to what things are measured, then the measured things will get done, no doubt. But if your metrics are chosen badly, then you’ll have people focused on the wrong things. There’s a classic old public admin paper by Steven Kerr with the great title “On the folly of rewarding A, while hoping for B,” and, yes.
Teachers teaching to the standardized test, professors trying to maximize whatever happens to be the current method by which we measure “excellence”,[1] colleges juking the stats to try to better their placing in a ranking by some old magazine nobody ever read, police forces making questionable arrests in pursuit of that particular metric of law enforcement, you name it.
Goodhart’s Law holds that the instant a measure becomes a target, it ceases to be a good measure, and all the evidence confirms it.
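A stylized way to see the mechanism (every number here is invented): suppose a manager’s effort can go either to the real work or to gaming the number, and gaming moves the number more cheaply than the work does. A minimal sketch:

```python
# Toy model of Goodhart's Law. All numbers are invented for illustration.
# Effort can go to real quality work or to gaming the reported metric.

def metric(quality_effort: float, gaming_effort: float) -> float:
    """The reported number: gaming moves it twice as cheaply as real work."""
    return quality_effort + 2.0 * gaming_effort

def true_outcome(quality_effort: float) -> float:
    """The thing we actually care about responds only to real work."""
    return quality_effort

budget = 10.0  # total effort available

# Before the measure becomes a target: all effort goes to the real work,
# and the metric tracks the outcome perfectly.
print(metric(budget, 0.0), true_outcome(budget))   # 10.0 10.0

# Once the measure IS the target, the metric-maximizing split is all gaming:
print(metric(0.0, budget), true_outcome(0.0))      # 20.0 0.0

# The number doubles while the thing it was meant to track falls to zero.
```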
What are arts managers to do? There aren’t many things you can quantify in the arts. Attendance? Yes, you can measure that, but if increasing attendance becomes your performance metric, then all sorts of bad decisions might follow: revenue-losing pricing gimmicks that boost attendance (perhaps only temporarily), a focus on reliable mass crowd-pleasers rather than more sophisticated works that will not attract as large an audience but which are significant in their genre, trendy games to bring in people with no interest in art at all. I’m old enough to remember this madness.
Attempts at more “sophisticated” metrics can be even worse. I once saw a consultant write that average revenue per attendee was a good metric to aim for, without thinking through that raising admission prices so high that you could count your attendance on your fingers would maximize that measure.
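To see the arithmetic, here is a toy sketch with a made-up linear demand curve (every number hypothetical). With a single admission price, revenue per attendee just is the price, so the metric is maximized by pricing out nearly everyone:

```python
# Toy sketch of the "average revenue per attendee" metric.
# Hypothetical demand curve: 1,000 visitors at a free ticket, none at $100.

def attendance(price: float) -> float:
    return max(0.0, 1000.0 * (1.0 - price / 100.0))

for price in (10, 25, 50, 75, 99):
    n = attendance(price)
    print(f"price=${price:>2}  attendance={n:5.0f}  "
          f"total revenue=${price * n:8.0f}  revenue per attendee=${price}")

# With one ticket price, revenue per attendee equals the price, so the
# "metric" peaks at $99 with ten visitors in the hall, even though total
# revenue peaks at $50 and the mission is better served by a full house.
```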
We used to look at different arts organizations’ strategic plans in class, and every time we got to the “metrics” section it all seemed forced, destined to create bad incentives.
What’s the alternative? Judgement. You don’t need quantitative performance measures. Is our theatre company achieving what we want to achieve? There will be many dimensions to that, and so there needs to be a weighing of this against that, and not all of them are measurable (don’t try to make an “index” of multiple quantitative measures, a “balanced scorecard”, since that manages to make things even worse, piling ad hoc subjectivity on ad hoc subjectivity in an effort to look objective). Some numbers will matter in this judgement: you cannot just ignore steadily declining attendance. But the numbers can only be a part of the much bigger picture.
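And on the “balanced scorecard” point: a composite index doesn’t remove the judgement, it only hides it in the weights. A minimal sketch, with invented scores for two hypothetical organizations:

```python
# Minimal sketch of the "balanced scorecard" problem. All scores invented.
# The composite ranking depends entirely on the ad hoc weights chosen.

org_a = {"attendance": 0.9, "revenue": 0.8, "artistic_review": 0.3}
org_b = {"attendance": 0.4, "revenue": 0.5, "artistic_review": 0.95}

def composite(scores: dict, weights: dict) -> float:
    """A weighted sum: it looks objective, but the weights are pure judgement."""
    return sum(weights[k] * scores[k] for k in scores)

board_1 = {"attendance": 0.5, "revenue": 0.4, "artistic_review": 0.1}
board_2 = {"attendance": 0.2, "revenue": 0.2, "artistic_review": 0.6}

for name, w in (("board_1", board_1), ("board_2", board_2)):
    a, b = composite(org_a, w), composite(org_b, w)
    print(f"{name}: A={a:.2f}, B={b:.2f} -> "
          f"{'A' if a > b else 'B'} is the 'better' organization")

# board_1 ranks A first (0.80 vs 0.50); board_2 ranks B first (0.52 vs 0.75).
# Same data, opposite conclusions: the index disguises the judgement call.
```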
[1] On “excellence” in the university, a word which means whatever we want it to mean, I have to recommend the brilliant critique by Bill Readings, The University in Ruins (1996).
Going through a planning process now. Performance measures are always defined to support the needs of sponsors, governmental bodies, boards, etc. That’s fine, since they are the only constituencies that will ever look at a strategic plan. Better “measures” are those that at least attempt to consider the visitors and community served by the museum. But even the best attempts at surveys are awful, no matter how much time is invested in creating them or how earnest front-line staff are in coaxing visitors to share their thoughts.
Lots of other things are wrong with Reinventing Government/New Public Management. My thoughts on this, as related to the pandemic in Australia: https://www.themonthly.com.au/issue/2021/september/1630418400/john-quiggin/dismembering-government