I’ve long been a fan of developing clear measures of success, or metrics, in planning. Too often, though, boards and staff at historic sites use only total attendance or the financial bottom line to judge their success. Certainly, having no visitors is not a good sign, but is a large number of visitors a mark of success? Not necessarily, because high attendance may be due to many factors, including some that have nothing to do with advancing your mission in a significant manner, such as wedding rentals, dog walkers, or corporate retreats. I’m not knocking those activities, and they may be an essential part of your programming. However, what I’ve most often heard at board meetings are conversations like this:
Board chair: I heard we did well last month. What was our attendance?
Director: We had 2,500 visitors in March, double what we had in February.
Boardmembers (in unison): Wow, that’s great!
Director: And looking at the guestbook, we had people from 14 different states and 3 foreign countries, including Latvia.
Board chair: This must be a record for us. Okay, let’s have the financial report–looks like the bottom line is positive. Is there a motion to accept?
Don’t assume this only happens at the local historic house museum–it happens at the big ones as well. As I’ve often said, attendance and assets shouldn’t be the only measures of success and they aren’t the most reliable, yet we’re tempted to rely on them over and over again. If you’re just looking to increase attendance, think about offering a puppy circus.
I’ve discussed measures of success at historic sites at length elsewhere, but I’m currently reading The Lean Startup by Eric Ries, and he put into words what has been rolling around in my mind for years: metrics only have value if they demonstrate clear cause and effect. For example, if your attendance goes up, can you clearly point to the cause? And if you can, what can you learn from it to improve future performance? Typically, all we can say is that we had a special event that brought many visitors, but we can’t say much more. Did they come because of the weather, a newspaper article, the admission price, the event theme, a Facebook campaign, or the Latvian Tourist Bureau? That’s much less clear, so all we have is an attendance number that doesn’t provide any more direction than “have more events” (a sure way to burn out an organization). He calls these utterly useless and distracting measures of success “vanity metrics”:
Vanity metrics wreak havoc because they prey on a weakness of the human mind. In my experience, when the numbers go up, people think the improvement was caused by their actions, by whatever they were working on at the time. That is why it’s so common to have a meeting in which marketing thinks the numbers went up because of a new PR or marketing effort and engineering thinks the better numbers are the result of the new features it added. Finding out what is actually going on is extremely costly, and so most managers simply move on, doing the best they can to form their own judgment on the basis of their experience and the collective intelligence in the room. . . . When cause and effect is clearly understood, people are better able to learn from their actions.
So think about your measures of success–do they clearly show cause and effect? Do your metrics give you clear guidance for decisions and actions? If not, they’re probably just skin-deep. Although they may seem beautiful and attractive, you’ll want to rethink them.
Hmmmmmmm……… a puppy circus?
Sure! You can read about them at http://storybird.com/books/the-puppy-circus/ or watch the video at http://youtu.be/y1cOH6ZLpOs (okay, these are dogs, not puppies).
Perfect timing. I’m presenting a metrics report this week. Thanks for your thoughtful comments.
Does the book provide any guidance for identifying cause and effect? This is something I’ve been thinking about quite a bit too – that we often use the “gut feeling” method to determine what’s behind the numbers. But in this time of economic difficulty we need to be sure – so we can put our resources in the best possible place.
You’re asking the right questions and much of Ries’ book deals with identifying the right metrics and then figuring out how to respond. It’s a tough slog for me because he uses several new concepts (e.g., value hypothesis, minimum viable product) and draws examples from manufacturing or online businesses (he maintains the concepts apply just as much to NPOs, but you have to keep translating an example from Toyota to an educational program), but it’s all there. His ideas are especially designed for uncertain conditions, but to succeed, you have to test lots of ideas and assumptions quickly to find the right metrics:
“A true experiment follows the scientific method. It begins with a clear hypothesis that makes predictions about what is supposed to happen. It then tests those predictions empirically. Just as scientific experimentation is informed by theory, startup experimentation is guided by the startup’s vision. The goal of every startup experiment is to discover how to build a sustainable business around that vision.”
So one of the first steps is to identify your assumptions in two major areas: how new customers will discover your product/service (growth hypothesis) and what will customers value about your product/service after they use it (value hypothesis). Translating this for an historic house tour, the assumptions could include “most people learn about our tours from the website”; “visitors will like the tour so much they’ll tell their friends”; or “visitors will be so moved by the experience they’ll become a member”. These hypotheses would then be tested in a simple experiment with a dozen visitors (I can hear the howls from my friends in visitor research!). He calls these experiments a cyclical “Build-Measure-Learn” process and they have to be quick and simple:
“A minimum viable product (MVP) helps entrepreneurs start the process of learning as quickly as possible. It is not necessarily the smallest product imaginable, though; it is simply the fastest way to get through the Build-Measure-Learn feedback loop with the minimum amount of effort. Contrary to traditional product development, which usually involves a long, thoughtful incubation period and strives for product perfection, the goal of the MVP is to begin the process of learning, not end it. Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.”
He provides several different ways to test hypotheses (such as A/B Testing, cohort-based reports). From the results, you’ll need to decide whether to “persevere or pivot”–and he has strategies for figuring that out as well. Those of us working in NPOs know this is all incredibly difficult work–and Ries recognizes this is a major challenge in the corporate world as well:
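For readers curious what an A/B test actually looks like in practice, here is a back-of-the-envelope sketch. The scenario and numbers are entirely hypothetical (two versions of a tour promotion shown to separate groups of visitors), and the calculation is just one common way to compare two conversion rates: a two-proportion z-test using only Python’s standard library.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates with a two-sided z-test
    (normal approximation, pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical example: 400 visitors saw the old flyer (A), 400 the new one (B);
# "conversions" here are visitors who booked a tour.
p_a, p_b, z, p = two_proportion_z(conv_a=48, n_a=400, conv_b=72, n_b=400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

With these made-up numbers, the gap between 12% and 18% is unlikely to be chance alone (p below 0.02), which is exactly the kind of cause-and-effect evidence a raw attendance total can’t give you.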
“Only 5 percent of entrepreneurship is the big idea, the business model, the whiteboard strategizing, and the splitting up of the spoils. The other 95 percent is the gritty work that is measured by innovation accounting: product prioritization decisions, deciding which customers to target or listen to, and having the courage to subject a grand vision to constant testing and feedback. One decision stands out above all others as the most difficult, the most time-consuming, and the biggest source of waste for most startups. We all must face this fundamental test: deciding when to pivot and when to persevere.”
If this intrigues you, you’ll probably find The Lean Startup to be useful. As I said, it’s a tough slog at times, but even just skimming sections has sparked ways to improve something I’m working on (like an interpretive plan for a heritage area!).
By the way, I’m now cancelling our kitten circus. Thanks.
Uh-oh. Did I just cause more layoffs in the kitten and puppy circus industry?
The one metric that has truly haunted me for my entire career is measuring value. What metrics do we use to prove what we intuitively know is true…that we add value to the community we serve?
Attendance numbers alone can’t tell the story, and neither, necessarily, can the financial bottom line–though both contribute to how the community perceives the institution, for if the community didn’t attend or didn’t give its support, that would show that the institution’s value is low.
But what can we use, for a variety of stakeholders, to say, “Yes, we’re indispensable, and here’s how we know”?
Once again, Bob, you’re asking the question every one of us in the history field should keep asking ourselves. I’ll even push it further by saying that if we avoid the challenge of examining our value and benefit to society, we do not deserve its support.
An approach that begins to get at this issue is – Worts, D. 2006. Measuring Museum Meaning: A Critical Assessment Framework. Journal of Museum Education 31(1): 41-49.