Outcome Measurement: Measure What Matters, Keep It Simple

Outcome measurement is a broad term without a widely shared definition, but nearly all grant makers ask some version of these questions:
  • What outcomes will you achieve?
  • What are meaningful indicators of success?
  • How do you define short-term and long-term success?
  • How do you know you are making a difference?

Grant professionals need to articulate a program's outcomes for the target population, community, or existing conditions, and describe the method for collecting data. We are accustomed to this and know the difference between outputs (what you do, who you reach, what you provide) and outcomes (changes because of what you did or provided).

But working with program staff to develop outcomes and describe impact can pose a challenge. They may have difficulty articulating accomplishments and what they hope to achieve. They may not even understand why outcomes are important. Yet they are best positioned to know what success looks like. We need them as partners to get it right.

How do you work with program staff to develop key outcomes and indicators?

I keep a few simple principles in mind to help guide program staff and the process:
  • Good program design is aligned with the organization's mission and identified target population and community need.
  • Limit the number of key metrics to a few measurable or observable changes.
  • Choose metrics that are easily incorporated into existing processes and systems – if program staff are already using them, they are more likely to embrace them.
  • Review the data on a regular basis in order to communicate impact, inform program improvement, and identify trends.

Often, program staff are already collecting data but do not recognize its value. Because the data collection is so simple and intuitive to them, they may overlook its role. I engage program staff until I latch onto something useful. Eventually an answer emerges such as “yes, we have that data” – an “aha!” moment for all.

Examples like these show that program staff are often already measuring impact:

“We survey our students before and after field trips to see what they've learned, what they liked, what we could improve.” The Education Director compiles and documents results that show changes in knowledge.

“Requests from other organizations to collaborate have doubled in the past two years.” This observation from the administrator at a community music school demonstrates the value of the organization to the community.

“In the last seven years, we've had zero repeat pregnancies among clients.” Program staff at a shelter for teen mothers observed results and documented changes in behavior.

“Even during the recession from 2008 to 2010, 87% or more of new refugees got jobs within 30 days of arrival.” A refugee resettlement program tracked case management records and compiled results monthly, which indicated improving family economic conditions.

“Our goal is to provide access and inclusion for an ethnically, economically diverse student body. The financial aid program ensures 15% of students are low- and moderate-income, and 50% of financial aid students represent an ethnic minority.” A community music school compiled data already captured as part of the enrollment process, as a baseline to measure access for underserved populations.

Measure what matters. What number and percentage of clients demonstrated a change in access, knowledge, attitude, skills, behavior, or overall condition? Keep it simple. When possible, choose metrics that are already being collected or can be integrated easily into existing data collection systems.

What techniques and tools do you use to engage program staff as partners to develop outcome measurements?

What difference are you making? How do you know?

Ellen Gugel, GPC is principal of Grants & More, a grants consulting firm based in Massachusetts.