Input Metrics Feed Outcome Metrics

Michele A. Zacks, MS, PhD

Research Development Services
Office for Research & Discovery
University of Arizona

Online Publication Date:
October 20, 2015

I love data to inform a purpose. As my first year at the University of Arizona winds down, our new “unit” called Research Development Services has been asked to present some metrics of its inaugural year.

So often, metrics of grant performance, which I will refer to as outcome metrics, are viewed as the justification-for-being of a grant professional.

From Metrics 101 to Metrics 2.0

Without a doubt, there's no more powerful metric than more grants awarded. This holy grail of quantitative measures can be expressed simply as the number of new grants or, potentially more significantly, as the total dollars awarded. It's a bonus if we can claim to have earned back our salary by tallying the total Facilities & Administrative costs awarded (F&A, or "indirect costs"). However, this measure is a bit misleading, since F&A is distributed among so many groups on campus.

So how do we, as grant professionals, hatch those impressive charts showing growth under our tenure?

The need for data to inform our activities and set targets, which I refer to as input metrics, is often overlooked. I argue that the very best outcome metrics are the impacts you've had on these input metrics. Unfortunately, an impeccable work ethic (such as being available for everyone) or sheer hard work (long hours devoted to a grant, regardless of its prospects) is not enough to move the needle on grant awards. For optimal grant outcomes, it is often a simple piece of data you have gathered about past grant performance that translates into subtle, yet highly actionable, targets for outcome improvements.


Iterative Metric Versions and Beta Testing

The optimal input metrics reveal the parameters that you have the capacity to impact through your particular expertise. Don't leave out the “soft” measures of your institution's grant potential; they can be semi-quantitative. Below is a discussion of some categories of outcome metrics.

Quantitative measures of grant performance only provide meaningful attribution of your effort when viewed comparatively, that is, relative to past performance. Undue emphasis on the classic quantitative metrics can lead to a disconnect between impact measures and authentic impact. Many other, equally meaningful outcome metrics can only be extracted by analysis of the context.

Three other interrelated categories of outcome metrics are worthy of attention, as they can more readily be linked to your efforts:

QUALITATIVE measures (e.g., grant quality, grant fit to the targeted agencies and/or funding opportunities, and programmatic or research contribution). Some consequential questions seed intervention targets: What opportunities are being missed, and why? What resources are being utilized, and what are the tradeoffs?

INSTITUTIONAL measures (e.g., number of grants awarded out of the total submitted, distribution of submissions across the institution/programs). Some questions: Does your institution have leaders who are capable of pursuing large, complex grants, and can you add significant value? Alternatively, what are the weaker areas, and what improvements would advance grant seeking?

CAPACITY measures (e.g., effectiveness of grant teams, timeline achievement, existence of grant training). Some questions: Is grant preparation cohesive, and how can grant packages be developed and/or coordinated more efficiently? Will your efforts enable the submission of grants that would not have been submitted otherwise, and over what time scale do you anticipate an impact?
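By way of illustration, the institutional measure above (grants awarded out of total submitted, compared year over year) can be computed directly from submission records. The data, program names, and field layout below are hypothetical; the point is that the comparative rate, not the raw count, is the actionable input metric.

```python
# Minimal sketch with hypothetical data: compute per-program award rates
# and a year-over-year change, the kind of comparative input metric that
# seeds an intervention target.

# Each record: (fiscal_year, program, submitted, awarded, dollars_awarded)
records = [
    (2014, "Biosciences", 40, 8, 2_400_000),
    (2015, "Biosciences", 46, 12, 3_100_000),
    (2014, "Engineering", 30, 9, 1_800_000),
    (2015, "Engineering", 28, 10, 2_050_000),
]

def award_rates(records):
    """Return {(year, program): awards / submissions} -- a measure that
    is only meaningful when compared across years or programs."""
    return {
        (year, prog): awarded / submitted
        for year, prog, submitted, awarded, _ in records
    }

rates = award_rates(records)
for (year, prog), rate in sorted(rates.items()):
    print(f"{year} {prog}: {rate:.0%} award rate")

# The year-over-year change is the actionable number, not the raw total.
delta = rates[(2015, "Biosciences")] - rates[(2014, "Biosciences")]
print(f"Biosciences change: {delta:+.1%}")
```

The same records could just as easily be rolled up into total dollars awarded, but the rate comparison is what points at where an intervention helped or is needed.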

Impact Heat Mapping Gone 3D

Any intervention target along the continuum within these metric categories, and between strengths and weaknesses, is valid. The choice of target is best informed by comparison to past institutional performance, consideration of your expertise and skills, and weighing of alternative approaches. The corresponding intermediate outcome measures will differ from the classic quantitative ones.

When number crunching, dig a little deeper into the nascent data so that your input metrics (past history) inform your outcome metrics (current achievements), and, even more importantly, feed your future contributions.

Further Reading