New Year’s Awareness

It is the beginning of a new calendar year.  Circumstances have changed as they always do.  Prior assumptions about the pace and proportion of change probably need to be revised.  As a result, adjustments will be necessary (and not just annually).  Now is always a good time to reflect and adjust by asking a couple of fundamental questions:

  1. What have we learned about our value proposition, with the benefit of hindsight, that will inform our emerging strategy?  (Reflecting)
  2. What do we need to learn more about in order to fulfill our evolving value proposition and re-calibrate strategic areas of focus?  (Adjusting)

Performance metrics are an important source of information for reflecting and adjusting based on what can be learned from them.  For example, operational insights derived from performance metrics can be used to help calibrate strategic areas of focus, inform foresight, and formulate strategic intent.  If management is attuned to the necessity of making adjustments over time, then the learning that performance metrics make possible is essential to adaptation.

Nevertheless, it’s also important to put performance metrics into proper perspective.  They are important, but they are also subject to interpretation in terms of what the data may or may not validate.  Institutional mission, vision, intuition, and judgment shape expectations and influence analysis.  The following illustration offers one way to visualize how performance metrics are situated within a broader strategic and operational context:

Performance Metrics in Context

Given the broader context, the challenge is to identify an agreed-upon set of performance metrics that matter most.  One way to accomplish this is to determine which ones provide the kind of organizational learning that is needed to help guide pivotal decisions about priorities for the allocation of resources, methods of producing value, the focus of improvement efforts, and goal setting.  The following interrelated questions encourage comprehensive inquiry into these pivotal issues:

  1. Are we doing the right things?  (Guides Priorities)
  2. Are we doing things the right way? (Determines Methods)
  3. Are we doing things well?  (Provides Focus)
  4. Are we getting the expected benefits?  (Establishes Goals)

There are some commonly used techniques for assimilating the information needed to answer these questions.  Some of the more conventional means include:

  • Third party audits and self-assessments used to quantify the “maturity level” of specified capabilities.
  • Resource utilization studies used to quantify consumption of available capacity and gauge varying types of usage.
  • Surveys used to measure levels of satisfaction, indications of preference, perceived importance and/or demographics.
  • Indicators used to measure proportional investments, financial returns and/or non-financial benefits.

Each technique is capable of generating far more data than most organizations can put to practical use in terms of meaningful learning.  However, if the temptation to focus biased attention on “vanity metrics” is resisted, then it is possible to identify a limited number of metrics that matter most in addressing unmet and anticipated needs.  In other words, the objective is to choose easily understood, actionable metrics that can be used to guide priorities, determine methods, focus attention, and establish goals for improvement (versus using vanity metrics simply to portray the organization favorably).  Doing so requires a high degree of integrity and trust.  As a result, metrics that matter must be used as a collective means of organizational learning rather than employed as the basis for appraising individual performance.

Within the domain of information technology, there are a number of methods that provide structured approaches to gathering information using one or more of the techniques previously mentioned.  Judith A. Pirani’s recent EDUCAUSE Center for Applied Research (ECAR) Research Bulletin, “5 Guidelines for Instituting IT Value Measurement,” includes a concise description of a few:

Balanced Scorecard:  A set of financial measures and operational measures that illustrate the desired outcomes and the means of achieving them.  ISACA’s generic IT scorecard covers the following areas: corporate [or institutional] contribution, user orientation, operational excellence, and future orientation.

Portfolio Analysis:  Considers IT assets not solely from a cost perspective but includes other elements like risk, yield, and benefits.  The goal is to balance risk and payoff by investing in IT assets that both support basic organizational operations and help create and address new or existing strategic opportunities.

Return on Investment Measures:

Value on Investment (VOI):  VOI is the measure of the value of soft or intangible benefits derived from technology initiatives, compared to the investment needed to produce them.

Net Present Value (NPV):  The NPV of an investment is the present (discounted) value of future cash inflows minus the present value of the investment and any associated future cash outflows.  It allows consideration of such things as cost of capital, interest rates, and investment opportunity costs.  It’s especially appropriate for long-term projects.  The bigger the NPV – other things being equal – the more attractive the investment is.
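
Since NPV is the most formula-driven measure on this list, a brief worked example may help.  The following is a minimal sketch in Python; the cash flows and the 8% discount rate are hypothetical figures chosen only to show the mechanics of discounting, not numbers drawn from the bulletin.

```python
def npv(rate, cash_flows):
    """Net present value: each period's net cash flow discounted to today.

    cash_flows[0] is the period-0 flow (typically the negative initial
    investment); cash_flows[t] is the net inflow at the end of period t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $100,000 invested today, returning $30,000 per year
# for five years, discounted at an 8% cost of capital.
flows = [-100_000] + [30_000] * 5
print(round(npv(0.08, flows), 2))  # about 19781.30: positive, so attractive, other things equal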

Others I would add to the list include the following:

Gartner ITScore Self-Assessment:  “Holistic sets of interactive maturity assessments designed to help CIOs and IT leaders evaluate the maturity of both the IT organization as a provider of IT services, and the enterprise as a consumer of information technology. Unlike other IT maturity assessments, a Gartner ITScore measures your organization’s capabilities within the context of an enterprise culture, behaviors and capacity for leadership – factors that dramatically impact IT’s effectiveness and its ability to contribute real business value.” (See the Gartner website)

TechQual+ Survey:  “Measures a set of generalizable IT service outcomes that are expected of IT organizations by faculty, students, and staff within higher education. The TechQual+ core survey contains 13 items designed to measure the performance of the following three core commitments: 1) Connectivity and Access, 2) Technology and Collaboration Services, and 3) Support and Training.  In addition to the core survey, the project delivers easy-to-use Web-based tools for the creation of individualized surveys, for communication with respondents, for analysis of survey results, and for comparisons with peer institutions.”  (See the Higher Education TechQual+ Project website)

Measuring Information Service Outcomes (MISO) Survey:  Measures the views of faculty, students and staff about technology and library services by gathering data on frequency of use, relative importance, levels of satisfaction, perceived service orientation of front-line staff, and demographic information.  Nearly 70 critical services are included across a spectrum that ranges from support provisions to technological infrastructure.  Participating institutions can evaluate trends over time with repeated participation in the survey, and organizations can compare their responses to those at other institutions.  (See the MISO Survey website).
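
To illustrate how survey data of this kind can become actionable metrics, the following rough sketch ranks services by the gap between mean importance and mean satisfaction.  The service names, the five-point rating scale, and the gap calculation are my own hypothetical illustrations; they are not items or results from the MISO instrument.

```python
# Hypothetical survey responses: per-service (importance, satisfaction)
# ratings on a 1-to-5 scale.  These services and numbers are invented for
# illustration; they are not actual MISO Survey items or results.
responses = {
    "campus wireless":    [(5, 3), (5, 2), (4, 3), (5, 3)],
    "help desk support":  [(4, 4), (3, 4), (4, 5), (4, 4)],
    "library e-journals": [(4, 3), (5, 4), (4, 4), (3, 4)],
}

def gap_report(data):
    """Rank services by the gap between mean importance and mean satisfaction.

    A large positive gap flags a service users deem important but find
    unsatisfying, a natural candidate for a metric that matters.
    """
    report = []
    for service, ratings in data.items():
        importance = sum(i for i, _ in ratings) / len(ratings)
        satisfaction = sum(s for _, s in ratings) / len(ratings)
        report.append((service, round(importance - satisfaction, 2)))
    return sorted(report, key=lambda row: row[1], reverse=True)

for service, gap in gap_report(responses):
    print(f"{service}: gap {gap:+.2f}")
```

Services that respondents rate as important but unsatisfying surface naturally at the top of such a report, which is one way survey results can provide the kind of focus for improvement efforts described earlier.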

The EDUCAUSE Center for Applied Research has published numerous research bulletins that are very helpful for understanding the full depth and breadth of methods available for gathering data and instituting metric-driven continuous improvement programs.  Some of them are cited below:

Pirani, Judith A. “5 Guidelines for Instituting IT Value Measurement” (Research Bulletin).  Louisville, CO:  EDUCAUSE Center for Applied Research, November 13, 2012.  Available from here.

Consiglio, David, Laurie Allen, Neal Baker, Kevin J.T. Creamer, and Joshua Wilson. “Evaluating IT and Library Services with the MISO Survey” (Research Bulletin 10, 2011).  Boulder, CO: EDUCAUSE Center for Applied Research, 2011.  Available from here.

Chester, Timothy M. “Assessing What Faculty, Students, and Staff Expect from Information Technology Organizations in Higher Education” (Research Bulletin 18, 2010).  Boulder, CO: EDUCAUSE Center for Applied Research, 2010.  Available from here.

Nelson, Mark R. “Assessing and Communicating the Value of IT” (Research Bulletin, August 2, 2005).  Boulder, CO: EDUCAUSE Center for Applied Research, 2005.  Available from here.

A number of us will be gathering at the March 2013 Annual NERCOMP Conference for a “Metrics that Matter” preconference leadership forum seminar to consider some of these approaches and others not listed.  Each methodology offers something different even though they are related and sometimes overlapping.  More than one could be used in combination.  For example, my organization is using the Gartner ITScore self-assessments in combination with the MISO Survey.  There is no single approach that is optimal for everyone, but we all need to consider many of the same things.  For example, the financial and opportunity costs associated with adopting any of these methods need to be proportional to the broader context at each of our institutions.  The preconference seminar will provide an opportunity to think about these things during presentations, panel discussions, and facilitated exchanges of ideas and solutions.  Forget New Year’s resolutions.  Metrics that matter provide a New Year’s awareness.