Outcomes and Indicators

Outcomes are often a critical part of program development and evaluation. Some evaluation models do not require pre-determined outcomes (goal-free evaluation, for example), and innovative program development models often forgo pre-established outcomes in favor of emergent ones. Most of us, however, will be developing and implementing programs and evaluations that require some specificity about outcomes: what we hope to achieve with our efforts.

We are all working to end sexual violence, but what will it take to get there? What are the short-term changes that will serve as signposts that we are on our way to that bigger vision? Those questions point to the outcomes we need to work on. Notice that these questions don’t ask what we need to do to get there but rather what we need to change.

Developing good and meaningful outcomes takes some practice.

At the simplest level, an outcome answers the question: What do we hope will be different in the world as a (partial) result of our efforts?

These changes might be in various domains:

  • Community and social norms
  • School or community climate
  • Individual attitudes, beliefs, or behaviors
  • Relationship dynamics
  • Organizational operations and practices

To be measurable, your outcome should include a clear direction of change. Usually that direction is signaled by the word increase or decrease, but you might also write outcomes that seek to improve or maintain a condition. For example: increase participants’ willingness to intervene when they witness harassment.

When it comes time to measure your progress toward an outcome, you’ll have to ask a different question: How will you know if that outcome has been met? What will be different? Answering these questions gives you more specific indicators of the change, and those indicators will drive outcome-related data collection.

When you are ready to start writing your own outcomes, check out the Outcomes and Indicators worksheet.

Evaluation Purpose

Why are you evaluating your program? You might evaluate your program for any combination of the following reasons:

  • To prove that it’s working
  • To monitor its implementation
  • To improve it
  • To determine if you’re reaching the right people
  • Because a funder said you have to

You need to be clear on your purpose early on, because the purpose of your evaluation will guide your evaluation questions, which will in turn guide the type of evaluation and the methods you employ. Different purposes also require different levels of rigor. For example, proving that your intervention is creating change is a demanding form of evaluation: it requires establishing a cause-and-effect relationship between your intervention and measurable change.

When designing questions and considering evaluation purposes, keep the following distinctions in mind:

Formative, Summative, or Developmental Focus

A formative evaluation focuses on improving and tweaking an intervention, generally during the initial stages of implementation (that is, while it’s in formation). This might be used for new programs in their initial implementation or for existing programs that are being adapted for new populations or environments (Patton, 2014).

Summative evaluation is about making value judgments about a program and its impacts. Generally, a summative evaluation happens after a program has been improved through formative evaluation (Patton, 2014).

Developmental evaluation focuses on supporting innovation for initiatives implemented in complex environments, where programs must remain responsive and dynamic as a result of that complexity (Patton, 2014).

Process vs. Outcome Evaluation

Process evaluation focuses on aspects of implementing the program, such as how the initiative was implemented and who was reached (Substance Abuse and Mental Health Services Administration [SAMHSA], 2016). Outcome evaluation focuses on the results of the intervention, such as what changed for participants or the community as a result. The two are used in tandem (SAMHSA, 2016).

Evaluation Questions

Once you’ve got a solid description of your program as you plan to implement it, you need to ask some evaluative questions about your program. These questions will guide the design of your evaluation and help you compare real-world implementation to your plan. The specific questions you ask will depend on a variety of factors, including the nature of your initiative, requirements from funders, resources available for evaluation, and the purpose of your evaluation.

People often start evaluative processes to answer the following two questions:

  • Did we implement the program as intended?
  • Did we achieve the goals/outcomes we set?

However, because evaluation also seeks to get at the meaning and value of the intervention and its results, it can answer questions like these:

  • Are we reaching the people who are most in need of our intervention?
  • How do the program participants view the change they experience as a result of the program/initiative?
  • When are adaptations made to the implementation? What factors influence the adaptations? What are the impacts of those adaptations?
  • What is the relationship between the facilitation skills of the person implementing the program and participants’ experiences in the program or program outcomes?
  • What unintended outcomes emerge as a result of program adaptations, or of program implementation in general?

Consider this an opportunity for your evaluation team to openly brainstorm evaluation questions related to your program or initiative. You will not answer every question you come up with, but a brainstorming session lets you consider the full range of possibilities before narrowing them down. Choose questions that you have the resources to answer and that will yield data you are willing and able to act on. Use this discussion guide during your brainstorm.

Tools for Implementation

Take a journey through the CDC's Sexual Violence Indicators Guide and Database to explore how to:

  • Identify potential indicators and explore direct links to publicly available data sources 
  • Assess the fit of potential indicators
  • Create a plan to collect, analyze, and use indicator data

Indicator Selector Tool (PDF, 4 pages). This tool from the CDC helps those working in violence prevention identify indicators appropriate for their specific evaluations.

Evaluating Sexual Violence Prevention Programs: Steps and Strategies for Preventionists (Online Course, 60 minutes). This interactive online course walks the user through the basic steps of evaluating the impact of sexual violence prevention programs. Users will learn the key issues to consider at each of the following steps:

  • Step 1: Clarifying Goals and Objectives
  • Step 2: Planning Evaluation Design
  • Step 3: Choosing Measurement Tools
  • Step 4: Collecting Data

Information for this course is drawn from the Technical Assistance Guide and Resource Kit for Primary Prevention and Evaluation, developed by Stephanie Townsend, PhD, for the Pennsylvania Coalition Against Rape (2009). Visit the NSVRC Campus to begin the course. A free account is required to log in.

Measures Database (PDF, 3 pages). This database, maintained by the Wisconsin Coalition Against Sexual Assault (WCASA), includes resources where you can find free measures, scales, or surveys specific to sexual assault prevention work. Some measures and scales are general examples, and others are standardized measures. Each measure has pros and cons, and WCASA does not endorse any specific option. Please contact NSVRC at prevention@nsvrc.org for assistance in identifying appropriate measures.

Evidence-Based Measures of Bystander Action to Prevent Sexual Abuse and Intimate Partner Abuse: Resources for Practitioners (PDF, 31 pages). This document is a compendium of measures of bystander attitudes and behaviors developed by the Prevention Innovations Research Center. Some versions of the measures have been researched more thoroughly than others in terms of their psychometric properties; see the citations provided for articles describing the versions that have been published. See also Evidence-Based Measures of Bystander Action to Prevent Sexual Abuse and Intimate Partner Violence: Resources for Practitioners (Short Measures) (PDF, 22 pages), which provides administrators of prevention programs with shortened, practice-friendly versions of common outcome measures related to sexual abuse and intimate partner violence. These measures have been analyzed to develop a pool of scales that are concise, valid, and reliable.

If you are thinking about measuring bystander intervention, check out this short video in which NSVRC staff talk with Rose Hennessy about common challenges and ways to address them.

References

Patton, M. Q. (2014). Evaluation flash cards: Embedding evaluative thinking in organizational culture. Retrieved from Indiana University, Indiana Prevention Resource Center: http://www.drugs.indiana.edu/spf/docs/Evaluation%20Flash%20Cards.pdf  

Substance Abuse and Mental Health Services Administration. (2016). Process and outcomes evaluation. Retrieved from www.samhsa.gov/capt/applying-strategic-prevention-framework/step5-evaluation/process-outcomes-evaluation

 
