
Section Seventeen: Case Studies

Case Study One
University Partnership/Mixed Methods

(Madison Jackson, personal communication, August 6, 2015)

A rape crisis center in Texas, realizing that their evaluation tools were not giving them useful data, reached out to a professor at a local college for help. Together, they developed tools to collect both quantitative and qualitative data. Specifically, they used a questionnaire administered as a pre- and post-test and included open-ended questions on the post-test. For the qualitative sections, the staff were trained and given scoring tools so they could carry out the data analysis themselves. A basic score sheet was developed to assess what participants wrote in response to the open-ended questions and to help staff code for buzzwords.

The staff members found that the process the professor helped them design not only felt “doable” to implement but also helped them collect more meaningful data and actually use it! For example, they were able to see how the young people talked about certain concepts like bystander intervention and work such participant-initiated language into their curriculum. They were also able to learn more about the ways young people were interacting with each other, including learning about social media sites the preventionists were not previously aware of. Additionally, students were able both to express their feelings about the content and to reflect on their own problematic behaviors and their desire to change them. All of this helped the preventionists continue to refine their curricula to speak more directly to the young people with whom they were working.

Tip: The preventionist also mentioned that the young people wanted to write more than they could in the final few minutes of a session, so the preventionists had to make sure to end early enough for participants to have ample time to complete their answers. Because it’s important that participants feel their input is valued rather than cut short, the time they have to complete such questionnaires matters. You can consider a few different options to make sure participants have enough time. First, you can let them complete the measures at the beginning of the final session if you aren’t covering any critical learning during that session that might be assessed in the evaluation instrument. Second, you can ask the teacher to give the participants time to complete them, preferably at a later date. This option also helps you determine whether any changes from the programming last beyond the time of the program itself.

Note about university partnerships.

As you can see, this process focused not only on building tools but on developing the agency and staff members’ capacity to implement, analyze, interpret, and use the data produced. In this case, the professor had previous experience working as an evaluation specialist at a nonprofit organization, so he had some insight into what would and wouldn’t work well in that environment. If you’re working with an evaluator who doesn’t have this kind of experience, feel encouraged to highlight some of your hopes and fears about evaluation and to give them a sense of what nonprofit life is like.

Case Study Two
Participatory Evaluation at the Local Level

(Alexandra Panagotacos, personal communication, August 12, 2015)

As part of her agency’s school-based prevention efforts, Allison coordinates a group of student leaders. When the time came to develop outcome measures for her program, she facilitated several brainstorming sessions with these student leaders to get a better sense of what was happening in their communities. The information from those conversations was combined with Allison’s own knowledge of and skills in program evaluation to come up with a list of possible items for an evaluation tool. From there, the young people helped decide which items to keep, which to modify, and which to remove. They also pilot-tested the tool before she used it. When it becomes outdated, Allison intends to take it back to the group for further refinement. Allison noted that by the end of this process, these students knew what they wanted to change, why, and how they would measure it.

She also considered involving the young people in participatory data collection through various kinds of formal observation but decided against it because she felt it might contribute to drama in the small school where she was working and because she didn’t think she could control for bias among the student leaders. However, she informally has students observe what’s happening among their peers and report back to her, and she uses this information to shape program direction.

When it’s time for analysis and interpretation, she will sometimes sit down with participants and with colleagues to look at the data in different ways to come up with different possible interpretations, see what’s connected, and so on. They try to figure out why some things changed but not others.

Allison notes that it is helpful if the group is representative of the students who will be participating in programming. She noticed that when the group was less representative of the participants, some of the language and concepts in the chosen items were problematic for other students. She advocated to expand her leadership group to include students whom the administration did not regard as typical leaders. One such student, who was failing her classes, managed to pull her grades back up after data from the evaluation process helped her see that she could make a difference.
