June 2009 // Volume 47 // Number 3 // Tools of the Trade // v47-3tt2
A Framework to Link Evaluation Questions to Program Outcomes
Asking the right evaluation questions is essential to documenting program outcomes. This article provides a roadmap for linking evaluation questions to program outcomes. Understanding the program, the purpose of the evaluation, and how the findings will be used are critical steps in developing focused evaluation questions. Further, grouping evaluation questions into process and outcome questions helps address both program implementation efforts (activities and resources) and program effects (KASA change) on participants. Developing a program outcome chart also helps in writing focused evaluation questions. An important strategy in developing evaluation questions is to integrate program evaluation into the program development process.
Writing focused evaluation questions is a challenging task, and the most challenging part is linking those questions to outcomes. Both evaluators and investigators spend considerable time and resources developing evaluation questions. A description of the program, its intended objectives, and what key stakeholders want to know about the program are critical first steps in developing evaluation questions.
This article provides a roadmap for linking evaluation questions to program outcomes. First, it describes the key components involved in developing evaluation questions from start to finish. Second, it describes a logical, sequential framework for linking evaluation questions to program outcomes (short-term, intermediate, and long-term outcomes). Third, using a program outcome chart, it describes how program activities and resources are linked to program outcomes. Finally, it discusses using evaluation findings to render judgments, facilitate program improvements, and generate knowledge.
A clear understanding of the program and its objectives is a critical first step in developing evaluation questions. Such an understanding helps you write focused evaluation questions. A second critical step is to know why you are doing the evaluation. In other words, what is the purpose of the evaluation—program improvement, program justification, or generating new knowledge and theories? Develop a clear purpose for evaluation by involving stakeholders and other program staff. Figure 1 provides a roadmap and a listing of key components involved in linking evaluation questions to program outcomes.
Once you develop a clear purpose, the next step is framing evaluation questions. In general, you can frame questions in two forms: process (formative) questions and outcome questions addressing short-term, intermediate, and long-term outcomes (Langmeyer, 2008). Process questions help answer how well you are doing what you set out to do. Are you implementing activities as intended in order to get the results you want? Examples of process questions include the following.
- Are key components of the program in place?
- Are appropriate staff/volunteers who possess necessary skills in place?
- Do you have the right mix of activities?
- Are you reaching the intended target audience?
Figure 1. Schematic Description of Linking Evaluation Questions to Program Outcomes/Impact
Outcome evaluation questions address how program activities relate to changes in KASA (knowledge, attitudes, skills, and aspirations) and behaviors of participants. In other words, these questions help measure program effects on participants. In this phase, make decisions about which types of change (KASA and behavior) the questions will address.
Both process and outcome evaluation questions help determine the activities needed to achieve the intended outcomes. Do the learning activities (workshops, demonstrations, etc.) bring about KASA and behavior changes in program participants? As you write KASA questions, keep in mind the scales of measurement (nominal, ordinal, interval/ratio), the indicators, the methods of data collection and analysis, and the criteria for determining program success. Table 1 summarizes the process and outcome questions proposed by Thompson and McClintock (2000), their use in evaluation, and appropriate methods for collecting data.
| Evaluation Questions | What They Measure | Why Useful | Methods |
|---|---|---|---|
| Process | How well the program is working; is it reaching the intended people? | Tells how well the plans developed are working; identifies early any problems in reaching the target population; allows adjustments to be made before the problems become severe | |
| Outcome | Helps measure immediate changes brought about by the program; helps assess changes in KASA | Allows for program modification in terms of materials, resource shifting, etc.; tells whether or not programs are moving in the right direction | |
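To make the measurement decisions above concrete, here is a minimal sketch of scoring an outcome question on an ordinal scale. The 5-point Likert scale, the pre/post design, and the 70% success criterion are illustrative assumptions, not taken from the article.

```python
# Hypothetical sketch: scoring a pre/post KASA outcome question.
# The 1-5 Likert scale and the 70% success criterion are assumptions.

def mean(scores):
    return sum(scores) / len(scores)

def kasa_change(pre, post):
    """Mean change in ordinal (1-5 Likert) scores from pre- to post-program."""
    return mean(post) - mean(pre)

def meets_criterion(pre, post, threshold=0.7):
    """Judge program 'success' by whether the share of participants whose
    score improved meets a pre-set criterion (an assumed 70% here)."""
    improved = sum(1 for before, after in zip(pre, post) if after > before)
    return improved / len(pre) >= threshold

# Illustrative knowledge scores for five workshop participants
pre = [2, 3, 2, 4, 3]    # before the workshop
post = [4, 4, 3, 4, 5]   # after the workshop

print(kasa_change(pre, post))       # mean gain of 1.2 scale points
print(meets_criterion(pre, post))   # 4 of 5 improved: criterion met
```

Tying the analysis to an explicit, pre-set criterion like this is what lets the findings later "render judgments" rather than merely describe change.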
Another critical step in linking evaluation questions to program outcomes is the development of a program outcome chart. The program outcome chart helps identify outcome types (short-term, intermediate, and long-term), when the outcomes will occur, what resources and activities are needed and/or in place for the outcomes to occur, and what should be the focus of the evaluation effort (Langmeyer, 2008).
A program outcome chart for a Healthy Lifestyle Education Program (HLEP), as shown in Figure 2, provides a logical and sequential picture of how the program operates in relation to activities and outcomes. Consider the following questions as you develop the program outcome chart.
- What is the extent of the problem among targeted program participants?
- What process questions address the extent of the problem?
- What are the short-term and/or medium-term outcomes expected in terms of participant reactions to the program, knowledge acquisition, and skill development or change in aspirations?
- What are the expected long-term outcomes?
Figure 2. Program Outcome Chart for a Healthy Lifestyle Education Program (HLEP)
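An outcome chart like the one described above can be sketched as a simple data structure that ties each outcome type to its timeframe and indicator. The HLEP entries below are illustrative assumptions, not the contents of Figure 2.

```python
# Hypothetical program outcome chart for an HLEP-style program.
# All entries (inputs, activities, timeframes, indicators) are
# illustrative assumptions.
outcome_chart = {
    "inputs": ["nutrition educators", "curriculum", "funding"],
    "activities": ["workshops", "cooking demonstrations"],
    "outcomes": {
        "short-term": {
            "when": "end of program",
            "indicator": "knowledge of healthy diets (KASA)",
        },
        "intermediate": {
            "when": "3-6 months",
            "indicator": "adoption of healthier eating practices",
        },
        "long-term": {
            "when": "1+ years",
            "indicator": "improved health status",
        },
    },
}

def evaluation_focus(chart, horizon):
    """Return the indicator an evaluation at the given horizon should measure."""
    return chart["outcomes"][horizon]["indicator"]

print(evaluation_focus(outcome_chart, "short-term"))
```

Laying the chart out this way forces each evaluation question to name the outcome type, the timeframe in which it should occur, and the indicator that will be measured.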
Findings should indicate whether or not the evaluation questions were answered. It is important to link the results to the criteria used to determine program success. If the results are as expected, then you must decide what should be done next with the findings, that is, utilization. As suggested by Patton (1997), Chelimsky (1997), and Weiss (1988), utilization can occur in three areas: program improvement, program justification (accountability), and knowledge generation (Figure 3).
If the results were not as expected, then program staff should re-examine the purpose, the evaluation questions, the activities conducted, and the resources expended to determine what went "wrong" or "what was not done." Keep utility in mind when planning an evaluation: involve key stakeholders and program staff in deciding by whom and for what purposes the findings will be used. Such thinking and understanding will go a long way toward asking appropriate evaluation questions to document program outcomes.
Figure 3. Primary Uses of Evaluation Findings
In summary, asking appropriate evaluation questions helps to focus your evaluation, set measurable objectives, select appropriate indicators and data collection strategies, anticipate problems, make needed improvements, and manage resources wisely. Further, evaluation questions should reflect a thorough understanding of the program background and its operations, its intentions, and the target audience. Consider the following general questions provided by Wortley (2008), Radhakrishna (2007), and Bradburn, Sudman, & Wansink (2004) to develop evaluation questions:
- Is it important to program staff and stakeholders?
- Does it reflect the purpose of the evaluation?
- Will it provide the information/data you need to make program improvements?
- Can the data be gathered using available resources?
- Have you considered program evaluation upfront as you were developing the program?
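The checklist above can be applied mechanically when winnowing a pool of candidate questions. The sketch below is a hypothetical illustration; the field names and example question are assumptions, not from the cited authors.

```python
# Hypothetical sketch: screening candidate evaluation questions against
# the checklist above. Field names are illustrative assumptions.
CHECKLIST = (
    "important_to_stakeholders",  # matters to program staff and stakeholders
    "reflects_purpose",           # reflects the purpose of the evaluation
    "informs_improvement",        # yields data usable for program improvement
    "feasible_with_resources",    # data can be gathered with available resources
)

def passes_screen(question):
    """Keep a candidate question only if every checklist item is met."""
    return all(question.get(item, False) for item in CHECKLIST)

candidate = {
    "text": "Did participants' food-preparation skills improve?",
    "important_to_stakeholders": True,
    "reflects_purpose": True,
    "informs_improvement": True,
    "feasible_with_resources": True,
}

print(passes_screen(candidate))  # all four criteria met
```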
In this era of accountability, program evaluators are under increasing pressure to document program outcomes. Asking focused evaluation questions, based on a thorough understanding of the program, the purpose of the evaluation, and its utility, will help you document these outcomes. Another important strategy is to integrate program evaluation into the program development process early on so that you can see how the program activities and resources are linked to desired changes in knowledge, attitudes, skills, aspirations, and behaviors.
Bradburn, N., Sudman, S., & Wansink, B. (2004). Asking questions: The definitive guide to questionnaire design-for market research, political polls, and social and health questionnaires. San Francisco, CA: Jossey-Bass.

Chelimsky, E. (1997). The coming transformations in evaluation. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 1-26). Thousand Oaks, CA: Sage.

Langmeyer, D. B. (2008). Developing evaluation questions. Retrieved February 15, 2008, from http://www.archrespite.org/archfs13.html

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.

Radhakrishna, R. B. (2007). Tips for developing and testing questionnaires/instruments. Journal of Extension [On-line], 45(1), Article 1TOT2. Available at: http://www.joe.org/joe/2007february/tt2.php

Thompson, N. J., & McClintock, H. O. (2000). Demonstrating your program's worth: A primer on evaluation for programs to prevent unintentional injury. Retrieved July 15, 2008, from www.cdc.gov/ncpic/pub-res/dypw/03_stages.html

Weiss, C. H. (1988). Evaluation for decisions: Is anybody there? Does anybody care? Evaluation Practice, 9(1), 5-19.

Wortley, P. (2008). Process evaluation: A PowerPoint presentation. Retrieved July 15, 2008, from http://www.cdc.gov/vaccines/programs/progeval