By Design...
Program Evaluation 1.0
What is the problem? What do we want to know?
The evaluation should focus on whether the two key pillars, Independent Study and the TA Role, are effectively serving the needs of learners.
What do Independent Study and the TA Role contribute toward preparing self-directed learners? To what extent are these pillars effective in serving learners' needs?
​
Step Two... Identify the Purpose for the Evaluation and Specify Evaluation Questions.
​
The purpose of the evaluation is to determine how effectively the two focus pillars are serving the needs of the self-directed learner.
​
Questions to address:
- To what extent are the definitions of the pillars relevant to today's learner?
- To what extent could the wording be changed or adjusted to meet the needs of today's student?
- To what extent is time spent with a TA correlated with the success of a self-directed student?
- If this is an independent study, what is the right number of seminars to offer?
- How do different TA teachers motivate students toward success?
- What is the minimum standard of care being applied in TA practice?
- To what extent can we be sure that TA care standards are being met?
Step 3) Construct a program theory: a theory of action and a theory of change, presented in a format that best communicates the goals and orientation of the program being evaluated.
Step 4) Identify, describe, and rationalize the evaluation approach
​
Identify
​
Theoretical Roots and Evaluation Approach
Evaluation should be a deliberate mixture of theory and practice if it is to deliver what it is designed to do: improve programs and outcomes for stakeholders. Theory provides the common knowledge base that practitioners need, and the intentional focus and direction that come from theory give us a starting point for discourse about evaluation in community. Theory has been shown to be central to the practice of evaluation, so one might suggest that its role is non-negotiable in a partnership where theory and practice work in unison. One might even question the effectiveness of one without the other.
DESCRIBE and RATIONALIZE
​
Approach to be used in this evaluation: the Participatory Evaluation Approach.
I like the participatory approach because of its community component. All stakeholders are considered and are active in the process at some point along the way.
Work on this approach by Cousins and Earl (1995) suggests a strong motivation for using it at a school level such as the one I am considering: “…provides general guidelines or principles for collecting information of better quality than is typically available for school level decision making: it also offers means of ensuring greater use of that information than is typically the case with information provided through conventional forms of research and evaluation.”
​
The approach focuses on whether the program is being implemented according to plan, whether it has its desired effect, and whether it is achieving desired outcomes. It also considers program nuances, is flexible, increases the validity of results, develops a culture of inquiry, allows ongoing monitoring, and builds staff buy-in. Collecting feedback from those closest to the evaluation may bring better outcomes, as the approach is proactive in design, non-linear, and builds a solid working partnership between evaluation experts and the school community.
​
Guijt (2014) sees participatory evaluation leading to better overall results: stakeholders who are involved may collect better data, understand it better, and make better recommendations from it.
I like that the approach makes improvement the priority. Because staff are part of the process, they better understand the purpose of the evaluation, and the external evaluators can better see the “blind spots” that internal staff have about the functioning of the pillars.
I also appreciate that the approach makes clear which questions need to be asked of the community to move closer to the target of improvement.
Guijt (2014) suggests that we ask three questions before beginning:
​
1. What purpose will stakeholder participation serve in this impact evaluation?
​
Stakeholders are the end users of the suggested improvements; if agency is shifted to them, it may result in the application and longevity of those improvements.
​
2. Whose participation matters, when and why?
​
We need to get buy-in from all stakeholders and make clear their specific roles, and when and where they participate, throughout the process.
​
3. When is participation feasible?
​
The evaluator will be responsible for gauging when participation is feasible. This means maintaining an ongoing relationship with all stakeholders; judgments about feasibility will come through consultation.
​
It is also necessary to look at the possible pros and cons of the approach. BetterEvaluation.org has a concise summary that points out the potential “wins” and “losses” of such an approach:
Advantages of doing participatory evaluation
- Identify locally relevant evaluation questions
- Improve accuracy and relevance of reports
- Establish and explain causality
- Improve program performance
- Empower participants
- Build capacity
- Develop leaders and build teams
- Sustain organizational learning and growth
​
Challenges in implementing and using participatory evaluation
- Time and commitment
- Resources
- Conflicts between approaches
- Unclear purpose of participation, or a purpose that is not aligned with evaluation design
- Lack of facilitation skills
- Only focusing on participation in one aspect of the evaluation process, e.g. data collection
- Lack of cultural and contextual understanding, and the implications of these for the evaluation design
Sources:
BetterEvaluation. Participatory evaluation. Retrieved from http://www.betterevaluation.org/en/plan/approach/participatory_evaluation
​
Cousins, J. B., & Earl, L. M. (1995). Participatory evaluation in education: Studies in evaluation use and organizational learning. London: Falmer Press.
​
Guijt, I. (2014). Participatory approaches (Methodological Briefs: Impact Evaluation 5). Florence: UNICEF Office of Research. Retrieved from http://devinfolive.info/impact_evaluation/img/downloads/Participatory_Approaches_ENG.pdf
​
STEP 5
Data Collection Justification
​
- Semi-structured interviews: can be time consuming but are an informative way to get value from open-ended question responses. Questions can be prepared well in advance, and interviewees have the freedom to answer openly. The information gathered usually provides reliable and comparable qualitative data (Cohen & Crabtree, 2006).
- Participatory listening and observation: because we are looking at a group that already uses the program under observation, this type of application is supported by work from UX Matters' Jim Ross: “It's ideal for service design, process redesign… It's especially useful in learning about groups of employees, their activities, the systems they use, and the services they perform” (Ross, 2014).
- Student reflection: students, as the primary users of the pillars, can provide insight into their practice as well as ours. We can collect the reflections and note the responses in a way that spots trends in the data, along with similarities and differences across reflections.
- Surveys: these can be conducted face to face and electronically over the period of the evaluation.
- Daily activity diagrams: these could be done as mind maps or flow charts by community members to indicate what the long-, medium-, and short-range goals and plans of students, TAs, and teachers look like.
- Focus groups: may provide insight into the evaluation that cannot be obtained through individual data collection. There are pros and cons to this method: focus groups can be quick and easy to use, but we must be aware that they can be open to bias and dominated by strong voices at the table (CDC Evaluation Briefs, 2008).
Data Analysis Justification
​
- Narrative analysis: this evaluation is about the story of the students and teachers and how they work together in using the pillars effectively. It will be about stories and how they give us glimpses into what is working and what may need improvement. “Researchers have come to understand that personal, social, and cultural experiences are constructed through the sharing of stories” (Robert Wood Johnson Foundation, 2017).
- Coding: coding allows us to organize the data that we find in research. The Impact Center for Evaluation and Research (2017) defines it this way: “One easy way to think about coding is to see it as a system to organize your data. In essence, it is a personal filing system. You place data in the code just as you would file something in a folder.”
- Content analysis
- Interactional analysis (emphasis on the dialogue)
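To make the coding idea above concrete, here is a minimal sketch of how coded excerpts could be tallied to spot trends across reflections. The codes and excerpt names are hypothetical, invented purely for illustration; they are not drawn from any actual data in this evaluation.

```python
from collections import Counter

# Hypothetical coded excerpts from student reflections: each excerpt has been
# assigned one or more codes (the "folders" in the filing-system analogy).
coded_excerpts = [
    {"source": "reflection_01", "codes": ["time_management", "ta_support"]},
    {"source": "reflection_02", "codes": ["ta_support"]},
    {"source": "reflection_03", "codes": ["seminar_load", "time_management"]},
    {"source": "reflection_04", "codes": ["ta_support", "seminar_load"]},
]

# Tally how often each code appears, to surface trends across the data set.
code_counts = Counter(
    code for excerpt in coded_excerpts for code in excerpt["codes"]
)

# List codes from most to least frequent.
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

In practice the tallying would be done in qualitative analysis software rather than by hand, but the principle is the same: once excerpts are filed under codes, frequencies and co-occurrences make the trends visible.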
References:
CDC Evaluation Briefs. (2008). Data collection methods for program evaluation: Focus groups. Retrieved from https://www.cdc.gov/healthyyouth/evaluation/pdf/brief13.pdf
Cohen, D., & Crabtree, B. (2006). Qualitative Research Guidelines Project. Retrieved from http://www.qualres.org/HomeSemi-3629.html
The Impact Center for Evaluation and Research. (2017). Retrieved from http://programeval.ucdavis.edu/documents/Tips_Tools_18_2012.pdf
Robert Wood Johnson Foundation. (2017). Qualitative Research Guidelines Project: Narrative analysis. Retrieved from http://www.qualres.org/HomeNarr-3823.html
Ross, J. (2014). Participatory observation. UX Matters. Retrieved from https://www.uxmatters.com/mt/archives/2014/01/participatory-observation.php
STEP SIX Reporting Strategies and Methods for Enhancing Evaluation Use
“Use” of the evaluation is the crucial link in making sure that all the valuable work uncovered by the community is carried through to completion and beyond. Research shows that there are a variety of evaluation uses, including instrumental use, conceptual use, and symbolic use (Weiss, 1998).
The evaluation considered in this context may derive much of its value through process use: it will add value during the process, throughout the process, and ideally after the process has officially concluded. Weiss refers to this as a shift in culture, thinking, and action: “When evaluation adds to the accumulation of knowledge, it can contribute to large scale shifts in thinking, and sometimes, ultimately, to shifts in action” (Weiss, 1998).
​
The intention of evaluation is to improve programs and to inform stakeholders during the process of discovery. Any essential understanding or layer that is peeled away during the evaluation has the potential to add to the collaborative thinking of the group. That alone may be enough to justify the process of evaluation.
​
It is also important for stakeholders to recognize that the final summative report will add to the use of the evaluation only if it meets the needs of the closest stakeholders (BetterEvaluation.org, 2017). The rollout must therefore appeal to all stakeholders: students as well as teachers.
​
According to BetterEvaluation.org, reporting should:
- Identify reporting requirements: report to students and teachers in a way that is meaningful and relevant for each stakeholder group.
- Develop reporting media: a succinct report using visuals will be used to gain relevance with each stakeholder group.
- Ensure accessibility: it might be great to do a “see” and “say” campaign that appeals to a variety of audiences.
- Develop recommendations: a “Top 3” and “Future 3” set of recommendations.
- Support use: we have gained through the process, and now we can gain more through the findings. Continue to monitor how the TA Role and Independent Study are working; have distinct checkpoints during the school year; include a question in the school-wide annual survey; and invite other schools to share a PD session on “how to make programs better through evaluation,” allowing our group to share the evaluation experience.
Postcard launch to announce the findings: put these out for students to pick up and wonder what Version 2.0 is.
​
Show this video to ignite interest in the upcoming results...
The final reporting medium will be...
Resources:
BetterEvaluation. (2017). Report and support use. Retrieved from http://www.betterevaluation.org/en/plan/reportandsupportuse
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21-33.
Step 7
Commitment to Standards of Practice – The Standards for Program Evaluation
As teachers, we are called to carry out a professional code of conduct in our duties as adults in a position of trust. Similarly, evaluation professionals are asked to perform their duties under the guidance of standards and principles. Having these standards in place assures us that the work being done is purposefully considered and applied with diligent care and consideration for the community in which it is practiced.
​
The common ground described by the American Evaluation Association's guiding principles captures their essence: “…they aspire to construct and provide the best possible information that might bear on the value of whatever is being evaluated” (American Evaluation Association, 2017).
This is a strong statement that shows me the worth of principles in action in the evaluation field. Principles provide a point from which to support the legitimacy of the practice, and they will guide the work of the evaluation process and all those in the community.
The AEA guiding principles are: systematic inquiry, competence, integrity and honesty, respect for people, and responsibility for general and public welfare (American Evaluation Association, 2017). Working with an outside evaluator who applies these principles may assure us of the integrity of the process.
​
The evaluation itself will be carried out using the Program Evaluation Standard statements created by the Joint Committee on Standards for Educational Evaluation. The committee defines five groups of standards applied to evaluation work: utility, feasibility, propriety, accuracy, and evaluation accountability (http://www.jcsee.org/program-evaluation-standards-statements).
​
In considering the standards, I feel that a few stand out if we were to move forward with this proposal and conduct an evaluation of our pillars. These include, but are not limited to, the following:
​
- U2 Attention to Stakeholders: “Evaluations should devote attention to the full range of individuals and groups invested in the program and affected by its evaluation.”
- U5 Relevant Information: “Evaluation information should serve the identified and emergent needs of stakeholders.”
- U6 Meaningful Processes and Products: “Evaluations should construct activities, descriptions, and judgments in ways that encourage participants to rediscover, reinterpret, or revise their understandings and behaviors.”
- F2 Practical Procedures: “Evaluation procedures should be practical and responsive to the way the program operates.”
- A1 Justified Conclusions and Decisions: “Evaluation conclusions and decisions should be explicitly justified in the cultures and contexts where they have consequences.”
- E1 Evaluation Documentation: “Evaluations should fully document their negotiated purposes and implemented designs, procedures, data, and outcomes.”
Full document: Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
I feel that the above standards would serve our building in a way that builds trust in the process of evaluating the pillars. Applying standards of diligence seems to make the complex simple.
​
As we move towards improvement, and measurement through data use continues to be a prominent feature in communities of practice, the use of program evaluation is likely to be utilized more. It is my hope that we might be able to conduct a participatory evaluation to dig deep into the pillars of self-directed learning and find out if they are serving the needs of the community as effectively as they could be.