
By Design...

Program Evaluation 1.0

What we know...

Step One...Select and Describe the Program Context

Identify a social program that you would like to evaluate. Describe the focus of the program, size of staff, goals, resources, community demographics, and other details important to conducting a program evaluation.

Bishop Carroll has been offering learning opportunities in student self-directed studies since 1971. The early research underpinning the program came from Dr. J. Lloyd Trump's book, "A School for Everyone." The community operates on the foundation of five pillars that guide the practice of the education staff and the students: the Teacher Advisor (TA) role, Differentiated Staffing, Continuous Progress, Independent Study, and Individualized Schedules (two are detailed below).

 

In order to understand the foundations of a program, one should also consider the Bishop Carroll Mission Statement: "Bishop Carroll High School is a unique educational Catholic community offering continuous progress in a personalized and individualized learning environment that supports, challenges and empowers students to take responsibility for learning and personal growth," and the Vision Statement: "We, the Bishop Carroll High School Catholic Community, commit to providing a student-centered educational journey which fosters authentic personal growth. We challenge each individual to be a lifelong learner, to embrace a passion for life and to share his or her gifts for the benefit of our global community."

 

Students from all over the City of Calgary choose to study at Bishop Carroll. The student population currently stands at 1,400 students, roughly 60% female and 40% male. Bishop Carroll has a professional staff of 55 and a support staff of 56.

​

In my eight years at Bishop Carroll, we have not undergone any form of evaluation of our practice or program. Throughout this evaluation course, I often found myself wondering if our pillars were still relevant to our practice, and to learning, so many years after their initial implementation in 1971. I also wondered if our Mission and Vision statements aligned with the pillars. I can't help but think: what if we were to undergo the process of evaluation to see if the pillars were serving the students? What if we could improve our outcomes in a favorable way through an evaluation?

 

This type of collaborative community building may in fact yield evaluative thinking that serves everyone beyond the evaluation process itself. Research by Tom Archibald and Guy Sharrock (2014) suggests that evaluative thinking is best applied within many contexts in communities of practice such as ours. One might even suggest that only in our community can we seek to inform, and ultimately improve, what matters to us most. Bringing the community together on a topic that always needs reflection, such as the pillars, has the potential to inform and improve. "This is formally known as evaluative thinking: the cognitive process of asking questions, explicating beliefs and assumptions, learning and reflecting, and developing new understanding to make informed decisions and prepare for action" (Lee & Chavis, 2015).

 

One might assume that in considering this evaluation there is something "wrong" with the pillars, or something in need of "solving." However, one has to go beyond that type of reactive thinking. The intention is to look at this not so much as solving a problem, but as proactively seeking out ways to inform ourselves, so that the pillars may be adjusted, modified, or even left untouched, if the process of evaluation indicates as much.

 

This evaluation must start with the end in mind and work backward iteratively to determine the worth of making changes to the pillars, as informed by the outcomes of the evaluation itself.

​

The Pillars that will undergo evaluation 

​

Independent Study

 

"All material for courses is assembled into self-study packages known as Learning Guides. These guides tell students exactly what they must learn, show them how to learn it and prepare them for testing and evaluation. Some activities that are included as part of the Learning Guides are: reading books, viewing videos, responding to questions and assignments, completing self-tests, conducting experiments and various other educational activities depending on the subject.

 

The Learning Guides also provide many opportunities for students to collaborate with each other by directing them to engage in team work, seminars and group presentations which are aimed at enhancing the student’s learning. Independent learning is supported through one-to-one consultations with teachers that are subject area specialists. They are always available and willing to meet with students to help them achieve their learning guide objectives. The ultimate goal of independent study is to foster a sense of ownership of the knowledge learned. If successful, the student will understand how to learn, which is an important skill for post-secondary education."

 

Teacher Advisor (TA) Role

 

"In Bishop Carroll’s unique learning environment students must exercise a considerable amount of responsibility in order to be successful. However, students are not alone in this responsibility. Their success is also the responsibility of their Teacher Advisor who works with them to develop an academic plan which reflects the student’s unique needs and goals.

 

Each of our students is assigned a TA who they will remain with until graduation. Students will meet with Teacher Advisors twice daily and for a longer period every week. It is the TA's duty to help the student set goals, motivate them, make sure they are keeping on track with their learning program and stay in touch with parents. The TA will also record learning progress daily and will then issue reports to parents regularly so parents can stay informed about the amount and quality of their child's work.

 

Because each student remains with the same Teacher Advisor for their entire high school career, this fosters a strong and productive working relationship between the two.

 

A TA will not only motivate and support your child, but they will also become a mentor and someone who genuinely cares about your child’s education and future."


What is the problem? What do we want to know?

The evaluation should focus on whether the two key pillars, Independent Study and the TA Role, are effectively serving the needs of learners. What is the contribution of Independent Study and the TA Role toward preparing self-directed learners? To what extent are these pillars effective in serving learners' needs?
​

Step Two...Identify Purpose for Evaluation and Specify Evaluation Questions.

​

The purpose of the evaluation is to see how effectively the two focus pillars are serving the needs of the self-directed learner. 

​

Questions to address:

  • To what extent are the definitions of the pillars relevant to today's learner?

  • To what extent could the wording be changed or adjusted to meet the needs of today's student?

  • To what extent is the time spent with a TA correlated with the success of a self-directed student?

  • Given an independent study model, what is the right number of seminars to offer?

  • How do different TAs motivate students toward success?

  • What is the minimum standard of care being applied in TA practice?

  • To what extent can we be sure that TA care standards are being met?

Step Three...Construct a Program Theory

Building on your learning in Module 2, construct a program theory: theory of action and theory of change. Select a format that best communicates the goals and orientation of the program being evaluated.


Step Four...Identify, Describe, and Rationalize Your Evaluation Approach

​

Identify

​

Theoretical Roots and Evaluation Approach

Evaluation should be a deliberate mixture of theory and practice if it is to deliver what it is designed to do: improve programs and outcomes for stakeholders. It may provide the common knowledge base that practitioners need. Intentional focus and direction coming from theory provide the necessary starting point for discourse about evaluation in community. Theory has been shown to be central to the practice of evaluation, so one might suggest that its role is non-negotiable in a partnership built on theory and practice working in unison. One might even question the effectiveness of one without the other.

 

Describe and Rationalize

​

Theory to be used in this evaluation application: Participatory Evaluation Approach.

I like the participatory approach because of its community component. All stakeholders are considered, and all are active in the process at some point along the way.

Work on this approach by Cousins and Earl (1995) suggests a strong motivation for using it at a school level such as the one I am considering: it "…provides general guidelines or principles for collecting information of better quality than is typically available for school level decision making: it also offers means of ensuring greater use of that information than is typically the case with information provided through conventional forms of research and evaluation."

​

The approach focuses on whether the program is being implemented according to plan, whether it has its desired effect, and whether it is achieving desired outcomes. It also considers program nuances, is flexible, increases the validity of results, develops a culture of inquiry, allows ongoing monitoring, and builds staff buy-in. Collecting feedback from those closest to the evaluation may bring better outcomes, as the approach is proactive in design, non-linear, and builds a solid working partnership between evaluation experts and the school community.

​

Guijt (2014) sees participatory evaluation leading to better overall results, as stakeholders who are involved may collect better data, understand it better, and make better recommendations from it.

I like that it is focused on improvement as the priority. As staff are part of the process, they better understand the purpose of the evaluation, and the external evaluators will better see the "blind spots" that internal staff have about the functioning of the pillars.

I also appreciate that the approach makes it very clear what questions need to be asked of the community to move closer to the target of improvement sooner.

 

Guijt (2014) suggests that we ask three questions prior to beginning an approach:

​

  1. What purpose will stakeholder participation serve in this impact evaluation?​

​

       Stakeholders are the end users of the suggested improvements; if agency is shifted to them, it may result in the lasting application and longevity of the suggested improvements.

​

    2. Whose participation matters, when and why?

​

       We need to get buy-in from all stakeholders and make clear their time, place, and specific role throughout the process.

​

    3.  When is participation feasible?

​

The evaluator will oversee gauging when participation is feasible. This means they will have to maintain an ongoing relationship with all stakeholders; judgments about when participation is feasible will come through consultation.

​

​

It is also necessary to look at the possible pros and cons of the approach, and BetterEvaluation.org has a concise list that points out the potential "wins" and "losses" of such an approach:

 

Advantages of doing participatory evaluation

  • Identify locally relevant evaluation questions

  • Improve accuracy and relevance of reports

  • Establish and explain causality

  • Improve program performance

  • Empower participants

  • Build capacity

  • Develop leaders and build teams

  • Sustain organizational learning and growth

​

​

Challenges in implementing and using participatory evaluation

  • Time and commitment

  • Resources

  • Conflicts between approaches

  • Unclear purpose of participation, or a purpose that is not aligned with evaluation design

  • Lack of facilitation skills

  • Only focusing on participation in one aspect of the evaluation process, e.g. data collection

  • Lack of cultural and contextual understanding, and the implications of these for the evaluation design

 

 

 

 

Sources:

Better Evaluation. Participatory Evaluation Approach. Retrieved from: http://www.betterevaluation.org/en/plan/approach/participatory_evaluation

​

Cousins, J. B., & Earl, L. M. (1995). Participatory evaluation in education: Studies in evaluation use and organizational learning. London: Falmer Press.

​

Guijt, I. (2014). Participatory Approaches, Methodological Briefs: Impact Evaluation 5, UNICEF Office of Research, Florence. Retrieved from: http://devinfolive.info/impact_evaluation/img/downloads/Participatory_Approaches_ENG.pdf

​

Step Five

Data Collection Justification

​

  • Semi-structured interviews- These can be time consuming but are an informative way to get value from open-ended question responses. Questions can be prepared well in advance, and interviewees have the freedom to answer openly. The information gathered usually provides reliable and comparable qualitative data (Cohen & Crabtree, 2006).

  • Participatory listening and observation- Because we are looking at a group that already uses the program under observation, the research backing this type of application is documented in work by UX Matters' Jim Ross: "It's ideal for service design, process redesign… It's especially useful in learning about groups of employees, their activities, the systems they use, and the services they perform" (Ross, 2014).

  • Student reflection- Students, as the primary users of the pillars, can provide us with insight into their practice as well as ours. We can collect the reflections and note the responses in a way that spots trends in the data, along with similarities and differences across reflections.

  • Surveys- These can be administered face to face and electronically over the period of the evaluation (a minimal tally sketch follows this list).

  • Daily activity diagrams- These could be done as mind maps or flow charts by community members to indicate what the long-range, medium-range, and short-range goals and plans of a student, TA, and teacher look like.

  • Focus groups- These may provide insight into the evaluation which cannot be obtained through individual data collection. There are pros and cons to this method: focus groups can be quick and easy to use, but they can be open to bias and can be dominated by strong table members (CDC Evaluation Briefs, 2008).
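To make the survey bullet above concrete: once face-to-face and electronic responses are pooled, a simple tally can surface early trends before deeper analysis begins. The following Python sketch is illustrative only; the field names, rating scale, and sample responses are invented assumptions, not part of any actual instrument.

```python
# Minimal sketch: pooling paper and electronic survey responses into one
# tally per pillar. All field names, ratings, and responses below are
# invented for illustration; they are not real evaluation data.
from collections import Counter

paper_responses = [
    {"pillar": "TA Role", "rating": "agree"},
    {"pillar": "Independent Study", "rating": "strongly agree"},
]
electronic_responses = [
    {"pillar": "TA Role", "rating": "neutral"},
    {"pillar": "Independent Study", "rating": "agree"},
]

# Merge both collection channels, then count ratings for each pillar.
tally = {}
for response in paper_responses + electronic_responses:
    tally.setdefault(response["pillar"], Counter())[response["rating"]] += 1

for pillar, counts in tally.items():
    print(pillar, dict(counts))
```

A tally like this is only a starting point for spotting trends; the qualitative methods above supply the depth behind the numbers.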

 

Data Analysis Justification

​

  • Narrative analysis- This evaluation is about the story of the students and the teachers and how they work together in using the pillars effectively. It will be about stories and how those stories give us glimpses into what is working and what may need improvement: "researchers have come to understand that personal, social, and cultural experiences are constructed through the sharing of stories" (Robert Wood Johnson Foundation, 2017).

  • Coding- Coding allows us to organize the data that we find in research. The Impact Center for Evaluation and Research (2017) defines it this way: "One easy way to think about coding is to see it as a system to organize your data. In essence, it is a personal filing system. You place data in the code just as you would file something in a folder" (see the sketch after this list).

  • Content analysis

  • Interactional analysis (emphasis on the dialogue)
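The "filing system" definition of coding cited above maps almost directly onto a simple data structure. Here is a minimal sketch in Python of that idea; the codes and excerpts are invented examples rather than real student data.

```python
# Minimal sketch of coding as a personal filing system: each code is a
# folder, and excerpts of qualitative data are filed under it. The codes
# and excerpts below are invented examples, not real student data.
from collections import defaultdict

codes = defaultdict(list)  # code -> list of filed excerpts

def file_excerpt(code, excerpt):
    """File a piece of qualitative data under its code, like a folder."""
    codes[code].append(excerpt)

file_excerpt("TA support", "My TA helped me set weekly goals.")
file_excerpt("TA support", "Daily check-ins keep me on track.")
file_excerpt("seminar demand", "I wish there were more chemistry seminars.")

# Excerpt counts per code hint at trends worth a closer qualitative look.
for code, excerpts in codes.items():
    print(f"{code}: {len(excerpts)} excerpt(s)")
```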

 

References:

 

CDC Evaluation Briefs (2008). Data Collection Methods for Program Evaluation: Focus Groups. Located at: https://www.cdc.gov/healthyyouth/evaluation/pdf/brief13.pdf

​

Cohen, D., & Crabtree, B. (2006). Qualitative Research Guidelines Project. Located at: http://www.qualres.org/HomeSemi-3629.html

 

The Impact Center for Evaluation and Research (2017). Located at: http://programeval.ucdavis.edu/documents/Tips_Tools_18_2012.pdf

 

Robert Wood Johnson Foundation (2017). Qualitative Research Guidelines Project: Narrative Analysis. Located at: http://www.qualres.org/HomeNarr-3823.html

 

Ross, J. (2014). Participatory Observation: Practical Usability. Moving toward a more usable world. Located at: https://www.uxmatters.com/mt/archives/2014/01/participatory-observation.php

Step Six...Reporting Strategies and Methods for Enhancing Evaluation Use

"Use" of the evaluation is the crucial link in making sure that all the valuable work uncovered by the community is carried through to completion and beyond. Research shows that there are a variety of evaluation uses, including instrumental use, conceptual use, and symbolic use (Weiss, 1998).

 

The type of evaluation considered in this context may derive its value through process use. It will add value during the process, throughout the process, and ideally after the process has officially concluded. Weiss refers to this as a shift in culture, thinking, and action: "When evaluation adds to the accumulation of knowledge, it can contribute to large scale shifts in thinking- and sometimes, ultimately, to shifts in action" (Weiss, 1998).

​

The intention of evaluation is to improve programs and to inform stakeholders during the process of discovery. Any essential understanding or layer that is peeled away during the evaluation has the potential to add to the collaborative thinking of the group. That alone may be enough justification to warrant the process of evaluation.

​

It is also important that stakeholders recognize that the final summative reporting process will add to the use of the evaluation only if it meets the needs of the closest stakeholders (Betterevaluation.org, 2017). Having said that, the rollout must appeal to all stakeholders: we have to reach the students as well as the teachers.

​

According to betterevaluation.org,

  1. Identify Reporting Requirements- Make sure to report to students and teachers in a way that is meaningful and relevant for each stakeholder group.

  2. Develop Reporting Media- A succinct report using visuals will be used to gain relevancy with each stakeholder group.

  3. Ensure Accessibility- It might be great to do a "see" and "say" campaign that appeals to a variety of audiences.

  4. Develop Recommendations- Offer a Top 3 and a Future 3 set of recommendations.

  5. Support Use- We have gained through the process; now let's gain more through the findings. Continue to monitor how the TA Role and Independent Study are working, set distinct checkpoints during the school year, include a question in the school-wide annual survey, and invite other schools to share a PD session on "how to make programs better through evaluation," allowing our group to share the evaluation experience.

 

 

Postcard launch to announce the findings: put these out for the students to pick up and wonder what V 2.0 is.

​

Show this video to ignite interest in the upcoming results...

The final reporting medium will be...

Resources:

Better Evaluation (2017). Report and Support Use. Located at: http://www.betterevaluation.org/en/plan/reportandsupportuse

Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21-33.

Step Seven

Commitment to Standards of Practice – The Standards for Program Evaluation

As teachers, we are called to carry out a professional code of conduct in our duties as adults in a position of trust. Similarly, evaluation professionals are asked to perform their duties under the guidance of standards and principles. Having these standards in place assures us that the work being done is purposefully considered and applied with diligent care and consideration for the community in which it is practiced.

​

The common ground that the American Evaluation Association's guiding principles describe captures the essence of principles in evaluation: "…they aspire to construct and provide the best possible information that might bear on the value of whatever is being evaluated" (American Evaluation Association Guiding Principles for Evaluators, 2017).

 

This is a very strong statement that shows me the worth of principles in action in the evaluation field. Principles provide a point from which to support the legitimacy of the practice, and they will guide the work of the evaluation process and all those in the community.

 

The AEA guiding principles include systematic inquiry, competence, integrity and honesty, respect for people, and responsibility for general and public welfare (American Evaluation Association Guiding Principles for Evaluators, 2017). Working with an outside evaluator who applies these principles may assure us of the integrity of the process.

​

The evaluation itself will be carried out using the Program Evaluation Standard Statements created by the Joint Committee on Standards for Educational Evaluation. The committee defines five categories of standards that are applied to evaluation work: utility standards, feasibility standards, propriety standards, accuracy standards, and evaluation accountability standards (http://www.jcsee.org/program-evaluation-standards-statements).

​

In considering the standards, I feel that a few stand out if we were to move forward with this proposal and conduct an evaluation of our pillars. These include, but are not limited to, the following:

​

  • U2 Attention to Stakeholders- “Evaluations should devote attention to the full range of individuals and groups invested in the program and affected by its evaluation.”

  • U5 Relevant Information- “Evaluation information should serve the identified and emergent needs of stakeholders.”

  • U6 Meaningful Processes and Products- “Evaluations should construct activities, descriptions, and judgments in ways that encourage participants to rediscover, reinterpret, or revise their understandings and behaviors.”

  • F2 Practical Procedures- “Evaluation procedures should be practical and responsive to the way the program operates.”

  • A1 Justified Conclusions and Decisions- “Evaluation conclusions and decisions should be explicitly justified in the cultures and contexts where they have consequences.”

  • E1 Evaluation Documentation- "Evaluations should fully document their negotiated purposes and implemented designs, procedures, data, and outcomes." Full document: Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.

 

I feel that the above standards would serve our building in a way that builds trust in the process of evaluating the pillars. Applying standards of diligence to the process seems to make the complex simple.

​

As we move toward improvement, and as measurement through data use continues to be a prominent feature in communities of practice, program evaluation is likely to be utilized more. It is my hope that we might conduct a participatory evaluation to dig deep into the pillars of self-directed learning and find out if they are serving the needs of the community as effectively as they could be.
