To the journal issue contents: Вестник КАСУ №2 - 2012
Author: Нестеренко Анна Евгеньевна
As a future professional with a master's degree, I am deeply interested in the
field of education. To educate people or impart knowledge to them, a dedicated
program must be created. Here, however, teachers face one more complicated
problem: program evaluation. Moreover, program evaluation is widely used well
beyond the field of education. Let us look deeper into this question. The
field of program evaluation has evolved over the
past half century, moving from focusing primarily on research methods to
embracing concepts such as utilization, values, context, change, learning,
strategy, politics, and organizational dynamics. Along with this shift has come
a broader epistemological perspective and wider array of empirical methods.
These include qualitative and mixed methods, responsive case studies, participatory
and empowerment action research, and interpretive and constructivist versions
of knowledge. Note that the concept of program evaluation can include a wide
variety of methods to evaluate many aspects of programs in nonprofit or
for-profit organizations. There are numerous books and other materials that provide
in-depth analysis of evaluations, their designs, methods, combination of
methods and techniques of analysis. However, personnel do not have to be
experts in these topics to carry out a useful program evaluation. The
"20-80" rule applies here, that 20% of effort generates 80% of the
needed results. It's better to do what might turn out to be an average effort
at evaluation than to do no evaluation at all.
Many people believe evaluation is a useless activity that generates lots
of boring data with useless conclusions. This was a problem with evaluations in
the past when program evaluation methods were chosen largely on the basis of
achieving complete scientific accuracy, reliability and validity. This approach
often generated extensive data from which very carefully chosen conclusions
were drawn. Generalizations and recommendations were avoided. As a result,
evaluation reports tended to reiterate the obvious and left program
administrators disappointed and skeptical about the value of evaluation in
general. More recently (especially as a result of Michael Patton's development
of utilization-focused evaluation), evaluation has focused on utility,
relevance and practicality at least as much as scientific validity.
Many people believe that evaluation is about proving the success or
failure of a program. This myth assumes that success is implementing the
perfect program and never having to hear from employees, customers or clients
again - the program will now run itself perfectly. This doesn't happen in real
life. Success is remaining open to continuing feedback and adjusting the
program accordingly. Evaluation gives you this continuing feedback.
Many believe that evaluation is a highly unique and complex process that
occurs at a certain time in a certain way, and almost always includes the use
of outside experts. Many people believe they must completely understand terms
such as validity and reliability. They don't have to. They do have to consider
what information they need in order to make current decisions about program issues
or needs. Note that many people regularly undertake some form of program
evaluation - they just don't do it in a formal fashion, so they don't get the
most out of their efforts, or they draw conclusions that are inaccurate (some
evaluators would disagree that this is program evaluation if not done
methodically). Consequently, they miss precious opportunities to make more of a
difference for their customers and clients, or to get a bigger bang for their buck.
So What is Program Evaluation?
First, let us consider "what is a program?" Typically,
organizations work from their mission to identify several overall goals which
must be reached to accomplish their mission. In nonprofits, each of these goals
often becomes a program. Nonprofit programs are organized methods to provide
certain related services to constituents, e.g., clients, customers, patients,
etc. Programs must be evaluated to decide if the programs are indeed useful to
constituents. In a for-profit, a program is often a one-time effort to produce
a new product or line of products.
So, still, what is program evaluation? Program evaluation is carefully
collecting information about a program or some aspect of a program in order to
make necessary decisions about the program. Program evaluation can include any
or a variety of at least 35 different types of evaluation, such as needs
assessments, accreditation, cost/benefit analysis, effectiveness, efficiency,
formative, summative, goal-based, process, outcomes, etc. The type of
evaluation you undertake to improve your programs depends on what you want to
learn about the program. Don't worry about what type of evaluation you need or
are doing - worry about what you need to know to make the program decisions you
need to make, and worry about how you can accurately collect and understand
that information.
So where is program evaluation helpful? Consider some frequent reasons.
Program evaluation can:
1. Understand, verify or increase the impact of products or services on
customers or clients - These "outcomes" evaluations are increasingly
required by nonprofit funders as verification that the nonprofits are indeed
helping their constituents. Too often, service providers (for-profit or
nonprofit) rely on their own instincts and passions to conclude what their customers
or clients really need and whether the products or services are providing what
is needed. Over time, these organizations find themselves doing a lot of guessing
about what would be a good product or service, and trial and error about how
new products or services could be delivered.
2. Improve delivery mechanisms to be more efficient and less costly - Over
time, product or service delivery ends up being an inefficient collection of
activities that are more costly than they need to be. Evaluations
can identify program strengths and weaknesses to improve the program.
3. Verify that you're doing what you think you're doing - Typically, plans
about how to deliver services, end up changing substantially as those plans are
put into place. Evaluations can verify whether the program is really running as
originally planned.
Program evaluation can:
4. Prompt management to really think about what their program is all
about, including its goals, how it meets its goals, and how it will know if it
has met its goals or not.
5. Produce data or verify results that can be used for public relations
and promoting services in the community.
6. Produce valid comparisons between programs to decide which should be
retained, e.g., in the face of pending budget cuts.
7. Fully examine and describe effective programs for duplication elsewhere.
Still, evaluation has remained an essentially empirical endeavor that
emphasizes data collection and reporting and the underlying skills of research
design, measurement, and analysis. Related fields, such as organization
development (OD), differ from evaluation in their emphasis on skills like
establishing trusting and respectful relationships, communicating effectively,
diagnosis, negotiation, motivation, and change dynamics. The future of program
evaluation should include graduate education and professional training programs
that deliberately blend these two skill sets to produce a new kind of
professional: a scholar-practitioner who integrates objective reflection based
on systematic inquiry with interventions designed to improve policies and
programs (McClintock, 2004).
Narrative methods represent a form of inquiry that has promise for
integrating evaluation and organization development. Narrative methods rely on
various forms of storytelling that, with regard to linking inquiry and change
goals, have many important attributes:
Storytelling lends itself to participatory change processes because it
relies on people to make sense of their own experiences and environments.
Stories can be used to focus on particular interventions while also
reflecting on the array of contextual factors that influence outcomes.
Stories can be systematically gathered and claims verified from
independent sources or methods.
Narrative data can be analyzed using existing conceptual frameworks or
assessed for emergent themes.
Narrative methods can be integrated into ongoing organizational processes
to aid in program planning, decision making, and strategic management.
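As a loose sketch of the last two attributes, the following hypothetical example codes stories against a predefined framework of themes by simple keyword matching. The theme names and keywords are invented for illustration and are not drawn from any published coding scheme.

```python
# Minimal sketch: code each story against a predefined set of themes
# by keyword matching. Theme names and keywords are illustrative
# assumptions, not part of any actual coding framework.
THEMES = {
    "skills": ["learned", "training", "skill"],
    "profitability": ["income", "profit", "cost"],
}

def code_story(text, themes=THEMES):
    """Return the set of themes whose keywords appear in the story text."""
    text = text.lower()
    return {name for name, words in themes.items()
            if any(word in text for word in words)}

story = "After the training we cut costs and our income improved."
print(sorted(code_story(story)))  # ['profitability', 'skills']
```

In practice an evaluator would refine such a framework iteratively, letting themes emerge from stories that match no existing category.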
The following sketches describe narrative methods that have somewhat
different purposes and procedures. They share a focus on formative evaluation,
or improving the program during its evaluation, though in several instances
they can contribute to summative assessment of outcomes. For purposes of
comparison, the methods are organized into three groups: those that are
relatively structured around success, those whose themes are emergent, and
those that are linked to a theory of change.
Narratives Structured Around Success
Dart and Davies (2003) propose a method they call the most significant
change (MSC) technique and describe how it was applied to the evaluation of a
large-scale agricultural extension program in Australia. This method is highly
structured and designed to engage all levels of the system from program clients
and front-line staff to statewide decision makers and funders, as well as
university and industry partners. The MSC process involves the following steps:
Identify domains of inquiry for storytelling (e.g., changes in
decision-making skills or farm profitability).
Develop a format for data collection (e.g., story title, what happened,
when, and why the change was considered significant).
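The two steps above can be sketched as a simple story record plus a grouping step; the field names and sample stories below are illustrative assumptions, not part of the published MSC specification.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the MSC format described above: a title,
# what happened, when, and why the change was considered significant.
@dataclass
class StoryRecord:
    domain: str          # domain of inquiry, e.g. "farm profitability"
    title: str
    what_happened: str
    when: str
    why_significant: str

def group_by_domain(stories):
    """Group collected stories by domain of inquiry, so stakeholder
    panels can select the most significant change within each domain."""
    grouped = {}
    for story in stories:
        grouped.setdefault(story.domain, []).append(story)
    return grouped

# Invented sample data for illustration only.
stories = [
    StoryRecord("farm profitability", "Better margins",
                "Switched crops", "2002", "Income rose"),
    StoryRecord("decision-making skills", "New planning habit",
                "Adopted budgeting", "2002", "More confidence"),
    StoryRecord("farm profitability", "Cut costs",
                "Shared machinery", "2003", "Lower overhead"),
]
by_domain = group_by_domain(stories)
print(sorted(by_domain))                      # ['decision-making skills', 'farm profitability']
print(len(by_domain["farm profitability"]))   # 2
```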
As described by Dart and Davies (2003), one of the most important results
of MSC was that the story selection process surfaced differing values and
desired outcomes for the program. In other words, the evaluation storytelling
process was at least as important as the evaluation data in the stories. In addition,
a follow-up case study of MSC revealed that it had increased involvement and
interest in evaluation, caused participants at all levels to understand better
the program outcomes and the dynamics that influence them, and facilitated
strategic planning and resource allocation toward the most highly valued directions.
This is a good illustration of narrative method linking inquiry and OD needs. A
related narrative method, structured to gather stories about both positive and
negative outcomes, is called the success case method (Brinkerhoff, 2003). The
method has been most frequently used to evaluate staff training and related
human resource programs, although conceptually it could be applied to other
programs as well.
The purpose of the success case method is not just to evaluate the training
but to identify those aspects of the training that were critical, alone or in
interaction with other organizational factors. In this way, the stories serve
both to document outcomes and to guide management about needed
organizational changes that will accomplish broader organizational performance
goals. Kibel (1999) describes a related success story method that involves more
complex data gathering and scoring procedures and that is designed for a
broader range of human service programs.
Narratives With Emerging Themes
A different approach to narrative methods is found within qualitative case
studies (Costantino & Greene, 2003). Here, stories are used to understand
context, culture, and participants’ experiences in relation to program
activities and outcomes. As with most case studies, this method can require
site visits, review of documents, participant observation, and personal interviews. The
authors changed their original practice of summarizing stories to include
verbatim transcripts, some of which contained interwoven mini stories. In this
way they were able to portray a much richer picture of the program (itself an
intergenerational storytelling program) and of relationships among participants
and staff, and they were able to use stories as a significant part of the evaluation itself.
Nelson (1998) describes a similar approach that uses both individual and
group storytelling in evaluating youth development and risk prevention programs.
Narratives Linked to a Theory of Change
The previous uses of narrative emphasize inquiry more than OD
perspectives. Appreciative inquiry (AI) represents the opposite emphasis,
although it relies heavily on data collection and analysis (Barrett & Fry,
2002). The AI method evolved over time within the OD field as a form of inquiry
designed to identify potential for innovation and motivation in organizational
groups. AI is an attempt to move away from deficit and problem-solving
orientations common to most evaluation and OD work and move toward “peak
positive experiences” that occur within organizations. AI uses explicitly
collaborative interviewing and narrative methods in its effort to draw on the
power of social constructionism to shape the future. AI is based on social
constructionism’s concept that what you look for is what you will find, and
where you think you are going is where you will end up.
The AI approach involves several structured phases of systematic inquiry
into peak experiences and their causes, along with creative ideas about how to
sustain current valued innovations in the organizational process. Stories are
shared among stakeholders as part of the analysis and the process to plan
change. AI can include attention to problems and can blend with evaluation that
emphasizes accountability, but it is decidedly effective as a means of socially
creating provocative innovations that will sustain progress.
This brief overview shows that narrative methods hold promise for drawing more
explicit connections between the fields of program evaluation and OD. In
addition, training in the use of narrative methods is one means of integrating
the skill sets and goals of each profession to sustain and improve programs.
1. Barrett, F., & Fry, R. (2002). Appreciative inquiry in
action: The unfolding of a provocative invitation. In R. Fry, F. Barrett, J.
Seiling, & D. Whitney (Eds.), Appreciative inquiry and organizational
transformation: Reports from the field. Westport, CT: Quorum Books.
2. Brinkerhoff, R. O. (2003). The success case method: Find
out quickly what’s working and what’s not. Berrett-Koehler Publishers.
3. Costantino, R. D., & Greene, J. C. (2003). Reflections on
the use of narrative in evaluation. American Journal of Evaluation, 24(1).
4. Dart, J., & Davies, R. (2003). A dialogical, story-based
evaluation tool: The most significant change technique. American Journal of
Evaluation, 24(2), 137–155.
5. Kibel, B. M. (1999). Success stories as hard data: An
introduction to results mapping. New York: Kluwer/Plenum.