To the contents of the journal issue: Vestnik KASU No. 2 - 2012
Author: Nesterenko Anna Evgenyevna
In the sphere of program evaluation there are several methods: needs
assessment, accreditation, cost-benefit analysis, effectiveness, efficiency,
and others. One of these types is goal-based evaluation. The aim of goal-based
evaluation is to investigate whether the project has achieved its goals. This
question is posed at the end of the project process, often within the context
of a summative evaluation.
There are as many goals as there are projects, so let me provide some general
guidelines:
1. Formulate clear goals for your project.
Only if goals are clearly defined, without ambiguity, can their achievement
be verified.
2. Link to program goals.
If your project is promoted within a higher-level program, consider the
program goal to which your project contributes. The goals of your project
should contribute as “interim goals” to achieving the “overall goals”
defined at the program level.
3. How do you want to achieve your goal?
Consider which measures of your project contribute to achieving which goals.
4. How will you recognize whether you have achieved the goal?
Consider what suitable or observable indicators of goal achievement are.
5. How will you provide the necessary data?
Consider which method is suitable for providing the necessary information.
Instrument selection: in most cases, the instruments (e.g., questionnaires,
checklists) should be adapted to your project, for example by adding the name
of your course to a questionnaire, by removing those questions which are not
relevant to your context, or by merging different questionnaires into one.
“Goal-Based Evaluation” (GBE) is often contrasted with “Goal-Free
Evaluation” (GFE), as described in Michael Scriven’s book “Evaluation
Thesaurus”. Goal-Based Evaluation is just what it says: the evaluation seeks
to determine whether the stated goals (and objectives) of the program or
project have been achieved. This is the typical evaluation with which most of
us are familiar. We have a list of goals and objectives, and we design an
evaluation to see how well we did with each. I hate to think of all the times
I just rewrote the objectives as questions for a survey!
Scriven notes that the GBE approach can be flawed by false assumptions
underlying the goals, by changes in goals over time, and by inconsistencies
among them. An example of a false assumption comes from a solar energy
workshop evaluation, which showed that less than a third of the people who
attended incorporated solar energy into their homes. The evaluation had failed
to ask whether, based on the workshop information, participants had a home
that could be retrofitted. Thus, the program seemed to be more of a failure
than it was. Side effects and other consequences are seldom addressed.
Goal-Free Evaluation, according to Scriven, has the ‘purpose of finding
out what the program is actually doing without being cued to what it is trying
to do. That is, the evaluator doesn’t know the purpose of the program.’ If the
program is doing what it is supposed to be doing, according to Scriven, ‘then
these achievements should show up (in observation of process and interviews
with consumers, not staff).’
Scriven says ‘that evaluators who do not know what the program is supposed
to be doing look more thoroughly for what it is doing.’ Of course, this makes
it a challenge for program staff to conduct the evaluation in a goal-free
manner.
However, I think that you, as program folks, can do GFE. It is a matter of
how you ask the questions. For example, you can ask, “Since (date), what
changes, if any, have you made to your farming practices?” Then, follow up
with, “What prompted you to make that change (or changes)?” Hopefully, the
response will be your program. And if the reply is “my neighbor”, or “a
magazine”, or “another program”, well, you will have learned something useful.
Contrast those questions to: “As a result of participating in ‘my program’,
did you make the following changes: xxxxx, yyy, zzzz…?”
While GBE will continue as the main direction in most evaluations, see if
you can find ways to ask goal-free questions over the course of your project
or program.
An agency received funding to conduct family day care training for mothers
receiving public assistance and living in public housing. No one had checked
to see whether family day care as a business was allowed in public housing.
The evaluation showed that none of the participants who completed the
extensive training started a family day care business (the stated goal).
Because the evaluation also asked what happened as a result of the training,
it was discovered that two-thirds of the participants had found work in child
care settings, and all said that their parenting skills were improving,
neither of which were stated goals.
As for the preparation for designing your training plan, the purpose of
the design phase is to identify the learning objectives that together will
achieve the overall goals identified during the needs assessment phase of
systematic training design. You will also identify the learning activities (or
methods) you will need to conduct to achieve your learning objectives and
overall goals.
Also, note that there is a document, Complete Guidelines to Design Your
Training Plan, that condenses the guidelines from the various topics about
training plans to guide you in developing a training plan. That document also
provides a Framework to design your training plan that you can use to document
the various aspects of your plan.
Designing your learning objectives. Learning objectives specify the new
knowledge, skills and abilities that a learner should acquire from undertaking
a learning experience, such as a course, webinar, self-study or group
activity. Achievement of all of the learning objectives should result in
accomplishing all of the overall training goals of the training and
development plan.
The following depicts how learning objectives are associated with the
training goals (identified during the needs assessment phase), learning
methods/activities, and evidence of learning and evaluation activities.
Training goal: the overall results or capabilities you hope to attain by
implementing your training plan, e.g., pass a supervisor qualification test:
1) exhibit required skills in problem solving and decision making;
2) exhibit required skills in delegation.
Learning methods/activities: what you will do in order to achieve the
learning objectives, e.g.:
1. Complete a course in basic supervision.
2. Address a major problem that includes making major decisions.
3. Delegate to a certain employee for one month.
Documentation/evidence of learning: evidence produced during your
learning activities -- these are results that someone can see, hear, feel,
read, or smell, e.g.:
1. Course grade.
2. Your written evaluation of your problem solving and decision making.
Evaluation: assessment and judgment of the quality of the evidence in order
to conclude whether you achieved the learning objectives or not.
To help learners understand how to design learning objectives, the
following examples are offered to convey the nature of learning objectives.
The examples are not meant to be adopted word-for-word as learning objectives.
Trainers and/or learners should design their own learning objectives to meet
their overall training goals and to match their preferred strategies for
learning. The topic of each learning objective is shown in bold italics, and
the learning objectives are numbered directly below it.
Goals-based evaluation is a method used to determine the actual outcome of
a project when compared to the goals of the original plan. Performing a
goals-based evaluation helps a company to further develop successful processes
and either discard or reconfigure unsuccessful ones. Certain observations
used to gauge a project in a goals-based evaluation can help improve the
efficiency of a small business.
An understanding of how the goals were established for a particular
project is an important part of a goals-based evaluation. Project goals need to
be grounded in research and use historical data to be effective as performance
measuring tools. For example, a sales goal for an upcoming marketing project
may have used two years of historical data. The goals-based evaluation of the
project may determine that four years of historical data is a better way to
create sales projections.
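The idea of grounding a goal in historical data can be illustrated with a minimal sketch. All figures, and the simple linear-trend method, are invented for this example; real projections would use richer models:

```python
# Hypothetical sketch: project next year's sales from the average
# year-over-year change in historical annual sales.
# All figures are invented for illustration.

def project_next(sales):
    """Extrapolate one step ahead from the mean year-over-year change."""
    changes = [b - a for a, b in zip(sales, sales[1:])]
    avg_change = sum(changes) / len(changes)
    return sales[-1] + avg_change

two_years = [100, 110]           # goal grounded in two years of history
four_years = [90, 95, 100, 110]  # goal grounded in four years of history

print(project_next(two_years))   # the two projections differ, which is
print(project_next(four_years))  # what the evaluation would reveal
```

Comparing the goal implied by two years of data with the goal implied by four years is exactly the kind of check a goals-based evaluation can make about how the goals themselves were established.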
Part of planning a project is establishing a timeline for achieving goals.
The timeline includes milestones that are used as points where the actual data
is compared to projections. One of the observations made by a goals-based
evaluation is whether the timeline was appropriate for the project and if the
milestones were placed effectively. For example, data may indicate that a
marketing promotion that ran for six months saw a decline in revenue after only
three months. This data is used to determine the schedule for future projects.
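The milestone check described above can be sketched as a simple comparison of actuals against projections. The tolerance threshold, revenue figures, and month numbers are hypothetical, invented for this illustration:

```python
# Hypothetical sketch: flag milestones where actual revenue falls more
# than a chosen tolerance below the projection. All figures are invented.

def evaluate_milestones(projected, actual, tolerance=0.10):
    """Return (month, status) pairs; 'behind' means the shortfall
    against the projection exceeds the tolerance (10% by default)."""
    report = []
    for month in sorted(projected):
        shortfall = (projected[month] - actual[month]) / projected[month]
        report.append((month, "on track" if shortfall <= tolerance else "behind"))
    return report

# A six-month promotion whose revenue declined after month three:
projected = {m: 100 for m in range(1, 7)}
actual = {1: 105, 2: 98, 3: 95, 4: 70, 5: 60, 6: 55}

for month, status in evaluate_milestones(projected, actual):
    print(month, status)
```

Seeing "behind" from month four onward is the signal that a shorter promotion, or differently placed milestones, should be considered for future projects.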
Projects are developed based on the list of priorities that will help to
achieve the final goal. A goals-based evaluation will indicate if those
priorities were correct, or if any changes need to be made for future projects.
For example, a marketing plan may indicate that advertising should be designed
before contacting media outlets for pricing. But the goals-based evaluation of
the project indicates that advertisers can offer a variety of prices that can
save the company money. The advertising should, therefore, be developed after
discussing pricing options with advertisers.
To make a long story short, programs are often established to meet one or
more specific goals. These goals are often described in the original program
plans.
Goal-based evaluations assess the extent to which programs are meeting
predetermined goals or objectives. Questions to ask yourself when designing an
evaluation to see if you reached your goals are:
1. How were the program goals (and objectives, if applicable) established?
Was the process effective?
2. What is the status of the program's progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the
program implementation or operations plan? If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities,
training, etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the
goals? (Depending on the context, this question might be viewed as a program
management decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes
- know why efforts are behind schedule before timelines are changed)?
7. How should goals be changed (be careful about making these changes -
know why efforts are not achieving the goals before changing the goals)? Should
any goals be added or removed? Why?
8. How should goals be established in the future?
The overall goal in selecting evaluation method(s) is to get the most
useful information to key decision makers in the most cost-effective and
realistic fashion. Consider the following questions:
1. What information is needed to make current decisions about a product or
program?
2. Of this information, how much can be collected and analyzed in a
low-cost and practical manner, e.g., using questionnaires, surveys and
checklists?
3. How accurate will the information be (consider the advantages and
disadvantages of each method)?
4. Will the methods get all of the needed information?
5. What additional methods should and could be used if additional
information is needed?
6. Will the information appear as credible to decision makers, e.g., to
funders or top management?
7. Will the nature of the audience conform to the methods, e.g., will they
fill out questionnaires carefully, engage in interviews or focus groups, let
you examine their documentation, etc.?
8. Who can administer the methods now or is training required?
9. How can the information be analyzed?
Note that, ideally, the evaluator uses a combination of methods, for
example, a questionnaire to quickly collect a great deal of information from a
lot of people, and then interviews to get more in-depth information from
certain respondents to the questionnaires. Perhaps case studies could then be
used for more in-depth analysis of unique and notable cases, e.g., those who
benefited or not from the program, those who quit the program.
There are four levels of evaluation information that can be gathered from
clients, including getting their:
1. Reactions and feelings (feelings are often poor indicators that your
service made lasting impact);
2. Learning (enhanced attitudes, perceptions or knowledge);
3. Changes in skills (applied the learning to enhance behaviors);
4. Effectiveness (improved performance because of enhanced behaviors).
Usually, the farther down the list your evaluation information gets, the
more useful your evaluation is. Unfortunately, it is quite difficult to
reliably get information about effectiveness. Still, information about
learning and skills is quite useful.
Ideally, management decides what the evaluation goals should be. Then an
evaluation expert helps the organization to determine what the evaluation
methods should be, and how the resulting data will be analyzed and reported
back to the organization. Most organizations do not have the resources to carry
out the ideal evaluation.
Still, they can do the 20% of effort needed to generate 80% of what they
need to know to make a decision about a program. If they can afford any outside
help at all, it should be for identifying the appropriate evaluation methods
and how the data can be collected. The organization might find a less expensive
resource to apply the methods, e.g., conduct interviews, send out and analyze results
of questionnaires, etc.
If no outside help can be obtained, the organization can still learn a
great deal by applying the methods and analyzing results themselves. However,
there is a strong chance that data about the strengths and weaknesses of a
program will not be interpreted fairly if the data are analyzed by the people
responsible for ensuring the program is a good one. Program managers will be
"policing" themselves. This caution is not to fault program managers,
but to recognize the strong biases inherent in trying to objectively look at
and publicly (at least within the organization) report about their programs.
Therefore, if at all possible, have someone other than the program managers
look at and determine evaluation results.
Develop an evaluation plan to ensure your program evaluations are carried
out efficiently in the future. Note that bankers or funders may want or benefit
from a copy of this plan. Ensure your evaluation plan is documented so you can
regularly and efficiently carry out your evaluation activities. Record enough
information in the plan so that someone outside of the organization can
understand what you're evaluating and how. Consider the following format for
your evaluation plan:
1. Title Page (name of the organization that is being, or has a
product/service/program that is being, evaluated; date)
2. Table of Contents
3. Executive Summary (one-page, concise overview of findings and recommendations)
4. Purpose of the Report (what type of evaluation(s) was conducted, what
decisions are being aided by the findings of the evaluation, who is making the
decisions, etc.)
5. Background About Organization and Product/Service/Program that is Being
Evaluated
a) Organization Description/History
b) Product/Service/Program Description (that is being evaluated)
I) Problem Statement (in the case of nonprofits, description of the
community need that is being met by the product/service/program)
II) Overall Goal(s) of Product/Service/Program
III) Outcomes (or client/customer impacts) and Performance Measures (that
can be measured as indicators toward the outcomes)
IV) Activities/Technologies of the Product/Service/Program (general description
of how the product/service/program is developed and delivered)
V) Staffing (description of the number of personnel and roles in the
organization that are relevant to developing and delivering the
product/service/program)
6) Overall Evaluation Goals (e.g., what questions are being answered by the
evaluation)
7) Methodology
a) Types of data/information that were collected
b) How data/information were collected (what instruments were used, etc.)
c) How data/information were analyzed
d) Limitations of the evaluation (e.g., cautions about findings/conclusions
and how to use the findings/conclusions, etc.)
8) Interpretations and Conclusions (from analysis of the data/information)
9) Recommendations (regarding the decisions that must be made about the
product/service/program)
Appendices: content of the appendices depends on the goals of the
evaluation report, e.g.:
a) Instruments used to collect data/information
b) Data, e.g., in tabular format, etc.
c) Testimonials, comments made by users of the product/service/program
d) Case studies of users of the product/service/program
e) Any related literature
Here are some tips to keep in mind when evaluating a program:
1. Don't balk at evaluation because it seems far too
"scientific." It's not. Usually the first 20% of effort will generate
the first 80% of the plan, and this is far better than nothing.
2. There is no "perfect" evaluation design. Don't worry about
the plan being perfect. It's far more important to do something, than to wait
until every last detail has been tested.
3. Work hard to include some interviews in your evaluation methods.
Questionnaires don't capture "the story," and the story is usually
the most powerful depiction of the benefits of your services.
4. Don't interview just the successes. You'll learn a great deal about the
program by understanding its failures, dropouts, etc.
5. Don't throw away evaluation results once a report has been generated.
Results don't take up much room, and they can provide precious information
later when trying to understand changes in the program.
As a professional, you may choose any type of program evaluation, but the
main point of any program is to achieve its goals. The article and the
information about the goal-based type of program evaluation can therefore help
people to improve the benefit of every program.