Referring regularly to a specific program/project, explain the different evaluation types in
monitoring and evaluation
Name
MSPM 6102 - Practices in Project Management
Walden University
2022
1.0 Introduction
Project evaluation can be defined as a systematic method for collecting, analyzing, and using
information to answer questions about projects, policies, and programs, particularly about their
effectiveness and efficiency. In both the public and private sectors, stakeholders want to know
whether the programs they are funding, implementing, voting for, receiving, or objecting to are
actually having the intended effect, and at what cost. According to Bamberger (2000), this
definition of project evaluation focuses on the question of whether the program, policy, or project
has the intended effect. Equally important, however, are questions such as how the program
could be improved, whether the program is worthwhile, whether there are better alternatives,
whether there are unintended outcomes, and whether the program goals are appropriate and
useful. Evaluators help to answer these questions, but the best way to answer them is for the
evaluation to be a joint project between evaluators and stakeholders.
There are several types of project evaluation. They include the following:
2.0 Types of Evaluation
2.1 Internal evaluation
Internal evaluation is a process of quality review undertaken within an institution for its own
ends, with or without the involvement of external peers. It is something an institution does for its
own purposes. From an external agency's perspective, internal review is the part of the external
process that an institution undertakes in preparation for an external event, such as a peer-review
site visit. In such circumstances, internal review tends to be conflated with self-evaluation.
For instance, in an electrical plant, an internal evaluator will need to rely on standards such as
'professional competence, objectivity, and clarity of presentation' (Braskamp et al. 1987). In
most cases these will be sufficient, and a transparent methodology will allow the results to speak
for themselves.
Some argue that an internal evaluator is better placed to understand the environment and prepare
findings in the style most likely to be used. An internal evaluator can also build credibility over
time (Cummings et al. 1988) and prepare the ground for acceptance and utilization of evaluation
results (Gunn 1987). Because internal evaluators in an electrical plant know the ‘nuances’ of the
organization, they are able to see ways that the evaluation can make a difference and promote the
use of evaluation findings over the longer term (Weiss 1972).
The ability to communicate relevant and timely information is a form of credibility that may be
as much of an advantage for an internal evaluator as the ‘objectivity’ of external evaluators
(Love 1991). However, many authors believe that this issue can be solved by external evaluators
working closely with stakeholders in a participative mode (Carey 1997).
2.2 External evaluation
External evaluators are often under the same pressures to give favorable assessments as internal
evaluators. Many of the same issues of manipulation and threat arise, for example where an
external evaluator hopes for additional work or where payment might be withheld (Mathison
1994). Generally, external consultants have less security and legal protection than staff. In
addition, if an external evaluator takes a highly participative approach to the evaluation, it may
become difficult to criticize the people whom the evaluator has taken such pains to cultivate
(Patton 1997). Social pressures on external evaluators can be as intense as those on internal
staff, so this should not be a major factor in most organizations' decisions about which type of
evaluator to employ.
External evaluators will be of great importance to an electrical plant because, as new and
unbiased actors, they may be more forthright in their recommendations, refusing to be
'buffaloed' and daring to 'scare the hell out of people' (Braskamp et al. 1987). External
evaluators can often raise issues that would be uncomfortable for an internal evaluator to raise.
Finally, the availability of an internal evaluator could be seen as an investment that an
organization makes in ‘an enduring corporate resource’ that is useful in a number of contexts
(Love 1991).
Whether this is a reasonable investment will depend on the size of the organization and its likely
future evaluation needs. The argument for building staff capacity may be a strong factor in some
cases but may make no sense for an organization that is only going to conduct one evaluation per
decade. This factor will depend on the particular context of each case.
2.3 Mid-term or interim evaluation
A mid-term or interim evaluation is conducted halfway through the life cycle of the program or
intervention being evaluated. It builds on monitoring, an ongoing activity aimed at assessing
whether the program or intervention is being implemented in a way that is consistent with its
design and plan and is achieving its intended results. For example, an electrical plant under
construction would be evaluated at the midpoint of the project to assess progress and the
emerging impact of the project.
2.4 Ex-post or summative evaluation
Summative evaluation is a method of judging the worth of a program at the end of the program
activities. The focus is on the outcome (Bhola 1990).
Examples
Here are some examples of summative evaluation:
i. Determining attitudes and achievement related to using a primer after it has been used in
a training course
ii. Collecting data on the impact of a program operating in a community for a period of time
An ex-post evaluation is usually conducted some time after the program or intervention has been
completed or fully implemented. Generally its purpose is to study how well the intervention
served its aims and to draw lessons for similar interventions in the future. For instance, after
completion of the electrical plant, the main stakeholders will assess the importance of the project
to the community and the risks that are likely to occur after establishing it. The results will be
used to foresee the future of the project and to implement policies that will be of importance to
society.
2.5 Meta-evaluation
Two processes are often referred to as meta-evaluation: the assessment by a third evaluator of
evaluation reports prepared by other evaluators, and the assessment of the performance of
systems and processes of evaluation. This method of assessment is important because different
people are involved in the assessment process, so the risk of bias in the project is greatly
reduced.
2.6 Formative evaluation
Formative evaluation is a method of judging the worth of a program while the program activities
are forming or happening. Formative evaluation focuses on the process (Bhola 1990).
Examples
Here are some examples of formative evaluation:
i. Testing the arrangement of training lessons for engineers in an electrical plant before
starting the installation process
ii. Collecting continuous feedback from participants in a program in order to revise the
program as needed
A formative evaluation is designed to provide early insights into a program or intervention,
informing management and staff about the components that are working and those that need to
be changed in order to achieve the intended objectives. This form of assessment can be used to
correct course and to plan future objectives for the electrical plant project.
References
Bamberger, M. (2000). The evaluation of international development programs: A view from the
front. American Journal of Evaluation, 21, 95-102.
Bamberger, M., Rugh, J., Church, M., & Fort, L. (2004). Shoestring evaluation: Designing
impact evaluations under budget, time and data constraints. American Journal of Evaluation, 25,
5-37.
Bhola, H. S. (1990). Evaluating "Literacy for Development" projects, programs and campaigns.
Hamburg: UNESCO Institute for Education.
Bulmer, M., & Warwick, D. (1993). Social research in developing countries: Surveys and
censuses in the Third World. London: Routledge.
Ebbutt, D. (1998). Evaluation of projects in the developing world: Some cultural and
methodological issues. International Journal of Educational Development, 18, 415-424.
Jacobs, F. H. (2003). Child and family program evaluation: Learning to enjoy complexity.
Applied Developmental Science, 7(2), 62-75.
Miller, D. C., & Salkind, N. J. (2002). Handbook of research design & social measurement (6th
ed.). Thousand Oaks: Sage.
Potter, C. (2006). Program evaluation. In M. Terre Blanche, K. Durrheim & D. Painter (Eds.),
Research in practice: Applied methods for the social sciences (2nd ed.) (pp. 410-428). Cape
Town: UCT Press.
Rossi, P., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.).
Thousand Oaks: Sage.
Smith, T. (1990). Policy evaluation in third world countries: Some issues and problems. The
Asian Journal of Public Administration, 12, 55-68.