Concepts in Project Management
Name
MSPM 6102 - Practices in Project Management
Walden University
2022
EXPLAIN THE FOLLOWING CONCEPTS
i) Internal evaluation
Internal evaluation is a process of quality review undertaken within an institution for its own
ends (with or without the involvement of external peers).
Internal review is something an institution does for its own purposes. From an external agency
perspective, internal review is seen as the part of the external process that an institution
undertakes in preparation for an external event, such as a peer review or site visit. In such
circumstances, internal review tends to be conflated with self-evaluation.
By contrast, an internal evaluator will need to rely on standards such as ‘professional
competence, objectivity, and clarity of presentation’ (Braskamp et al. 1987). In most cases, these
will be sufficient and a transparent methodology will allow the results to speak for themselves.
Some argue that an internal evaluator is better placed to understand the environment and prepare
findings in the style most likely to be used (Shapiro & Blackwell 1987). An internal evaluator
can also build credibility over time (Cummings et al. 1988) and prepare the ground for
acceptance and utilization of evaluation results (Gunn 1987; Love 1991). Because internal
evaluators know the ‘nuances’ of the organization, they are able to see ways that the evaluation
can make a difference and promote the use of evaluation findings over the longer term (Weiss
1972).
The ability to communicate relevant and timely information is a form of credibility that may be
as much of an advantage for an internal evaluator as the ‘objectivity’ of external evaluators
(Love 1991). However, many authors believe that external evaluators can make up for this gap
by working closely with stakeholders in a participative mode (Posavac & Carey 1997).
ii) External evaluation
External evaluation, by contrast, is a quality review carried out by evaluators from outside the
institution being assessed. Even so, external evaluators are often under the same pressures to
give favorable assessments as internal evaluators (Weiss 1972). Many of the same issues of
manipulation and threat arise, such as when an external evaluator hopes for additional work or
fears that payment might be withheld (Mathison 1994). Generally, external consultants have less
security and legal protection than staff. In addition, if an external evaluator takes a highly
participative approach to the evaluation, it may become difficult to criticize the people whom the
evaluator has taken such pains to cultivate (Scriven 1997; Patton 1997). Social pressures on
external evaluators can be as intense as those on internal staff (Weiss 1972). This should not be a
major factor in most organizations’ decisions about which type of evaluator to employ.
Another argument for external evaluators is that, as new unbiased actors, they may be more
forthright about their recommendations, refusing to be ‘buffaloed’ and daring to ‘scare the hell
out of people’ (Braskamp et al. 1987). Weiss (1972) notes that external evaluators can often raise
issues that would be uncomfortable for an internal evaluator to raise.
Finally, the availability of an internal evaluator could be seen as an investment that an
organization makes in ‘an enduring corporate resource’ that is useful in a number of contexts
(Love 1991).
Whether this is a reasonable investment will depend on the size of the organization and its likely
future evaluation needs. The argument for building staff capacity may be strong in some cases
but makes little sense for an organization that will conduct only one evaluation per decade; as
with the other considerations, this factor depends on the particular context of each case.
Both internal and external evaluators face a number of ethical issues. Internal evaluators
arguably have to deal with starker cases of divided loyalty and pressure to suppress negative
results (Love 1991); however, external evaluators face similar issues.
References
Cummings, OW, Nowakowski, JR, Schwandt, TA, Eichelberger, RT, Kleist, KC, Larson, CL
& Knott, TG 1988, ‘Business perspectives on internal/external evaluation’, in JA McLaughlin,
LJ Weber, RW Covert & RB Ingle (eds), Evaluation Utilization, vol. 39, pp. 59–74.
Czarnecki, MT 1999, Managing by measuring: how to improve your organization’s performance
through effective benchmarking, AMACOM, New York.
Faase, TP & Pujdak, S 1987, ‘Shared understanding of organizational culture’, in J Nowakowski
(ed.), The client perspective on evaluation: New directions for program evaluation, no. 36,
Jossey-Bass, San Francisco.
Globerson, A, Globerson, S & Frampton, J 1991, You can’t manage what you don’t measure:
control and evaluation in organizations, Gower Publications, Aldershot.
Goldberg, B & Sifonis, JG 1994, Dynamic planning: the art of managing beyond tomorrow,
Oxford University Press, New York.
Guba, EG & Lincoln, YS 1987, ‘The countenances of fourth-generation evaluation’, in DJ
Palumbo (ed.), The politics of program evaluation, Sage Publications, Newbury Park, California.
Gunn, WJ 1987, ‘Client concerns and strategies in evaluation studies’, in J Nowakowski (ed.),
The client perspective on evaluation: New directions for program evaluation, no. 36, Jossey-
Bass, San Francisco.
Hogben, D 1977, ‘Curriculum evaluation: by whom, for whom?’, paper presented at the Annual
Conference of the American Educational Research Association.
House, ER 1993, Professional evaluation: social impact and political consequences, Sage
Publications, Newbury Park, California.
Kushner, S 2000, Personalizing evaluation, Sage Publications, Thousand Oaks, California.
Love, AJ 1991, Internal evaluation: building organizations from within, Sage Publications,
Newbury Park, California.
Mathison, S 1994, ‘Rethinking the evaluator role: partnerships between organizations and
evaluators’, Evaluation and Program Planning,
vol. 17, no. 3, pp. 299–304.
Meyers, WR 1981, The evaluation enterprise: a realistic appraisal of evaluation careers, methods,
and applications, Jossey-Bass, San Francisco.
Newman, DL & Brown, RD 1996, Applied ethics for program evaluation, Sage Publications, San
Francisco.
Nowakowski, J 1987, ‘Editor’s notes’, in J Nowakowski (ed.), The client perspective on
evaluation: New directions for program evaluation, no. 36, Jossey-Bass, San Francisco.
Owen, JM with Rogers, PJ 1999, Program evaluation: forms and approaches, 2nd edn,
Allen & Unwin, St Leonards, NSW.