The Impact of Social Policy

Pranab Chatterjee and Diwakar Vadapalli

There is increasing recognition today that social policies and programs should be carefully evaluated to determine whether they do, in fact, meet their stated objectives. Although it has often been assumed that social policies have a positive impact, this assumption has been called into question by many critics of government social programs. This chapter discusses the ways in which the impact of social policies can be assessed. It describes the principles and techniques used in different types of evaluation. Although evaluation research has become increasingly sophisticated, values and ideologies continue to play an important role in deciding which policy approaches work best.

The Logic of Impact Analysis

Rossi and Freeman (1985, 1993) and Rossi, Lipsey, and Freeman (2004) observe that there are four phases of social policy evaluation. These are needs assessment, selection of a program to respond to needs, impact evaluation, and cost-benefit analysis. Upon outlining the four phases of evaluation, they discuss many experimental, quasi-experimental, and time-series designs that can be used for program evaluation. Mohr (1995) singled out the idea of impact evaluation and called it an attempt to isolate the direct effects of a policy (or, more precisely, a program derived from a policy) apart from any confounding environmental effects. Earlier, Suchman (1967) suggested that a program is a form of social experiment, and any evaluation of it leads to the conclusion that the program does or does not produce given social ends. Following Suchman's ideas, Riecken and Baruch (1974) listed ways of evaluating the impact of social experiments, many of which can be construed as preludes to new forms of social policy. Schalock (2001), using these contributions, defined outcome-based evaluation as evaluation that uses valued and objective person- and organization-referenced outcomes to analyze a program's effectiveness, impact, or efficiency. Suchman (1967), in his earlier work, had stated that, if any one program does not produce given social ends, one should conclude that it is a case of program failure.

However, if it seems that a substantial number of programs, all similar in nature, do not produce given ends, then it indicates a case of theory failure. In other words, the theory that generated the programs (as interventions to bring about a change) has been developed on faulty premises.

The groundwork of Rossi and Suchman on impact analysis (both of social programs and of the parent policies or theories on which they rest) is based on the assumption that quantitative analysis and multivariate design will produce knowledge about the impact of social policies and programs. Ask a typical policy analyst, academic, or program administrator about the impact of social policy and one will be provided with a sheaf of statistics supporting one perspective or another. This has come to be regarded as not just natural but the most appropriate response. On the matter of overemphasis on numbers, Zerbe (1998) has observed, "Hard numbers drive out soft" (p. 429).

Designing Impact Analysis: Some Issues

For example, an intervention designed to improve economic conditions in an urban neighborhood might well appear to be very successful, until one becomes aware that a regional upturn in the economy has occurred throughout the evaluation period and, although the neighborhood economy is much improved, it has, in fact, lagged far behind the rapid growth evident across the rest of the region. To reliably sort out program effects from environmental and other confounding influences is a daunting task. The significant achievements of impact analysis have been to spark an awareness of the need for such an analysis if program effectiveness is ever to be convincingly established and to offer a cookbook of strategies for attempting to achieve valid statistical evaluations.

Perhaps the best case for the use of quasi-experimental designs in impact analysis was made by Campbell (1969), when he proposed that the evaluation of a policy in one state, province, or country is possible by comparing the posttests in two nearly identical states, provinces, or countries, where one has experienced a policy and the other has not. However, this form of impact analysis often results in doing two case studies, which defeats the entire purpose of quasi-experimentation with valid samples and controls.

Impact analysis typically suggests a spectrum of research approaches from experimental to quasi-experimental and strongly recommends that the evaluator stick as closely to the classic controlled experiment and statistical analysis strategies as possible. Of course, it is rarely possible to approach these conditions in social experimentation and evaluation, so the main thrust of an impact analysis is on quasi-experimental strategies and somewhat less powerful statistical analyses. The notion of qualitative strategies is usually dismissed as neither rigorous nor practical enough to warrant consideration.

The product resulting from an impact analysis is a methodologically and statistically sophisticated document detailing relationships and relative levels of importance among a number of variables. In keeping with the values of science in the modern age, it is widely accepted that rigorous attention to methodological and statistical norms will produce an objective analysis of the program under study, so the resulting information may safely be used to determine the fate of a particular policy and the fates of all those stakeholders upon which it has an impact.

Information of this statistical kind has become the lingua franca of decision makers for many easily appreciated reasons. It is a manageable way to consider very large numbers, whether dollars or populations. It appears to offer a nearly irrefutable assessment, apparently devoid of bias or ideology. It is not presented as personal or emotional and is perceived as dispassionate and objective. It does indeed provide one of the most useful approximations available of valid grounds for a judgment as to the effect of a particular policy or program. And it allows for the easy flow of information from one venue to another, for example, from the budget office to the program designers to states, counties, and beyond.

The methodological and statistical achievements of impact analysis have, however, contributed to certain strategies for the evaluation of policy (Rossi & Freeman, 1985). Crane (1982) suggested that a useful impact analysis clearly depends on the formulation of evaluative hypotheses, which may take the following form:

Null hypothesis: The true effect is zero.

Alternative hypothesis: The true effect is at least equal to the threshold effect. (pp. 86-88)
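Crane's pair of hypotheses can be given a numerical sketch. The example below is illustrative rather than drawn from the chapter: it assumes we hold a small sample of estimated program effects (say, one per evaluation site) and a threshold effect chosen in advance, and it tests the null of zero effect with a simple t-style statistic. The function name and the critical value are assumptions for the sketch, not part of Crane's formulation.

```python
import math
import statistics

def crane_test(effects, threshold, crit=2.776):
    """Illustrative test of Crane's hypothesis pair on a sample of
    estimated program effects. `crit` is a two-sided critical value
    (here t with n - 1 = 4 degrees of freedom at the 5% level)."""
    n = len(effects)
    mean = statistics.fmean(effects)
    se = statistics.stdev(effects) / math.sqrt(n)  # standard error of the mean
    t = mean / se                        # tests H0: the true effect is zero
    rejects_null = abs(t) > crit
    meets_threshold = mean >= threshold  # H1: effect at least the threshold
    return rejects_null, meets_threshold

# Hypothetical effect estimates from five evaluation sites.
print(crane_test([2.0, 2.5, 1.8, 2.2, 2.1], threshold=1.0))  # → (True, True)
```

Separating the two questions matters: an effect can be statistically distinguishable from zero and still fall short of the threshold that would make the program worth continuing.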

Mohr (1995), using Deniston's ideas (1972a, 1972b), listed further elements of impact evaluation, when he defined "a problem relative to a given [policy or] program as some predicted condition that will be unsatisfactory without the intervention of the program and satisfactory, or at least more acceptable, given the program's intervention" (p. 14). Yet, troubling questions persist. In order to be statistically malleable, complex phenomena must be reduced to measurable form. If this is not possible (as it frequently is not), indicators must be developed. That is, one kind of information must be made to substitute for another. So, for example, educational level or occupational title is often used as a proxy for income or socioeconomic status because respondents to surveys are loath to reveal their actual earnings. One potential difficulty arises when indicators are used not as indicators but as actual measures. The potential for misunderstanding inherent in the process is important enough that it has led to the establishment of nationwide panels charged with the production of increasingly reliable indicators.

For example, there are no existing measures that are called "measures of the impact of social policy." However, the Human Development Index (HDI), developed by the United Nations, can be used as a somewhat direct measure of conditions in a society, and it can then be speculated whether one or another of these conditions is the outcome of a certain kind of social policy. The HDI represents three equally weighted indicators of the quality of human life: longevity, as shown by life expectancy at birth; knowledge, as shown by adult literacy and mean years of schooling; and income, as purchasing power parity dollars per capita (United Nations Development Programme, 1994, pp. 108, 220). Using 0.875 as a boundary, 35 states from Canada through Portugal could be said to rate high as welfare societies in 1993. In contrast, 10 countries from the former planned economies of Eastern Europe can be seen to rate below the 0.875 threshold. The data are presented in Table 6.1.
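The aggregation the HDI performs can be sketched in a few lines. This is a simplified reconstruction of the early-1990s UNDP methodology, not the official formula: the goalposts (minimum and maximum values for each dimension) and the simple income cap used here are illustrative assumptions, and the actual reports discounted income above a threshold rather than merely capping it.

```python
def dim_index(value, lo, hi):
    """Scale a raw indicator onto [0, 1] between fixed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, adult_literacy_pct, mean_schooling_yrs, ppp_income):
    """Unweighted mean of three dimension indices, following the chapter's
    description: longevity, knowledge, and income (illustrative goalposts)."""
    longevity = dim_index(life_expectancy, 25.0, 85.0)
    # Knowledge combines literacy (2/3) and mean schooling (1/3),
    # as in the older UNDP methodology.
    knowledge = (2 / 3) * dim_index(adult_literacy_pct, 0.0, 100.0) \
              + (1 / 3) * dim_index(mean_schooling_yrs, 0.0, 15.0)
    income = dim_index(min(ppp_income, 40_000.0), 100.0, 40_000.0)
    return (longevity + knowledge + income) / 3
```

A society scoring well on all three dimensions lands near 1.0; the 0.875 boundary used in the text then simply partitions the resulting scores into "high" and lower welfare societies.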

The data from Table 6.1 can now be placed in the design parameters shown in Table 6.2. In this posttest-only design, the impact of economic and social policies shows that market-oriented policies produce higher HDI levels than planning-oriented policies.

The data presented in Table 6.1 are standardized data and can be placed in almost any design parameters. Statistical impact analysis works well with such data.

Perhaps a "better" form of impact analysis would emerge if the Human Development Index measures were available for two different times (e.g., Time 1 and Time 2); then one could see where planned and market economies were in Time 1 and whether, at Time 2, the gain or loss of planned economies is greater or less than that of market economy societies. This better design would be called a pretest-posttest design (Campbell & Stanley, 1963; Cook & Campbell, 1979; Mohr, 1995).
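The logic of that pretest-posttest comparison can be sketched as follows, with entirely made-up HDI values for two groups of societies at Time 1 and Time 2 (none of these numbers come from the chapter's tables):

```python
import statistics

def mean_gain(pre, post):
    """Average pretest-to-posttest change across one group of societies."""
    return statistics.fmean(b - a for a, b in zip(pre, post))

# Hypothetical HDI scores at Time 1 and Time 2 (illustrative only).
market_pre,  market_post  = [0.80, 0.82, 0.78], [0.93, 0.94, 0.90]
planned_pre, planned_post = [0.70, 0.68, 0.72], [0.80, 0.77, 0.82]

# The pretest-posttest design asks which group *gained* more,
# not merely which group ends up higher at Time 2.
diff = mean_gain(market_pre, market_post) - mean_gain(planned_pre, planned_post)
print(round(diff, 4))  # positive: the market group gained more in this toy data
```

The design's advantage over the posttest-only comparison is visible here: a group that starts higher and ends higher may still show the smaller gain, which a single posttest cannot reveal.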

A yet better design for impact analysis would emerge if pretest-posttest measures were available for societies that were comparable to the societies described in Table 6.1 in Time 1 but that did not experience industrial development to the same extent as the market economy societies and planned economy societies did. Actually, this effort can be simulated by going back to the posttest design (as shown in Table 6.2). Take, for example, the case of Afghanistan, which has not experienced any form of industrialization or social policy, and note its HDI level in 1993 (which is 0.229). Then, consider the ethnically similar neighboring societies of Uzbekistan, Turkmenistan, and Tajikistan and look at their HDI levels in 1993, which are 0.679, 0.695, and 0.616, respectively, as reported by the United Nations Development Programme (1996). Such data can be grouped together for a posttest design (as shown in Table 6.3) to see the impact of economic and social policies driven by forced industrialization and planned economy.


Table 6.1 Market Versus Planned Economy Societies and Their Human Development Index (HDI) Measures, 1993

State HDI State HDI

Market economy societies

Canada 0.951 Denmark 0.924

Switzerland 0.926 Belgium 0.929

Japan 0.938 Iceland 0.919

Austria 0.928 Finland 0.935

Sweden 0.933 Luxembourg 0.895

Norway 0.937 New Zealand 0.927

France 0.935 Israel 0.908

Australia 0.929 Barbados 0.908

United States 0.940 Ireland 0.919

Netherlands 0.938 Italy 0.914

United Kingdom 0.924 Spain 0.933

Germany 0.920 Hong Kong 0.909

Greece 0.909 Argentina 0.885

Cyprus 0.909 Costa Rica 0.884

Bahamas 0.895 Uruguay 0.883

South Korea 0.886 Chile 0.882

Malta 0.886 Singapore 0.881

Portugal 0.878

Former planned economy societies

Czech Republic 0.872 Russia 0.804

Slovakia 0.864 Bulgaria 0.773

Hungary 0.855 Belarus 0.787

Latvia 0.820 Ukraine 0.719

Poland 0.819 Lithuania 0.719

SOURCE: United Nations Development Programme (1996).

Table 6.2 Impact of Economic and Social Policies in Market Versus Planned Societies, With Posttest Measures

Market economy societies (N1 = 35)
Time 1 (Undetermined)    Time 2 (1993) (Higher in Human Development Index)

Planned economy societies (N2 = 10)
Time 1 (Undetermined)    Time 2 (1993) (Lower in Human Development Index)

Table 6.3 Four Societies, Where Three Have Had Economic and Social Policies, With Posttest Measures

Three societies with industrialization and economic and social policies (N1 = 3)
Time 1 (Undetermined)    Time 2 (1993) (Higher in Human Development Index)
Uzbekistan 0.679
Turkmenistan 0.695
Tajikistan 0.616

One society without significant industrialization and without economic or social policies (N2 = 1)
Time 1 (Undetermined)    Time 2 (1993) (Lower in Human Development Index)
Afghanistan 0.229

The existence of standardized data on nations, as shown above, makes it possible to do several types of statistical analysis: time-series observation, posttest-only observation, and several types of quasi-experimental observation.

The National Human Development Report produced by the Planning Commission of the Government of India (2002) contains HDI data for states and union territories. The availability of such data at a within-nation level allowed for comparisons between different states and regions and also between urban and rural populations. This report also presents data computed for the same states in 1981 and 1991, which allowed for comparison in time, as suggested earlier. In the absence of such data, as is the case for some countries and many within-nation regions, when one is interested in seeing whether policies targeted to only a region within a nation have had any impact, one has to resort to other indicators that are often not comparable to the HDI levels.

Lack of acceptable and standardized outcome measures is one principal problem in impact analysis. However, there are other issues as well. Coleman (1975) listed some considerations in the following terms.

For policy research,

(1) partial information available at the time an action must be taken is better than complete information after that time;

(2) the ultimate product is not a "contribution to existing knowledge" in the literature, but a social policy modified by the research results;

(3) results that are, with high certainty, approximately correct are more valuable than results which are more elegantly derived but possibly grossly incorrect;


(4) it is necessary to differently treat policy variables which are subject to policy manipulation, and situational variables which are not;

(5) the research problem enters from outside any academic discipline and must be carefully translated from the real world of policy or the conceptual world of a client without loss of meaning; and

(6) the existence of competing or conflicting interests should be reflected in the commissioning of more than one research group, under the auspices of different interested parties where possible. Even in the absence of explicitly conflicting interests, two or more research projects should be commissioned to study a given policy problem. (pp. 22-34)

Ideological Biases in Impact Analysis

But beyond methodological considerations lie questions of another order. There is, for example, a growing literature regarding the value-laden nature of the ostensibly objective evaluation process. Every aspect of evaluation reflects a decision made, often, according to a particular perspective or value system. When one measures poverty, for example, what is it that is being measured? Income, perhaps. But income relative to what?

Economist Mollie Orshansky developed the federal poverty thresholds in 1964 for the Social Security Administration (Fisher, 1997). They are based upon what has been termed the economy budget, that is, the amount a careful homemaker would spend on food for a family of a particular size for "temporary or emergency use when funds are low." This decision was to locate the American standard of poverty below the level of "minimum comfort" and even below the level of "minimum adequacy," at the level of "minimum subsistence." The problem is, this measure presumes the family in question has access to a working stove connected to utilities, owns a substantial collection of cookware, and has a working refrigerator, not to mention having had the opportunity to learn nutritious cooking techniques and basic budgeting and shopping strategies. It seems that, at the levels of poverty often addressed in evaluation reports, few of these assumptions are valid, yet they remain the de facto standard of American poverty.

Serious attempts at redefining poverty, either according to different levels of income or from different perspectives (for example, poverty defined as the inability to afford satisfactory housing as opposed to simply affording shelter of any kind or, most notably, the ability to financially support a socially rewarding lifestyle as opposed to affording simple existence) have been the subject of intense debate since the turn of the century. The United States alone, for example, has produced 60 different "poverty levels" between the 1900s and the 1990s (Fisher, 1996).


Effectiveness Analysis

One simple question can be derived from the above discussions. It can be stated in the following form: Is a given policy, including any programs generated from that policy, effective? Obviously, such a question about effectiveness requires clarity about the goals (or ends to be achieved) of social policy.

The idea of effectiveness analysis is borrowed from the field of organizational studies and, more specifically, from the concept of organizational effectiveness (Cameron & Whetten, 1983; Price, 1968, 1972; Mintzberg, 1993). In effectiveness analysis, one sees organizational behavior as rational, goal-directed behavior and begins the analysis of effectiveness by assessing how much of the goal has been or is being reached by the organization. This model originates in the classic works of Max Weber (1925/1947) and is called the rational model of organizational studies (Haas & Drabek, 1973). Building on this model, Price (1972) proposed that organizations with single goals are likely to be more effective than organizations with multiple goals. Similarly, he suggests that organizations with high degrees of goal specificity are more likely to be effective than organizations that have diffuse goals, and so on. Also building on this model, Etzioni (1964) proposed that effectiveness can be studied from two perspectives: a goal perspective and a system perspective. The goal perspective tries to decipher the organizational goals and then attempts to assess whether or how much of the goals have been reached. In this context, it often becomes important to differentiate between stated goals and pursued goals. That is, the organization may claim that it is in business to pursue Goal X, but, in reality, it is pursuing Goal Y. The second perspective proposed by Etzioni, the systems perspective, calls for a comparative approach to effectiveness evaluation. Here, one assesses the impact of two or more similar organizations in the pursuit of similar ends and attempts to come to some conclusions about which one is more effective in this pursuit and why.

Most of the reasoning behind the studies of rational bureaucracies and their effectiveness can be transferred to the study of the impact analysis of social policy. One just needs to substitute social policy for organization. After all, the implementation of social policy often requires a rational bureaucracy, where the policy sets the goal and the organization responsi­ ble for program execution attempts to attain that goal.

The lessons learned from the studies of organizational effectiveness can be used in the studies of policy effectiveness (assuming that the concept of effectiveness is similar to that of impact). These lessons are that policy goals are often not clearly stated; that there may be multiple and conflicting goals set by a social policy; and that, within two or more cultural contexts, the same form of policy execution may produce different outcomes due to the cultural context in which policies are executed (Boje, Gephart, & Thatchenkery, 1996; Haas & Drabek, 1973; Newstrom & Davis, 1993).

Taking these issues into account, Hudson (1997) has commented that "there can be no effectiveness unless there is change, [and] a measure of effectiveness is a measure of change" (pp. 70-71). He has also offered a set of equations that formally assess change with a probabilistic model.

The term goal analysis was popularized by Mager (1972). In this small but comprehensive work, he outlines how to decipher the attainable goals, ensure these are the goals being pursued, and do impact analysis of a program or of a policy behind that program.

Efficiency Analysis

In organizational behavior, the concept of efficiency analysis has long been popular. It was formally introduced by Taylor (1916/1987) and can be referred to as an early form of cost-benefit analysis. Taylor showed that an organization can be effective but not necessarily efficient, especially if the unit cost of production is too high or if the goals of production cannot be met within a specified time period.

The lessons learned from efficiency analysis (Haas & Drabek, 1973; Newstrom & Davis, 1993) were already used by Rossi and Freeman (1985) when they offered designs for cost-benefit evaluation. They can be translated to policy analysis by asking, first, whether the policy under question is effective but not necessarily efficient and, second, whether the policy under question is effective but not useful under the time constraints.

Manifest and Latent Beneficiaries of Social Policy

Clearly, defining poverty and assessing its effects are complex and troublesome issues for researchers of all kinds, and particularly for policy makers. There are issues on at least two additional levels that are more troubling still. Since the early 1970s, social scientists have begun examining the policy process in a reflexive manner that has generated several important new insights.

Among these important insights is the notion, now supported through research, that policies may state program goals or identify target populations that are not the actual focus of the policy. That is, social policy is directed both at manifest beneficiaries (for example, the poor or minorities) and at what may be termed latent beneficiaries (for example, providers of services or the policy makers themselves) (Chatterjee, 1999). This process is common enough in a variety of contexts. In fact, the entire edifice of gentlemanly compromise, which has produced such an impressive history of social development in America, is grounded in the shared willingness of elected representatives to allow such fictions to pass, as often as not, without question.

The question of who benefits by given social policies has been dealt with from several perspectives in recent times. Bartik (1991) has used it in relation to policies intended to promote regional and local economic development. Clotfelter (1991, 1992) has asked this very question in great detail about policies that create and support the nonprofit sector in market economies.


An examination of recent American social welfare policy may help make clear some of the more important issues. For example, in an environment that is perceived by many as increasingly blaming of the beneficiaries of welfare, it has also been increasingly fashionable to discuss the wealth of opportunity that only awaits ambition and personal commitment to hard work. A policy outgrowth of these two streams of thought has been the recent drive for welfare reform under its many names, such as welfare-to-work, workfare, and so on.

The idea publicly put forth is that the beneficiaries of the programs will be the presently poor who prepare diligently for new careers and pull themselves up with a helping hand from the government. By limiting the amount and the duration of benefits available, legislatures expect to eliminate or curtail the culture of poverty, welfare dependency, work aversion or avoidance, and the other problems of recent welfare strategies. This public message has been well received and oft repeated. However, it fails to tell the whole story.

For example, who are the beneficiaries of this program? The manifest beneficiaries, of course, are the poor who will finally be given just what they need, no less and no more, to succeed in America. But what of the latent beneficiaries, those who stand to benefit from the program, but not as publicly? Earlier welfare strategies created an entire industry of service providers and program administrators (as well as the hundreds of college and professional programs to train them and the thousands of professors to staff those programs and institutions and to evaluate them). The present approach will benefit the career training industry and whatever bureaucratic mechanism becomes necessary to monitor the quality of the work of its graduates.

Private industry is likely to benefit from the sudden influx of recently trained, poorly experienced, low-cost workers created by the welfare reform effort. Those who currently provide services to low-paid workers without government benefits, such as child care, insurance, and medical care, will see their market expanding.

A further step away from the manifest beneficiaries are the institutions and individuals who theorize, consult, and write about social problems, as well as those who conduct research. With so many new and untried interventions in the field, it will be a time of extraordinary opportunity for those who can produce the kinds of research and theoretical products deemed most useful to policy makers and others with a stake in welfare change.

Finally, the legislators themselves stand to benefit. By appealing to the taxpayer's natural instinct to save money by reducing government, they have frequently been able to solidify their political positions. By reducing benefits to the poorest, least politically powerful groups, they have been able to do so with little risk.

The same process of identifying latent beneficiaries may be applied to any social welfare legislation, whether it originates from the left or the right, from conservative or liberal. It is important to do so in order to understand fully why social welfare policy takes the form it does, who supports its passage and implementation, and what the real stakes are in an evaluation process. It may make little difference to policy makers if a program is perceived to fail its manifest beneficiaries, so long as latent beneficiaries are pleased with the results.

Of late, a number of social theorists have written in essential agreement with Charles Lemert's (1997) assessment of the social policy process: "The more sensible post-modernists, being generally respectful of much in modernity, are rigorously skeptical of the prospects that modernity's grand ideals ever will, or ever were truly meant to, become the true manifest structure of world things." That is, there is a widening realization that policy is not infrequently created for reasons other than those offered publicly by policy makers themselves.

Stakeholder-Based Analysis

Stakeholder-based analysis, like effectiveness and efficiency analysis, has also been popular in studies of organizational behavior (Cooperrider & Dutton, 1998; Harrison & Shirom, 1998). Here, the critical question is, who wants to know about the impact of a given policy? It is entirely possible that groups such as policy makers, politicians, and disinterested social scientists, using their various tool bags, conclude that a given policy is poorly effective. On the other hand, those involved as the policy's target population or those who benefit from the programs generated by a given policy may hold a very different position than the groups mentioned above. Nowakowski (1987), shifting the "who wants to know" question, offered plans to do research from the client's perspective. In the discipline called organizational development, which is usually located in business schools and where training is offered to individuals who would serve as consultants to various organizations, a similar approach is called appreciative inquiry (Cooperrider, 1986; Srivastava & Cooperrider, 1990). An important point in appreciative inquiry is to study a client's organization in such a way that the client benefits from the organization development consultant's knowledge but does not get alienated if the consultant makes some constructive criticisms.

Translated to the realm of the impact analysis of social policy, appreciative inquiry would mean a study of social policy that is sensitive to the needs of the clients who commissioned the study, who may be stakeholders in the programs derived from that policy, and who may be beneficiaries of some or all parts of that policy's execution.

Impact Analysis or Satisfaction Analysis?

We could proceed to contrast these strategies across the entire evaluation process. However, it is important to note that the whole concept of impact analysis as analysis by impartial third-party scientists to see whether a given policy X produces outcome Y should be differentiated from the question whether given policy X produces satisfaction in given group G and whether given group G wants the programs derived from that social policy continued. What we have here will suffice to make the real point, which is that several of the attempts to define a new evaluation process really amount to attempts to shift the ability to define concepts and standards from policy makers to those who must live with policy consequences. This work is more than simply political in nature (although, clearly, it could reduce or bolster the influence of groups espousing any number of ideological or political views). Rather, it represents a serious-minded attempt to get inside the black box at the heart of so many important social policy questions. That is, the developing strategies attempt to discover not only what policies work but also why they work and in what contexts, what their meaning is to each class of stakeholder, and what approaches may best be applied in which particular contexts.

Use of Focus Groups

The concept of the focus group originated in the 1930s (Rice, 1931), and it was developed to gather specific types of information from key informants in a group setting. Typically, a focus group is somewhere between seven and ten in size and is selected by the researcher (Krueger, 1988). In this group, the researcher creates a relaxed atmosphere to elicit information about certain subjects. The discussions carried on in this group become an important source of data for the researcher. Roethlisberger and Dickson (1939) used a version of it during their famous studies of employee participation and productivity. Merton, Fiske, and Kendall (1956) produced the first book on the subject, and they outlined how it is an important tool for policy research. Krueger (1988) has outlined how focus groups can be used before a program begins, corresponding with the idea of needs assessment in the sequence described by Rossi and Freeman (1985, 1993). Focus groups can also be used during a program or after a program, strategies that Rossi and Freeman call program monitoring and impact evaluation, respectively. However, it is entirely possible that focus groups may yield information that is different from formal, multivariate, instrument-generated information captured through a posttest or a pretest-posttest design.

The use of focus groups is comparable with what is called the Delphi method, in which selected purposive groups are used for community-level agenda building (Linstone & Turoff, 1975).

The Interpretive Model in Policy Research

It has become increasingly common for conscientious theorists to recommend that researchers employ a variety of approaches in conducting policy evaluations (Porter, 1995). A particularly eloquent statement of the case from a more general perspective than simply policy studies has been made by Robert Alford



(1998). He suggests that there are three paradigms of inquiry: multivariate, interpretive, and historical. At the macro level, the multivariate model studies structure, the interpretive method studies culture, and the historical method studies context. Knowledge is obtained, in the multivariate model, through data; in the interpretive model, through observation; and, in the historical model, through evidence. The following case example illustrates how impact evaluation from a different paradigm may lead to different forms of outcome.

An Example From Delinquency Research

A metropolitan city in the United States gets a large grant to set up a delinquency prevention program, and the program is to be administered through its park district. The theory behind the program is the opportunity theory of Cloward and Ohlin (1964), which suggests that the reason for delinquent behavior by working- and lower-class youth is their lack of opportunity. Given proper opportunity, the theory further suggests, delinquent behavior in these youth is likely to be reduced. Arnold (1964) pronounced that this was a failed idea, and Lewis (1966) claimed that working-class and lower-class youth are products of a culture of poverty.

Nevertheless, influenced by opportunity theory, a city sets up a program of delinquency prevention, and approximately 150 teenagers are recruited into it. The program consists of counseling by young adult counselors two to three times a week, participation in recreational group work throughout the week, and a stipend of a few hundred dollars a month. A multivariate program evaluation of this looks as shown in Table 6.4.

In this multivariate (the research included other variables besides x) pretest-posttest model, x represented the rate of being apprehended by juvenile authorities. At Time 1, it was found that

x1 = x3 (approximately)

(that is, apprehension rates in the program group and the comparison group are approximately equal). However, at Time 2, it was found that

x2 > x1; x2 > x3; x2 > x4; and x1 = x3 = x4 (approximately).

This multivariate model of policy and program analysis leads to the conclusion that the program is not effective. In fact, it seems to increase delinquency and certainly does not lead to delinquency reduction.

Table 6.4 Pretest-Posttest Comparison in Delinquency Prevention: An Example

                         Time 1    Time 2 (about a year later)
Program group              x1                  x2
A comparable group         x3                  x4
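The arithmetic behind Table 6.4 can be sketched in a few lines. The numbers below are hypothetical, invented purely for illustration (the chapter reports no raw data); the sketch only shows how the four cell means x1 through x4 are computed and compared in a pretest-posttest design of this kind:

```python
# Illustrative sketch of the pretest-posttest comparison in Table 6.4.
# All apprehension rates below are hypothetical, not the study's data.

def mean(rates):
    """Average apprehension rate for one group at one time point."""
    return sum(rates) / len(rates)

# Hypothetical per-youth apprehension rates for each cell of the table.
program_t1 = [0.10, 0.12, 0.11]   # x1: program group, Time 1
program_t2 = [0.21, 0.19, 0.20]   # x2: program group, Time 2
compare_t1 = [0.11, 0.10, 0.12]   # x3: comparison group, Time 1
compare_t2 = [0.10, 0.11, 0.12]   # x4: comparison group, Time 2

x1, x2 = mean(program_t1), mean(program_t2)
x3, x4 = mean(compare_t1), mean(compare_t2)

# The pattern the text describes: the groups start out alike (x1 = x3,
# approximately), but a year later the program group's rate has risen
# while the comparison group's has not (x2 exceeds x1, x3, and x4).
print(x1, x2, x3, x4)
print("program appears to increase delinquency:",
      x2 > x1 and x2 > x3 and x2 > x4)
```

Under these invented numbers the multivariate comparison reproduces the chapter's conclusion: the only mean that moves is x2, so the program looks not merely ineffective but harmful.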


However, an interpretive study, which collected information through a series of open-ended interviews with program staff and through some observations of interactions between the program staff and the counselors in the program, revealed the following scenario: The counselors are not professionally trained helping persons. They are young adult members from the same community in their early twenties, some of whom are graduates of a nearby community college. The counselors have developed a work group themselves, and a work group culture has formed. Within that work group culture, the counselors often brag about their advisees and take pride in the fact that their advisees are indeed very tough, perhaps even tougher than the advisees of their peer counselors. One of the secretaries said the following about these counselors: "Boy, they treat those kids [teenagers in the program group who are targets of intervention] like some rich people treat their Doberman pinschers!"

The secretary's observation is indeed a very important symbolic statement: the work group culture of the counselors, themselves only a few years older than the teenagers in the program, has created a status system. Within that status system, a program teenager confers status on his or her counselor by being "bad," "tough," and more unruly than others. The teenagers are rewarded by their counselors for being deviant. Whatever the counselors do toward delinquency reduction during their formal interactions with these teenagers, the culture of the work group and the status system within that work group give an important message to the inner city youth, and that message is that they are to be rewarded for their "toughness."

The case makes clear, we think, the usefulness of an interpretive model in program and policy evaluation. Alford (1998), writing from a more general perspective than that of policy studies, supports our view, and he makes some particularly eloquent statements about the importance of using various approaches in the analytical work of the social scientist:

the emphasis on the multivariate [quantitative] paradigm as the only "real" social science is impoverishing, a mark of the insecurity of the discipline, not a sign of its scientific maturity.... (p. 4)

Developing coherent arguments that recognize historical processes, symbolic meanings, and multivariate relations is the best way to construct an adequate explanation of a complex social phenomenon. (p. 19)


These considerations may encourage reflection upon what the most significant impact of recent social welfare policy in America might be. That is, it has become increasingly evident to observers from many political points of


view that the policy creation and evaluation processes are not objective in the sense science is usually taken to be; that statistics, alone, may mask more than they reveal; and that any meaningful evaluation of the impact of social welfare policy must, as a matter of course, incorporate sophisticated statistical analysis, provide historical perspective on the problem being addressed and on previous attempts at solutions, and provide rigorously collected and analyzed qualitative data from the perspective of every stakeholder (that is, beneficiary) group involved. Further, an effective evaluation of policy impact must first seek to unearth and clearly state the goals of the policy under consideration from the points of view of all those who supported it and worked for its passage and implementation.

References

Alford, R. A. (1998). The craft of inquiry. New York: Oxford University Press.
Arnold, R. (1964). Mobilization for youth: Patchwork or solution? Dissent, 11, 347-354.
Barrile, T. J. (1991). Who benefits? Kalamazoo, MI: W.E. Upjohn Institute.
Boje, D. M., Gephart, R. P., Jr., & Thatchenkery, T. J. (Eds.). (1996). Postmodern management and organization theory. Thousand Oaks, CA: Sage.
Cameron, K., & Whetten, D. A. (1983). Organizational effectiveness: A comparison of multiple models. New York: Academic Press.
Campbell, D. T. (1969). Reforms as experiments. American Psychologist, 24, 409-429.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Chatterjee, P. (1999). Repackaging the welfare state. Washington, DC: NASW Press.
Chatterjee, P., Olsen, L., & Holland, T. P. (1976). Evaluation research: Some possible contexts of theory failure. Journal of Sociology and Social Welfare, 3(4), 384-408.
Clotfelter, C. T. (1991). Economic challenges in higher education. Chicago: University of Chicago Press.
Clotfelter, C. T. (Ed.). (1992). Who benefits from the nonprofit sector? Chicago: University of Chicago Press.
Cloward, R., & Ohlin, L. E. (1964). Delinquency and opportunity. New York: Free Press.
Coleman, J. S. (1975). Problems of conceptualization and measurement in studying policy impacts. In K. M. Dolbeare (Ed.), Public policy evaluation (pp. 19-40). Beverly Hills, CA: Sage.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Cooperrider, D. L. (1986). Appreciative inquiry: Toward a methodology for understanding and enhancing organizational innovation. Unpublished doctoral dissertation, Case Western Reserve University, Cleveland, OH.
Cooperrider, D. L., & Dutton, J. E. (1998). Organizational dimensions of global change. Thousand Oaks, CA: Sage.
Crane, J. A. (1982). The evaluation of social policies. The Hague, Netherlands: Kluwer-Nijhoff.
Deniston, O. L. (1972a). Evaluation of disease control programs. Washington, DC: U.S. Department of Health, Education, and Welfare, Public Health Service, Health Services and Mental Health Administration, Communicable Disease Center.
Deniston, O. L. (1972b). Program planning for disease control programs. Washington, DC: U.S. Department of Health, Education, and Welfare, Public Health Service, Health Services and Mental Health Administration, Communicable Disease Center.
Etzioni, A. (1964). Modern organizations. Englewood Cliffs, NJ: Prentice Hall.
Fisher, G. M. (1996, Summer). Relative or absolute: New light on the behavior of poverty lines over time. GSS/SSS Newsletter, pp. 10-12.
Fisher, G. M. (1997, Winter). The development and history of the behavior of poverty lines over time. GSS/SSS Newsletter, pp. 6-7.
Haas, J. E., & Drabek, T. E. (1973). Complex organizations: A sociological reader. New York: Macmillan.
Harrison, M., & Shirom, A. (1998). Organizational diagnosis and assessment. Thousand Oaks, CA: Sage.
Hudson, W. (1997). Assessment tools as outcome measures in social work. In E. J. Mullen & J. L. Magnabosco (Eds.), Outcomes measurement in the human services (pp. 68-80). Washington, DC: NASW Press.
Krueger, R. A. (1988). Focus groups: A practical guide for applied research. Newbury Park, CA: Sage.
Lemert, C. (1997). Postmodernism is not what you think. Malden, MA: Blackwell.
Lewis, O. (1966). La vida: A Puerto Rican family in the culture of poverty-San Juan and New York. New York: Vintage.
Linstone, H. A., & Turoff, M. (1975). The Delphi method: Techniques and applications. Reading, MA: Addison-Wesley.
Mager, R. F. (1972). Goal analysis. Belmont, CA: Lear Siegler/Fearon.
Merton, R. K., Fiske, M., & Kendall, P. L. (1956). The focused interview. Glencoe, IL: Free Press.
Mintzberg, H. (1993). Structure in fives: Designing effective organizations. Englewood Cliffs, NJ: Prentice Hall.
Mohr, L. B. (1995). Impact analysis for program evaluation. Thousand Oaks, CA: Sage.
Newstrom, J. W., & Davis, K. (1993). Organizational behavior: Human behavior at work. New York: McGraw-Hill.
Nowakowski, J. (1987). The client perspective on evaluation. San Francisco: Jossey-Bass.
Planning Commission, Government of India. (2002, March). National Human Development Report 2001. New Delhi: Oxford University Press.
Porter, T. M. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton, NJ: Princeton University Press.
Price, J. (1968). Organizational effectiveness: An inventory of propositions. Homewood, IL: Richard Irwin.
Price, J. (1972, Winter). The study of organizational effectiveness. The Sociological Quarterly, 13, 3-15.
Rice, S. (1931). Methods in social science. Chicago: University of Chicago Press.
Riecken, H. W., & Boruch, R. F. (Eds.). (1974). Social experimentation: A method for planning and evaluating social intervention. New York: Academic Press.
Roethlisberger, F. J., & Dickson, W. J. (1939). Management and the worker. Cambridge, MA: Harvard University Press.
Rossi, P. H., & Freeman, H. E. (1985). Evaluation: A systematic approach (3rd ed.). Beverly Hills, CA: Sage.
Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach (6th ed.). Newbury Park, CA: Sage.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
Schalock, R. L. (2001). Outcome-based evaluation. New York: Plenum Press.
Srivastava, S., & Cooperrider, D. (1990). Appreciative management and leadership. San Francisco: Jossey-Bass.
Suchman, E. A. (1967). Evaluative research: Principles and practice in public service and social action programs. New York: Russell Sage Foundation.
Taylor, F. (1987). The principles of scientific management. In J. M. Shafritz & J. S. Ott (Eds.), Classics of organization theory (pp. 66-81). Chicago: Dorsey. (Original work published 1916)
United Nations Development Programme. (1994). Human development report 1994. New York: Oxford University Press.
United Nations Development Programme. (1996). Human development report 1996. New York: Oxford University Press.
Weber, M. (1947). The theory of social and economic organization (A. M. Henderson & T. Parsons, Trans.). New York: Free Press. (Original work published as part of Wirtschaft und Gesellschaft in 1925)
Zerbe, R. O. (1998). Is cost-benefit analysis legal? Three rules. Journal of Policy Analysis and Management, 17(3), 419-456.