Chapter 1
Introducing the Paper Program
What are "paper programs," and how do they emerge in the social service delivery system? As the name implies, paper programs are those that exist, for the most part, on paper only. They are programs whose formal documents (e.g., grant proposals, public relations material, reports to funding agencies) specify the services or other resources they are supposed to provide, and whose routine documentation suggests they are providing what they claim, but in reality the programs are providing only a fraction of these services, or none at all. This book consistently argues that paper programs can exist because of the lack of accountability mechanisms among groups and cultural institutions with stakes in the social service delivery system: from the consumers of services, to funding agencies, to the taxpayers, to sponsoring organizations, to program evaluators, to the media. The book argues that although demands for accountability have actually increased over the past decades through government and private mandates (Martin & Kettner, 1997), nearly all monitoring in this system is done through written communication. Only rarely does face-to-face interaction occur. For purposes of accountability, paper reports in essence become the program.
To some, the concept of a paper program may seem incredible. Social services are organized responses to difficult, usually enduring, social problems that are most often experienced by vulnerable individuals, individuals who become the consumers or clients of the programs. Studies on excellence in social service organizations reveal that one of the dimensions of excellence reported most often by organizational leaders is an orientation to serving client needs (Harvey, 1998; Rapp & Poertner, 1992). According to Manning (2003), social or human service organizations have unique callings: "First, people are the raw material of the organization. Second, human service organizations are mandated to promote and protect the welfare of the people they serve" (p. 22). Through a series of case studies, I argue that this orientation toward clients does occur frequently, especially where ethics-based staff have strong voices in program operations, but in daily practice, complex processes throughout the system tend to divert attention from the stated ideal, and almost nothing exists in the system to refocus the effort. What should surprise the reader in the upcoming chapters is not the uncovering of paper programs but that the ideal exists at all, because the cultural milieu in which most social programs operate does so little to encourage an orientation toward the usually vulnerable consumers of service.
This book describes processes that occur at the program level, at the level of the social service delivery system, and beyond. At the program level, most staff profess the ideal of commitment to clients, but the actual required work and the accountability obligations tend to center on printed products rather than client-staff relations. This mismatch between expressed ideals and routine practices is a centerpiece of Bourdieu's (1977) theory of practice. Bourdieu argued that embedded practices and dispositions, rather than stated ideals, tend to guide human behavior. These dispositions and patterns of practice emerge through historical and adaptive processes, but over time the factors that led to them may be forgotten while the practices continue with minimal human reflection, a phenomenon Bourdieu termed habitus. Individuals find themselves submerged in habitual and officially approved practices, and although not everyone follows the patterns, the patterns nevertheless come to appear natural over time.
One routine activity that has increased in postmodern times is the practice of replacing the real with a representation of the real (Baudrillard, 1994). In many cases, paper reports, as opposed to actual transactions, are used to "prove" that something exists, and this occurs often with social programs. Funding sources usually require written reports of program operations and expenditures, but funding personnel seldom conduct site visits to determine whether the services are actually provided. Accountability mechanisms have thus become a kind of virtual reality. On their end, program personnel invest a great deal of time and energy in the creation of these paper products, diverting energy from the actual objectives of the program. The documentation is often necessary: it communicates with other staff, it maintains a history of what services or resources have been provided (sometimes just to jog the memory), and, because no one can be present to monitor every transaction, it is the next best thing. The problems begin to arise when the representation of the transaction becomes the proof of the transaction, and ultimately when it replaces the transaction itself. The history is written and the created image is treated like the final reified commodity (Jameson, 1984).
Local practices do not emerge in a vacuum. Social programs are embedded in systems that include groups ranging from the least powerful consumers of services to very powerful government and capitalist interests. Social services may not be a high priority for the more powerful interests, and the process of merely securing financial backing for the programs involves hegemonic relationships. The theory of hegemony derives from the recognition that government and other dominant interests cannot enforce control over subordinate groups without those groups yielding a limited consent (Gramsci, 1971). Gramsci, like the Marxists, focused on class relations, but he rejected their determinism and economism. He added a psycho-cultural dimension in which human agency and subordinate formations play roles in the development of policies and ideologies. According to this approach, dominant groups in capitalist societies rule through power blocs, or special-purpose alliances. The alliances may include groups that negotiate their limited consent in return for getting some of their interests represented in the power bloc, a process called cooptation. These blocs are historical and, as such, are fluid coalitions of interests that share political solidarity at some shifting point in time. They often represent groups from several classes.
Social service organizations and advocacy groups find their way into these alliances. For example, social service agencies may receive funding to treat individuals for what are really complex, difficult, and enduring social problems. The problems may include poverty, crime, child abuse, homelessness, hunger, alcohol, and illegal drugs, problems more often faced by those in the lower socioeconomic strata than by those in the upper. Although the more powerful members of the blocs can easily ignore these problems, they still need some form of limited consent from subordinate groups in order to rule. Hence, through processes of interpenetration and mixing, subordinate groups with some interests at odds with those of dominant groups may actually invade hegemony (Canclini, 1995). As conceptualized, hegemony is a seat of struggle and also a venue for change.
Through the case studies in the upcoming chapters, my hope is that the reader will be able to follow the ways that hegemonic relationships can result in the basic existence of social programs, in the creation of ideologies about social programs through sources such as the mass media, and in the weak incentives for holding social programs accountable. At the same time, the venue for change posited by some of the less deterministic hegemony theorists is one of the major points of optimism in this book, and it is addressed in the concluding chapter.
This book is based on my own evaluation and research consultant work with over fifty social programs and experience in directing two nonprofit organizations. Many of these social programs are highlighted in the upcoming chapters, although specific names and other identifiers have been altered. The work focuses on case studies of programs, their implementation, and the monitoring mechanisms that were and were not in place during program operations. Again, it is my hope that the reader experiences surprise at the information provided, not because of the information on paper programs, but because of the information on extraordinary social service efforts (and their staff) that continue on and on with so little out there to ensure their performance and, for that matter, their survival.
Chapter 2
Varieties of Paper Programs
There are, of course, varying degrees of paper programs. These range from out-and-out fraud to programs that are locked into paper processes. The following example of the "program in a box" represents the more extreme end of the continuum. I will later discuss the more common variety.
PAY: The Program in a Box
I was exposed to the "PAY" program years before I became its evaluator (PAY is an acronym for Positive Alternatives for Youth). I attended an adolescent programming conference in the early 1990s, and during a workshop a well-prepared team of two men and two women was discussing what appeared to be an innovative demonstration project. The team (which I learned later was the program staff) explained that PAY had been planned as an alternative aftercare program for 120 youths who had spent over six months in residential treatment centers for law offenses, substance abuse, or other problem behaviors.
The team claimed that conventional aftercare programming in the area usually included only counseling, while their project offered positive alternatives to the delinquent lifestyle, either through direct services or through monitored referrals. These alternatives could include recreational programs, volunteer opportunities with stipends, job training and placement, special interest clubs, client support groups, and family social activities, as well as the more conventional case management and counseling services.
To illustrate their innovations, the team presented a video of the PAY Opportunities Fair they had held early in the program's first year. The video showed hundreds of youths listening to presentations on program options and being interviewed by representatives of allied organizations who offered them opportunities to participate in sports leagues, job training programs, and summer camps. After the video, the team said that PAY was in the last year of a three-year grant and was well on its way to becoming a model for opportunity-based programming.
One year later, I received a telephone call from a high-level administrator of PAY's parent organization, the "City Life Institute." The administrator told me that PAY was now out of operation because the three-year demonstration grant had ended, but they needed an evaluation. Unfortunately, the program's first evaluator had failed to produce a report, and the federal funding agency was demanding one. Could I come in and evaluate this effort retrospectively?
We met. During our discussion, the administrator explained that he knew less than he should about PAY dynamics because the program had been stationed in a community center across town. He had seen to it that program staff produced monthly reports for the City Life Institute and provided detailed case management records of their clientele for routine organizational audits. However, the administrator said he had heard rumors that the program was not running at full speed at the end of the demonstration period.
The administrator outlined the story of the first evaluation firm. The firm had been selected through a routine competitive process. As the PAY program was being organized, the City Life Institute had mailed requests for proposals (RFPs) to most of the area's program evaluators. A committee composed of staff from PAY and the City Life Institute then reviewed the submitted proposals.
The firm that was selected through the review process had designed an experimental model for its evaluation work. An experimental model compares the pretests and posttests of groups assumed to be affected by the program with the pretests and posttests of groups in a comparison situation assumed to be unaffected by it. As part of this model, half of the youths who were preparing for discharge from the residential treatment centers were randomly assigned to participate in PAY, and the remaining youths were assigned to a control group that used the conventional aftercare programs (case management/counseling only). Clients from both groups would be given pretest surveys early in the first year and posttest surveys at the end of the three-year grant period. Staff from both programs would also conduct routine assessments and maintain case management records on each client. The evaluation firm then planned to analyze the pre- and posttest data and compare any changes in youth behavior in the conventional aftercare program to changes in youth behavior in the PAY program. The case management records would be used for additional analysis and to help interpret findings, where relevant.
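For readers unfamiliar with the arithmetic behind this design, the sketch below (in Python, using the pandas library) computes each group's average change from pretest to posttest and treats the difference between the two average changes as the estimated program effect. The scores, column names, and scale are invented for illustration only; they are not drawn from the PAY evaluation.

import pandas as pd

# Hypothetical data: one row per client, with a group label and
# pretest/posttest scores on an invented problem-behavior scale.
df = pd.DataFrame({
    "group":    ["PAY", "PAY", "PAY", "control", "control", "control"],
    "pretest":  [42, 38, 51, 44, 40, 49],
    "posttest": [30, 29, 40, 41, 37, 46],
})

# Change for each client, then the mean change within each group.
df["change"] = df["posttest"] - df["pretest"]
mean_change = df.groupby("group")["change"].mean()

# Under this design, the program "effect" is read as the difference
# between the two groups' mean changes.
effect = mean_change["PAY"] - mean_change["control"]
print(mean_change)
print(f"Estimated program effect: {effect:.1f} points")

The contrast is only as trustworthy as the random assignment behind it, which is why the protocol breakdowns described next proved so damaging.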
However, three issues affected the evaluation process. First, program staff had been instrumental in selecting the evaluation firm, but they had no formal training in evaluation methodology. They may not have understood that random assignment was key to the utility of this evaluation design. When the evaluation firm gave them a list of youths being discharged from the residential treatment centers, with half assigned to PAY and half assigned to the conventional aftercare program, staff balked. The staff wanted some choice in hand-selecting their clientele (which could also skew evaluation results in favor of the PAY program). They ended up selecting their clients from the Opportunities Fair they organized for residential treatment clients early in the program's first year. Purportedly, staff referred 120 youths to PAY from the fair but never notified the evaluation firm of the change in protocol. Most of the remaining participants in the fair were slated to participate in conventional aftercare programs.
Second, the evaluation staff did not conduct their own pretest and posttest surveys. Instead, they assigned this duty to staff from the program and the control group, possibly for budgetary reasons. Moreover, for unknown reasons, the evaluation firm never requested the pretest surveys at the time they should have been conducted.
Third, the evaluation team assumed an external posture toward the program. Evaluators who select the external stance often limit their involvement in the program to planning an evaluation design, developing questionnaires and other measurement instruments, and conducting pretest surveys (or, at minimum, monitoring others who conduct the surveys); they then tend to back away from the program until it is time to conduct posttests. This external stance was almost a standard in the field until the late 1970s, when evaluation reformers and funding agencies began to recognize the value of using evaluation feedback to improve programs in progress (e.g., Patton, 1978).
The PAY evaluation team maintained minimal contact with program personnel from early in the first year to late in the program's third year. At that time the evaluation firm began contacting PAY staff to request the pretest and posttest surveys from clients. The evaluators received no response from staff despite repeated (and documented) contacts by telephone, fax, and mail. Finally, just as the program was about to close, the evaluation firm received a box of seventy-five files of PAY case management records, with a short note from staff informing the evaluation team that no one had conducted pretest or posttest surveys or maintained the random sample the evaluators had drawn for clientele. At this point, the evaluation firm notified the parent organization that the terms of their contract had not been kept and that it could not proceed with the evaluation.
When asked if our consulting firm could conduct an evaluation at this late date, I told them it was possible, but it would involve clear-cut design limitations. I said our evaluation team probably could access enough client names and telephone numbers from PAY's case management records and from the records of the conventional aftercare programs to conduct telephone interviews. Our team would employ a quasi-experimental evaluation design in which we would construct a comparison group by selecting youths from the conventional aftercare programs who matched the PAY youths in certain salient characteristics (e.g., age, race/ethnicity, type of residential treatment center utilized) but who reported receiving no PAY services. Using the original pre- and posttest instrument as a model, we could pose questions retrospectively, asking clients about their current situation at the close of the three-year program and their situation three years earlier as they entered the program.* We also acknowledged that our results might be affected by the staff decision to handpick their clients (potentially skewing findings in favor of the program), and we would address this limitation in our report. As we do routinely, we strengthened the overall design with analyses of additional data sources, including qualitative interviews with program stakeholders (e.g., client parents, staff from collaborating organizations) and program documents (e.g., the case management records, program reports). Our team received the go-ahead from the City Life Institute.
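As a rough illustration of the matching step, the sketch below pairs each PAY youth with a conventional-aftercare youth of similar age and the same race/ethnicity and treatment center type. The file names, column names, and one-year age tolerance are my own assumptions for the example, not the actual procedure our team used.

import pandas as pd

# Hypothetical rosters; file and column names are assumed for illustration.
pay = pd.read_csv("pay_clients.csv")          # PAY clients
pool = pd.read_csv("aftercare_clients.csv")   # conventional-aftercare pool

# Exclude pool members who reported receiving any PAY services.
pool = pool[~pool["received_pay_services"]]

matched_ids = []
for _, client in pay.iterrows():
    # Match on the salient characteristics named in the design:
    # same race/ethnicity and center type, age within one year.
    candidates = pool[
        (pool["race_ethnicity"] == client["race_ethnicity"])
        & (pool["center_type"] == client["center_type"])
        & ((pool["age"] - client["age"]).abs() <= 1)
    ]
    if not candidates.empty:
        match = candidates.iloc[0]
        matched_ids.append(match["client_id"])
        pool = pool.drop(index=match.name)  # match without replacement

print(f"Matched {len(matched_ids)} of {len(pay)} PAY clients")

Matching without replacement keeps any one comparison youth from standing in for several PAY clients; PAY clients with no acceptable match would simply be reported as unmatched, another limitation to note in the report.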
The administrator was able to locate only one of PAY's former staff members to help guide our effort. The available man, "Mark," was currently employed in another City Life effort, and at first seemed quite willing to help us with all of our needs. He transported the box of seventy-five case management records from the first evaluation firm to us, expressing surprise that his files were not included in the box. He said he would search for his own records and get back to us. He also gave us some unsettling information. He said that PAY's staff supervisor had been unable to control his staff. Following the initial flurry of activity planning the Opportunities Fair, some staff began coming to work "when they felt like it." According to Mark,
He [staff supervisor] even started a sign-in/sign-out sheet at the desk so people would have to tell him where they were going. They'd walk right by the sheet and just say they had to be out in the field today. [Name] would just come in to pick up his check every other Friday. You'd never see him any other time.
However, the case management files in the box appeared to provide more than enough information to begin the process of accessing PAY clients. In addition to names, telephone numbers, addresses, and signed parental permission slips for the program and evaluation, each client's file also included weekly progress reports, assessments, lists of activities accomplished, and a substantial number of referral forms to other sources of support. The names and telephone numbers of clients were given to the evaluation interviewers to conduct the telephone surveys (with the hope that Mark would soon deliver his records as well). I began coding the files for descriptive data and contacting individuals at the referral sources to schedule qualitative interviews. I also read the PAY reports on program activities and services, including the semiannual reports to the federal funding agency and the monthly reports to the parent organization. These appea...