Preventing and Countering Violent Extremism

Designing and Evaluating Evidence-Based Programs

Michael J. Williams

About This Book

This textbook serves as a guide to designing and evaluating evidence-based programs intended to prevent or counter violent extremism (P/CVE).

Violent extremism and related hate crimes are problems that confront societies in virtually every region of the world; this text examines how we can prevent or counter violent extremism using a systematic, evidence-based approach. The book, equal parts theoretical, methodological, and applied, represents the first science-based guide for understanding "what makes hate," and how to design and evaluate programs intended to prevent it.

Though designed as a primary course textbook, the work can readily serve as a how-to guide for self-study, given its abundant links to freely available online toolkits and templates. As such, it is designed to inform students and practitioners alike with respect to the management, design, or evaluation of programs intended to prevent or counter violent extremism. Written by a leading social scientist in the field of P/CVE program evaluation, this book is rich in both scientific rigor and examples from the "real world" of research and evaluation dedicated to P/CVE.

This book will be essential reading for students of terrorism, preventing or countering violent extremism, political violence, and deradicalization, and highly recommended for students of criminal justice, criminology, and behavioural psychology.

Leg III
Program evaluation

This final leg of the book is intended not only for P/CVE program evaluators but also for program designers, program managers, their frontline staff, and others intimately involved in trying to demonstrate the effects of a given program. By reading it, regardless of one's job title, one can expect to become better equipped to capture, analyze, and communicate important information regarding the effects of a program. Additionally, the following section will make clearer not only the importance of evaluation itself but also the advantages of involving evaluation specialists early in a program's design phase.

A misconception

It would be a misconception to believe that evaluation is necessarily a lost cause if it's brought to bear on a program that has already been designed and is up-and-running. If that were so, we wouldn't have quality evaluations of the US public-school system, given that the school system was in effect before formal evaluations of it were made. On the contrary, one of our greatest evaluation methodologists, Thomas Cook, co-author of the venerable “Experimental and Quasi-Experimental Designs for Generalized Causal Inference,” often made public education his cause célÚbre (Shadish, Cook, & Campbell, 2002).
However, it is true that data collection opportunities are, de facto, limited when data are captured later rather than sooner. Furthermore, some (though not all) very strong research designs require data to be collected at an early “pre-implementation” phase. So, methodological options become narrower the later that evaluation design is brought to bear relative to program implementation. Regardless, whether one is planning for measurement and evaluation relatively early or late with respect to a program's implementation, know that there are rigorous research and evaluation options available; though, to underscore, it is typically advantageous to design evaluations prior to program implementation.

A word to evaluation funders/commissioners

The upcoming chapters can serve myriad evaluation-involved stakeholders, including those who commission evaluations. For example, Chapter 13 includes links to an innovative, freely available, online tool, known as “GeneraToR,” which is designed to help commissioners develop Terms of Reference (ToR; aka Request for Proposals [RFP]; aka Notice of Funding Opportunity [NOFO]). However, this leg of the text is unlikely to satisfy those who wish to have guidelines for the bureaucratic machinations of commissioning an evaluation (e.g., formation of an evaluation steering group, and deciding and codifying who will be responsible for making various bureaucratic decisions). For such advice, see the guide featured in this endnote.1
Instead, the upcoming leg of our journey is about bringing an evaluation to life as though you’re in the evaluator’s driver’s seat: as though you’re designing and conducting an evaluation first-hand. If you’re an evaluation commissioner, this section likely will increase your understanding of what to expect from top-flight evaluations: not only what to expect of the results, but what to expect of the processes of evaluation, and of standards for scientific reporting. Let this section of the book assist you in planning, overseeing, and demanding evaluation excellence in the field of P/CVE.

A word to evaluators and would-be evaluators alike

This leg of the book assumes some training in science; however, one needn't hold a science degree to gain from it. Nevertheless, if the evaluation plan involves surveys, interviews, or any other data collection from human participants (and what P/CVE evaluation doesn't?), resolve to recruit a social scientist, trained in such methods, to the team: if nothing else, as a consultant. Otherwise, conducting an evaluation without doctoral-level expertise in research design, data collection, and data analysis is tantamount to “practicing without a license.” In principle, anyone can evaluate, just as anyone can represent themselves in court. However, if there isn't a well-trained social scientist on the team, even if the team is incredibly smart, it simply doesn't know what it doesn't know; or, as it's said, “one who represents themselves in court has a fool for a client.” Save yourself time, trouble, and the possible faux pas of acting on faulty intelligence (i.e., the ostensible results of a homespun evaluation) by teaming with a competent doctoral-level social scientist (more on team selection in Chapter 13). In modern societies, we recognize that it is unethical, indeed illegal, to practice medicine without a license. The stakes are simply too high for it to be otherwise. In the realm of P/CVE, another potentially life-and-death enterprise, should we expect anything less than to be assisted by qualified doctors?

Orientation

Though this final section contains an enormous amount of information regarding evaluation and its methods in general, it will, of course, focus on evaluation and methods as they pertain to P/CVE specifically. If you enjoy science, you're going to enjoy the journey ahead: plenty of logic in motion, an engineer's perspective on assessing slices of the human condition. If you're not so inclined toward science, don't be daunted. Although any field of scientific inquiry is infinite, the general processes of evaluation science can be (at least conceptually) compartmentalized.

12 Defining the problem and identifying goals

As described in the previous leg of this text, those who design P/CVE programs must identify both the need(s) that a program intends to fulfill and the program’s operational goals. So, too, must those attending to P/CVE program evaluations identify both the need(s) that an evaluation intends to fulfill and its operational goals. Consequently, goals for the evaluation must be articulated and prioritized. This chapter describes the various types of evaluations (e.g., impact evaluation, developmental evaluation, process evaluation) and their uses. Additionally, it will discuss—from the perspective of program evaluation (vs. program design)—approaches to developing logic models and articulating a program’s theory of change.
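As a brief illustration (the program and its elements here are hypothetical, invented for exposition rather than drawn from this text), a community mentoring program's logic model might be flattened as follows: inputs (trained mentors, referral partnerships) → activities (weekly one-on-one mentoring sessions) → outputs (sessions delivered, attendance rates) → outcomes (increased sense of belonging, reduced endorsement of violence) → impact (reduced radicalization to violence among those served). The accompanying theory of change would state why each link should hold; for example, that sustained mentoring fosters belonging, which in turn reduces the appeal of extremist groups.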

Learning objectives

  ‱ Understand “the problem” to be addressed by “utilization-focused evaluation,” and how the latter addresses the former.
  • Be able to describe the major types of evaluations.
  ‱ Understand why every good evaluation (i.e., one that is scientifically grounded) must include at least some aspect(s) of a process evaluation.
  • Be able to discuss the importance of informational priority-setting for evaluation.
  • Understand why evaluators should articulate a program’s theory(ies) of change, even if one already has been developed for the program.
  • Understand functions that logic models serve for evaluators.

The “problem” to be addressed

As mentioned, similar to how P/CVE programs must identify both the needs they intend to fulfill and their operational goals, so, too, must P/CVE program evaluations be tailored to fulfill identified needs. In the case of evaluation, that need is information. Whose needs? The answer to that is simple: the primary intended users of the evaluation. Though that answer is easy to articulate, evaluators must take pains to understand clearly the informational needs of the persons they serve.
The point is that evaluations are useless unless they provide accurate, actionable information to those who could benefit from it (e.g., programmatic decision makers or public policymakers). This is at the heart of so-called “utilization-focused evaluation”: a watershed movement in evaluation pioneered several decades ago by Michael Patton (see Patton, 2008). Evaluation is not just basic research. It's applied research; it serves the actionable informational needs of predefined others. In short, all types of evaluation should be developed in the spirit of utilization-focused evaluation.
REALITY CHECK
Bear in mind that the audience of researchers and practitioners in the field of P/CVE can be considered among the legitimate primary intended users of a given P/CVE evaluation. Therefore, evaluators ought to put substantial consideration into how a given evaluation can satisfy not only the perhaps narrow informational needs of (for example) program staff and evaluation funders, but also those of the theory and practice of P/CVE more broadly. In other words, program evaluations can be vehicles both for theoretical developments and for codification of evidence-based practices relevant to P/CVE, assuming that doing so does not place undue burdens on program staff or program participants (Williams & Kleinman, 2013).
Evaluation needn’t be either applied or basic research; it should be both. That dual function is the brass ring of evaluation. To miss an opportunity to make both practical and theoretical contributions to the field is to do a disservice to the important mission of P/CVE itself.
As mentioned, the evaluation “problem” to be addressed is a lack of information in some regard. However, the needed information is not necessarily about whether a given program “works” (i.e., an impact evaluation). Instead, to find out what needs to be known, once again, evaluators must consult the primary intended users. For example, primary intended users might be interested primarily, or additionally, in whether a program is being executed as planned (i.e., a process evaluation), or in how they can develop a new intervention for a given population/clientele (i.e., a developmental evaluation). Furthermore, the evaluand might not be a P/CVE program per se, but a P/CVE-related policy, strategy, or behavior of a network, etc. (see “Clarify what will be evaluated,” n.d.).

Identify primary intended users

In some (arguably most) cases, an evaluation may have several uses (see “Identify who are the primary intended users of the evaluation and what will they use it for,” n.d.). By identifying primary intended users, one may subsequently query them to learn of their informational needs, so that those needs can be met by the evaluation (ibid.). Primary intended users are not all of those who have a stake in the evaluation, but those who have the capacity to effect change informed by the evaluation (ibid.). These parties are in a privileged position “to do things differently” (e.g., change tactics, strategies, policies) because of their proximity to the evaluation and/or the program itself (ibid.). Therefore, the informational needs of these parties are privileged over others' (for example, over those of a general audience who might be curious as to an evaluation's results; ibid.). The following endnote provides a link to a guide, from the International Development Research Centre, that includes questions intended to guide the identification of primary intended user(s).2
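For illustration only (these are generic examples of the kind of question such a guide poses, not quotations from it): Who has the authority or budget to act on the findings? What pending decisions could the evaluation inform? Who requested the evaluation, and who will read its report first? Answers that converge on the same few people typically identify the primary intended users.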

Determining the intended use(s) of an evaluation

Though it might seem obvious, the intended use(s) of an evaluation defines its purpose(s), and from a utilization-focused perspective, the intended uses are sacrosanct. It is not enough simply to assert that an evaluation will be used for “accountability” or for “learning” (see “Decide purpose,” n.d.). Those should go without saying. The aforementioned guide, from the International Development Research Centre, contains questions intended to guide evaluators in their discussions with primary intended users, to ascertain how they seek to use information from a prospective evaluation. In sum, the central question to ask of primary intended users is “What do you want to know about the program?” The primary job of evaluators is to translate the answer(s) to that question into a suitable evaluation/res...
