Participatory Pedagogic Impact Research

Co-production with Community Partners in Action

eBook (ePub), 246 pages, English

About this book

Involvement of community partners in the structure and design of services is largely accepted in principle, but its practice is heavily contested. This book argues that the co-production of research is one of the best ways to involve community partners. As well as having intrinsic value in itself, research embeds a culture of learning, co-production and valuing research within organizations. It also creates a mechanism for developing evidence for, monitoring and evaluating subsequent ideas and initiatives that arise from other co-production initiatives.

The book makes a case for research to be a synthesis of participatory research, critical pedagogy, peer research and community organizing. It develops a model called Participatory Pedagogic Impact Research (PPIR). Participatory research is often criticized for not having the impact it promises. PPIR ensures that the issues chosen, and the recommendations developed, serve the mutual self-interest of stakeholders and are realistic and realizable. At the same time, this approach pushes the balance of power towards the oppressed, using methods of dissemination that hold decision makers to account and create real change. PPIR also develops a robust method for creatively identifying issues, methods and analytic frameworks. The book's third section details case studies of PPIR in action across Europe and the United States, with professional researchers' and community partners' reflections on these experiences.

This book gives a unique articulation of what makes for genuinely critical reflective spaces, something underdeveloped in the literature. It should be considered essential reading both for participatory research academics and for those involved in the planning, commissioning and delivery of health and social care services.

Information

Author: Mike Seal
Publisher: Routledge
Year: 2018
Print ISBN: 9780367590000
eBook ISBN: 9781317532910
Edition: 1

Part 2
Participatory Pedagogic Impact Research

Towards a process

5 Meaningful organizational impact

Winning stakeholders over to the process
Drawing on practical experience, this chapter looks at how to sell the concept of co-production and participative research to all stakeholders. It will examine what needs to be in place for a participatory research project to have a chance of making an impact on the overall organizational culture, particularly stakeholders’ mindsets towards research and community partner co-production. In doing so it warns against falling into what I, and others (Seal, 2005; Skyrme, 1999), have called elsewhere the ‘fallacy of information’, which is the self-deception that if stakeholders just knew how good a thing participatory research and co-production is, they would come on board. They might say they embrace the ideas – and even mean it – but other factors can come into play and sabotage the process, including unconsciously. The arguments for co-production and participatory research have been well rehearsed earlier in the book and elsewhere. The challenge is finding an organization that is in the right place to hear them.

Introduction

It may help here to discuss a project where everything that could go wrong did. I was brought in to embed evidence-based practice (Dickinson, 2001) within a new service initiative for young people. This was around the time of the coining of the term NEET (not in education, employment or training), when the government realized that a substantial number of people were ‘slipping through the net’ and not ‘engaging’ in ways that it was felt they ought. This was a top-down initiative that was once countrywide (Smith, 2000, 2007). As Smith (2000, 2007) notes, its premise was based, ironically, on two non-evidence-based governmental perceptions. The first was about why young people were NEET. The government did not entertain the possibility that young people were NEET because there were no decent jobs, and that training and education did not lead to jobs any more, had been cut, or had been farmed out to private companies and dumbed down. The government instead assumed that young people were not motivated, and/or had been mollycoddled, and it was they who needed to change (Smith, 2000, 2007). This was the underlying premise of the service. It took a carrot-and-stick approach, where young people were both compelled to engage (benefit laws were changed to ensure this) and were to be given the right advice and support to make the best of this, hence the new service.
The second perception was about existing services for young people (Smith, 2000, 2007). There was a perception that the careers service needed to be more young-people friendly, and youth and community work a little less so, or rather that it needed to give advice ‘properly’, especially careers advice. There was also the ‘round table communication deception’ (Seal, 2005), which assumes that having multiple agencies involved with one young person is inefficient – i.e. that what was needed was for agencies to communicate better, often round a table, and identify one of them to be the keyworker to co-ordinate services and build a coherent plan. What this does not anticipate is the level of organizational defensiveness of many young people’s services (Seal, 2005), with many assuming that they already had young people at the centre of their planning. As Jeffs and Smith (2002) also note, this view of multi-agency working ignores the agency young people already employ. Young people often only got what they needed through their own strategic management of the agencies’ input. Multi-agency approaches, whereby all agencies work out a package of support between them, represent a net loss for them.
I have gone into some detail on this because it was an initiative that was fairly doomed from the start. The new service was top-down and rejected by most agencies, who had been corralled into multi-agency partnerships, and had some of their budgets diverted into it (Smith, 2000, 2007). To then try and embed evidence-based practice into the partnership would have been difficult under any circumstances. As one stakeholder commented, ‘are we going to apply evidence-based practice to this mess then?’ – the research project was an initiative born of necessity rather than commitment. In my first planning meeting the chief executive said that they ‘had to do something about evidence-based practice’ and asked if I could come up with a ‘toolkit’ to do this. Evidence-based practice was popular with government at the time (Catlan, 2002; Dickinson, 2001). It was a medical model that was being grafted unquestioningly onto any social intervention (Smith, 2000, 2007). However, like co-production and participatory research, it had potential to be meaningful. I countered that evidence-based practice was not about doing a couple of small-scale evaluations and then saying this was the best model; it was about embedding a culture of research and evaluation that could continually develop and evolve current best practice. We needed to embed a culture across the partnership to do this and then develop infrastructure to allow for research to be conducted on an ongoing basis. After some discussion the chief executive reluctantly agreed, and I went away to develop a project proposal to these ends.
In the original proposal I suggested a series of five-day courses spread across different agencies and stakeholders to develop an understanding of evidence-based practice and to start to embed a culture around it – this being the bare minimum of what I thought it would take to make this process meaningful. This was cut to two days, supplemented by a paper I produced to be read beforehand, which I saw no indication had been read by anyone, including the chief executive. The two-day workshops were changed by the steering group to a dedicated operational group meeting in each area, which would only be attended by middle management. This, in all but one case, became an item of around 45 minutes on the existing agenda, which was often then cut down again, in one case to 10 minutes. As you can imagine, no culture change happened around evidence-based practice; in fact only some middle managers got to hear that the project was happening at all.
We conducted a pilot piece of participatory research, which had a small impact for those concerned and produced some interesting results. The proposed all-stakeholder conference looking at the implications of the results and how to further embed evidence-based practice became a dedicated high-level board meeting with only service leads attending, which then became an item on the agenda of their next general board meeting. They asked me to reduce the final 30-page report (as agreed) to a two-page executive summary that ‘could be read in two tube stops on the way to the meeting’. I was asked if I could minimize the stuff about doing further research, as they did not have the budget, and concentrate on what needed to change. In the presentation the chief executive asked what had happened to his original idea of a toolkit.
While a woeful tale, it certainly was a steep learning curve for me in what is needed for a project even to have the possibility of meaningful impact. I had succumbed to the fallacy of information. The project was a good idea, and everyone genuinely agreed to it. I also think that no one deliberately sabotaged it, although some definitely undermined it. However, other factors came into play that meant it unravelled. There was not a partnership culture, in that particular time and place, that would have allowed it to succeed, whatever the different stakeholders’ intentions were. This example highlighted the importance for me of taking into account both the external and internal factors that impact on the organization or initiative that is interested in developing a co-production/participatory research project. Building on previous work (Seal, 2009), I think there are a number of characteristics that make for successful engagement of an organization and its stakeholders:
1) That you have taken account of the external factors acting on the organization.
2) That it is a learning organization: overall, it has more enhancing features than inhibiting ones for co-production and participatory research.
3) That you have access to, and are allowed to cultivate, a champion among all stakeholder groups, from community members to senior management.
4) That there are processes whereby all stakeholders can develop a meaningful understanding of co-production and participatory research.
5) That the aims of the project highlight and dovetail with stakeholder concerns and that the project has a focus that threads through, illuminates and finds common ground between the concerns of stakeholders.

Taking account of external factors

Perhaps a first consideration is whether the organization has chosen to take part in the project. While this is perhaps an ideal situation, I think it makes a real difference if they have. Even if the decision to participate has only really been exercised by senior management, it gives you, as a researcher, leverage when you encounter cynicism from others in the organization. It also means that you can motivate those lower down the hierarchy with the carrot of making senior management accountable. If the organization is only engaging in the project because they have been told to by government or external forces, then achieving buy-in becomes all the more difficult, as the detailed example just given demonstrates.
There are many models for considering external factors that impact on organizations, known by acronyms such as PEST (political, economic, social and technological), PESTEL or PESTLE (which add legal and environmental factors), SLEPT (which adds legal factors), STEEPLE and STEEPLED (which add ethical and, in the latter case, demographic factors), and DESTEP (which adds demographic and ecological factors). I like SPELIT, which stands for Social, Political, Economic, Legal, Intercultural and Technological – although the E can also stand for environmental, ecological or even ethical. Of definite relevance to the previous case study were legal, political and intercultural factors. Politically, as said, the new service was not well received by any existing stakeholders and was seen as top-down and as a product of the prevailing ideology at Westminster. Legally, the aforementioned young people’s service was never made statutory and was only an ‘initiative’; the government at the time was throwing out a number of initiatives and seeing which ones prevailed (Smith, 2002). Culturally, the new service clashed with many existing organizational and service cultures, as did evidence-based practice, which was seen as tainted by association with the new young people’s service and the prevailing governmental regime. The prevailing culture was to ride out the initiative and have as little damage done to the core cultures and services as possible. The service itself also had a very hierarchical culture and structure (Smith, 2002b).
The danger of schemas such as SPELIT is that there are always going to be political, legal and cultural issues, and this can cause pessimism about the chances of a project having an impact. The fundamental question is whether these factors are too strong, or make the service too fragmented, to allow an impact on the organization and its stakeholders, or whether the project would be hijacked or undermined by some of the stakeholders to a negative end. As an example of success in the face of such problems, we have our project with probation services. The project was initiated at a time when probation services were undergoing privatization. In some ways it was the worst possible time to engage, as staff motivation was at an all-time low and suspicion and reluctance to take on new initiatives were at an all-time high. However, underneath this, all stakeholders had a commitment to the core principles of the probation service, albeit very different views of how to actualize them, and these core principles could be related to the aims of co-production and participatory research. As Beth Coyne will describe in Chapter 12, the project was able to have a discernible impact. While there was sabotage from some stakeholders, it could be identified and mediated, at least to a degree.

The learning organization: inhibiting and enhancing factors

Even if the organization and senior management buy into the idea of participatory research and co-production, project planners need to know whether this is likely to succeed in reality. It is, as we have noted several times, easy to claim principles and a belief in such things. One way of viewing this is to try to assess the degree to which the organization is a learning one. I have previously examined different models of what constitutes a learning organization (Anderson, 1997; Argyris, Putnam & McLain, 1985; Senge, 1994; Smith, 2001a & b) and identified inhibiting and enhancing models, with dimensions of control, protectionism, views about evidence and research, views on history, views on dissent, ‘holy cows’, staff relationships, openness on information sharing and public or private debate. Table 5.1 is a refined version of that previous work (Seal, 2005, 2009).
Table 5.1 Different models of what constitutes a learning organization
Factor | Inhibiting model | Enhancing model
Control | The organization wants to control the environment and task of co-production and participatory research. The staff and management groups spend a long time defining participation and research, foreseeing barriers without talking to the community members. | The o...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Dedication
  6. Table of Contents
  7. List of illustrations
  8. List of contributors
  9. Acknowledgements
  10. Introduction: the de-mystification and democratization of research
  11. Part 1 The co-production of knowledge with community partners: dilemmas and debates
  12. Part 2 Participatory Pedagogic Impact Research
  13. Part 3 Notes from the field: research in action
  14. Conclusion: towards truly ethical co-productive research
  15. Index