
216 pages · English · ePUB
About this book
First published in 1992. The present volume belongs to a series focusing on the evaluation of development assistance. This series is part of a research and publishing programme under the auspices of the Working Group on Aid Policy and Performance of the European Association of Development Research and Training Institutes (EADI). Two volumes have already appeared. The first focuses on the performance and economic impact of food aid and the second on the evaluation policies and performance of some European countries. The papers were presented at a workshop in March 1990 organised by the Norwegian Institute of International Affairs at Lysebu, on the outskirts of Oslo.
1. Evaluating Development Assistance: State of the Art and Main Challenges Ahead
I. Introduction
Since the 1960s, aid to developing countries has become an important component of the world economy. Thus, in 1989, official development assistance (ODA) amounted to 51.3 billion USD. Almost 90 per cent of this aid was provided by the members of the OECD Development Assistance Committee (DAC). More than 75 per cent was provided bilaterally, while the remaining 24 per cent was channelled through multilateral agencies [OECD, 1990]. Although far too small to significantly affect the world-wide income distribution, the present flow of development assistance, in its national context, is not insignificant. In 1989, the ODA provided by the DAC countries amounted to 0.33 per cent of their gross national product (GNP), varying between 0.15 per cent (USA) and 1.04 per cent (Norway) for the individual countries. The relative importance of the ODA contributions tends to be much greater on the recipient side, especially for low-income countries and small countries. In 1988–89, the low-income countries of sub-Saharan Africa, on average, received ODA equal to 14.2 per cent of their GNP. For the Asian low-income countries the average amounted to only 1.5 per cent of GNP because of the relatively low ODA receipts of India and China (0.8 and 0.6 per cent of their GNP). For Asian and Latin-American middle-income countries with large populations, ODA contributions varied between 1.2 (Thailand) and less than 0.1 per cent of their GNP (Mexico) [OECD, 1990].
During the late 1970s and the 1980s, aid became increasingly exposed to criticism from both the political right and the left for various and often different reasons [Riddell, 1987]. One response of bilateral and multilateral aid agencies was to give added emphasis to evaluation. During this period, the evaluation function became institutionalised, and most aid agencies established evaluation units within their administrative structures [Stokke, 1991a].
From the perspective of the donor agencies and, at the national level, ultimately the government and Parliament, two concerns are of particular importance. The first has already been alluded to, namely, criticism of aid in general or the way it is provided in specific cases. Such criticism may relate both to the effectiveness of aid (the more fundamental question of whether aid attains the objectives set for the ODA transfers) and its efficiency (the question of whether there is an acceptable relation between the results of aid and its cost). Evaluation, therefore, becomes a tool by which the aid administration, and ultimately the politicians, may justify aid appropriations. The other concern is related, more particularly, to the administration of development assistance. When it comes to the end use, ODA is utilised abroad - and administered by other administrative systems than that of the donor. Its effects are less visible than government spending at home and escape both systemic control by public administration and scrutiny by civic society, including the mass media. There is, therefore, an extra incentive for control mechanisms to ensure that aid works according to the objectives set and in an efficient way. Evaluation serves this function, too.
From the perspective of recipient governments, the evaluation of projects and programmes financed wholly or partly by foreign donors may be considered a less urgent, even tricky, activity, since it exposes their administrative system and centrally placed administrators to critical scrutiny. Furthermore, there is the risk that the resources from outside may dry up as a result of the evaluation. The fact that evaluation is usually initiated from the outside, designed to provide information adapted to the needs of foreign aid agencies, and to a large extent also carried out by foreigners, is not conducive to creating enthusiasm for the process on the part of recipient governments. There are also other constraints, such as tight budgets and shortages of suitably trained manpower. Moreover, to the extent that aid is earmarked for specific projects, its opportunity cost for the recipient country is very low; as a consequence there is no urgent need for an evaluation of such aid unless it is considered in the context of a broader programme. Nevertheless, during the 1980s, many Third World governments also came to consider evaluation as increasingly important for policy-making and management.
Evaluation, as a tool in public administration, was developed in the United States. There, in its more mature form in the late 1960s and early 1970s, it focused on major social programmes initiated by the administrations of Presidents Kennedy and Johnson. When, however, it was taken up by European governments, evaluation was first used in the field of development assistance. Adapted to the forms of aid prevalent in the late 1960s and 1970s, and usually working within the confines of the aid-implementing administration, evaluation focused mainly, in the 1970s and early 1980s, on often small aid projects. However, during the late 1980s, evaluation eventually tried to meet the challenges posed by the new forms of aid - support to large programmes and various forms of non-project aid. These challenges remain solidly on the evaluation agenda for the 1990s, too [Stokke, 1991a].
Originally, social scientists, in particular economists, were in demand, bringing the different approaches and tools of their trade with them into the new growth area of evaluation. During the formative years, economists were centrally placed in most aid administrations and their main instrument, cost-benefit analysis, accordingly became a much-used technique in the evaluation of aid. However, other social sciences were increasingly drawn upon, too. The approaches and tools chosen for the purpose of evaluation are seldom neutral;1 they imply values in many respects, not the least through the inclusion, or the omission, of various aspects to be addressed.
As applied to aid, evaluation found itself in the interface between academic research and public administration, with the needs of the administrators holding the upper hand; evaluations were expected to be cost-effective themselves and to identify requirements that could immediately be acted upon by the aid administration, with implications both for the resources spent on evaluation and for the timing of the work. The most common means of evaluation is a fact-finding team, increasingly with a multi-disciplinary composition. The team is usually appointed by the (donor) aid agency, which also sets the terms of reference (TOR). The team visits the area of the project, reports back to the agency on its findings and makes recommendations of changes that might be considered in order to improve the project.
This setting may not be equally well suited to answer all kinds of questions, from different stakeholders, about all kinds of aid activities. Several contradictions are involved. One basic contradiction, already alluded to, can be seen in the tension between the aid administration's demand for quick answers, to be transformed into immediate administrative action, and the professional concern of the evaluator for the methodology and precision of the craft. Given the context, an additional question - for both the administrator and the researcher - would concern the degree of accuracy needed in order to serve the particular purpose of the evaluation in question.
For the aid administration, the major pay-off of evaluation is the information extracted and fed back to the management, to be used for adjustments of the ongoing activity and for the planning and implementation of new ones. Evaluation offers an opportunity to learn, in a systematic way, from past experiences. However, the quality of the information provided, and its credibility, will to a large extent depend on the methods used. The methods have to be chosen according to the kind of questions to be answered; for some types of questions, the approach and tools of one discipline may be the most appropriate, while other questions may be better addressed by other disciplines.2 This applies to the different types and forms of aid, too.
Evaluation reports will be more credible if based on an accepted methodology. Otherwise, the findings may depend on the subjective perceptions of the evaluators. According to the established tradition, an evaluation should basically address two questions. First, what changes have occurred as a result of the aid intervention, and to what extent are these changes adequate? This question relates to the more fundamental question: does aid work? Second, were the resources spent on the project justified by its results?
To answer the first question, an evaluation methodology should identify the criteria by which the situation created by the aid intervention is to be judged. One way of proceeding is to assess the extent to which the objectives set have been attained. However, objectives set for aid have not always been formulated in clear terms. They may be set at a high level of generalisation but not operationalised at the level of the specific aid intervention. They may also be changed during the implementation process. This is discussed in more detail in section III. To evaluate whether the ultimate objectives of an aid intervention have been achieved, it becomes necessary to assess the changes which have taken place in the socio-cultural, economic, institutional and technical conditions as a result of the intervention. This is discussed in section IV on impact analysis.
An evaluation will usually also have to address the second question, namely the extent to which the observed results of the aid were worth the resources used. The traditional approach to this question is to use cost-benefit analysis and related techniques. This approach ideally expresses all costs and benefits in terms of one numéraire and then computes an indicator of private or social profitability. However, it has a number of limitations and shortcomings: it is data-intensive, neglects social and institutional change, and is not well adapted to projects with a hazy relation between inputs and outputs, to activities other than projects, or to overall assessments of aid programmes or parts of these. In section V, the advantages and shortcomings of cost-benefit analysis as a technique of evaluation are briefly discussed.
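The mechanics described above - expressing benefit and cost streams in a single numéraire and computing a profitability indicator - can be sketched as follows. The project figures and the 8 per cent discount rate are hypothetical, chosen only to illustrate the calculation; they are not drawn from any study discussed here.

```python
# Illustrative sketch of a simple cost-benefit calculation: all costs and
# benefits are expressed in one numeraire (here, constant currency units)
# and discounted to a net present value (NPV) and a benefit-cost ratio.
# All figures below are hypothetical.

def discounted(flows, rate):
    """Present value of a list of yearly flows (year 0 is undiscounted)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def appraise(benefits, costs, rate=0.08):
    """Return NPV and benefit-cost ratio for yearly benefit/cost streams."""
    pv_benefits = discounted(benefits, rate)
    pv_costs = discounted(costs, rate)
    return pv_benefits - pv_costs, pv_benefits / pv_costs

# A hypothetical five-year aid project: heavy cost up front,
# benefits building up over time.
benefits = [0, 30, 60, 80, 80]
costs = [150, 20, 10, 10, 10]

npv, bcr = appraise(benefits, costs)
print(f"NPV = {npv:.1f}, benefit-cost ratio = {bcr:.2f}")
```

A project is judged worthwhile under this criterion when the NPV is positive (equivalently, when the benefit-cost ratio exceeds one); the limitations noted above - data requirements, intangible effects, hazy input-output relations - concern whether the streams can be estimated at all, not the arithmetic itself.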
Finally, in section VI, the main challenges to evaluation emerging from the structural changes in the forms of aid that have taken place during the 1980s are briefly identified and discussed. These changes call for imagination in the way evaluation should be organised, the strategies to be followed and the approaches and methods to be applied. They all have to be adapted to the more demanding tasks with which evaluation is confronted.
Before proceeding with this discussion, however, it is necessary to state the meaning of the concept of evaluation and also to define several terms related to it, in order to prevent confusion about terms which are often used in an imprecise way. In 1986, the Development Assistance Committee (DAC) of the OECD [OECD, 1986] made an effort to standardise the language in this field. In the following section, we draw on this work.
II. Terminology
As monitoring and evaluation of aid gained momentum during the 1970s, several terms were introduced. The exact meaning of these terms was not always clearly defined. The Expert Group on Aid Evaluation, established by DAC in 1982, made a valuable contribution to the standardisation of the terminology.
The group [OECD, 1986:65] defines evaluation as 'an examination as systematic and objective as possible of an ongoing or completed project or programme, its design, implementation and results, with the aim of determining its efficiency, effectiveness, impact, sustainability and the relevance of the objectives'.
The five elements aimed at are clarified by the group as follows:
- Aid is efficient if [ibid.: 72] 'it uses the least costly resources necessary to achieve its objectives'. This implies, inter alia, that 'the aid can gain the most results for its economic contributions'.
- Effectiveness of aid relates to the effects of aid vis-à-vis the objectives set. Aid is effective to the extent that the objectives are achieved.
- Impact is a wider term. It refers to [ibid.: 73] the 'effect [of an aid intervention] on its surroundings in terms of technical, economic, socio-cultural, institutional and environmental factors'.
- Sustainability refers [ibid.: 73] to 'the extent to which the objectives of an activity will continue (to be reached) after the project assistance is over'.
- Finally, evaluation may or should analyse [ibid.: 73] 'to what extent … the objectives and mandate of the programme (are) still relevant'.
An evaluation may be carried out during the implementation of a project (for example, mid-term) or after it has been completed, that is, when the funding has come to an end (ex-post evaluation). In the first case, emphasis will be on the effectiveness of aid vis-à-vis the immediate objectives and the efficiency, in terms of, inter alia, a timely start of the work, delivery of inputs and costs. Although some experiences related to the implementation may be recorded, on which administrative action may be recommended, it will hardly be possible at this stage to address in a meaningful way the effectiveness, efficiency, impact and sustainability of the aid intervention. Firm evidence of effects will still be scarce; effectiveness can therefore be analysed only in a preliminary way. The same is true for efficiency. It will be too early to discuss impact and sustainability and also a little early to assess the relevance of the objectives set, although some early experiences may sometimes be indicative. However, as time passes, it will gradually be possible to analyse these issues. A full analysis of the impact and sustainability of an aid intervention may, however, not be possible until several years after the funding from outside has been ended.
Evaluation is distinct from other related activities, although the border-lines may sometimes be blurred:
- (1) Appraisal, sometimes also called ex-ante evaluation, is the final examination of an aid proposal with respect to its [ibid.: 65] 'relevance, technical, financial and institutional feasibility and its socioeconomic profitability'. This examination precedes the decision whether to engage in the project or not.3
- (2) Auditing aims to determine [ibid.: 62] 'whether … the measures, processes, directives, and organisational procedures of the donor … conform to norms and criteria set out in advance'.
- (3) Financial control consists of the verification of [ibid.: 62] 'whether expenses have been authorised and recovered, and whether they conform to rules and contracts'.
- (4) Monitoring is a management function during the implementation phase. The aim is to collect all the necessary information in order to check whether the implementation is proceeding in accordance with the plan and whether the original objectives are being realised. It should allow the management to make timely adjustments.
However, the dividing line between these functions and evaluation during the implementation period or at the end of a project or programme may indeed be a narrow one; this applies in particular to evaluation and monitoring in their various forms. Thus, a mid-term evaluation will partially have the same function as monitoring, and the latter, ideally, should provide data on which evaluation may be based. However, although monitoring and evaluation may be considered as a continuum along several dimensions rather than distinct activities, as suggested by Binnendijk [1989], particularly from the perspective of aid management, they may differ in terms of reference periods and their primary users. They may therefore best be seen as distinct but related activities.
III. The Objectives
The criteria on which evaluation of an aid intervention is to be based have to be established before the evaluation can take place. The initial objectives of the intervention have, traditionally, emerged as the obvious point of departure. In a formal sense, these appear as common objectives of the donor and recipient authorities, since both are parties to the aid agreement. Basically, however, the objectives are formulated by the donor agency. The discussion in this ...
Table of contents
- Foreword
- Glossary
- 1. Evaluating Development Assistance: State of the Art and Main Challenges Ahead
- 2. The Rise and Fall of Cost-Benefit Analysis in Developing Countries
- 3. Technical Cooperation: Assessment of a Leviathan
- 4. Evaluating the Effectiveness of Technical Cooperation Expenditures
- 5. Evaluation and Women-oriented Aid
- 6. Participatory Research and the Evaluation of the Effects of Aid for Women
- 7. Evaluating UK NGO Projects in Developing Countries Aimed at Alleviating Poverty
- 8. Evaluating the Impact of World Bank Structural Adjustment Lending: 1980–87
- 9. Aid Policy Evaluation
- Index
- Notes on Contributors
Evaluating Development Assistance, by Lodewijk Berlage and Olav Stokke, is available in PDF and ePUB formats.