Statistical Tragedy in Africa?

Evaluating the Database for African Economic Development

eBook - ePub

  1. 136 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android
About this book

What do we know about economic development in Africa? The answer is that we know much less than we would like to think. This collection assesses the knowledge problem present in statistics on poverty, agriculture, labour, education, health, and economic growth. While diverse in origin, the contributors to this book are unified in two conclusions: the quality and quantity of data need to be improved; and this is a concern not just for statisticians. Weaknesses in statistical methodology and practice can misinform policy makers, international agencies, donors, the private sector, and the citizens of African countries themselves. This is also a problem for academics from various disciplines, from history and economics to social epidemiology and education policy. Not only does academic work on Africa regularly use flawed data, but many problems encountered in surveys challenge common academic abstractions. By exploring these flaws, this book will provide a guide for scholars, policy makers, and all those using and commissioning surveys in Africa. This book was originally published as a special issue of The Journal of Development Studies.

Statistical Tragedy in Africa? by Morten Jerven and Deborah Johnston is available in PDF and ePUB formats, catalogued under Business & Economic Theory.

The Political Economy of Bad Data: Evidence from African Survey and Administrative Statistics

JUSTIN SANDEFUR & AMANDA GLASSMAN
Center for Global Development, Washington DC, USA
ABSTRACT Across multiple African countries, discrepancies between administrative data and independent household surveys suggest official statistics systematically exaggerate development progress. We provide evidence for two distinct explanations of these discrepancies. First, governments misreport to foreign donors, as in the case of a results-based aid programme rewarding reported vaccination rates. Second, national governments are themselves misled by frontline service providers, as in the case of primary education, where official enrolment numbers diverged from survey estimates after funding shifted from user fees to per pupil government grants. Both syndromes highlight the need for incentive compatibility between data systems and funding rules.

1. Introduction

There is a growing consensus among international observers that official statistics in many sub-Saharan African countries are woefully inadequate and unreliable (Jerven, 2013) – what Devarajan (2013) calls a ‘statistical tragedy’. In response to this tragedy, the UN High Level Panel on post-2015 development goals has called for a ‘data revolution’ to improve tracking of economic and social indicators in Africa and the rest of the developing world (United Nations, 2013). The agenda emerging around these discussions has tended to assume that more money and better technology will solve the problem, focusing on an expansion of survey data collection efforts, and a push for national governments to disseminate information under open data protocols (Caeyers, Chalmers, & De Weerdt, 2012; Demombynes, 2012).
Do these solutions address the underlying causes of bad statistics? Relatively less attention has been paid to understanding why national statistics systems became so badly broken in the first place, or to analysing the perverse incentives which any data revolution in sub-Saharan Africa must overcome. We attempt to fill this gap by documenting systematic discrepancies between data sources on key development indicators across a large sample of countries. By necessity, we focus on cases where multiple independent sources report statistics on ostensibly comparable development indicators.1 For this, we draw on cross-national data within Africa on primary school enrolment and vaccination rates taken from the Demographic and Health Surveys (DHS), and contrast it with data from education and health management information systems maintained by line ministries.2
The core hypothesis to be tested in this article is that misrepresentation of national statistics does not occur merely by accident or due to a lack of analytical capacity – at least, not always – but rather that systematic biases in administrative data systems stem from the incentives of data producers to overstate development progress. The administrative data we analyse are designed to be part of management information systems in health and education ministries. It should be no surprise, then, that misrepresentations in the data reflect the incentives provided by the governance and funding structures of these ministries, particularly in the low-income, highly aid-dependent countries which dominate our sample.3
The article is organised around two interlinked principal–agent problems: in the first, national governments can be seen as agents of international aid donors and domestic constituencies; in the second, governments act as principals seeking to motivate civil servants to simultaneously provide public services and report truthful data on the same.
In the first case, an international aid donor (the principal) seeks to allocate resources between and evaluate the performance of a national government (the agent). The principal requires data to monitor agents’ performance. Recognising the risks inherent in ‘self-reported’ official statistics, international donors invest heavily in the collection of survey data on households, farms and enterprises. Notably, these surveys involve considerable foreign technical assistance paid for directly by donors. In the extreme case of the DHS data sponsored by the US Agency for International Development (USAID) and other partners – on which much of the analysis below relies – the donor insists on a standardised questionnaire format in all countries, donor consultants train the data collectors and oversee fieldwork, and all or most raw data is sent back to the donor country for analysis and report writing.4
Donors cannot always rely on such carefully controlled data products as the DHS, though, and Section 3 shows the predictable results when donors link explicit performance incentives to administrative data series managed by national governments. In 2000, the Global Alliance for Vaccines and Immunisation (GAVI) offered eligible African countries a fixed payment per additional child immunised against DTP3, based on national administrative data systems. Building on earlier analysis by Lim, Stein, Charrow, and Murray (2008), we show evidence that this policy induced upward bias in the reported level of DTP3 coverage amounting to a 5 per cent overestimate of coverage rates across 41 African countries.
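The bias measure underlying this comparison is simple: average the country-by-country gap between reported (administrative) coverage and independent survey estimates. A minimal sketch, using entirely hypothetical figures rather than the actual DHS and administrative series analysed in the article:

```python
# Toy illustration of measuring upward bias in self-reported DTP3
# coverage: administrative rates vs. independent survey estimates.
# All country labels and figures below are hypothetical.

admin_coverage = {"A": 0.85, "B": 0.78, "C": 0.92, "D": 0.70}
survey_coverage = {"A": 0.80, "B": 0.75, "C": 0.85, "D": 0.68}

# Per-country gap: positive values indicate over-reporting.
gaps = [admin_coverage[c] - survey_coverage[c] for c in admin_coverage]
mean_bias = sum(gaps) / len(gaps)  # average upward bias across countries
print(round(mean_bias, 4))  # roughly 0.04, i.e. about 4 percentage points
```

The article's actual estimate comes from a regression framework rather than a raw mean, but the quantity of interest is the same: the systematic component of the administrative-minus-survey gap.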
In short, pay-for-performance incentives by a donor directly undermined the integrity of administrative data systems. To invert the common concern with incentive schemes, ‘what gets measured gets managed’, in the case of statistics it appears that what gets managed gets systematically mismeasured, particularly where few checks and balances are in place.
In the case of immunisation statistics, national governments mislead international donors and their citizens, whether by accident or design. Previous analysis of African statistics has focused on this dynamic in which central governments are the producers of unreliable statistics (Jerven, 2011). But in other cases national governments themselves are systematically misled, creating an important obstacle to domestic evidence-based policy-making.
In this second accountability relationship discussed in Section 4, national governments and line ministries (the principal) seek to allocate resources between and evaluate the performance of public servants such as nurses and teachers (the agents). By and large, the information the principal relies on in such settings comes from administrative data systems based on self-reports by the very agents being managed. The result is systematic misreporting, undermining the state’s ability to manage public services, particularly in remote rural areas.
Section 4 illustrates this problem in primary school enrolment statistics. Comparing administrative and survey data across 46 surveys in 21 African countries, we find a bias towards over-reporting enrolment growth in administrative data. The average change in enrolment is roughly one-third higher (3.1 percentage points) in administrative than survey data – an optimistic bias which is completely absent in data outside Africa. Delving into the data from two of the worst offenders – Kenya and Rwanda – shows that the divergence of administrative and survey data series was concomitant with the shift from bottom–up finance of education via user fees to top–down finance through per pupil central government grants. This highlights the interdependence of public finance systems and the integrity of administrative data systems. Difference-in-differences regressions on the full sample confirm that the gap between administrative and survey data of just 2.4 percentage points before countries abolished user fees grew significantly by roughly 10 percentage points afterwards.
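The difference-in-differences logic here can be sketched in a few lines: compute the mean administrative-minus-survey enrolment gap before and after user-fee abolition, and take the difference. The figures below are hypothetical, chosen only to mirror the magnitudes reported in the text (a 2.4 percentage-point gap growing by roughly 10 points); the article's regressions use the full 21-country panel with controls.

```python
# Sketch of the difference-in-differences comparison: the gap between
# administrative and survey enrolment rates (in percentage points),
# before vs. after the abolition of user fees. Hypothetical numbers.

before = [(72.0, 69.6), (65.0, 62.6)]  # (admin %, survey %), pre-abolition
after = [(95.0, 82.6), (90.0, 77.6)]   # (admin %, survey %), post-abolition

def mean_gap(obs):
    """Average administrative-minus-survey gap across observations."""
    return sum(admin - survey for admin, survey in obs) / len(obs)

gap_before = mean_gap(before)  # pre-abolition over-reporting
gap_after = mean_gap(after)    # post-abolition over-reporting
did = gap_after - gap_before   # growth in the gap after abolition
print(gap_before, gap_after, did)
```

In a regression setting this is the coefficient on the interaction of a post-abolition dummy with an administrative-source dummy; the two-period, two-group mean comparison above is the simplest special case.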
This dual framework relating the reliability of statistics to the accountability relationships between donors (and citizens), national governments and frontline service providers clearly abstracts from certain nuances, as does any useful model. Household survey data are not only used by international donors as a tool to monitor aid recipients. Donor agencies also use survey data for research purposes, and recipient governments frequently cite survey reports in planning documents and incorporate survey data into the construction of macroeconomic aggregates like GDP which are key indicators in domestic policy-making. Conversely, international donors are far from apathetic about administrative data systems, and indeed invest heavily in education and health management information systems in the region. Nevertheless, we believe the political economy dynamics suggested by this framework, however simplistic, help make some sense of the seemingly chaotic data discrepancies documented in the article.
Seen through the lens of this framework, the agenda for a data revolution in African economic and social statistics clearly must extend beyond simply conducting more household surveys to help donors circumvent inaccurate official statistics – to avoid, as we label it, being fooled by the state. If donors are genuinely interested in promoting an evidence-based policy-making process, they must help governments avoid being systematically fooled themselves by administrative data systems built on perverse incentives. Aid resources must be directed in a way that is complementary to, rather than a substitute for, statistical systems that serve governments’ needs.

2. Seeing Like a Donor versus Seeing Like a State

The different needs of donors and governments present trade-offs between the comparability, size, scope and frequency of data collection. Given that donors finance a large share of spending on statistics, these differing needs can imply that national statistical systems are not built to produce accurate, disaggregated data for use by domestic policy-makers and citizens. In stylised form, this creates a choice between (1) small-sample, technically sophisticated, possibly multi-sector, infrequent surveys designed to facilitate sophisticated research and comparisons with other countries,5 and (2) large-sample surveys or administrative data sets providing regional- or district-level statistics on relatively fewer key indicators at higher frequency, designed for comparisons across time and space within a single country.6
International aid donors must make allocation decisions across countries, and in many cases they are bound to work solely with national governments as their clients. Due to this focus, donors’ preferences often (but far from always) skew toward statistics based on standardised international methodologies and homogenised questionnaire formats.7 At times this desire for international comparability is directly at odds with comparability over time within a single country. A second key implication of donors’ concern with international comparisons is less attention to domestic, subnational comparisons. Household survey data reflects this preference.
Consider the case of primary education in Kenya. The DHS provides comparable information across countries and time on both health and schooling outcomes and is designed to provide estimates at the provincial level, although most analysis focuses on a single set of national or rural and urban statistics. At the time of the last DHS, Kenya had eight provinces. This allowed at least limited correspondence between survey estimates and units of political accountability. In neighbouring Tanzania, the mainland was at the time of the last survey divided into 21 regions, but the survey reported results for just seven aggregate zones corresponding to no known political division. To stress the point, we might say that the structure of the DHS meets the needs of a donor sitting in Washington, allowing them to evaluate, say, Kenyan and Tanzanian performance in primary schooling on a comparable basis.
But national governments need to make subnational resource allocation decisions. To be useful, data are often required at relatively low levels of disaggregation that coincide with units of political accountability. Ministries of health require up-to-date information on clinics’ stocks to manage their supply chain; ministries of education require school-level enrolment statistics to make efficient staffing decisions. In Kenya, the education ministry obtains this information from the country’s Education Management Information System (EMIS), three times a year, for all 20,000 government schools in the country.8
Arguably, citizens’ interests are better aligned with the preferences of their own government’s EMIS than with those of donors in this case. In order for citizens to exert bottom–up accountability over public service providers, they require data in an extremely disaggregated form. Kenya’s national trends in literacy rates are likely of limited interest to citizens of rural Wajir, but the local primary school’s performance on national exams relative to neighbouring villages may be of acute interest. Thus, appropriately, the Kenyan government’s ‘open data portal’ provides access to the disaggregated administrative data from the EMIS. Unfortunately, as we show in Section 4, the reliability of this data is questionable.
Kenya may not be unique in this respect, as we show, but the situation is far from uniform across the region. While quality measures in statistics are few and far between, and indeed measuring discrepancies is a key contribution of this article, the World Bank’s Bulletin Board of Statistical Capacity provides some indication.9 On average, sub-Saharan Africa scores below all other regions, with an overall score of 58 compared to a global average of 64. But the variance within Africa is enormous, ranging from the very bottom of the rankings (Somalia with a score o...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Citation Information
  7. Notes on Contributors
  8. Foreword: Africa’s Statistical Tragedy: Take 2
  9. Introduction: Statistical Tragedy in Africa? Evaluating the Data Base for African Economic Development
  10. 1. The Political Economy of Bad Data: Evidence from African Survey and Administrative Statistics
  11. 2. From Tragedy to Renaissance: Improving Agricultural Data for Better Policies
  12. 3. The Invisibility of Wage Employment in Statistics on the Informal Economy in Africa: Causes and Consequences
  13. 4. Poverty in African Households: the Limits of Survey and Census Representations
  14. 5. The Making of the Middle-Class in Africa: Evidence from DHS Data
  15. 6. Random Growth in Africa? Lessons from an Evaluation of the Growth Evidence on Botswana, Kenya, Tanzania and Zambia, 1965–1995
  16. 7. GDP Revisions and Updating Statistical Systems in Sub-Saharan Africa: Reports from the Statistical Offices in Nigeria, Liberia and Zimbabwe
  17. Index