Disconnected Development: A Call for New Metrics

by Will Slotznick 

[This piece is excerpted from a longer paper.]

“We all talk about ownership -- country ownership -- people should own their own development. I think it’s going to intensify dramatically over the coming years. That, in turn, is going to change the role of USAID, of government donors, and of international NGOs… that’s going to be the next iteration in this wave of constant change.” [1]

What if every global development project were designed by the people in the villages and cities who are its actual beneficiaries?  What if those people could give development agencies and INGOs material input on what their schools, roads, hospitals, and utilities should deliver, what goals those systems should pursue, and how the systems should functionally operate, throughout design and implementation?  Could all of that truly work, or is it just a grand-sounding idea?  How would anyone be able to tell, and is there a role here for academic researchers?

In fact, there is a name for the academic discipline.  It's called participatory action research (PAR), and it may have a new place in the field of development. On June 16, 2017, the Action Research Network of the Americas (ARNA) will welcome its members to Cartagena, Colombia to inaugurate the 1st Global Assembly for Knowledge Democracy. The convention will invite experts to share the results of their work on PAR, and to integrate strategies for assessing and advancing this grassroots discipline. Cartagena will also mark the 40th anniversary of the First World Symposium on Action Research – a historic gathering of sociologists that positioned "popular knowledge" -- what development beneficiaries, at the field level, themselves know -- at the center of development policy (Fals Borda 1985, 10). In June, the PAR community will gather to gauge and to advance that forty-year legacy. As institutions intensify their commitments to local ownership, ARNA is poised to help effectuate a new paradigm in development.  But it must start, I believe, with the question of how anyone can tell whether it works.  It must start with a fresh set of metrics.

PAR emerged in the late 1970s as an alternative to donor-driven and Western-led processes of development (Khurshid 2016). An originator of PAR theory, Colombian sociologist Orlando Fals Borda, urged academics: “Do not monopolize your knowledge nor impose arrogantly your techniques, but respect and combine your skills with the knowledge of the researched or grassroots communities, taking them as full partners and co-researchers” (Fals Borda 1995).  As praxis, PAR seeks to engage individuals in cycles of “research, education, and action” to identify problems and establish frameworks for “fundamental social change” (Brydon-Miller 2001, 77).  PAR employs local knowledge systems (Gonzalez 2005) to generate solutions that respond to specific community conditions (Brydon-Miller 2001, 80). Importantly, PAR challenges the expert-learner binary (Ravitch and Tillman 2010, 6), and it promotes collective decision-making (Tufte and Mefalopulos 2009, 2). Such collaboration, PAR theorists argue, improves local commitments, inter- and intra-community relations, the credibility and sustainability of new endeavors, and thus the success of the programs themselves (Rabinowitz 2015). 

In the 1990s, prominent foundations, government aid agencies, and international non-governmental organizations began to orient their work toward some of the methods advanced by PAR. In many country and organizational settings, this ‘uptake’ occurred erratically, often without clear strategies for integration and full implementation. The idea of ‘participation’ thus evolved rhetorically at the policy level and practically at the community level, with little empirical testing or evaluative crossover between the two. As a result, we face a dearth of evidence that truly links participatory approaches with large-scale aid and development effectiveness – a gap filled with weak claims as to the rigor and value of institutions’ participatory endeavors.
 
The following mission statement of a leading development agency illustrates this trend: “At the core of our mission is a deep commitment to work as partners in fostering sustainable development…. [W]e work hand-in-hand with those we seek to assist as well as others striving to support the most vulnerable”  (USAID 2016). On the surface, this ‘rhetorical standard’ in development communication provides a progressive framing for international work. It also establishes at least a vague set of assumptions and expectations as to how the agency delivers its aid and development operations. However, some commentators believe that this agency’s implementing partners often operate without adherence to such principles. In 2015, reporters lambasted one of its largest implementers for mismanaged relief work in Haiti, where it operated with a severe “lack of transparency and a lack of community participation” (Johnston 2015). 

Many in the field are attracted to participatory rhetoric, yet only a portion achieve (or even attempt to achieve) community contributions at each stage of their work (Cooke and Kothari 2001, 84). Problematically, there do not exist standard definitions of 'participation,’ standard processes for achieving it, or standard metrics for evaluating it.  There is no universal ‘continuum of participation’ to guide and authenticate practice -- meaning that the 'participatory' claims of development organizations are addressed subjectively, if at all. This phenomenon leads to a pressing concern in the development space: without clear criteria around 'participation', it is difficult to evaluate the depth and value of purportedly participatory initiatives with any consistency or reliability. In short, we lack sufficient conceptual and empirical tools to verify whether the claims of 'participation' indeed translate to in-country practice, on the one hand, and good results, on the other – a gross evaluative gap that the development community must address. 

I propose the creation of an evaluative framework that provides a standard assessment, applicable across a range of development sectors, of the collaborations that occur in global development. Such a tool should examine how relationships among relevant actors evolve at each stage of a development project, from design to implementation to assessment to reporting. It should also provide rubrics for conceiving and measuring the results of participation.

Consider, for example, a multi-country school-strengthening initiative that aims to improve student literacy outcomes.  At the initial design stage, and at every stage of implementation, which stakeholders are at the table determining key priorities and target outcomes? Solely the sovereign financiers, the donors, and the implementers, or are the community beneficiaries engaged in co-crafting the intervention? How are community participants sub-categorized and selected? If certain actors are excluded, for what reason and with what consequences? And how exactly should the learning outcomes be measured, both overall and as a consequence of the participatory elements?  Such a framework should improve transparency and accountability across the entire endeavor, and incentivize greater purposefulness as to how (and with whom) development strategies are designed and delivered. 

Development researcher Frances Cleaver’s discontent with the existing evidence for participation prompts her call for new techniques that take a nuanced approach to assessing the “linkages between the participation of poor individuals and the furthering of their social and economic good” (Cleaver 2001, 53-54).  Crafting such a tool, Cleaver writes, involves the selection and in-depth study of a “successful participatory project” (54), including the systematic documentation of its processes and outcomes. 

Following Cleaver’s call, I have selected the Seeds for Progress Foundation (SfPF) as a critical case study to explore the dynamics of participation in one sector -- global education.  SfPF delivers holistic teacher support and technology integration in primary schools across the coffee-growing regions of northern Nicaragua. Uniquely among ICT4E[2] initiatives in Nicaragua, SfPF co-designs with community affiliates and continues to work with them throughout the process of implementation and evaluation. Its front-line facilitadores co-generate curricula and classroom practices with teachers and principals, produce culturally relevant course materials and pedagogy, foster local (family and community leadership) involvement in the evaluation of program outcomes, and build inclusive spaces for dialogic exchange between school and community stakeholders for continual improvement (Ravitch and Tarditi 2014). 

My research employs a mixed-method design that combines qualitative investigation (interviews, observations, and document analyses) with quantitative assessment (surveys and scholastic data analysis) -- multi-leveled, multi-sited, and carried out through a process of multi-stakeholder input and review. Borrowing common techniques from Participatory Monitoring & Evaluation (PME)[3], such as community surveys, direct observations, and member checks (Estrella 2000), I seek to ground constructed terms and instruments in both local and normative understandings of ‘participation.’ Ultimately, the research seeks to add the case of SfPF to the pool of evidence on the influence of participation in educational development, and to generate tools that quantitatively monitor participatory practice and outcomes, both for SfPF and for similar initiatives in IED. Stated another way, this research will bring forth both a case and a method with which to objectively define and measure ‘participation’ and to assess its link to positive development impacts.

This is a start. We need more hands in this arena -- and a diverse set of hands -- to develop and integrate designs, produce cross-sectoral frameworks, and (if the outcome assessments so merit) promote a process of adoption that is deep and broad. Encouragingly, work is already underway. At Dexis Consulting Group, practitioners are finalizing the Collaborating, Learning, and Adapting (CLA) Maturity Matrix that, in part, measures the degree of internal and external collaboration in the USAID Program Cycle. Save the Children and Oxfam recently released the LEAF Framework – a set of design and process metrics for assessing ownership – with current application toward USAID- and MCC-funded work. In Cartagena this summer, and in the fora that follow, practitioners should draw together these insights and centralize and institutionalize ‘standards for participation’ that will guide a new moment for local ownership across global development.

Will Slotznick is a UPenn senior majoring in International Relations with dual-minors in African Studies and Global Development.


Works Cited

Footnotes:

[1] U.S. Agency for International Development (USAID) Administrator Gayle Smith in public interview with Devex Editor-in-Chief Raj Shah. June 14, 2016 at Devex World, Washington, DC

[2] Information and Communication Technology for Education 

[3] PME seeks to include local stakeholders at every stage of the applied research process (Cousins 1992), and to promote relevant data gathering for joint decision-making.  Here, PME provides a ‘check’ that a project’s methods and decisions remain aligned with the priorities of its target beneficiaries.

Sources from in-text citations:

Brydon-Miller, Mary. 2001. Education, Research, and Action: Theory and Methods of Participatory Action Research. New York: New York University Press.

Cleaver, Frances. 2001. “Institutions, Agency, and the Limitations of Participatory Approaches to Development.” In Participation: The New Tyranny?, 36–55. London and New York: Zed Books.

Cooke, Bill, and Uma Kothari. 2001. “The Case for Participation as Tyranny.” In Participation: The New Tyranny?, 36–55. London and New York: Zed Books.

Cousins, Bradley. 1992. “The Case for Participatory Evaluation.” Educational Evaluation and Policy Analysis 14 (4): 397–418.

Estrella, Marisol. 2000. “Issues and Experiences in Participatory Monitoring and Evaluation.” In Learning from Change, 1–15. Practical Action Publishing.

Fals Borda, Orlando. 1985. “Knowledge and People’s Power: Lessons with Peasants in Nicaragua, Mexico, and Colombia.” New Delhi: International Labour Office.

Fals Borda, Orlando. 1995. Conference of Southern Sociologists. Atlanta.

Gonzalez, N. 2005. Funds of knowledge: Theorizing practices in households, communities, and classrooms. New Jersey: Lawrence Erlbaum.

Johnston, Jake. 2015. “Is USAID Helping Haiti to Recover, or US Contractors to Make Millions?” The Nation. January 21. https://www.thenation.com/article/usaid-helping-haiti-recover-or-us-contractors-make-millions/.

Khurshid, Ayesha. 2016. “Empowered to Contest the Terms of Empowerment? Empowerment and Development in a Transnational Women’s Education Project.” Comparative Education Review 60 (4): 619–43.

Rabinowitz, P. 2015. “Participatory Approaches to Planning Community Interventions.” In Community Toolbox. Work Group for Community Health and Development at the University of Kansas.

Ravitch, Sharon, and Cathi Tillman. 2010. “Collaboration as a Site of Personal and Institutional Transformation: Thoughts from Inside a Cross-National Alliance.” Perspectives on Urban Education, 3–10.

Ravitch, Sharon, and Matthew Tarditi. 2014. Overview Description of CISA-PennGSE Semillas Digitales Initiative.

Tufte, T., and P. Mefalopulos. 2009. “Participatory Communication: A Practical Guide.” 170. Washington, D.C.: The World Bank Group. https://orecomm.net/wp-content/uploads/2009/10/Participatory_Communication.pdf.

USAID. 2016. “Mission, Vision and Values.” Washington, DC: U.S. Agency for International Development. https://www.usaid.gov/who-we-are/mission-vision-values.