Interview with Matthew Forti of One Acre Fund
This post is part of our ongoing series highlighting high-impact leaders who combine effective strategy, thoughtful measurement and evaluation, and continuous performance improvement to make a real difference in society. Cicero Social Impact recently connected with Matthew Forti, the Managing Director of One Acre Fund, a group seeing great success in their efforts to end hunger for over 300,000 of Africa’s hardest-working smallholder farm families. We’re excited to share the interview below.
About One Acre Fund
One Acre Fund exists to make smallholder farmers, who comprise 70% of Africa’s extreme poor, more prosperous. Our typical client is a woman trying to feed and educate four children by farming under an acre of land – and it isn’t working. We provide everything a farmer needs to succeed: better inputs delivered to her doorstep (such as improved seeds), finance (credit so she can afford these inputs), training (basic techniques like proper seed spacing and composting), and market facilitation (most importantly post-harvest storage training and products to minimize food loss). On average we improve profits on the products we offer by over 50%, even after farmers repay fees to cover most of our program costs. We now serve 400,000 farm families across six countries, with a realistic plan to reach at least 1 million by 2020 – at which point we will proudly represent the largest network of smallholder farmers in Africa.
How One Acre Fund uses M&E
Why has One Acre Fund focused so heavily on M&E since its inception, especially since you did so before M&E became such a trend? Was there something unique about the founders’ vision or the nature of your work, or are there lessons that other organizations can apply as they consider greater M&E investments?
We were proud to have founded One Acre Fund at a business school (Kellogg, in Chicago), and wanted to infuse business principles in how we managed the organization. And this was even more of an imperative because our founding team lacked deep agricultural experience. We wanted a culture where we could boldly try new ways of delivering services (such as farmer training), learn whether and how it worked, and then adjust in time for the next farming season. We also wanted a culture where measurement provided instantaneous feedback loops, so that literally week to week we could measure key performance indicators (such as training attendance and repayment levels) and adjust. Our advice would be to build a culture where measurement is prized as a learning and improvement tool right from the start, so you hire and reward people that think the same way.
The value of rigorous impact evaluation
What have you learned about the value of RCTs or high-rigor impact evaluations? What should other organizations consider when deciding whether and how to make this type of investment?
The biggest lesson we’ve learned is the value of building the internal capacity to conduct high-rigor evaluations. Continuing on the theme above, if the highest-value use of measurement is to help you continuously improve, you (the organization’s leadership) should be in the driver’s seat when it comes to the hypotheses you want to test and the decisions you hope to make coming out of a rigorous evaluation. At One Acre Fund, we’ve built the internal capacity to conduct high-rigor impact evaluations every farming season, in every country, to determine the incremental farm profit our model generates (and to answer a whole range of program-relevant questions). We feel far too many evaluations are undertaken at the behest of someone else (such as a funder), with the design completely outsourced to someone else (such as an academic), leaving the organization’s key learning questions unanswered.
All that said, there is of course a time and place for independent external evaluations that primarily seek to answer the degree to which a program generates impact and is cost effective – this is important for any program that accepts meaningful donor funding. Our advice would be to undertake this kind of evaluation only when you’ve gone through many cycles of learning and improvement, and have sufficiently proven to yourself with your own data that your model will show impact and cost effectiveness when subjected to a high-rigor outside evaluation. And even in these situations, try to work with an evaluator who is flexible about incorporating learning questions you may have as an organization, so long as they don’t compromise the independence of the evaluation.
Using measurement to improve
Can you give some examples of how you use data to improve the impact or efficiency of your programs or your “back-office” operations?
One of my favorite examples comes from our Burundi program. In our first season working in the country, our M&E department uncovered that farmer compliance with our planting methods was very low – farmers were largely ignoring our training. This led to poor harvests – and our program produced near-to-zero improvement in farm profit. Our M&E and program teams collaborated on a strategy to improve planting compliance: requiring that, to join our program, a farmer plant at least one ‘model garden’ of 100 square meters using the One Acre Fund methods. Over time, farmers saw the strong harvest results and began planting more and more of their land using the ‘model’ techniques, eventually covering most of their fields. Our M&E team carefully monitored the resulting increases in compliance, and today Burundi has dramatically improved – in fact, it is our leading country in percent improvement in farmer profit.
We have lots of examples like this – again underscoring why a measurement system set up first and foremost for continuous learning and improvement is the key to unlocking greater impact for the population you serve.
Using data effectively
How do you ensure data is valued and used throughout the organization? What processes, capacity, systems/tools, etc. do you use to encourage the use of data?
For the type of measurement that’s about real-time, in-season learning, we’ve found:
- It’s critical for the data to be collected by the front-line staff themselves (in our case, field officers who enroll farmers and provide training) who will be the primary users of it. Our measurement team is always in the background.
- We create the forum for these staff to review the data together in small groups. Everyone writes their key performance indicators on a large chalkboard and discusses what is and isn’t going well.
- These discussions are about learning and the spirit of improvement, rather than a punitive exercise. Of course a staff member who consistently underperforms out of refusal to change tactics would be let go, but the vast majority of the time the data is used in a developmental way.
While great systems and tools can be enablers, at the end of the day we feel it is the culture that really drives the productive use of data.
Interestingly, the same holds true at the other extreme. Our leadership and board follow a culture of using data to drive the toughest decision for any organization: resource allocation. We carefully collect and regularly reflect on the social return on investment (impact per cost) of each program we offer, and each potential program we could offer, and combine this with other criteria (such as scale potential) to drive our decisions. Here too we rely on a culture of self-improvement, transparency in sharing data, and dedicated forums for reflection.
What’s next in data for NGOs? What are you looking at in terms of new and improved uses of data to increase impact?
NGO measurement seems to be one of these topics that invites a lot of ‘flavor of the month’ thinking. Lately there’s been a lot written about big data and lean data; perhaps a few years back, shared measurement and constituent voice. All of these approaches can be powerful in certain contexts, but I’m a big believer that the basics still hold: an NGO that clarifies its theory of change; designs a measurement system to test whether and how its inputs, outputs, and outcomes are linked; and then regularly uses this data to improve its programs is an NGO that is well-positioned to meet its mission.
At One Acre Fund currently, I am most excited about how we will apply our measurement principles to newer programming that seeks to improve whole-country agricultural systems, where traditional ‘test and control’ measurement is harder to do. Complex work – like advocacy, movement-building, or systems change – deserves much more measurement attention and we hope to be able to share what we are learning over time.
Allison Miller currently serves as a Principal at Ed Direction, working directly with state and district education leaders to improve teaching and learning at scale. Allison oversees content development for Ed Direction’s work with Turnaround Schools in the state of Utah. With over a decade of classroom teaching experience at the elementary level, Allison is passionate about education and the role that teachers play in improving student achievement for all learners, regardless of their level of ability. Her experience in the classroom gives her a unique perspective when working with schools and districts across the country. Allison uses her knowledge of research and pedagogy to help teachers implement changes that lead to improvements in instruction and learning.