
The New Evaluation Frontier

Evaluation practice in the international development sector has evolved significantly over the last decade. It wasn't that long ago that most development projects would only track output indicators for their quarterly and annual reports. But tracking outputs alone (e.g. the number of phones distributed), or even some outcomes (e.g. the number of people using an app), is not enough to judge whether an intervention actually changed behavior, such as whether a mobile app increased literacy rates. More sophisticated evaluation techniques are needed to assess gains in literacy and the intervention's contribution to them.

Many factors led to this narrow approach, including a dearth of talent skilled in evaluation techniques and tools, a limited community of practice, and the fact that major donors rarely required evaluations (especially external evaluations).

Today, evaluation tools and techniques are more prolific in the development community than they were 10 years ago, several donors and funders require them and consider them best practice, and a thriving evaluation community of practice has developed around the globe. Communities and platforms such as the ICT4D Conference, MERL Tech, and the Principles for Digital Development community have created peer learning networks and places to share tools, standards, and lessons learned.

Given the need for implementers to demonstrate the efficacy of their projects and for funders to choose among investment alternatives, we need to galvanize the development community once again to make available the tools, methodologies, and standards for measuring socio-economic returns on ICT4D investments. We can learn from the history and evolution of evaluation practice as we discuss the costs, benefits, and social return on investment (SROI) of digital and development interventions.

Funders want the best value for their money, implementers need to demonstrate that value, and governments have to make a call about where that money should go. That decision can be anything from building a new road to investing in the next digital identity system. This is the reality of a world of limited resources and competing priorities.

In an ideal world, decision makers would have access to an analysis of the costs and socio-economic returns of different investment alternatives. Currently, SROI analysis is not standard practice in ICT4D evaluation, as reported in a recent FHI360 study, Mapping the Evidence Base for ICT4D Interventions. Fewer than 20% of the surveyed ICT4D evaluations report any cost data or total cost of ownership (TCO), let alone conduct a cost-effectiveness analysis or SROI. Using SROI methodologies could provide a more holistic evaluation of the benefits versus the costs of ICT4D interventions and better evidence on which to base investment decisions, just as combining monitoring and evaluation techniques helped produce better insights and evidence of what works.
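To make the idea concrete, the sketch below shows the basic arithmetic behind an SROI ratio: the present value of monetized social benefits divided by the total cost of ownership of the intervention. The intervention, the figures, and the discount rate are illustrative assumptions only, not data from the FHI360 study or any DIAL toolkit.

```python
# Minimal sketch of an SROI ratio calculation (illustrative assumptions only).
# SROI ratio = present value of monetized social benefits / total cost of ownership (TCO).

def present_value(yearly_values, discount_rate):
    """Discount a list of yearly values (year 1, 2, ...) back to present value."""
    return sum(value / (1 + discount_rate) ** year
               for year, value in enumerate(yearly_values, start=1))

# Hypothetical literacy-app intervention (all numbers are assumptions):
total_cost_of_ownership = 250_000                      # devices, data, training, support over 3 years
yearly_social_benefits = [60_000, 140_000, 180_000]    # monetized literacy gains per year
discount_rate = 0.05                                   # assumed social discount rate

sroi_ratio = present_value(yearly_social_benefits, discount_rate) / total_cost_of_ownership
print(f"SROI ratio: {sroi_ratio:.2f}")
# Prints roughly 1.36, i.e. about $1.36 of social value per $1 invested in this made-up example.
```

The hard part in practice is not this division but the steps it hides: agreeing on which outcomes to count, monetizing them credibly, and capturing the full TCO, which is exactly where shared methodologies and standards would help.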

This triple nexus of donor engagement, community of practice, and methodologies and techniques is what is needed to evolve SROI practice in the development community and give different stakeholders better means of evaluating their investments.

To this effect, DIAL has been hosting discussions on the SROI and TCO of digital goods and interventions, with the aim of encouraging a community of practice and promoting conversations around this topic. Additionally, DIAL will help produce an SROI methodology toolkit that reviews existing tools and techniques and offers guidance on their use.
