
Deciphering DIAL’s Impact on the Digital Ecosystem, Staff Spotlight: Scott Neilitz


Evidence, and its use in informed policymaking and decision-making, is more crucial than ever. Recently, the organization Evidence Action made waves in the non-profit world when it completely restructured its “No Lean Season” program, which aimed to give small subsidies to farm workers in rural Bangladesh so they could migrate to urban areas for better job opportunities between harvest seasons. Its own research concluded that the program as implemented didn’t work, and Evidence Action publicly and transparently released those results to donors.

Interestingly, even at an individual scale, people are using more evidence to inform their charitable giving. GiveWell, a charity assessment organization, recommends the international organizations where donors’ dollars will do the most good. Within this environment, monitoring, evaluation, and learning are becoming more systematized and more important.

When I was in graduate school, I had the chance to consult with social enterprises and think about how impact can be measured in ever-changing business environments. In particular, I worked with Water and Sanitation for the Poor to develop a measurement plan that could be adapted to multiple countries and sanitation business models. My colleagues and I mapped the consumer lifecycle to see where customers interact with the business. From there, we developed indicators based on that lifecycle and a theory of change with multiple inflection points for checking whether the social business model was having its intended effect, as sketched below. Actionability and flexibility were key.
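To give a sense of the shape of such a plan, here is a minimal sketch of a lifecycle-to-indicator map. The stage names and indicators are hypothetical, invented purely for illustration; the actual plan was tailored to each country and business model.

```python
# Hypothetical sketch of a lifecycle-to-indicator map.
# Stage names and indicators are invented for illustration only;
# a real plan would be tailored to each country and business model.

lifecycle_indicators = {
    "awareness":   ["share of target households reached by outreach"],
    "purchase":    ["sanitation products sold per month"],
    "use":         ["share of customers reporting regular use at 3 months"],
    "maintenance": ["share of products still functional at 12 months"],
}

# Each stage is an inflection point: if an indicator falls short,
# that is where the theory of change gets re-examined.
for stage, indicators in lifecycle_indicators.items():
    print(f"{stage}: {'; '.join(indicators)}")
```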

The idea of pre-assessed action plans based on data collection is also critical. Too often, measurement and evaluation are preconditions imposed by funders and accepted begrudgingly. Organizations are becoming more aware of the costs and benefits of data collection and analysis. Innovations for Poverty Action has established a Right-Fit Evidence unit that advocates for M&E systems tailored to an organization’s goals and theory of change. The evidence behind this approach is fleshed out further in The Goldilocks Challenge by Mary Kay Gugerty and Dean Karlan.

At this point, the question becomes how this all ties back to the Digital Impact Alliance (DIAL). DIAL’s projects and programs focus on ecosystem-level interventions, ranging from supporting open source software to promoting the use and analysis of data for development to producing how-to guidance for digital development practitioners. For interventions like these, measuring impact through randomized controlled trials is not necessarily feasible: digital technology is so pervasive that we could not construct a credible counterfactual, because no comparison group remains untouched by it. We therefore need to be more creative and diligent in how we identify behavior change driven by digital technology.

As part of the monitoring, evaluation, and learning team, I will be helping a number of initiatives measure how DIAL is influencing the digital ecosystem. That means we’ll be exploring multiple quantitative and qualitative methods to show the impact of DIAL’s projects and programs over the next few years. This could include network analysis of Digital Principles endorsers and of the reach of the individual Principles they use. Process tracing will allow us to identify and test our theory of change, and Bayesian analysis would let us take that process tracing further, estimating the probability that observed behavioral change is due to DIAL’s work; a sketch of that step follows below. The monitoring and evaluation team will share results as we have them over the coming years.
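To make the Bayesian step concrete, here is a minimal sketch of how pieces of process-tracing evidence could update confidence in a causal hypothesis. The hypothesis, prior, and likelihood values are entirely hypothetical, chosen only for illustration; in practice they would come from DIAL’s theory of change and the evidence we actually gather.

```python
# A minimal sketch of Bayesian updating for process tracing.
# All numbers are hypothetical, for illustration only.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Hypothesis: an observed practice change is due to DIAL's work.
posterior = 0.5  # hypothetical neutral prior

# Each tuple: (description, P(evidence | hypothesis), P(evidence | not hypothesis)).
evidence = [
    ("endorser cites the Digital Principles in project docs", 0.8, 0.3),
    ("practice change follows a DIAL workshop in time",       0.7, 0.4),
    ("comparable non-endorsers show no similar change",       0.6, 0.2),
]

for description, p_true, p_false in evidence:
    posterior = bayes_update(posterior, p_true, p_false)
    print(f"After '{description}': P(hypothesis) = {posterior:.2f}")
```

This sketch treats each piece of evidence as independent, which is a simplification; correlated evidence would need a more careful model, but the basic logic of weighing how much more likely each observation is under our theory of change than under the alternative would stay the same.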