
The promise of DPI won’t be realized with good design alone. Safeguards are key.

The rise of digital public infrastructure (DPI) in recent years has brought with it the promise of a more innovative and equitable digital landscape. Already, we’ve witnessed its ability to improve people’s lives by expanding economic participation and fueling innovation across the public and private sectors. In turn, this has strengthened recognition of DPI’s power to democratize opportunities.

Today, open-source solutions, such as MOSIP, OpenCRVS, and MIFOS, offer the potential to accelerate DPI adoption around the world by leveraging high-quality, reusable software and global technical talent. But this same openness introduces a new vulnerability. Key design features within open-source technologies increasingly used in DPI can be altered, omitted, or manipulated – intentionally or otherwise – in ways that marginalize, exclude, or even target individuals and communities. Without deliberate efforts to mitigate these threats, the positive societal benefits DPI offers risk being undermined by mistrust and diminished legitimacy.

Effective safeguards at every stage of the DPI lifecycle – from design to deployment to ongoing maintenance and governance – are critical. Such safeguards help ensure that DPI serves the interests of all people, both by mitigating potential misuse and by flagging resultant harms. To this end, the recently announced initiative from UNDP and the UN Tech Envoy’s Office to develop a Universal Safeguards Framework for Digital Public Infrastructure is an important and timely step forward. The effort also serves as a welcome acknowledgement that, while DPI’s power to improve service delivery is clear, “success has been limited by the lack of people-centric governance and safeguards,” as UN Tech Envoy Amandeep Gill has noted.

To ensure DPI works for people, codifying global norms is necessary but ultimately insufficient to safeguard them from harm. Any high-level framework must be complemented by practical tools and mechanisms that equip a range of stakeholders to prevent, monitor, identify, and call out the misuse of DPI.

This is why we advocate for effective oversight tools – alongside well-designed digital public infrastructure.

Safeguards must extend beyond design and high-quality technical standards to mechanisms that enable visibility into people’s lived experience. Practical “oversight tools” can specifically address the risk that DPI source code is altered or solutions are implemented poorly – no matter how well designed they are. Fortunately, we can draw from other sectors for examples of tools and mechanisms that have been effective in mitigating and exposing misuse.

At the Digital Impact Alliance, we are thinking about how to build a suite of effective oversight and transparency tools to ensure DPI works for all people. We welcome your ideas to build out and refine this illustrative list:

  1. Test frameworks. Automated testing tools can help measure compliance with standards and uncover weaknesses in code by comparing software implementations against reference specifications such as those published by GovStack. This would allow regulators to determine whether a specific product conforms to best practices for safe and trustworthy DPI (a minimal sketch of such a check appears after this list).
  2. Scorecards. Similar to Freedom House’s Global Freedom Index, scorecards can provide an important reference point for determining whether or not countries are actively improving their DPI systems to benefit people.
  3. SupTech tools. Supervisory technologies (SupTech) have helped financial regulators better oversee digital financial products and transactions. Such tools can be developed for ID authorities, ICT regulators, and others to better monitor consumer complaints, conduct market surveillance, identify systemic misuse, and much more.
  4. Market conduct tools. Market conduct tools can be a useful way to analyze DPI services in practice. For example, these initiatives could include “mystery shoppers” who test DPI products and then report back on their experiences.
  5. Data rights tools for people. Organizations like Privacy International have already created toolkits to help people understand – and exercise – rights over their data. Much more can be done to mainstream a rights-based approach within efforts to scale DPI.
  6. Ethnographic research. Targeted research is essential to better understand how DPI does – or doesn’t – benefit communities. Independent analysis documenting people’s experiences as they interact with technology can help governments determine how best to implement DPI solutions that truly work for everyone.
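
To make the test-framework idea in item 1 concrete, here is a minimal sketch of what an automated conformance check might look like. It is illustrative only: the reference spec format, the endpoint, and the field names are hypothetical placeholders rather than part of any published specification, and a real test framework would cover far more than required response fields.

```python
# Illustrative sketch only: a toy conformance check that compares a DPI
# implementation's API response against the required fields listed in a
# reference specification. The spec format, endpoint, and field names are
# hypothetical placeholders, not drawn from any real specification.
import json
import urllib.request


def load_required_fields(spec_path: str) -> set[str]:
    """Read the set of required response fields from a local reference spec (JSON)."""
    with open(spec_path, encoding="utf-8") as f:
        spec = json.load(f)
    return set(spec.get("required_fields", []))


def check_conformance(endpoint: str, spec_path: str) -> list[str]:
    """Return the required fields that are missing from the implementation's response."""
    required = load_required_fields(spec_path)
    with urllib.request.urlopen(endpoint, timeout=10) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    return sorted(required - set(payload))


if __name__ == "__main__":
    # Hypothetical registry lookup endpoint and local spec file, for illustration.
    missing = check_conformance(
        "https://registry.example.gov/api/v1/person/123",
        "reference_spec.json",
    )
    if missing:
        print("Non-conformant; missing required fields:", ", ".join(missing))
    else:
        print("All required fields present.")
```

In practice, checks like this would be run by regulators or accredited auditors against each candidate implementation, with the results feeding into the scorecards and SupTech tools described above.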

As the development community navigates rapid digitalization and leans into digital public infrastructure, collaboration is crucial.

No single safeguard framework or tool can ensure people’s rights and aspirations are being met. Rather, it will take a variety of approaches – and a multiplicity of stakeholders – to understand, monitor, and respond to these needs.

Help us further identify effective, versatile safeguarding tools and mechanisms. What other measures could we learn from? With concerted effort, we can ensure a safe, equitable, and inclusive approach to DPI – and to our collective digital future.