Artificial intelligence can drive us toward an equitable digital future. The Digital Principles help point the way.

In an era where artificial intelligence is reshaping our world, the foundation of an inclusive digital future rests on how we handle our most valuable resource: data. The Principles for Digital Development offer a crucial framework for navigating this complex landscape.  

Since their inception in 2014, these Principles have served as a vital compass for the digital development community. Now, following an extensive consultation with over 300 practitioners, they’ve evolved to address emerging challenges in our rapidly changing digital ecosystem. While the Principles remain technology-agnostic, they’ve been enhanced to tackle pressing concerns about inclusion, potential harms, and most notably, the responsible use of digital data. 

At the heart of this evolution is a new addition to the Principles: “Establish people-first data practices.” This principle arrives at a critical moment, as AI systems increasingly influence everything from healthcare to education. With data being generated and processed at unprecedented scales, how do we ensure these systems serve the interests of all people, especially those who have historically been marginalized? 

To explore these crucial questions and their implications for AI development, we convened a closed-door roundtable through our Digital Donors Exchange (DDX). Three expert speakers helped us examine how these Principles can guide us toward more equitable data practices in an AI-driven world:  

  • Rachel Adams, the Founder and CEO of the Global Center on AI Governance, and one of the lead drafters of the African Union Commission’s Continental AI Strategy.  
  • Josh Mandell, an international development specialist and executive at IBM Consulting on the Foreign Affairs team. Josh served on the Digital Principles Advisory Council and on the working group for the Digital Principles refresh.  
  • Joseph Simukoko, the COO of Green Giraffe Zambia, which sources sustainable snacks directly from local, small-scale farmers and dedicated artisanal food processors, leveraging a blockchain-enabled digital traceability system integrated with AI technology.  

The discussion covered many of the nine principles, including:  

Understand the existing ecosystem 

Understanding the existing ecosystem requires close examination of the gender and social norms, political environment, local economy, technology infrastructure, and other factors that can affect an individual’s ability to access and use a technology or to participate in an initiative. 

In the context of AI, the roundtable participants noted that it is particularly crucial at this stage to understand the state of AI and data governance, in terms of policies, institutions, and enforcement, in the relevant country or region. Key factors to consider include:  

  • Is there an AI strategy? Has it been copied from another country, or has it been developed locally through extensive consultation with civil society groups, academics, and the private sector? 
  • Is civil society present and capable of engaging with AI? This may include an academic ecosystem that is exploring the implications of these technologies across different areas, as well as independent advocacy organizations that are committed to technology and human rights.    
  • Are the non-technology aspects of AI adequately funded? While funding is often targeted at the technology itself, the non-digital corollaries are critical to the effective and responsible functioning of AI tools. Participants urged donors to think about how they can shift more of their support toward these categories, including governance, advocacy, and consumer awareness. 

 

Design with people

To design with people means to invite those who will use or be affected by a given technology, policy, solution, or system to lead or otherwise meaningfully participate in the design of those initiatives. This includes not only the intended “end users” but also those who will maintain, administer, and be impacted by the initiative.  

In the context of AI, the participants noted that it is easy to overlook this step, as the large language models (LLMs) underlying generative AI can be seen as ‘black boxes’ that are too complex for people to understand or to influence. To break this paradigm, the discussants noted several ways to involve people, including: 

  • Building smaller models in partnership with targeted users in order to include their languages, preferences, and information needs.  
  • Testing any applications built on top of LLMs alongside potential users, and using this testing phase to ensure that applications provide culturally relevant examples and use familiar language.  
  • Looking beyond the end users to all of those in the ecosystem that may be impacted, including where jobs might be lost or created.  

 

Establish people-first data practices

Digital services and initiatives generate, rely on, and/or use data derived from people or their assets. Establishing people-first data practices means avoiding the collection of data that creates value (financial or otherwise) for a company or organization without delivering any direct value back to the people from whom the data is derived. 

It is thus critical to consider people and to put their rights and needs first when collecting, sharing, analyzing, or deleting data. In this context, ‘people’ includes those who directly interact with a given service, those whose data was obtained through partners, and those who are impacted by non-personal datasets (such as geospatial data). 

In the context of AI, people-first data practices must address two sides of the same coin: first, how people’s data is used; and second, where there may be gaps in the data that leave people’s needs, languages and cultures out of the AI revolution entirely. The participants noted several ways to establish people-first data practices in the context of AI, including:  

  • Investing time and resources to retrain models, adding more data from the local context, and in some cases, removing data to reduce bias. This also includes unlearning, a new and expanding technique to improve LLM relevance and compliance. 
  • Building innovative approaches to gathering proper informed consent and providing clear value back to data owners, in order to ensure that data gaps can be filled without furthering exploitation.    

Share, reuse, and improve

To share, reuse, and improve is, in essence, to collaborate. We have the most impact when we share information, insights, strategies, and resources across silos related to geographies, focus areas, and organizations. By sharing, reusing, and improving existing initiatives, we pool our collective resources and expertise, and avoid costly duplication and fragmentation. This is greatly facilitated by adopting open standards, building for interoperability and extensibility, using open-source software, and contributing to open-source communities. 

If companies open-source AI model development, it can create the foundations for collaboration and trust and ensure that global majority countries are able to access and customize these models. Today, the movement for open-source AI is strong, with over 1,000,000 open-source models listed on Hugging Face. Open-source technology has allowed organizations like Green Giraffe to leverage AI tools that would have otherwise been unaffordable.  

The DDX discussion acknowledged this while also debating some of the tensions inherent to open-source AI models.  

  • First, the term open-source in the context of AI is not clearly defined. Even with open-source models such as Meta’s Llama series, while the weights are open, it is not clear what data these models have been trained on, making it hard to understand what biases they may contain, and what data gaps need to be filled. 
  • Second, open-source models such as Llama can lag behind cutting-edge closed models, such as ChatGPT and Claude.  

The private sector representatives in the roundtable noted that companies can help to navigate these tensions by avoiding policies and business models that lock users in to specific models, such as long-term subscriptions or exclusivity clauses. 

 

Anticipate and mitigate harms

Technology is now part of our everyday lives: no program or technology solution operates in isolation. Therefore, to live up to the commitment to do no harm as declared in the Principles’ preamble, policymakers and practitioners need to anticipate and work to mitigate harms, even those that originate outside of a given initiative. While harms are present with all technology, these harms are particularly relevant, and the impacts are less known, when it comes to generative AI. 

In working with generative AI, the discussion noted practical ways to put this principle into action:  

  • Establishing Institutional Review Boards or research ethics committees throughout the world so that they are adapted to different local and cultural contexts.  
  • Ensuring that every organization and government working with generative AI has processes in place for how to use, vet, and manage AI models. 
  • Encouraging participation in and dissemination of AI transparency reports to help governments and others understand how each AI model is performing on various metrics related to safety and use.  

The Digital Principles can guide us through the AI revolution and beyond.

As new technologies emerge, we all must grapple with their impact. The Principles for Digital Development were designed to be applicable to any new technology, with potentially unforeseen impacts, and they provide a framework to guide design, implementation, and evaluation. As we grapple with the current digital transformation led by AI, the Principles can help us build the right foundations, implement the necessary safeguards, and optimize the benefits.  

The Digital Principles are supported by a community of over 300 endorsing organizations, who contribute supplementary tools and resources. The more we share and reuse these resources, the faster we can drive progress. Have a resource you think may be helpful to others? Reach out to us at PrinciplesAdmin@dial.global 

 

Learn more about the Digital Donors Exchange.