
Having an ID is a fundamental need. So is trust.


When we stop and think about it, it is easy enough to see how our driver’s licenses, credit cards, work badges, transit passes, and national ID cards play an important role in our daily lives.  

Having an officially recognized identity is not only a fundamental human right, but also increasingly essential to function and participate in society today. In many countries, a government-issued ID is required to open a bank account or apply for credit, get a SIM card, exercise one’s rights as a citizen and voter, travel, and participate in various other aspects of social, political, and economic life.  

For digital ID systems to fulfill their promise of enhancing access to rights and promoting inclusive social and economic development, they must be broadly accepted and adopted. That means they must be trusted by all parties.  

In the past, trust was rooted in personal relationships with local government officials, doctors, bankers, merchants, and even the local media. Today, many of these interactions (and transactions) happen instantly, at a distance, and with individuals and institutions we do not know personally. This, combined with a variety of other factors, has fundamentally upended our concept of trust.

On the one hand, the internet has radically expanded the opportunities available to those who are connected. We can access a far wider range of information, goods, services, and so much more. But this has also come at a significant cost. Today, we are exposed to an ever-growing array of threats: identity theft, online fraud, dis- and misinformation, and many other forms of cybercrime.

Simply put, traditional forms of ID are no longer fit for purpose. Individuals now need trusted and verifiable ways to prove who they are both in the physical world and online.  

Enter digital ID systems that instill and maintain trust.

There are many factors that contribute to a “good” digital ID. While the considerations outlined below are far from exhaustive, they do represent a useful starting point for defining what good looks like.  

To fulfill their potential as a catalyst for equitable social and economic development, digital ID systems should:  

Enable digital trust among all parties: issuers (e.g., governments), holders (typically individuals), and relying parties (any individual or institution, public or private, that uses the system to verify an individual’s identity as part of a business process). This is the primary function of digital ID.

Promote equity by enabling all people to exercise their fundamental human rights and access government and commercial services regardless of their race, ethnicity, national origin, religion, tribal affiliation, age, sex, gender identity, or physical or mental disability. This is a critical first step toward ensuring equity, but it is only the start.

Alongside this, care must be taken to ensure the systems built do not create new barriers (e.g., cost, access to technology, connectivity, literacy, etc.) to access. One of the ways this can be achieved is by building systems that support multiple form factors, affording individuals who do not have (or even wish to use) a mobile device the option of carrying their credentials in an equally secure physical format (e.g., a smartcard).

Enhance data privacy and security by incorporating Privacy by Design principles, which help ensure that privacy is addressed at every stage of development and implementation.

Good technology is necessary but, ultimately, insufficient on its own. Of equal importance are robust governance, policy, and regulatory frameworks that provide an additional backstop to protect the rights of all stakeholders.

Governance frameworks codify the relationship between all parties and establish a set of rules for participation. These frameworks, as well as any accompanying policies or regulations, should be developed with extensive and transparent public engagement, incorporating input from public and private sector stakeholders, as well as civil society organizations.  

Data protection policies should address (at a minimum) data minimization, retention, processing, sharing, and the sale of consumer data, as well as provisions outlining the circumstances and processes under which a government agency may access an individual’s personal data for national security or to support law enforcement investigations.   

And finally, regulations are of little value if they are not enforceable. Strict enforcement mechanisms and penalties – civil and criminal – for noncompliance should provide individuals with accessible recourse when their rights are violated.

Empower individuals to make informed decisions by ensuring meaningful informed consent. Not only should individuals be able to exert agency over their data, but they should also be able to access a detailed accounting of where, when, with whom, and for what purposes that data was shared. 

Because people have varying degrees of digital literacy, achieving “meaningful informed consent” that protects everyone (another aspect of equity) is incredibly difficult. Tools such as “smart” or digital agents can not only help protect less sophisticated users but also reduce the cognitive load for all users, making selective disclosure decisions more transparent and easier to manage.

Provide genuine value to users by enabling them to assert their identity across institutional and geographic borders and for a variety of purposes. Portability is closely related to interoperability, and the systems being built should adhere to internationally recognized open standards to promote interoperability and prevent vendor lock-in. 

For these systems to remain relevant and useful across an individual’s entire life, governments must also plan for long-term financial sustainability, including updates that support future technological advances (forward compatibility).

Take the risks seriously by requiring a comprehensive assessment of the various types of risks and potential harms, along with plans to mitigate those harms when they do occur. Governments should be transparent about these risks by sharing risk assessments and mitigation plans with the public and inviting extensive public review and comment.

While the risks associated with new technologies are many and varied (and certainly not limited to data breaches), both issuers (e.g., governments) and relying parties (public and private sector alike) should be subject to public disclosure requirements that ensure individuals are alerted in the event of a data breach.

While no technology is without its risks, a heightened level of caution is required when talking about something as sensitive as identity. On the one hand, social and economic development must not be hindered by applying a “precautionary principle” approach in which all risks must be reduced to zero. At the same time, the “move fast and break things” mantra has no place in the context of digital public infrastructure.  

Rebuilding trust takes time. It also requires change.

Lack of trust in our institutions – government, corporations, nonprofits, and the media – poses a significant threat to global stability and social and economic development. We recognize that rebuilding trust between citizens and governments will require working against a stiff headwind.

However, the process of building digital ID systems itself can contribute to repairing the frayed relationship between citizens and the institutions that serve them.

On this Identity Day, we are calling for continued collective action to ensure that digital ID systems are designed and implemented in a collaborative, open, and transparent manner. We look with anticipation to a day when all people have a legal identity and can access the full range of rights and services they require not only to live, but to thrive.
