Citizens’ deliberation for a safe AI

Starting on 3 Nov 2017, the Université de Montréal, one of North America's premier higher education institutions, embarked on an ambitious journey that brought hundreds of citizens together to discuss Artificial Intelligence.

In partnership with multiple stakeholders, including the provincial and local governments and academic institutions such as the prestigious Mila, the Quebec AI Institute, the exercise pursued a goal that was both ambitious and pioneering: defining the key ethical principles that should drive the development of AI.

Through multiple sessions covering different topics and themes, around 500 participants began discussing the wide-ranging ethical principles that should always form the foundation of any discourse on AI. The whole undertaking was defined as a “collective”, an informal initiative in which associations, think tanks, government agencies, academic institutions and citizens come together to discuss and deliberate on one of the most daunting topics facing our society.

We are talking about an unprecedented technology with untapped potential that, at the same time, carries enormous risks. The shift towards an AI-centered economy, if not properly managed, could trigger tectonic, potentially devastating consequences.

Nepal recently approved its first ever AI Policy. This is, without question, an important milestone for the country, but where does it go from here? How can we ensure that this new document will be different from other policies that, almost by default, struggle to get implemented? The new policy also envisions a set of new institutions, such as an AI Regulation Council and a National AI Center.

A new AI-centered governance architecture is taking shape, but will these institutions be effective, meaningful and, importantly, inclusive? Will experts and citizens alike be enabled and allowed to participate beyond the usual tokenistic approaches? As with climate change, our societies are utterly unprepared for what might happen with unregulated AI.

As I wrote in this column a few weeks ago, advocating for new forms of multi-stakeholder governance to address the challenges of a warming climate, I do believe that an emerging nation like Nepal, which aspires to become a lower-middle-income economy over the next decade, must be prepared. Both challenges, climate and AI, will test the resilience of our systems.

Certainly, more developed and industrialized nations will face more trying times, especially in relation to the shocks their economies might suffer from a race to the bottom in which corporations cut their workforces and rely more on AI agents. In both cases, the resilience of our political systems, especially in democratic settings like the one Nepal enjoys, could come under stress.

We are already aware of the risks associated with waves of social media-driven misinformation and disinformation. These problems are going to be further magnified by AI. That is why we need to talk about a Just Transition, an important element of the climate discourse, for the rollout of AI as well, ensuring that no one is left behind, including the most vulnerable.

Frankly speaking, the concept of leaving no one behind might be far too timid for a future dominated by AI. The risks posed by AI are less about leaving millions of people behind and more about crushing and rolling over them. To help tackle such a potentially devastating scenario, the Institute for Human-Centered Artificial Intelligence at Stanford University has come up with a series of important research papers inspired by what the Founding Fathers of the American republic did with the Federalist Papers.

Entitled the Digitalist Papers, the contributions, written by renowned luminaries from across different disciplines, offer insights and suggestions to ensure that AI systems can, as Dario Amodei explained in his powerful essay “Machines of Loving Grace”, do incredible and so far unthinkable things for the benefit of humanity.

Amodei, the CEO and co-founder of Anthropic, is among the sector leaders most aware of the potential downsides of an unrestricted, unethical turbocharging of AI systems. Among the essays aimed at rethinking America’s social compact and strengthening its democratic political system so that it can thrive in an era of AI, Lawrence Lessig, a legal scholar at Harvard Law School, penned “Protected Democracy”.

In an era when democracies are already being tested and showing deep cracks, Lessig calls for forums where citizens can discuss and deliberate without undue influence, undeterred by the polarization that is already eroding trust in democracies. He proposes establishing forms of “protected democracy” in which citizens come together to discuss and deliberate on the basis of reason and facts. “Democratic choice requires participants engaging on the basis of a common understanding of a common set of facts. We already don’t have that; AI will give us even less,” he wrote.

“We live now in an unprotected democracy. As we come to our views about what is to be done and who is to be supported, we are exposed to information by a media that has an agenda unrelated to crafting collective, coherent understanding.” Lessig thinks of citizens’ assemblies as forms of “protected assemblies”.

The risks associated with AI could unravel the democratic fabric of the United States, given AI’s power to further polarize society by spreading misinformation and disinformation and by turbocharging orchestrated campaigns of malign political influence. AI will also widen the inequality gap, because AI systems will be controlled by a minuscule group of powerful interests, a combination of political and economic actors within a few nations.

Lessig concludes, “We, as a people, are thus increasingly vulnerable politically to the effect of AI.” While the Digitalist Papers are focused on America, developing nations, especially democracies like Nepal, must also be prepared. That is why it is important to start a conversation, in a very structured fashion, on how AI can shape the future development trajectories that Nepal is striving to achieve.

Deliberative democracy, a topic I often cover in my pieces, can truly make a difference in involving and engaging people, especially the young, in a future where AI will play an increasingly significant role. Slowly, the effects of AI systems that might not be completely under human control and whose outcomes cannot be understood (the problem of interpretability) will also be felt here in Nepal.

This is not a dystopian scenario: the phase in which AI reaches the level of Artificial General Intelligence (AGI), equalling and exceeding human capacities, is not far off. AGI will be the biggest scientific breakthrough yet and, as fascinating and potentially frightening as it will be, it will represent a stepping stone for a further giant leap, the arrival of an inevitable superintelligence akin to what we now watch in the movies.

Internationally, there have also been discussions about creating a Global Citizens’ Assembly focused on AI. The ISWE Foundation, a leader in promoting transnational models of citizens’ deliberation, has already conducted some studies together with Connected by Data. Could Nepal’s policymakers imagine similar initiatives, in which people are empowered first to understand, and then to decide, how AI could be developed?

Because of its young generations, who thrive in the digital world, Nepal could stop being a slow mover that simply copies best practices. While it would be foolish not to learn from the experience gained by the major developed economies in the field of AI, Nepal must also take the lead. From a late adopter, the country could become a trailblazer, at least in showing the world that it is doing its homework to lay down a pathway to harness AI for the benefit of its people.

On 4 Dec 2018, after a year of intensive debate, amid the cold of Quebec’s winter, the Montréal Declaration for a Responsible Development of Artificial Intelligence was endorsed. Its ten principles are centered on well-being, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, prudence, responsibility and, lastly, sustainable development. The declaration is a blueprint to guide any nation trying to develop a safe and secure AI framework.

It was a truly pioneering document that ensured some basic form of legitimacy, because citizens’ participation was a cornerstone of the whole initiative. Interestingly, as a collective, the stakeholders involved in facilitating the discussions also conducted other activities, including research and educational training on AI and human rights.

How will AI help transform Nepal? Will the country be able to gain from this new technology while minimizing its side effects, or will it continue to blindly follow others without doing its homework? For AI to be a win-win for the country, let’s involve and engage its citizens. The AI policy that the federal government just approved is important, but the way it is executed will be even more crucial.