AI needs UN oversight

Mr Elon Musk, the chief executive officer of SpaceX and Tesla and owner of X, formerly Twitter, gestures as he attends the Viva Technology conference dedicated to innovation and startups at the Porte de Versailles exhibition centre in Paris, France, on June 16, 2023. He predicts that artificial general intelligence could become smarter than the smartest human within two years, and warns that AI would be dangerous if it actually surpasses human intelligence. PHOTO/REUTERS

What you need to know:

  • The United Nations should intervene and set up a watchdog agency to regulate artificial intelligence, just as the International Atomic Energy Agency was created to curb the blind pursuit of technological advances that might threaten humanity and the planet, writes Peter G. Kirchschläger.

Many scientists and tech leaders have sounded the alarm about artificial intelligence in recent years, issuing dire warnings not heard since the advent of the nuclear age. 

Elon Musk, for example, has said that “AI is far more dangerous than nukes,” prompting him to ask an important question: “[W]hy do we have no regulatory oversight? This is insane.”

The late Stephen Hawking made a similar point: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”

Given the potentially catastrophic consequences of unchecked AI, there is a clear need for international guardrails to ensure that this emerging technology – more accurately called data-based systems – serves the common good. Specifically, that means guaranteeing that human rights are upheld globally, including online.

To that end, governments should introduce regulations that protect the powerless from the powerful by ensuring that human rights are respected, protected, implemented, and realised throughout such systems’ entire life cycle: design, development, production, distribution, and use.

Equally important, the United Nations must urgently establish an International Data-Based Systems Agency (IDA), a global AI watchdog that would promote safe, secure, sustainable, and peaceful uses of these technologies, ensure that they respect human rights, and foster cooperation in the field. It would also have regulatory authority to help determine market approval for AI products. 

Given the similarities between data-based systems and nuclear technologies, the International Atomic Energy Agency (IAEA) would be the best model for such an institution, not least because it is one of the few UN agencies with “teeth.”

The success of the IAEA has shown that we are capable of exercising caution and prohibiting the blind pursuit of technological advances when the future of humanity and the planet is at stake. 

After the bombings of Hiroshima and Nagasaki revealed the devastating humanitarian consequences of nuclear war, research and development in the field of nuclear technology was curtailed to prevent even worse outcomes. This was made possible by an international regime – the IAEA – with strong enforcement mechanisms.

A growing number of experts from around the world have called for the establishment of an IDA and supported the creation of data-based systems founded on respect for human rights.

The Elders, an independent group of global leaders founded by Nelson Mandela, have recognised the enormous risks of AI and the need for an international agency like the IAEA “to manage these powerful technologies within robust safety protocols” and to ensure that they are “used in ways consistent with international law and human-rights treaties.” Consequently, they encourage countries to submit a request to the UN General Assembly for the International Law Commission to draft an international treaty establishing a new AI safety agency.

Among the influential supporters of a legally binding regulatory framework for AI is Sam Altman, the CEO of OpenAI, whose public release of ChatGPT in late 2022 kicked off the AI arms race. 

Last year, Altman called for an international authority that can, among other things, “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security.”
 
Even Pope Francis has emphasised the need to establish a multilateral institution that examines the ethical issues arising from AI and regulates its development and use by “a binding international treaty.”

The UN, for its part, has highlighted the importance of promoting and protecting human rights in data-based systems. In July 2023, the Human Rights Council unanimously adopted a resolution on “New and emerging digital technologies and human rights,” which notes that these technologies “may lack adequate regulation” and stresses the need “for effective measures to prevent, mitigate, and remedy adverse human-rights impacts of such technologies.” 

To that end, the resolution calls for establishing frameworks for impact assessments, for exercising due diligence, and for ensuring effective remedies and human oversight and legal accountability.

More recently, in March, the UN General Assembly unanimously adopted a resolution on “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.” 

This landmark resolution recognises that “the same rights that people have offline must also be protected online, including throughout the lifecycle of artificial-intelligence systems.”

Now that the international community has recognised the imperative of protecting human rights in data-based systems, the next step is obvious: the UN must translate this global consensus into action by establishing an IDA.

Ethical dilemmas of AI
Biased AI
Type “greatest leaders of all time” in your favourite search engine and you will probably see a list of the world’s prominent male personalities. How many women do you count? 

An image search for “school girl” will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. Surprisingly, a search for “school boy” mostly returns ordinary young schoolboys, with few if any men in sexualised costumes.

These are examples of gender bias in artificial intelligence, originating from stereotypical representations deeply rooted in our societies.

AI systems deliver biased results. Search-engine technology is not neutral: it processes big data and prioritises the results with the most clicks, relying on both user preferences and location. 
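
Here is a minimal sketch of that feedback loop, assuming hypothetical results and a simplified position-bias click model; none of the names, numbers, or probabilities below come from real search data:

```python
import random

# Toy model of click-feedback ranking. Real search ranking uses many more
# signals (relevance, user preferences, location); this sketch isolates
# one mechanism: ranking by accumulated past clicks.

# Two hypothetical results for the same query. The "stereotyped" result
# starts with a small head start in clicks, standing in for historical
# bias already present in the data.
clicks = {"stereotyped_result": 105, "neutral_result": 100}

random.seed(42)

for _ in range(10_000):
    # Rank by accumulated clicks, most-clicked first.
    ranking = sorted(clicks, key=clicks.get, reverse=True)

    # Position bias: the top slot gets clicked far more often,
    # whichever result happens to occupy it.
    click_prob = {ranking[0]: 0.30, ranking[1]: 0.10}

    for result, p in click_prob.items():
        if random.random() < p:
            clicks[result] += 1

print(clicks)
# The small initial skew snowballs: holding the top slot earns most of
# the new clicks, so the stereotyped result stays on top permanently.
```

The point of the sketch is that no one has to intend the bias: a ranking rule that merely rewards past popularity is enough to entrench a stereotyped result once it gains even a slight early lead.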

To avoid replicating stereotypical representations of women in the digital realm, Unesco addresses gender bias in AI in the Unesco Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject.
   
AI in the Court of Law
The use of AI in judicial systems around the world is increasing, raising more ethical questions to explore. AI, it is presumed, could evaluate cases and apply justice better, faster, and more efficiently than a judge.

Some argue that AI could help create a fairer criminal justice system, in which machines, taking advantage of their speed and capacity to ingest large amounts of data, could evaluate and weigh relevant factors better than humans. AI would therefore make informed decisions devoid of bias and subjectivity. 
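
That optimism is worth testing with even a toy example. The sketch below, with entirely fabricated numbers and a hypothetical risk_score helper, shows how a score learned from historical records simply reproduces whatever skew those records contain:

```python
# Toy illustration (all numbers fabricated) of how a "data-driven" risk
# score inherits bias from the historical records it is built on.

# Hypothetical history of (neighbourhood, rearrested) records.
# Neighbourhood A was policed more heavily, so minor offences there were
# recorded as rearrests far more often than comparable conduct in B.
history = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)

def risk_score(neighbourhood: str) -> float:
    """Observed rearrest rate for the defendant's group - the kind of
    base rate a naive scoring tool would learn from this data."""
    outcomes = [rearrested for n, rearrested in history if n == neighbourhood]
    return sum(outcomes) / len(outcomes)

print(risk_score("A"))  # 0.6
print(risk_score("B"))  # 0.3
# A defendant from A is scored twice as "risky" as one from B - a pattern
# that reflects how each neighbourhood was policed, not how any
# individual behaved. The bias is embedded in the data, not eliminated.
```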

But there are many ethical challenges:

• Lack of transparency of AI tools: AI decisions are not always intelligible to humans.

• AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias.

• Surveillance practices for data gathering and the privacy of court users.

• New concerns for fairness and risks to human rights and other fundamental values.

So, would you want to be judged by a robot in a court of law? Would you, even if we are not sure how it reaches its conclusions?

This is why Unesco adopted the Unesco Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject.

AI creates art
The use of AI in culture raises interesting ethical reflections.

In 2016, a new “Rembrandt” painting, “The Next Rembrandt”, was designed by a computer and created by a 3D printer, 351 years after the painter’s death. 

To achieve such technological and artistic prowess, 346 Rembrandt paintings were analysed pixel by pixel and upscaled by deep-learning algorithms to create a unique database. Every detail of Rembrandt’s artistic identity could then be captured, setting the foundation for an algorithm capable of creating an unprecedented masterpiece. To bring the painting to life, a 3D printer recreated the texture of brushstrokes and layers of paint on the canvas, for a breathtaking result that could trick any art expert.  

But who can be designated as the author? The company which orchestrated the project, the engineers, the algorithm, or… Rembrandt himself?

In 2019, the Chinese technology company Huawei announced that an AI algorithm had completed the last two movements of Symphony No. 8, the composition that Franz Schubert left unfinished in 1822, 197 years earlier. So what happens when AI has the capacity to create works of art itself? If a human author is replaced by machines and algorithms, to what extent can copyright be attributed at all? Can and should an algorithm be recognised as an author, and enjoy the same rights as an artist? 
 
While AI is a powerful tool for creation, it raises important questions about the future of art, the rights and remuneration of artists and the integrity of the creative value chain. 

We need to develop new frameworks to differentiate piracy and plagiarism from originality and creativity, and to recognise the value of human creative work in our interactions with AI. This is why Unesco adopted the Unesco Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject.

Source: United Nations Educational, Scientific and Cultural Organisation (Unesco).

Peter G. Kirchschläger, Professor of Ethics and Director of the Institute of Social Ethics ISE at the University of Lucerne, is a visiting professor at the ETH Zurich’s AI Center.
© Project Syndicate 1995–2024