How can humanity regulate AI?


I have been asked this question a lot. Should artificial intelligence (AI) be regulated? And how do we, as humanity, do it effectively without sacrificing the colossal and momentous benefits it brings?

The short answer is, yes, it should be. It must be governed and regulated. It is the most powerful technology ever developed by mankind. Nuclear technology comes a distant second. I agree with Elon Musk; letting it run loose is a risk we as humanity must not take. Deep AI is the only technology aside from nuclear technology capable of being a threat to human civilisation itself.

Decision-makers have realised this, and there is now an emerging consensus that it needs to be regulated.

The European Union did well by passing the EU Artificial Intelligence Act, which came into effect a few months ago. It is the first and most comprehensive regulatory framework for AI to date.

The US is working on its own framework: the Blueprint for an AI Bill of Rights, published by the White House Office of Science and Technology Policy less than a couple of years ago. It is important to note that it is only a "guide" and not enforceable, unlike the EU AI Act. Most US states have enacted or proposed their own laws with respect to AI, but almost all of them are limited to disclosures of training data, employment discrimination, and consumer privacy protection.

The United Nations, as usual, is doing its bit. Resolution A/78/L.49, adopted by its General Assembly, emphasises the importance of ethical AI principles and adherence to international human rights law.

The UN AI Advisory Body, set up by the Secretary-General, published a report titled Governing AI for Humanity a little over a month ago. The report advises global AI governance regimes to work on averting "AI arms races" or a "race to the bottom" on safety and rights, amongst other risks. It advocates, amongst other measures, for the creation of an AI standards exchange, a global AI data framework, a global AI capacity development network to boost AI governance capacities, and an international scientific panel on AI.

These are small steps forward, but regulating AI is easier said than done.

AI, by nature, is borderless and inherently difficult to govern and regulate. So-called deep AI, even more so.

The reality is that every organisation, company, and country on the planet yearns to surpass its peers in AI capabilities. All major states are harnessing the powers of AI in their statecraft, though I need not get into specifics, as that becomes controversial in no time.

For these reasons alone, AI cannot be regulated by national or local laws alone, but rather by comprehensive ones forged and monitored by an intergovernmental organisation, which in turn can pave the way for local and national regulatory frameworks.

While the UN as an institution has done more than a little, it has fallen short of convincing and lobbying its members to form an intergovernmental organisation that ensures deep AI does not become a threat to human civilisation in the future.

Along the lines of the International Atomic Energy Agency, an intergovernmental organisation within the UN system, governed by its own founding treaty, should be established. Such an intergovernmental organisation, with adequate rights and obligations, can only help humanity reap the epoch-making advantages of AI while simultaneously safeguarding us from its dangers in the coming decades.

The writer is divisional head of data sciences, heading AI at TVS group