Govts to Big Tech: Create responsible AI first and then make it public

The generative AI era came fast. Government scrutiny of Big Tech's AI, however, came even faster, as lawmakers and regulators worldwide push for responsible innovation and risk mitigation amid growing concern over AI chatbots.

Governments believe that companies have a fundamental responsibility to ensure their AI products are safe and secure before they are deployed or made public.

In the US, Vice President Kamala Harris and senior Administration officials met this week with the CEOs of four American companies at the forefront of AI innovation to share concerns about the risks associated with AI.

President Biden dropped by the meeting to underscore that companies have a fundamental responsibility to make sure their products are safe and secure before they are deployed or made public.

Harris told the CEOs, including Microsoft Chairman and CEO Satya Nadella, Alphabet and Google CEO Sundar Pichai, and OpenAI (of ChatGPT fame) CEO Sam Altman, that the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of its products.

She stressed that in order to realise the benefits that might come from advances in AI, “it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security”.

“Every company must comply with existing laws to protect the American people. I look forward to the follow through and follow up in the weeks to come,” Harris said.

She told the CEOs that advances in technology have always presented opportunities and risks, and generative AI is no different.

“AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy,” she stressed.

Meanwhile, the UK’s competition watchdog this week launched an initial review of “AI foundation models” such as those behind OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard, saying the move will help create an early understanding of the market for AI models and the opportunities and risks these could bring.

The Competition and Markets Authority (CMA) said the initial piece of work will help create competition and consumer protection principles to best guide the development of AI-driven markets going forward.

“AI has burst into the public consciousness over the past few months but has been on our radar for some time. It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth,” said Sarah Cardell, Chief Executive of the CMA.

“Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection,” she added.

In a white paper published in March, the UK government asked regulators, including the CMA, to consider how the innovative development and deployment of AI can be supported in line with five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

In line with the government’s AI white paper and the CMA’s role to support open, competitive markets, the review seeks to understand how foundation models are developing and produce an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future.

The development of AI touches upon a number of important issues, including safety, security, copyright, privacy, and human rights, as well as the ways markets work.

The Indian government also wants Big Tech to build responsible and ethical AI.

“AI is a kinetic enabler of the digital economy and innovation ecosystem. Government is harnessing the potential of AI to provide personalized and interactive citizen-centric services through digital public platforms,” according to the Ministry of Electronics and IT.

Union Minister of State for Electronics and IT Rajeev Chandrasekhar has broadly outlined the IndiaAI programme and how it will help the country build world-class platforms, solutions and tools that not only solve local issues but also help the world.

“Our primary goal is to build commercially deployable AI platforms and solutions, developed by a vibrant startup ecosystem in the country,” the minister said.

India aims to become a global AI powerhouse, one that does not stop at integrating foreign chatbots but builds next-generation AI-based innovations to empower billions of citizens.
