EU acts

In a landmark move, the European Union (EU) has taken a pioneering role in shaping the future of artificial intelligence (AI) with its recent provisional deal on AI regulation. The agreement, reached after exhaustive negotiations and debates in the European Parliament, makes Europe the first major world bloc to enact comprehensive laws governing AI. European Commissioner Thierry Breton rightfully hailed it as historic, emphasising the EU’s understanding of its crucial role as a global standard setter. One of the key aspects of the regulatory framework is its focus on transparency, particularly for foundation models such as the one underpinning ChatGPT.

This includes requirements for technical documentation, adherence to EU copyright law, and the publication of detailed summaries of the content used for training. Such measures aim to demystify the workings of AI systems, giving both regulators and the public a clearer understanding of how they are built. High-impact foundation models deemed to pose systemic risk face even more stringent obligations: conducting thorough model evaluations, mitigating systemic risks, carrying out adversarial testing, and reporting serious incidents to the European Commission. Notably, the emphasis on cybersecurity and energy efficiency aligns with the broader goals of sustainability and responsible AI development.

Governments, too, face limits on the use of real-time biometric surveillance in public spaces. Such surveillance is permitted only in narrowly defined cases, such as searching for victims of certain crimes or preventing a genuine and present or foreseeable threat, such as a terrorist attack. The agreement explicitly bans cognitive behavioural manipulation, untargeted scraping of facial images from the internet, social scoring, and biometric categorisation systems that infer sensitive personal attributes.

These prohibitions underscore the EU’s commitment to safeguarding individual privacy and preventing the misuse of AI technologies. However, not everyone is applauding the regulations. Business groups argue that the rules impose an additional burden on companies, especially given the late addition of provisions covering foundation models. The debate over a risk-based approach versus regulating the technology itself highlights the challenge of striking the right balance between fostering innovation and addressing the risks associated with AI.

Privacy rights groups express reservations about the legislation. While the EU has taken steps to regulate live public facial recognition, concerns linger about the overall impact on biometric surveillance and profiling. Despite the Parliament’s efforts to limit potential damage, some critics view the package as lukewarm in addressing these critical issues. The fines for violations, ranging from 7.5 million euros or 1.5 per cent of turnover to 35 million euros or 7 per cent of global turnover, are intended to hold companies accountable. Yet scepticism remains, with critics questioning whether such penalties are sufficient deterrents or merely another cost of doing business.

As the legislation heads toward formal ratification and eventual implementation in two years, it marks a crucial juncture in global AI governance. Europe’s ambitious rules could serve as a blueprint for other jurisdictions seeking to navigate the delicate balance between harnessing AI’s advantages and establishing necessary safeguards.