On 23 November 2021, the 193 member states of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence. The Recommendation provided a basis for making AI systems work for the good of humanity, individuals, societies, the environment, and ecosystems, and for preventing harm. In the years since, AI has grown sophisticated and capable of both good and harm with remarkable efficiency. It has automated complex tasks, aided coders, assisted creators, and even replaced key organisational functions. AI chatbots that converse as fluently as human beings, AI image editors that rival human creativity, and AI video generators that produce hyper-realistic deepfakes have raised ethical and privacy concerns worldwide. Deepfake pornography has emerged as a threat to women and schoolgirls, while audio-cloning software and deepfake videos have helped cyber fraudsters dupe people of their hard-earned money. Unless AI is used responsibly, deepfake tools will become the Frankenstein's monster that ruins the reputations of individuals, destroys the credibility of content on the internet, and abets the destruction of the very organisations that fund its advancement.
Keeping AI on a leash
“Never summon a power you can’t control,” the author Yuval Noah Harari said in an interview with a reputed newspaper. On the face of it, this looks like common-sense advice that would be followed by default. In practice, the AI ecosystem has thrown caution to the wind, enabling indiscriminate creation with zero control over where and how the output is used.
Chatbots that generate essays don’t consider the ethical implications of university students submitting them as their own work. Deepfake image-creation software doesn’t watermark its output so that distribution can be traced or restricted to the platforms where it is purportedly meant to be published. AI audio-cloning tools are freely available to replicate the voice of any individual without requiring the consent of the person being cloned. Deepfake videos are being used for identity fraud at financial institutions where verification happens through digital channels. The internet is replete with deepfakes: about 98 per cent of deepfake videos online are pornographic, with no statistics on consent. With an ecosystem devoid of controls or monitoring, artificial intelligence is out of control. It isn’t surprising that an alarming 94 per cent of female influencers on Instagram have fallen victim to this. AI startups have summoned a power they can’t control, investors have funded ventures that are harming humans, and founders have dug their own graves, risking bankruptcy or regulatory bans on their tools.
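To make the watermarking point concrete, here is a minimal sketch of how a generator could stamp a provenance tag into its own output. It is an illustration only, using a deliberately simple least-significant-bit scheme; the tag name and function names are hypothetical, and no real vendor's scheme is being described.

```python
# Illustrative only: a fragile LSB provenance mark, not any vendor's actual scheme.
import numpy as np
from PIL import Image

MARK = "GEN:demo-model-v1"  # hypothetical provenance tag

def embed_watermark(img: Image.Image, mark: str = MARK) -> Image.Image:
    # Encode the tag as bits, null-terminated, into the lowest bit of each byte.
    bits = "".join(f"{b:08b}" for b in mark.encode()) + "00000000"
    arr = np.array(img.convert("RGB"))
    flat = arr.reshape(-1)  # a view into arr, so the writes below modify arr
    if len(bits) > flat.size:
        raise ValueError("image too small to hold the watermark")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    return Image.fromarray(arr)

def read_watermark(img: Image.Image) -> str:
    flat = np.array(img.convert("RGB")).reshape(-1)
    out = bytearray()
    for i in range(0, flat.size - 7, 8):
        byte = int("".join(str(p & 1) for p in flat[i:i + 8]), 2)
        if byte == 0:  # null terminator marks the end of the tag
            break
        out.append(byte)
    return out.decode(errors="replace")
```

A platform could then refuse or label uploads whose tag is missing or mismatched. In practice, a bit-level mark like this does not survive recompression, which is why real deployments lean on robust watermarking or cryptographically signed provenance metadata instead.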
Making humans accountable for AI output
The UNESCO Recommendation, inter alia, required that “member states should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of the AI systems, as well as remedy related to AI systems, to physical persons or to existing legal entities”. In light of recent developments in the usage of AI, this recommendation to fix responsibility on real individuals and existing legal entities has only grown in relevance. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10 per cent chance of advanced AI leading to outcomes as bad as human extinction. AI is an unprecedented threat because it can replace human ingenuity and decision-making with no accountability.
Guidelines and innovations to curb cyber scams
The AI startup ecosystem has already summoned a power that isn’t fully under its control. A slew of AI tools now generate millions of pieces of content autonomously, without human supervision. Commercial interests, however, have forced AI startups to retain a degree of control over the kill switch. Free AI tools released without any registered trademark, and shady deepfake pornography apps with no information about their developers, are graver threats because neither commercial interest nor legal ownership dictates their functioning. Regulators and governments can neither govern their usage policies nor hold any legal entity answerable for the harmful content they produce. At the same time, prosecuting the creator of an AI model for a user’s misuse is akin to imprisoning kitchen-knife manufacturers when a knife is used to kill instead of to cut vegetables: it won’t deter future perpetrators and may instead stifle advancements in AI. In the years ahead, AI will be increasingly used for cyber scams, malware attacks, ransomware, identity fraud, examination fraud, and other criminal activities.
The rise of AI can be managed only if a corresponding system is developed to monitor AI with AI. Governments should encourage organisations to build AI-driven monitoring and deepfake-detection tools into their IT infrastructure so that they don’t unwittingly fall prey to AI-enabled fraud. Outright bans should be enforced on certain AI apps, such as deepfake porn creators, that are illegal and harmful by design. Strict guidelines and fair-usage policies should guide developers and users, and a just criminal-justice system should punish wrongdoers so that misuse carries consequences. Regulators must walk a fine line: fixing ethical and legal accountability on real humans and existing entities while ensuring their interventions don’t stifle fledgling innovation.
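To illustrate what “monitoring AI with AI” could look like inside an organisation’s workflow, here is a minimal sketch of a screening step that routes incoming media through a deepfake classifier before a digital verification is accepted. The thresholds and the score_media() hook are hypothetical placeholders, not any real product’s API.

```python
# A minimal sketch of screening media with a deepfake classifier.
# score_media() and both thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    path: str
    fake_probability: float
    verdict: str

REVIEW_THRESHOLD = 0.5   # hypothetical: escalate to human review above this
REJECT_THRESHOLD = 0.9   # hypothetical: reject outright above this

def score_media(path: str) -> float:
    """Placeholder for a trained detector (e.g. a CNN over face crops).
    Returns an estimated probability that the file is synthetic."""
    raise NotImplementedError("plug in a real deepfake-detection model here")

def screen(path: str) -> ScreeningResult:
    p = score_media(path)
    if p >= REJECT_THRESHOLD:
        verdict = "reject"        # near-certain deepfake: block the transaction
    elif p >= REVIEW_THRESHOLD:
        verdict = "human_review"  # suspicious: escalate, don't auto-approve
    else:
        verdict = "accept"
    return ScreeningResult(path, p, verdict)
```

The design choice worth noting is the middle band: rather than trusting the detector blindly, suspicious cases are escalated to a human reviewer, keeping a physical person in the accountability chain that the UNESCO Recommendation calls for.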
The writer is Founder & CEO, pi-labs.ai