In November 2022, the world was taken by storm by an AI tool with astounding capabilities ~ ChatGPT, a chatbot from OpenAI that allows users to "converse" with it in a way that mimics natural conversation with a human. Within five days of its launch, the intuitive and user-friendly tool drew one million users, and it is now threatening traditional search engines like Google.
Microsoft has since invested $10bn in OpenAI, whose models generate text, images and video in ways that cannot be distinguished from human creations. On educational campuses across the world, teachers are having a hard time because students are using ChatGPT to write their assignments, and the output is indistinguishable from, and often better than, human-generated responses. It is what Noam Chomsky has dismissed as a machine for "high-tech plagiarism".
Bots have been around since January 1966, when Joseph Weizenbaum, an MIT computer scientist, unveiled Eliza, the world's first chatbot to communicate with humans. Its responses were superficial, produced by simple pattern-matching rules rather than any genuine understanding, yet many users found them strikingly human.
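Weizenbaum's program worked from a script of hand-written pattern rules. A minimal sketch of that idea in Python follows ~ the rules here are invented for illustration and are not Weizenbaum's original DOCTOR script:

```python
import re

# A few illustrative Eliza-style rules: a regex pattern and a reply template.
# These are made-up examples, not Weizenbaum's actual script.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text):
    """Return a canned reply by matching the input against the rules."""
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Echo the user's own words back inside the template.
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(respond("I feel anxious about machines"))
# Why do you feel anxious about machines?
```

The trick, as Weizenbaum himself stressed, is that there is no understanding anywhere: the program merely reflects the user's words back in a therapist-like frame, and users supply the illusion of comprehension themselves.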
Eliza's best-known script mimicked a psychotherapist, and some practising psychiatrists started wondering about the future of their patients, as well as their own. Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text ~ books, newspapers, journals, Wikipedia articles, chat logs and other material available on the internet. These networks, called large language models (LLMs), work by analysing and identifying statistical patterns in the text, and learn to generate text of their own, including tweets, term papers and computer programs.
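The pattern-learning idea can be illustrated with a deliberately crude toy: a model that merely counts which word follows which in a corpus and then samples likely next words. Real LLMs learn vastly richer patterns with neural networks, but the principle ~ predict the next token from what came before ~ is the same:

```python
import random
from collections import defaultdict, Counter

# Toy corpus; real models train on trillions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break  # no word ever followed this one in the corpus
        # Sample in proportion to observed frequency.
        words.append(random.choices(list(options),
                                    weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))
```

Scaled up by many orders of magnitude, with context windows of thousands of tokens instead of one preceding word, this next-token prediction is what lets LLMs produce fluent tweets, term papers and code.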
They could carry on a normal conversation with a human on any subject; produce and summarise text; write poetry, analytical reports or philosophical articles; and even write software code for apps. As Goldman Sachs has estimated, this may cost humans some 300 million jobs globally, including those of programmers, analysts, report writers, teachers and lawyers. It has the potential to automate more than a thousand occupations.
Optimists who had thought that AI would never be able to replicate tasks that require human creativity were wrong. AI is invading more and more domains of human activities to generate outputs with ever increasing efficiency as it learns from more and more data.
Generative models like ChatGPT today can perform astounding tasks, like producing images, photo-realistic pictures of made-up people, creative art, video and film, and audio outputs like songs and music compositions ~ tasks that require superlative intelligence.
Intelligence, we believe, is an attribute of consciousness, but there is no evidence, as of now, that AI-powered machines have developed any consciousness. Or do we need to redefine these terms? Are humans unique? With AI mastering each new territory of human ingenuity and creativity, the idea of human exceptionalism keeps shrinking.
Karel Capek's vision of robots controlling robots is no longer science fiction. PaLM-E, created by researchers at Google, uses an "embodied" LLM to control a robot. AutoGPT, created by a startup called Significant Gravitas, can generate and develop business ideas from a range of online resources.
The ever-expanding capability of AI raises disturbing questions: are we about to strike a Faustian bargain with technology that would grant us our every wish while taking away our soul ~ that is, our ability to think creatively? And what happens if the trade-off does not work the way we designed it ~ what if the tool becomes a Frankenstein's monster and starts deciding for us?
As historian Yuval Noah Harari reminds us, the ability to manipulate and generate language has given AI the capability to hack “the operating system of our civilisation”.
In an essay in The Economist, he voices his fears about the consequences of AI taking over the hitherto exclusive human command of language and manipulating it to subvert or destroy the human civilisation and its values as we know them. Language is not just words.
Language is much more powerful than words ~ it is human civilisation. Words constitute the DNA of humanity. As Stanford professor Percy Liang asserts, language is "a representation of the underlying complexity" of the world, and a model of how language works also contains elements of how the world works.
For the first time in our history, a non-human intelligence is challenging and threatening human intelligence, and has the power to articulate itself. With infinite potential to generate fake news, mass-produce political content and mould people’s opinion, it can be used for political purposes to manipulate the outcome of an election. It can create cults as powerful as religion, supplanting the only non-human source of authority humanity has known in the form of holy scriptures.
By manipulating the use of language, it can make and unmake human relationships, create intimacy with people and use that intimacy to change their opinions and worldviews, especially when people would not know whether it is a human or a non-human source that they are developing an intimacy with. Already there are reports of people who are unable to forge bonds with fellow humans but have fallen in love with their chatbots.
AI can also spell the doom of many industries, as Harari points out, like advertising or Google's search engine. Why bother looking at advertisements when ChatGPT can simply tell you the best available product that meets your requirements, or search Google when it can give you the correct answer in an instant? How can you even be sure that this text was written by a human and not by AI? As Harari says, "What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions?
Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.” It can completely annihilate our social, cultural and political world. To quote Harari again, “Democracy is a conversation, and conversations rely on language.
When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.” Language gave us the ability to create stories and illusions powerful enough to believe in an idea, which can be both creative and destructive.
History is replete with examples of both ~ Religion, Nationalism, Socialism, Communism, Fascism, Nazism, Liberalism and Humanism, to name only a few. So far these stories and illusions were created by humans. But once AI, powered by language, starts creating these illusions and stories, we may find ourselves living inside what Harari calls "the dreams of an alien intelligence" ~ much like being permanently trapped inside Maya, as in ancient Indian thought, or inside the simulated reality of the movie The Matrix, in which intelligent machines keep humanity unknowingly trapped while using their bodies as sources of energy.
To prevent that dystopian future from becoming a reality and to ensure that AI is used only for the good of humanity, AI needs urgent regulation and safety checks for every new AI-powered tool. Generative models like ChatGPT work like black boxes: no one fully understands how they arrive at their outputs, and hence how to control them ~ like the Sorcerer's Apprentice in Goethe's poem, who was left by his master sorcerer in the workshop to perform some boring chores. Tired of fetching water by pail, the apprentice enchants a broom to do the work for him, using magic in which he is not fully trained.
The broom sets to work fetching pails of water, and the floor soon starts flooding. When he tries to stop it, he realises that he does not know the magic required to do so. In desperation, he splits the broom in two with an axe, but each piece becomes a whole broom that takes up a pail and continues fetching water, now at double the speed.
When all seems lost, the old Sorcerer returns and quickly breaks the spell, advising the apprentice that only a master should invoke powerful spirits. Are we humans such powerful masters?
Even then, there is always the risk that someone might unleash the hidden power of AI without grasping the consequences ~ like the unthinkable prospect of a terrorist pressing the nuclear button after taking control of a country's nuclear assets. AI is driven by algorithms that do not understand ethics or morals and have no idea of the boundaries between ethical and unethical practices.
They also have no notion of the safety and security that we humans cherish so much. Algorithms understand only profit and how to maximise it; they care nothing for the privacy of individuals. Data being the oil of the tech age, they trade private data for the commercial purposes of the companies that deploy them, without compunction and often beyond the control of their masters. In doing so, they can easily endanger human security, liberty and even human life.
(The writer is a commentator, author and academic. Opinions expressed are personal)