The idea of Artificial Intelligence (A.I) is no longer from a distant future but a palpable reality, felt in our day-to-day lives, whether we are ordering food on Zomato or scrolling through options on Netflix. A.I is like a close companion who knows you all too well: it knows your likes, your dislikes and, more importantly, your behavioural patterns. The mechanism behind A.I is machine learning, under which the system incessantly endeavours to recognise deep patterns in your preferences so that it can make your experience of a website friendlier.
However, there is a darker side to A.I, one that, in fairness, is not of its own creation, and one that has the potential to shake the 'Equality Jurisprudence' of India in the coming years, if it has not done so already. Many State and non-State actors have begun to realise that A.I can be biased. This is not a structural flaw in the functioning of A.I itself. Since human behaviour is what machine learning systems study, it is the learning of that behaviour which endows A.I with the implicit biases of human beings.
In one such instance, the multinational corporation Amazon used A.I technology to screen the resumes of potential employees; when the results came out, they caught the corporation by surprise. Almost 70 per cent of the candidates who qualified for interview were male, which led to a huge uproar against the working of the company. The root of the problem was that the A.I Amazon used for screening resumes had been trained on the company's hiring data from the preceding 20-odd years, which was enough to instil in it an implicit bias for preferring male candidates over female ones.
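How historical data passes bias on to a model can be made concrete with a small sketch. The following Python snippet is a toy illustration, not Amazon's actual system: all figures and the "keyword" feature are fabricated assumptions. A naive screening rule fitted to synthetically skewed hiring records ends up penalising a gender-correlated resume feature that says nothing about competence.

import random

# Toy illustration of bias absorbed from training data; every number
# here is fabricated for the example.
random.seed(0)

def make_history(n=1000):
    # Synthetic past decisions in which male candidates were hired
    # more often than equally placed female candidates.
    records = []
    for _ in range(n):
        is_male = random.random() < 0.5
        hired = random.random() < (0.6 if is_male else 0.25)
        # A proxy feature a model might latch on to, e.g. membership
        # of a "women's" club mentioned on the resume.
        has_womens_keyword = (not is_male) and random.random() < 0.8
        records.append((has_womens_keyword, hired))
    return records

def hire_rate(records, keyword_present):
    # Fraction of past candidates in this group who were hired.
    group = [hired for kw, hired in records if kw == keyword_present]
    return sum(group) / len(group)

history = make_history()

# The "model": score a new resume by the historical hire rate of its group.
print(f"Score with the keyword:    {hire_rate(history, True):.2f}")
print(f"Score without the keyword: {hire_rate(history, False):.2f}")
# The keyword group scores far lower, so the model reproduces the old
# bias even though the keyword is irrelevant to merit.

Note that deleting an explicit gender field would not help here: the bias travels through correlated proxies, which is reportedly what happened in Amazon's case, where resumes containing words like "women's" were downgraded.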
There is a plethora of instances like these in which the result has been discrimination against a class of people. Since the functioning of A.I relies heavily on the data fed to it, or on the deep patterns of behaviour it picks up while observing human beings, there are bound to be instances when A.I discriminates. Apart from this, the other major reason for such conduct lies with A.I's creators: as most of the people building deep-learning systems are highly paid male employees, their implicit biases tend to be transferred into the A.I they create.
The logical question which follows this line of thought is what impact this could have on the functioning of the 'State'. There were reports in 2014 that many public distribution services (PDS) might adopt A.I technologies for the distribution of goods among citizens. Keeping in mind India's history of bias and class discrimination, it would not be surprising if this eventually turned into a fiasco of class discrimination in the name of efficiency.
Apart from this vertical relationship between the 'State' and its citizens, however, the question which needs to be asked is whether A.I can discriminate between two citizens. Here the horizontal relationship comes into play. The mandate prohibiting discrimination among citizens is primarily governed by Article 15(2) of the Indian Constitution, which lays down that: "(2) No citizen shall, on grounds only of religion, race, caste, sex, place of birth or any of them, be subject to any disability, liability, restriction or condition with regard to— (a) access to shops, public restaurants, hotels and places of public entertainment; or (b) the use of wells, tanks, bathing ghats, roads and places of public resort maintained wholly or partly out of State funds or dedicated to the use of the general public."
The reformative role which Article 15(2) plays in a country like India cannot be belittled. While Article 15(1) lays down a general prohibition against the State discriminating against any citizen on grounds of religion, race, sex, caste or place of birth, Article 15(2) strikes at the core of an Indian society which for centuries has practised class stratification and ghettoisation. It is hard, if not impossible, to realise the noble vision of the Constitution without extending it into the private sphere of Indian citizens, and that can only happen by making sure that places of 'public usage' are not subjected to prejudicial class discrimination. Article 15 therefore lays down an obligation both vertically and horizontally.
In the digital era, this predicament acquires a new dimension with A.I. The question which needs to be asked is whether A.I can become the basis of discrimination between citizens, and if so, who shall be held liable for such discrimination. Another viable question is where the discrimination is to be located: in the intention of the maker, or in the subject the A.I studies? Given that human beings working in shops and public markets carry their own prejudicial biases, and that those biases are imbibed into the working of A.I, the result will eventually be discrimination against fellow citizens.
The very fact that discrimination occurred in Amazon's appointment process opens the gates to a new line of thought. What if something like this happens in India? If it does, can it be said to violate the mandate of Article 15(2)? The intricacies of these questions have not yet been fleshed out, but they raise viable concerns. As many of the companies operating and offering services in India are built on A.I technology, who will be held responsible if a situation of discrimination arises in the near future: the person who created the A.I technology, or the business owner whose implicit behaviour the A.I learned? The term 'shop' under Article 15(2) is wide enough to include any corporation, company or individual offering goods or services within the territory of India. This, therefore, is a question of great jurisprudential interest, which needs to be addressed as soon as possible.
WHAT CAN BE DONE?
One of the major solutions to check the bias of A.I is to make sure that it is inclusive in nature. A.I technology is like a small child: it feeds on whatever information its creator gives it, or whatever it observes in the subjects it studies. The better option, then, is to make sure the creation of A.I is inclusive, both in who builds it and in the data it learns from. Secondly, until this principle of inclusivity is satisfied in the making of A.I, functions of core public importance such as criminal justice, healthcare, social welfare and education should not rely on A.I technology. A simple first step towards such inclusivity is to audit a system's outcomes across groups before it is deployed, as sketched below.
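As a hedged sketch of what such an audit might look like, the following Python snippet compares a system's selection rates across groups before deployment. The group labels, the numbers and the 0.8 ratio (borrowed from the US "four-fifths" rule of thumb) are illustrative assumptions, not a standard prescribed by Indian law.

def selection_rates(decisions):
    # decisions: list of (group, selected) pairs produced by the system.
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def passes_audit(decisions, threshold=0.8):
    # Flag the system if the worst-off group's selection rate falls
    # below `threshold` times the best-off group's rate.
    rates = selection_rates(decisions)
    worst, best = min(rates.values()), max(rates.values())
    return best > 0 and worst / best >= threshold, rates

# Fabricated output of a hypothetical screening model:
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)
ok, rates = passes_audit(decisions)
print(rates)            # {'A': 0.6, 'B': 0.3}
print("Deploy?", ok)    # False: group B is selected at half the rate of A

An audit of this kind does not fix a biased system, but it gives regulators and businesses an objective trigger for withholding deployment until the disparity is explained or removed.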
The writer is Assistant Professor of Law, National Law University, Odisha.