The ‘ChatGPT Effect’ – Parsing Privacy and AI Regulation in India

19.04.2023

The increasingly common use of AI in our daily lives raises multiple concerns, particularly around data privacy and regulatory preparedness. In this article, we discuss the privacy risks associated with AI and the position under Indian law.

ChatGPT and its various ‘features’ are the centerpiece of many a recent dinner-time conversation. Its popularity underscores just how much AI-based tools such as chatbots, facial recognition software, text editors, personal assistants, etc., have already become a part of our everyday lives. Since regulation always lags innovation, discourse on regulating AI is still at a nascent stage. But this discussion has been given a shot in the arm by the ubiquity of ChatGPT and its various avatars (and, to be fair, there is a separate conversation to be had about the privacy gaps in ChatGPT itself).

Regulation of AI in India is presently attempted indirectly, via regulations on adjacent issues such as data privacy, intellectual property, and cybersecurity. Regulators have made efforts to get a dialogue going on the use of these technologies. For instance, the Telecom Regulatory Authority of India has issued a consultation paper on leveraging AI and big data in the telecommunications sector. Similarly, the Reserve Bank of India (the banking and financial regulator) has encouraged the adoption of AI for "Know-Your-Customer" processes, while reiterating the need for its ethical use from a consumer-safety perspective (given concerns around privacy, security, profiling, etc.). In 2018, NITI Aayog (the Government of India's policy think tank) issued a discussion paper titled "National Strategy for Artificial Intelligence", identifying healthcare, agriculture, education, smart cities, and mobility as the focus sectors for the deployment of AI. Separately, the Ministry of Electronics and Information Technology (“MEITY”) has confirmed the commencement of the "National Program on AI" for the transformational use of AI and has set up a 'knowledge hub' for AI developments.

Recent global legislative attempts contemplate graded regulation, i.e., regulating AI based on its potential risks. The Indian government's approach may be similar. During a recent consultation session on the framework of the proposed "Digital India Act" (the potential successor to extant Indian IT law), MEITY stated that the proposed law may define and regulate "high-risk AI systems". Regulation would rely on a legal and quality-testing framework to examine regulatory models, algorithmic accountability, zero-day threat and vulnerability assessment, AI-based ad targeting, content moderation, etc.

A key consideration around AI is data privacy; this will be particularly relevant for India, since it is in the process of agreeing a new data privacy regime. While the proposed law is sector-agnostic and does not specifically address the challenges that may arise from the use of AI, it does contemplate a certification-based mechanism for the use of ‘new technologies’. In the future, there are a number of questions that the Indian data regulator may need to parse and answer, based on AI use cases. Here are three examples:

  1. Individual Profiling: AI and machine learning solutions are increasingly being deployed to chalk out an outline of a consumer and their likely preferences. Since AI primarily relies on the information fed into its systems and typically uses 'patterns' (i.e., common data points such as behavioral trends) to arrive at its inferences, an individual who exhibits these patterns is likely to be pegged into a certain profile. From a business perspective, the purpose of profiling is to be able to pitch services and products to a particular individual based on their profile, i.e., the likelihood of their opting for a certain product or service due to certain attributes. Although gauging an individual's choices based on their body language and speech is an intrinsic part of offline business as well, AI has the capability to record such attributes and deploy them for purposes beyond the immediate transaction. For instance, profiling based on markers such as location, educational background, residential address, and past purchasing trends can be used to classify an individual as more likely to commit an offence than others, or to foster bias in recruitment decisions made on the basis of such a profile (a minimal code sketch illustrating this pattern-based classification appears after this list).

  2. Non-consented purposes: Given the dynamic nature of AI, data may be used for purposes that the data subject did not consent to, e.g., aggregating data of individuals from a particular location for marketing strategies, extracting health information for use by insurance companies, etc. Since data subjects have no visibility into how their data is further used, it may be used to unfairly influence their opinions, choices, and/or the offers made to them by a particular business (e.g., higher interest rates on loans). Also, given that AI systems can be ‘opaque’, it may not be easily apparent whether a particular algorithm did or did not use a data point to reach a decision (say, a hiring decision); the sketch after this list illustrates this opacity as well.

  3. Surveillance: Profiling, when combined with sensor-equipped devices that gather data from voice controls, gestures, and biometrics, can be used to identify individuals, and geo-tracking can trace an individual's movements continually. As such, businesses (or governments) may be able to leverage this data for surveillance. Introducing the processing power of AI into this equation takes it to another level. For instance, CCTV cameras are omnipresent in public places; used in conjunction with facial recognition technology and an AI tracking model, they could enable a serious intrusion into privacy. Unregulated use of these technologies, particularly in conjunction with increasingly powerful AI, is troubling from a data privacy perspective.
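
To make the profiling and opacity concerns in items 1 and 2 concrete, below is a minimal, purely illustrative Python sketch using the widely available scikit-learn library. All markers, data points, and labels are hypothetical; the point is only to show how a classifier pegs an individual into a profile from 'pattern' markers, and how a proxy marker (here, a postcode cluster) can carry significant weight in the outcome without that being apparent to the data subject, who typically sees only the decision.

    # Illustrative only: a toy profiling model trained on hypothetical data.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical 'pattern' markers per individual:
    # [postcode_cluster, past_purchases, age_band]
    X = [[1, 12, 2], [1, 15, 3], [2, 2, 1],
         [2, 1, 2], [1, 9, 3], [2, 3, 1]]
    # Hypothetical label a business might assign, e.g. 1 = "premium customer"
    y = [1, 1, 0, 0, 1, 0]

    model = LogisticRegression().fit(X, y)

    applicant = [[2, 14, 3]]  # a new individual's markers
    print(model.predict(applicant))        # the profile the person is pegged into
    print(model.predict_proba(applicant))  # a confidence score, but not the *why*

    # Only inspecting the learned weights reveals how heavily each marker
    # (including postcode_cluster, a potential proxy for protected traits)
    # influenced the outcome -- the data subject never sees this.
    weights = dict(zip(["postcode_cluster", "past_purchases", "age_band"],
                       model.coef_[0].round(2)))
    print(weights)

Real-world systems are vastly more complex, but the basic asymmetry is the same: the controller can inspect the model, while the individual usually receives only the decision.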

How should these issues be handled in the emerging regulation? Or are they even capable of being legislated?


Historically, Indian lawmakers have opted for a reactive approach to drafting regulations. Regulations are formulated when a legal issue simmers in a sector or loose ends are identified after they have been used to defraud the public. While these knee-jerk regulations have managed to patch flaws in some cases, the dynamics of the digital world are different and ever-evolving; not having adequate tools to deal with its consequences could have a lasting impact.

Regulating AI would likely need nuanced laws that are conducive to its growth while simultaneously addressing issues such as the potential threat to privacy. It will be interesting to see how the increased use of AI challenges the status quo under Indian and other data privacy laws.

Article provided by INPLP member: Vikram Jeet Singh (BTG legal, India)

Discover more about the INPLP and the INPLP members

Dr. Tobias Höllwarth (Managing Director INPLP)

What is the INPLP?

INPLP is a not-for-profit international network of qualified professionals providing expert counsel on legal and compliance issues relating to data privacy and associated matters. INPLP provides targeted and concise guidance, multi-jurisdictional views and practical information to address the ever-increasing and intensifying field of data protection challenges. INPLP fulfils its mission by sharing know-how, conducting joint research into data processing practices and engaging proactively in international cooperation in both the private and public sectors.