Feeling Automated: Ethical Development of Emotional & Interactive AI Systems
The Feeling Automated Project at The Pranava Institute explores the legal and societal dimensions of the increasing use of Social AI systems (i.e., human-mimicking AI systems that emulate empathy), examining (i) the risks and harms of these technologies, (ii) the need for policy action, and (iii) the shape that regulation can take.
This panel discussion marks the launch of our report on policy pathways to ensure the ethical design and development of Social AI. The report reflects findings from a literature review, multi-stakeholder consultations, and testing of Social AI systems conducted in consultation with experts, which together inform policy recommendations for the safe and ethical development of Social AI. The panel consists of experts whom we interviewed over the course of the project. They will share their insights on the various dimensions of Social AI use, reflect on the latest developments in this space and the future of the technology, discuss policy responses and user safety, and raise important questions about where we go from here.
Tune in to listen to our global panelists:
Dr. Nomisha Kurian
Assistant Professor, Department of Education Studies, University of Warwick
Professor Jeannie Paterson
Professor of Consumer Protection and Technology Law
Co-founding director of the Centre for Artificial Intelligence and Digital Ethics (CAIDE), University of Melbourne
Ms. Banita Singh
Psychologist and Founder at Your Human Side with Banita
Dr. Renwen Zhang
Assistant Professor, Nanyang Technological University
About the Feeling Automated project:
How can Social AI systems and emulated empathy create positive outcomes? Can emulated empathy be ethical, or is it fundamentally deceptive? Is it desirable to frame such systems as substitutes for mental health workers and human experts? We also aim to explore larger questions around the desirability of these systems and interrogate our changing relationship with machines.
There is a sharp rise in the deployment and use of interactive, human-mimicking AI systems that emulate empathy (referred to in this project as 'Social AI'). These take the form of tools specifically marketed as AI companions, therapists, or characters one can interact with, or of general-purpose systems (such as ChatGPT and Meta AI) used for emotional or affective purposes.
Evidence of harms (both individual and societal) arising from Social AI is mounting and is being documented both in the APAC region and across the world. Identified harms include adverse mental health consequences, users developing addiction to and emotional dependency on AI systems, and harmful outputs that are aggressive or perpetuate stigmas. Social AI systems are often designed to be addictive, deceptively anthropomorphic, and sycophantic, raising concerns about manipulation and interference with user rights.
This project explores the legal and societal dimensions of these systems, examining (i) the risks and harms of these technologies, (ii) the need for policy action, and (iii) the shape that regulation can take. We aim to release a comprehensive report on policy pathways to ensure the ethical design and development of Social AI, drawing on a literature review, multi-stakeholder consultations, and testing of Social AI systems in consultation with experts.
About The Pranava Institute
The Pranava Institute is a New Delhi-based research organisation that works at the intersection of emerging technology, society, policy, and design to shape sustainable technological futures. TPI's work spans two verticals: Digital Economy and Tech Geopolitics; and Technology, Society, and Design. Our areas of work include Responsible Deployment of AI in the Public Sector, Governance of Digital Public Infrastructures (DPIs), Critical Minerals Supply Chains, Semiconductors and Electronics Manufacturing, Trust and Safety Online, and Youth and Digitalisation. We believe in building on India's unique social, cultural, and epistemic context to shape emic technological futures.
