
Embodied Health
Reunifying Bodies With Their Data

RADHIKA RADHAKRISHNAN is a PhD scholar at the Massachusetts Institute of Technology (MIT). Her doctoral research focuses on feminist surveillance studies and critical algorithm studies in India. She has worked with civil society organisations to study the intersections of gender justice and emerging digital technologies using feminist, qualitative research methodologies. Her scholarship has spanned the domains of artificial intelligence, digital surveillance technologies, health data, and feminist internets. Radhika holds an M.A. in Women’s Studies and a B.E. in Computer Science Engineering from India.

From AI-enabled diagnostic technology to wearable health monitoring devices, digital innovations and the advent of big data are changing healthcare systems across the world. In many ways, these tools can help medical practitioners provide more preventative care, gain greater insight into patients, and increase efficiency in the healthcare sector. Yet this expansion in digital health also makes it more critical than ever that we consider the effects of working with new technologies – the biases that are baked into algorithms and the legal ramifications of working with sensitive data – to keep patients at the centre of the healthcare systems that serve them. In India, this challenge is made more difficult by the fact that the companies developing and testing digital health technologies are predominantly based in the Global North, far removed from, and with a poor understanding of, the communities who produce the data.

In this interview with Radhika Radhakrishnan, we explore the dangers that arise
when health data is understood as a resource independent from the bodies producing it. Radhakrishnan, who is a PhD scholar at MIT focused on the challenges
faced by gender-minoritized communities with digital technologies, argues that
emerging health technology has the potential to cause harm not just to one’s
privacy in the legal sense but also to cause a physical, embodied harm. She argues
for an orientation towards using health data to serve communities whose bodies
are producing it.


In your work, you put forward that health data is increasingly being treated as a disembodied resource. Can you elaborate on how bodies become disconnected from the data they produce?

One example that I’m really fascinated by is how regulatory policies currently
treat the non-consensual sharing of women’s images online. As a rule, it is viewed
and treated as a ‘data harm’ and a data privacy issue – regardless of whether it’s
an intimate photo or not. Based on the conversations I’ve had with women who
have been in these situations, this treatment doesn’t capture what they experience when their data is violated. They never describe their experience in terms of data harms but as intimate, physical, and corporeal harms. Even if it’s an image circulating in cyberspace, their physical selves still experience the consequences of its misuse. This is what the framework of seeing data as a resource does not capture.

In India during COVID-19, drone surveillance was implemented to prevent the
spread of the disease. But the public health surveillance was also surveilling bodies
of people – the vectors of the disease, rather than the disease itself. This surveillance undoubtedly evoked physically distressing experiences. A nurse I spoke to
needed to assist her neighbour during childbirth and had to leave the house in the
middle of the night to prevent the drones from seeing her. She hid in the shadows
of buildings and felt like a criminal whilst doing her job.

If you put bodies and experiences back into how we understand data, then it becomes immediately clear that something as basic as our ability to move is being affected through the control over our data. In this case, surveillance harms not only data privacy but also bodily integrity and autonomy. When we are looking to regulate data, we cannot just look at data as a resource that is disconnected from our bodies. Healthcare is care of the body. Treating data as a disembodied resource obscures this obvious link.

Why are AI-enabled medical diagnosis systems being trained and tested in India?

The cost effectiveness of collecting data, the unregulated healthcare ecosystem,
and the diversity of Indian populations make India an attractive country for
training machine learning algorithms. AI-enabled diagnosis systems, and the
algorithms that automate such systems, are being developed at a rapid pace in
India with the intent of improving healthcare access in underserved parts of the
country, which have an acute shortage of skilled doctors. They aim to assist doctors
in making diagnostic decisions and may supplement the doctor’s presence in the
future. However, because these interventions are happening exclusively within
a predatory, unaffordable healthcare sector, the introduction of new technologies
can become a method of simply using bodies and medical records of the sick and
poor as data to train machine learning algorithms.

Can you give an example of how bodies are separated from the diagnostic systems they are training?

One of the algorithms I studied was used for diagnosing diabetic retinopathy (a
diabetes complication that affects the eyes, ed.). In this case, there is a clear conflict
between the interests of the company that developed the algorithm and the best interests of the patient, as patient diagnosis is being combined with experimental algorithmic trials to reduce the cost of data collection for the technology companies and healthcare providers. This raises ethical concerns, as the already marginalised sick and poor have a reduced ability to bargain. The result can therefore be a favouring of market-driven private interests over patient interests.

The technology company that developed this AI-enabled diagnostic tool had no
understanding of the practical reality their technology would operate in, such as
whether consent forms were being given out to patients for the use of the automated tool in diagnosing them. At the same time, the patients I spoke to – most of them agricultural workers in southern India – were not properly informed about the process and were unaware of how the diagnosis was made. Although patients are given a consent form, I observed that most of them could not read or write. In distress, they simply accepted whatever procedure was asked of them. No effort is being made to understand their experiences with these automated tools. Their choice, consent, privacy, and preferences are not considered.

How do we bring data back to its original context of creation and how does doing so resist the harmful, often unintended, consequences of new health technologies?

First, we cannot simply look at data as a resource. Second, we need regulation
that takes the risk of harm to the body into account.

I also think greater accountability is needed amongst the designers, medical
practitioners, and companies developing the technologies. The pressing accountability question right now should be figuring out who’s held responsible in cases where something goes wrong with the diagnosis or where incorrect data was used. Currently, I think medical practitioners in India are somewhat sidestepping this accountability question. As AI is often marketed as a social good, there is a deliberate ignorance about its harmful risks, which results in an evasion of ethical responsibilities to the sick and poor.

The questions we must ask should centre around what machine learning is replacing when we sell its applications as products that are a panacea for social problems. We need to be able to pre-empt certain dangers stemming from technological interventions. If companies are using the experiences of underserved communities, especially marginalised ones, then they need to ensure that the applications they build benefit those communities in return and increase their agency.


CHECK-UP is a Q&A series by Maya Ellen Hertz and Sarah Frosh exploring advancements and providing critical reflections on innovations in digital health. From telemedicine and electronic health records to wearables and data privacy concerns, the article series includes interviews with experts in fields across law, engineering, and NGOs who shed light on the myriad complexities that must be considered in the wake of new digital health technologies.