‘We should protect the intangible aspects of our mental worlds’

“Once we start seeing ourselves only in terms of data, we’ve reduced ourselves to something that can be bought and sold.” Interview with philosopher and cognitive scientist Ophelia Deroy.

Professor Ophelia Deroy (Ludwig-Maximilians-Universität München, Munich Interactive Intelligence lab and University of London) is an expert in an interesting and relatively new domain: neurophilosophy. She recently spoke at the workshop ‘Neurotechnology and neurorights – Privacy’s last frontier’, organised by the European Parliament’s Panel for the Future of Science and Technology (a recording is available).


As a philosopher working with new technologies, what do you think is better for the development of neurotechnology (in other words any method or electronic device which interfaces with the nervous system in order to monitor or modulate neural activity): a solid theoretical foundation or a simple technological push?

Ophelia Deroy: It is important to say that everyone is free to express their ideas about the pros and cons of futuristic technologies – as long as they respect moral and legal standards about human rights and dignity. Take Marc Andreessen’s recent Techno-Optimist Manifesto, where he raves about “the eros of the train, the car, the electric light, the skyscraper. And the microchip, the neural network, the rocket, the split atom”. If his idea of a good distant future is one where we have microchips in our brains, he has the right to express it and to try to convince people to listen to his views.

I don’t share the appeal of such views, but it is, in principle, good to have debates about the future in a democracy.

So the question is not who has the right to speak about the future, but who has the right to influence politics, education, diplomacy, law, climate or financial markets with their views. These are common goods, and we need to ensure they are shaped by shared and sound visions of the future. This is where we need more than technological manifestos: we need lawyers, politicians, scientists and experts, and, of course, principally, civil society representatives and citizens.


What would then be the risk of letting certain people or views influence society?

Ophelia Deroy: In the area of neurotechnologies, letting individual or shallow agendas dominate over shared and sound perspectives raises serious issues. First, some technologies like brain implants or EEG-headsets are turned into hype, presenting gadgets as solutions to major problems when, in fact, the goal is to make money.

Second, the promises are often shallow and misleading: they blur the line between prediction and fiction, and they push some very questionable value systems about human enhancement and hierarchies.

Take brain-computer interfaces, which could enable people with paralysis to control devices mentally, or brain implants that might treat conditions like Parkinson’s disease. These are highly regulated medical devices. There is no cheap “gadget” version that can be transferred to the market for enhancement purposes: it is not plausible, and even if it were, the consequences for individuals and society are not something that a few actors in the market are entitled to decide for all of us.

So what we need first and foremost, even before experts, is for the public to participate in these decisions. For this, we need debates between philosophers, neuroscientists, and legal and medical experts to debunk shallow claims and propose a sound, positive vision of those technologies.

At the moment, we let a few entrepreneurs be the only voices to offer a shiny, optimistic view of the future based on technology. Public institutions are pushed into catching up and stressing the limits and risks. Who do you think people will prefer listening to?

So, in the end, I see the philosophical challenge as that of shaping another positive vision for neurotechnologies and technology in general that serves the common good, and promotes equality and freedom.


What is, in your opinion, the most important aspect/angle of the neurorights debate? Is there anything overlooked currently?

Ophelia Deroy: To me, the debate around neurorights is mostly a Trojan horse to address wider concerns we’ve ignored for too long. Neurorights are about setting ethical and legal boundaries to protect our privacy, identity, and agency against unauthorised manipulation or data extraction. The focus tends to be on the brain and neurotechnologies, but this is just the tip of the iceberg.

In reality, there are many ways in which our behaviour and physiological responses are being monitored and used without our permission, affecting our privacy and autonomy just as much. It’s important to recognise that while neurorights highlight brain-related issues, they are part of a larger conversation about respecting and safeguarding our overall mental and personal integrity.


In your recent presentation at the STOA workshop in the European Parliament, where experts discussed neurorights, you highlighted the importance of distinguishing between regulating and/or protecting an individual’s “brain state” versus one’s “mental state”. You compared the latter with UNESCO’s intangible heritage preservation framework. What do you think are the necessary neuroscientific preconditions to enable future discussions on mental state regulations?

Ophelia Deroy: Protecting an individual’s mind and personal identity is a matter of profound complexity. We must ask ourselves what it means to preserve not just the integrity of the brain as an organ but also what it means to be a person with a mind.

Wearing a cognitive neuroscientist hat, I would say it means protecting thinking, reasoning, imagining, feeling, moving, exploring, sleeping, remembering, deciding, hesitating, attending, defining ourselves, asking questions, protesting, and changing our minds. Even sleeping and dreaming are parts of that list, and interestingly, these are core areas where neurotechnology promises optimisation and intervention.

I believe we need to think that way to get away from the talk of data and who owns it. Once we start seeing ourselves only in terms of data, we’ve reduced ourselves to something that can be bought and sold. This is particularly relevant in the ongoing debates about large language models (LLMs). When you write down a thought, it’s tempting to think that your thought and the language data you create are the same. But consider sleep: sleeping is not the same as generating sleep data. Our minds produce outputs and data, but we are more than just what we generate. Just like we work to preserve the intangible cultural heritage of societies, we should also protect the intangible aspects of our mental worlds.

We must be careful. We should protect not only a single viewpoint of what the mind is, but also the diverse range of abilities that human minds possess. I’m currently thinking a lot about how to recruit existing perspectives to do this, such as philosopher and economist Amartya Sen’s ideas about capabilities, or UNESCO’s frameworks for cultural heritage, to find new ways to think about this issue.


Design fiction is a design practice to explore and also criticise possible futures by creating speculative scenarios (like the dystopian scenarios portrayed in the Black Mirror TV series). What role, in your opinion, does that play when steering public debates on new technologies and their potential risks?

Ophelia Deroy: Fictional scenarios like those in Black Mirror can play a crucial role in shaping public debate about new technologies and their potential risks. By presenting dystopian scenarios, design fiction doesn’t just forecast doom; it provides a framework for critical thinking. It encourages the public to question and scrutinise new technologies’ ethical, social, and psychological implications before they become deeply integrated into our lives. This is crucial in an era where rapid technological changes often outpace the development of corresponding ethical guidelines and regulations.

It is not just about books and movies: increasingly, you see visual artists and exhibitions in museums presenting us with versions of ourselves in the future, for instance, the works by Refik Anadol showing what an AI-generated dream could be like. I was recently at the Biennale of Technology in the Museum of Tomorrow in Rio, and there were so many amazing works that challenged visitors to experience human-technological enhancement.

Those act as concrete meeting points for society’s collective imagination, allowing us to explore the “what ifs” of technological advancements in a sensory and narrative form, but also crucially to have these reference points in common.

This is essential because human beings are naturally inclined to understand complex concepts through stories and their senses, but also because we are better at reasoning and discussing these concrete examples.

So in my view, design fiction doesn’t just reflect societal fears and aspirations; it actively shapes the collective discourse around technology.


Regarding “the idea of oneself”, what do you think of real-time modelling of users’ mental states – a kind of “digital twin”? How far are we, in your opinion, from such developments? Could you see the potential benefits of such a “digital twin” predicting or monitoring someone’s mental state, e.g. in education, mental health or self-development?

Ophelia Deroy: We are pretty far from the technological advancements required to do this. But again, it is essential to see what vision of the future is at stake here: say we can create your “digital twin” capable of accurately modelling and predicting your mental state; what are the potential benefits and risks?

In education, it could tailor learning approaches to your cognitive styles and emotional states, potentially enhancing learning efficiency. In mental health, it could provide early warning signs of distress or deterioration, allowing for timely intervention. However, the idea of having one’s thoughts and inner states constantly monitored and analysed raises serious concerns about autonomy and mental privacy.

But philosophically, there is more: to me, the very concept of a “digital twin” primarily reflects our current preoccupation with individual decisions and actions. This perspective ignores the deeper aspects of our mental life: the thinking, reflecting, hesitating, and conversing that precede our actions. Our sense of self is influenced as much by the actions we choose not to take as by the ones we do. Moreover, our identity isn’t just a personal matter; it’s inherently social and shaped through interaction with others. What is your twin without everyone’s twin around?

European Science-Media Hub