A scientist’s opinion: interview with Dr Mariya Stoilova on recent social media bans for children

Dr Mariya Stoilova is a Postdoctoral Researcher in the Department of Media and Communications at the London School of Economics and Political Science (LSE) and Manager of the Digital Futures for Children centre (DFC), which investigates children’s rights in the digital environment. Among other measures, she recommends combining strong enforcement of existing regulation, child-rights-by-design obligations on platforms, and ongoing consultation with children. “This moves the debate from symbolic action towards structural change.”


Your work shows how hard it is to prove technology causes harm. How strong is the evidence supporting the laws to ban or strictly restrict social media for children?

Mariya Stoilova: A central challenge is that the evidence base does not support simple causal claims between technology use and harm. The research landscape is genuinely mixed: there is robust evidence of both risks and benefits associated with technology use, but far less evidence demonstrating that restricting access improves outcomes. Much of the strongest evidence points to specific design features – algorithmic amplification, recommender systems, and data-driven engagement loops – that shape children’s experiences. There is no evidence that bans work to make children’s lives better.

This leads to a disconnect: laws are often shaped more by public concern and political demands than by solid evidence. The Australian case illustrates this clearly – millions of accounts were removed, but we still lack evidence on wellbeing outcomes or behavioural change.


Digital divides worsen social inequalities. Who is most affected, and is this being discussed in policy circles?

Mariya Stoilova: For many vulnerable or marginalised young people, digital platforms are not simply leisure spaces; they are critical infrastructures for social connection, identity formation, and access to information or support that may be unavailable offline. Restricting access risks removing these benefits disproportionately from those who rely on them most. Meanwhile, more advantaged children are better able to circumvent restrictions – through multiple devices, parental support, or higher levels of digital literacy – while others cannot. As a result, restrictions tend to deepen inequalities rather than mitigate them.


Your research suggests that risks and opportunities coexist in unregulated digital spaces. Does limiting access cut harm, or just shift it elsewhere?

Mariya Stoilova: The evidence strongly suggests that restriction alone does not eliminate harm – it redistributes it. Digital environments are fluid, and young people adapt quickly to regulatory changes.

Early evidence from Australia illustrates this displacement effect clearly: children have not stopped going online; instead, they have shifted to alternative platforms – gaming environments, messaging apps, and AI tools – many less regulated or scrutinised. Screen time has not necessarily decreased; it has moved.

Moreover, displacement can increase risk exposure. When young people migrate to fringe or less moderated spaces, they may encounter more extreme content or weaker safeguards.


Why do current ban proposals focus on access, while commercial exploitation of children’s data – a serious and overlooked risk – remains largely absent from legislative discussions?

Mariya Stoilova: Our research at the Digital Futures for Children centre – a joint initiative by the London School of Economics and Political Science and the 5Rights Foundation – consistently highlights that commercial data exploitation, one of the most serious and well-evidenced risks, is not accidental; it’s structurally embedded in platform business models. Algorithmic systems, targeted advertising, and engagement-driven design are explicitly engineered to maximise attention and data extraction. A ban does little to disrupt these systems.

If policy does not engage with these underlying economic and technological drivers, it risks addressing the most visible aspect of the problem while leaving its root causes intact.


Have children and adolescents been consulted when developing these bans, and what does research say about their views?

Mariya Stoilova: Children’s voices have been notably underrepresented in the design of proposed bans, despite being the group most directly affected. Where children have been properly consulted, the findings are strikingly nuanced. Large-scale consultations – for example, with tens of thousands of UK children – show that young people recognise both the risks and benefits of digital technologies. They do not simply call for bans; rather, they consistently ask for safer, more age-appropriate environments.

Research also shows that young people feel overwhelmed by digital pressures but simultaneously held responsible for managing them, despite having little control over platform design.


What’s missing in today’s debate on social media and youth?

Mariya Stoilova: Perhaps the most important missing piece in the debate is a shift from a binary framing (ban or not ban) towards a systemic, design-led approach. The focus shouldn’t just be on restrictions, but also on what we offer. If access is limited, what digital or physical spaces can support young people’s socialisation, learning, and well-being?

A more effective approach would combine strong enforcement of existing regulation, child-rights-by-design obligations on platforms, investment in digital literacy and public infrastructure, and ongoing consultations with children. This moves the debate from symbolic action towards structural change.

European Science-Media Hub