The key point I would make about regulating deepfakes is that new laws may be needed in some cases, but much of the harm deepfakes cause is already covered by existing laws, or could be addressed by amending them.
Despite alarmist news stories heralding deepfakes as the end of democracy or of truth itself, the technology – for better or worse – is far from perfect. This suggests there is still a window of opportunity to prepare society, institutions and regulatory frameworks for the moment it is.
Rhetoric about the ‘end of truth’ plays into the hands of people who are already claiming you can’t believe anything – and that is not true of most audiovisual material, nor yet true of deepfakes. We should not panic but prepare instead.
There is a strong need for data protection laws that can adequately address the misuse of user data harvested without consent by third-party applications, political parties and politicians, researchers, and others.
How useful can machine learning be in dealing with vectors of disinformation such as deepfakes or bots, and what are the implications of AI-powered fact-checking and deprioritising systems for media pluralism and freedom of expression?