Interview with Nicholas Diakopoulos, associate professor in communication studies and computer science at Northwestern University where he directs the Computational Journalism Lab (CJL). His research focuses on computational journalism, including aspects of automation and algorithms in news production, algorithmic accountability and transparency, and social media in news contexts.
One of the conclusions of the 2019 LSE report on AI in journalism is that cultural resistance is one of the challenges that might hold back the application of AI. Do you have any advice on how to tackle this challenge?
Do you think that the coronavirus crisis might accelerate the application of AI in journalism?
Industries often change after a systemic shock. After the financial crisis of 2008 there were huge personnel cuts in newsrooms. The coronavirus crisis might also force newsrooms to reimagine their workflows. If an AI tool can do certain things that people were doing before, a news organization might need fewer people for the same work, but it will also need to reskill people to work together with AI. Overall, I'm optimistic that AI is not going to hurt journalism in terms of the number of jobs in the long run. AI simply can't do all the complex thinking and communication that good journalism requires. But the types of jobs in journalism will evolve. Some will look less like traditional reporting jobs and will involve more IT skills.
What will be the impact on journalism of AI tools that can automatically create text, sound, images, and video?
We hear a lot right now about deepfake images and videos. How do you verify whether an image is real? What will deepfakes mean for trust in the media? Journalism will need to get a grip on these issues. On the positive side, such AI tools can potentially be very powerful for creating useful content. You might feed one a dataset, press a button, and it will generate a paragraph that accurately describes what the data are telling us. AI-driven organizations are investing in and researching this. If they succeed, they will be able to produce content at scale, and that could be a competitive blow to traditional media. That's my worry.
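The "feed it a dataset, press a button, get a paragraph" idea can be illustrated in its simplest form: template-based data-to-text generation, where a program reads structured data, computes a few figures, and fills them into prose. A minimal sketch (the sales dataset, field names, and wording here are hypothetical illustrations, not any particular newsroom's system):

```python
# Minimal sketch of template-based data-to-text generation: compute
# summary statistics from structured data and slot them into a
# human-readable paragraph. Dataset and template are made up.

def describe_sales(data):
    """Turn a small per-region sales dataset into a one-paragraph summary."""
    total = sum(row["units"] for row in data)
    best = max(data, key=lambda row: row["units"])
    worst = min(data, key=lambda row: row["units"])
    return (
        f"Across {len(data)} regions, {total} units were sold. "
        f"{best['region']} led with {best['units']} units, "
        f"while {worst['region']} trailed with {worst['units']}."
    )

sales = [
    {"region": "North", "units": 120},
    {"region": "South", "units": 95},
    {"region": "East", "units": 150},
]

print(describe_sales(sales))
# → Across 3 regions, 365 units were sold. East led with 150 units, while South trailed with 95.
```

Production systems (and the neural text generators the interview alludes to) are far more sophisticated, but even this template approach shows why scale is the draw: the same code can turn thousands of datasets into thousands of accurate paragraphs with no marginal human effort.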
Earlier this year, Microsoft used a bot to write news. However, it was quickly discovered that the bot discriminated. What is your opinion on such a news-writing bot?
If Microsoft had asked me for advice, I would have told them to keep a human in the loop of the editorial process. Every time a company removes people completely from the process, some mishap occurs. AI easily picks up on popularity signals, but people are better at recognizing important editorial criteria that are not as easy to measure as popularity.