Prof. Mike S. Schäfer on ChatGPT and other generative AI tools: ‘A gamechanger for science communication’

Mike S. Schäfer, Professor of Science Communication at the University of Zürich (Switzerland), has been investigating communication and artificial intelligence (AI) for several years. He is currently focussing on how the development of the technology and its impact on society are envisioned in public debates in China, the US and Germany. He has recently published an article on the implications of generative AI for science communication.


What is ChatGPT?

Mike S. Schäfer: ChatGPT is a chatbot. It’s based on GPT, which stands for Generative Pre-trained Transformer, of which there are several versions. GPTs are large language models (LLMs) developed by OpenAI with a dialogue functionality that provides original, human-like responses to user prompts about all kinds of topics, from science and literature to sports. The responses are based on extensive digital training data and human feedback.
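For a concrete picture of the prompt-and-response interaction just described, here is a minimal sketch using OpenAI’s Python client. The model name is illustrative and an API key is assumed to be set in the environment; the exact interface depends on the library version.

```python
# Minimal sketch: send a prompt to a GPT model and print the reply.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Explain CRISPR gene editing in two sentences."},
    ],
)
print(response.choices[0].message.content)
```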

LLMs have been around for a while, but ChatGPT’s high-quality answers and chat functionality have made it a huge success. Only two months after it was launched in November 2022, it had reached 100 million users.

However, ChatGPT and its competitors, like Google’s Bard or Anthropic’s Claude, are ‘black boxes’: details about the training data and processes that were used have not been disclosed. A lot of information around ChatGPT and many generative AI tools is proprietary, which hinders detailed analyses of copyright infringement and biases.

Responses will likely overrepresent some topics and views, may draw more heavily on certain regions of the world, and may have certain values embedded in them.

In order to analyse this, many researchers are now trying to ‘reverse engineer’ GPT, that is, to reconstruct information about the training data from GPT’s output to different questions.
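How exactly researchers do this varies and is an active area of work; as a purely illustrative sketch of the underlying idea, one crude probe is to ask the model the same question many times and tally what comes back. The probe question, country list and model name below are hypothetical choices for illustration, not a published method.

```python
# Crude output-based probe: repeatedly ask the same question and
# tally which countries the answers mention. A heavily skewed tally
# can hint at geographic skew in the training data. Illustrative only.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()
PROBE = "Name one influential living climate scientist and their country."
COUNTRIES = ("USA", "United Kingdom", "Germany", "China", "India", "Brazil")

counts = Counter()
for _ in range(20):
    reply = client.chat.completions.create(
        model="gpt-4",    # illustrative model name
        messages=[{"role": "user", "content": PROBE}],
        temperature=1.0,  # sample diversely to see the spread
    )
    text = reply.choices[0].message.content.lower()
    for country in COUNTRIES:
        if country.lower() in text:
            counts[country] += 1

print(counts.most_common())
```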


What are the possibilities of using ChatGPT for science communication?

Mike S. Schäfer: I am convinced that generative AI is a gamechanger for science communication. That includes ChatGPT, but also tools like Midjourney or DALL-E that generate imagery, as well as tools that translate or paraphrase text, produce sound, video and so on. These tools will have a profound effect on science communication. From what I’ve read, scholars and practitioners in the field agree, but their assessments oscillate widely between highly positive and rather dystopian.

On the positive side, generative AI can help communicators and journalists generate content ideas; there are some interesting ongoing projects that use AI to suggest headlines or story angles for science journalists. GPT can also summarise scholarly publications and findings and adapt its answers to user needs.

And, while ChatGPT currently produces written text, Visual ChatGPT is coming. Some users are already able to use GPT together with other tools to produce visuals, computer animations, videos and even games.

Importantly, generative AI might be able to enable dialogical science communication at scale. Users not only receive quick replies to their science-related questions; they can also keep asking follow-up questions until they get responses they find fully satisfactory. They can ask for simpler explanations, for examples, for sources and so on. This is a tantalising prospect for broadening, even democratising, dialogical science communication, which has often been limited to small groups.
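What makes this kind of sustained back-and-forth work is, technically, quite simple: the accumulated conversation is resent with every turn. Here is a minimal sketch of such a dialogue loop, again assuming OpenAI’s Python client, an API key in the environment and an illustrative model name.

```python
# Minimal dialogue loop: the growing history is what lets users ask
# for simpler explanations, examples or sources until satisfied.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are a patient science explainer."}]

while True:
    question = input("You: ")
    if not question:  # empty line ends the conversation
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("AI:", answer)
```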


Surely, there are also risks, right?

Mike S. Schäfer: Absolutely, and considerable ones. ChatGPT and similar products still have substantial weaknesses when it comes to science communication. Their answers are based on training data without a deeper understanding of the actual content. This also applies to things like numbers or references, which can end up being wrong or fictitious. Although the latest version, GPT-4, shows considerable improvements in accuracy and sourcing, users should remain wary of this limitation and check factual claims carefully when using these tools.

Researchers have also expressed concerns about disinformation: on topics on which there is a considerable amount of dis- and misinformation, generative AI can become a powerful tool for further spreading misleading narratives, with potentially serious consequences, for example on health issues.

There are also fears that AI tools will put writers’ jobs at risk. Given the tight financial situation of universities, scientific institutions and science journalism in many countries, this is a legitimate concern.


How will ChatGPT affect the reception of scientific knowledge?

Mike S. Schäfer: This is really hard to say – we urgently need studies on user interactions with generative AI that assess how people use it, whether they trust it (or not), and how it affects their knowledge, even what they think about knowledge in general and how they act upon these beliefs. Such studies would need to take differences between people into account. In science communication, ‘the public’ is never a unified, homogenous mass of people. There are different target groups in science communication and each is likely to engage with ChatGPT differently.

One pressing question is whether generative AI creates new divides between users. There could be ‘first-level’ divides, in terms of access: generative AI may not be available to everybody because it may be too expensive for some users. But there could also be ‘second-level’ divides: because of differing skills in dealing with the technology, some people may be able to use generative AI more fruitfully than others, thus widening gaps between groups of people.


Are GPT-generated answers easily spotted?

Mike S. Schäfer: There is a lot of work going on in this field right now. Educational institutions, plagiarism-detection companies and some providers of generative AI themselves are working on modes of detection that could ultimately be used in educational settings. In general, AI-generated content is more difficult to detect than plagiarism because it is not simply copied and pasted from another text – the content is new, created in a novel way, and it differs from the underlying training data.

Detection tools will likely only be able to give a probability, a percentage indicating whether a piece of text might have been generated by AI. Universities are dealing with generative AI in different ways; many are developing their own guidelines, with some demanding disclosure when AI tools are used.
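To see why such tools can only ever output a probability, consider one signal that detection approaches draw on: how statistically predictable a text is to a language model. The toy score below, computed with the freely available GPT-2 model via the Hugging Face transformers library, is a deliberately simplified sketch of this idea, not any real detector’s method.

```python
# Toy detection signal: text that a language model finds very
# predictable (low perplexity) is somewhat more likely to be
# machine-generated. Real detectors combine many such signals.
# Assumes `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy
    return float(torch.exp(loss))

sample = "Generative AI is reshaping how science is communicated."
print(f"perplexity: {perplexity(sample):.1f}")
```

Low perplexity only weakly suggests machine authorship; plenty of human writing is predictable too, which is exactly why detectors report percentages rather than verdicts.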


How do you envision the future of ChatGPT?

Mike S. Schäfer: ChatGPT has cornered the market for more than half a year now. In this time, it has fundamentally changed the public perception of generative AI, triggered important discussions in science and academia, and has shifted the balance of power between technology companies who were a step ahead with the technology, like Microsoft, and others who were not there yet, like Google. Many competitors have now positioned themselves – we’ll have to see whether ChatGPT remains as popular as it is right now.

With regard to generative AI in general, beyond ChatGPT, the technology is here to stay and we have to get used to these kinds of tools being around. Instead of seeing them as illegitimate, we should include and integrate them into our work where possible. That means embracing their opportunities, playing around with them, deepening our knowledge of them and honing our skills in using them.

How to properly prompt generative AI – our ‘prompting literacy’, if you like [communicating with AI to get the response you are looking for, ed.] – will, for example, become an important skill.
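As a concrete, entirely illustrative example of what such prompting literacy looks like in practice, compare a vague prompt with one that pins down audience, format and sourcing:

```python
# Two prompts for the same information need; the second typically
# yields a far more usable answer. Both wordings are illustrative.
vague_prompt = "Tell me about vaccines."

literate_prompt = (
    "Explain how mRNA vaccines work to a curious 14-year-old, "
    "in three short paragraphs, using one everyday analogy, "
    "and end with two reputable sources for further reading."
)
```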

We should keep using generative AI tools critically, and be aware of their limitations and underlying biases; they exist, they are helpful in some ways, and they are problematic in others. If we use, deliberate on and regulate the technology in the right way, we might also be able to remedy these challenges somewhat. Science communication scholars and practitioners should actually be quite well positioned to do so.