Scientific communication in the age of artificial intelligence

In recent months, we have all learned about the amazing possibilities of generative artificial intelligence (AI). Scientists, too, have been experimenting with the new tools. Meanwhile, AI is making its way into every level of the scientific process, and the comparison with the digitization of scientific publishing since the 1990s suggests itself. AI is changing how knowledge is acquired and shared, whether that means extracting, interpreting, and visualizing data, producing text, or turning text into graphics or animations. This challenges us on many levels.

Internal scientific communication

At the end of September, the German Research Foundation (DFG), one of the largest research funding organizations in Europe, presented its first guidelines on generative AI models. The central question: may AI contribute to research results or to new project proposals? This concerns, first of all, "internal scientific communication," the exchange of information and ideas among experts in the peer review process. Scientific progress still depends on this essential peer-to-peer communication.

In recent months, it is not only professional creatives and advertising experts who have discovered the possibilities of ChatGPT, Llama, Bard, Midjourney, DALL-E, or Stable Diffusion. These AI tools create entirely new texts and images in astonishingly little time. They are based on large language models (LLMs) with names like PaLM 2 (Google), Llama 2 (Meta), or GPT-4 (OpenAI), for which large parts of the Internet have served as training material.

Only a few months after the launch of ChatGPT and Midjourney, the DFG already felt compelled to clarify how and where these tools may be used in the scientific process and in internal communication. Its initial answer: in principle, scientists may use AI tools, for example to help analyze data or to write scientific publications and proposals. Both the use and the specific tools must be disclosed; the AI may not be named as a co-author. These are good guidelines because they create transparency. A general ban would be pointless, since compliance could not be verified anyway.

The only exception: AI may not be used in preparing review reports for the DFG. This makes sense, because confidentiality and intellectual property are violated the moment reviewers feed other people's ideas into their prompts, the instructions given to the AI. There is a reason for this: anyone using generative AI feeds content into the system and thereby helps the companies behind it improve their products. OpenAI, Meta, Google, Microsoft, and Stability AI constantly need fresh content to develop their models further, preferably content that has not already been (co-)written by an AI. Otherwise there is a risk of "model collapse": if a model is trained repeatedly on data that was itself generated by AI, it quickly loses the ability to produce a wide range of context-dependent results. The same error-prone outputs appear over and over again.
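
The mechanism can be illustrated with a toy simulation. The sketch below is a deliberately minimal assumption: the "model" only learns a mean and a standard deviation, and each generation is trained exclusively on samples drawn from the previous generation. The estimated spread drifts toward zero, and with it the diversity of the output.

```python
import numpy as np

# Toy illustration of "model collapse": a trivial "model" that only
# learns the mean and standard deviation of its training data. Each
# generation is fitted exclusively to samples produced by the previous
# generation, so estimation noise compounds and the spread collapses.
rng = np.random.default_rng(seed=42)

mu, sigma = 0.0, 1.0   # the "real" data distribution
n_samples = 20         # small training set per generation

for generation in range(1, 201):
    data = rng.normal(mu, sigma, n_samples)  # sample from the previous model
    mu, sigma = data.mean(), data.std()      # refit on purely synthetic data
    if generation % 40 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.5f}")
```

Real language models degrade in a subtler way, but the direction reported in the research literature is the same: the rare, tail-end outputs disappear first, and the model converges on a narrow repertoire.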

AI tools will enhance the productivity of science, and the number of specialist publications will keep increasing as the number of researchers around the world continues to grow. Large publishers will not mind, because their business model depends on a constant flow of new publications. At the same time, the growing volume of publications has long caused a shortage of reviewers, which is why the development departments of major scientific publishers are at least exploring whether AI can also speed up the peer review process. The risk of manipulation is real, for example by gaming an acceptance score in advance, allowing mediocre or even poor work to be published.

External scientific communication

What will happen to the flow of primary publications in the age of artificial intelligence? What resources will we, as a community, use to make results digitally available to ever more diverse public target groups? The near future may hold even greater upheavals in "external scientific communication."

First, several new channels of dissemination will emerge. Using LLM applications, specialist papers can be summarized into news items at the touch of a button. These can in turn be translated into many languages with tools such as DeepL Translate and then further adapted with Llama or ChatGPT for specific target groups, from 10-year-olds to sick relatives to well-read, highly educated people. Finally, the texts can be linguistically polished with software such as DeepL Write.
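
As a rough illustration, such a chain can already be scripted against public APIs. The sketch below is a minimal, assumption-laden Python example: the model name, the prompts, and the file abstract.txt are placeholders, and the OpenAI and DeepL clients are used only in their most basic form.

```python
import os

import deepl               # pip install deepl
from openai import OpenAI  # pip install openai

client = OpenAI()                                    # reads OPENAI_API_KEY
translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])

def ask_llm(prompt: str) -> str:
    """One-shot chat completion; the model choice is an assumption."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

abstract = open("abstract.txt", encoding="utf-8").read()  # placeholder input

# Step 1: compress the specialist text into a short news item.
news_item = ask_llm(f"Summarize this abstract as a 100-word news item:\n{abstract}")

# Step 2: translate it, for example into Dutch.
dutch = translator.translate_text(news_item, target_lang="NL").text

# Step 3: re-target the register for a specific audience.
for_kids = ask_llm(f"Rewrite this for a 10-year-old reader:\n{dutch}")
print(for_kids)
```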

Other tools such as Elevenlabs.io make it possible to convert the obtained texts into audio versions, similar to audiobooks or podcasts. The speaker’s voice can be chosen freely. AI can also create animated images or even short videos with text, which can then appear on TikTok, YouTube and Instagram, with or without animated infographics.
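
The text-to-audio step amounts to a single HTTP call. The sketch below uses the ElevenLabs REST endpoint as publicly documented at the time of writing; the voice ID and input text are placeholders, and the endpoint path and payload should be verified against the current API documentation.

```python
import os

import requests  # pip install requests

# Hedged sketch: convert a news item into speech via the ElevenLabs API.
# VOICE_ID is a placeholder; endpoint path and payload follow the public
# documentation at the time of writing and should be checked before use.
VOICE_ID = "your-voice-id"
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

resp = requests.post(
    url,
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": "Researchers have discovered ..."},  # placeholder text
    timeout=60,
)
resp.raise_for_status()

with open("news_item.mp3", "wb") as f:
    f.write(resp.content)  # the API returns the audio as raw bytes
```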

Being able to tailor any scientific topic to a target group at short notice creates opportunities for educational equity and for policy. Not only scientists themselves, who are professionally required to spend part of their time on science communication, but also the communication departments of knowledge institutions, funding organizations, and professional media will benefit.

In principle, anyone can now intervene in the publishing process with AI tools, which means that alongside unexpected treasures of science communication, a deafening cacophony is also looming. AI not only produces "hallucinations," fabricated information, of its own accord; it is also used to spread disinformation and deliberately fake content. For conspiracy theorists, generative AI is a godsend: scientific papers can easily be forged to the point of being visually and stylistically indistinguishable from genuine peer-reviewed material.

Conversely, anyone with AI extraction tools can pull from scientific papers exactly the content that suits their personal interests. One potential upside is the emergence of new engagement models comparable to citizen science. In collaboration with scientists, citizens can help adapt texts and graphics to readers' needs and media habits. AI tools can be useful here, for example by suggesting topics that interest only small target groups and receive almost no attention in traditional media.

These traditional media are also about to change. Newspaper and magazine publishers no longer have to offer access to their (archival) content through a simple full-text search that returns a list of more or less related articles. Instead, they can offer (paying) users highly individualized interaction with the content, including personal dialogue. Dictate a question or type a comment on a smartphone, and the matching media content is played back, perhaps supplemented with photos and other (moving) footage. New formats become possible, such as endless automated podcasts on a favorite topic.
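
Technically, such a dialogue with an archive is usually built as retrieval plus generation. The sketch below is a deliberately minimal assumption: a two-item toy archive, OpenAI embeddings for the search step, and a chat model to phrase the answer; the model names, prompts, and archive texts are placeholders.

```python
import numpy as np         # pip install numpy
from openai import OpenAI  # pip install openai

# Toy archive; a real publisher would index thousands of articles.
archive = [
    "Feature article: how ocean currents shape European weather ...",
    "Report: new telescope data on the earliest galaxies ...",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Unit-length embedding vectors; the model choice is an assumption."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

doc_vecs = embed(archive)

def answer(question: str) -> str:
    # Retrieve the closest article by cosine similarity, then let the
    # chat model answer strictly from that article.
    q_vec = embed([question])[0]
    best = archive[int(np.argmax(doc_vecs @ q_vec))]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Answer using only this article:\n{best}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("Why do ocean currents matter for Europe's climate?"))
```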

The abundant supply of knowledge will pose new challenges for professional science journalism. Journalistically produced content and independent reporting on science from outside remain important for the democratic functioning of our society.

It is not just data journalists who benefit from AI. A better overview of scientific fields and faster collection of reliable data from heterogeneous sources will raise the standard of journalism. But the real added value of science journalism, critical evaluation, commentary, and the provision of context, will not come from generative AI. This is where it reaches its limits, and the judgment of human experts remains crucial.

There are opportunities here to secure quality journalism. AI companies, which constantly need fresh, high-quality science and media content to train their models, should help refinance quality journalism through a mandatory levy. This will not happen voluntarily, and the resources must also be distributed fairly across the highly complex media ecosystem of the AI age. Legislators must take the initiative here.
