
ChatGPT: scientific journals will not accept articles that credit the AI as an author

Springer Nature has banned the listing of language tools such as ChatGPT as authors in its scientific publications, on the grounds that an artificial intelligence (AI) cannot take responsibility for the work.


OpenAI’s ChatGPT has surprised with its ability to write text in a way that makes it difficult, if not impossible, to detect that it is not the work of a person. This has raised concern in fields such as science about the ethical use of this tool and other similar large language models (LLMs).

“The great concern in the research community is that students and scientists can misleadingly pass off text written by an LLM as their own, or use LLMs in a simplistic way (such as to perform an incomplete literature review) and produce work that is unreliable,” the publishing group Springer Nature explains in an editorial.


This situation has led to an update of the guidelines for publishing research in journals such as Nature, in pursuit of greater transparency and veracity in articles.

For this reason, they will not accept publications in which a language model such as ChatGPT is identified as an author, because “any attribution of authorship carries responsibility for the work, and AI tools cannot assume such responsibility.”


Nor will they accept work that has used one of these tools for documentation support without declaring it in the methods or acknowledgments sections, although they also allow it to be noted in the introduction “or other appropriate section.”

Source: El Comercio
