The news that some scientists have included ChatGPT (the artificial intelligence system developed by OpenAI) among the authors of scientific papers has prompted immediate reactions from both the research community and the editors of peer-reviewed journals (1). In reality, ChatGPT is not the first artificial intelligence (AI) to be credited as co-author of a scientific paper: as early as last year, a paper co-written by the GPT-3 bot, a precursor of the system that in recent weeks has been intriguing and worrying content producers from journalists to scientists, appeared on a preprint site (2).
Nor is it the most curious of authors: the annals of scientific publishing count several animals and invented characters with funny or ironic names among the co-authors of papers, starting from an iconic case, that of the American physicist William Hoover, who included among the co-authors of some of his works his colleague Stronzo Bestiale (an expression Hoover had overheard by chance during a trip to Italy) (3).
The responsibilities of the author
Why, then, so much agitation over what seems a largely expected evolution of technology? The answer lies in the definition of the author of a scientific paper, the result of years of reflection on the ethics of scientific research and of many cases of scientific fraud, some of which (the most striking) were based on entirely invented data (4). Scientific fraud has sometimes been committed by only a few participants in a research project, but the presence of several authors has made it difficult to attribute a precise responsibility to each of them. For this reason, the norm accepted today is that all the authors of a scientific paper are responsible for every part of it and are required to check one another's work. There are some exceptions to this rule (such as large consortia signing clinical trials with thousands of names), mitigated by the increasingly common request to specify, at the bottom of each paper, what each author's factual contribution was, regardless of their position in the list of co-authors.
Authorship requirements
In the most recent version of its guidelines, which have been updated twenty times since 1979, the International Committee of Medical Journal Editors (ICMJE) recommends basing authorship assessments on four criteria. To be included as an author, one must:
- have made a substantial contribution to the conception or design of the work, or to the acquisition, analysis or interpretation of data;
- have drafted the article or critically revised it, adding important intellectual content;
- have given final approval of the version to be published;
- have agreed to be held accountable for all aspects of the work, ensuring that issues relating to the accuracy or integrity of each part of the work are properly investigated and resolved.
Although these rules are mainly applied in the biomedical sector, they have also been extended to other disciplines through institutions such as the Committee on Publication Ethics (5).
A tool to mention
Returning to ChatGPT and generative systems (i.e. systems able to generate texts based on the complex laws that govern natural language), their use appears to conflict with most of the rules that define the right to call oneself an author, first and foremost that of taking responsibility for the results obtained. And this is the direction that publishers are taking: after the position taken by Nature, which decided not to accept AI as an author, other journals such as JAMA have followed the same path (6).
This does not mean that the tool cannot be used: its use is allowed, but it must be mentioned as such in the section describing the methods with which the study was conducted, as is done for any other instrument.
However, the problems are not over, particularly in areas where accuracy of information is crucial.
«The academic publishing community has been quick to raise concerns about the potential misuse of these language models in scientific publications,» the authors of the JAMA editorial write. «Some have experimented by asking ChatGPT a series of questions on controversial or important topics (for example, whether childhood vaccinations cause autism), as well as specific technical and ethical questions related to publication. The results showed that ChatGPT's textual answers, while mostly well written, are difficult to distinguish from human-written text yet out of date, false or fabricated, lacking accurate or complete references and, worse still, supported by fabricated, non-existent evidence for the claims made.»
In addition, texts generated by drawing on already published information could fall within the definition of scientific plagiarism, even though tools such as ChatGPT can rewrite text with such variability that it escapes ordinary anti-plagiarism software and can be detected only by the tools that the creators themselves are releasing these days.
«OpenAI recognizes some limitations of the language model, including that of providing plausible but erroneous or nonsensical answers, and notes that the recent release is part of an open, iterative deployment intended for human use, interaction and feedback to improve it.» In essence, experts say, the model is not ready to be used as a source of reliable information without careful human oversight and review, at least in the field of medicine.
Broader ethical issues
There are, however, other ethical issues on which the scientific community will have to reflect, since the instrument will only improve over time. For example, such a tool could bridge the linguistic gap between native English-speaking scientists and everyone else, facilitating the publication of research conducted and written in other languages.
On the other hand, there is an objective problem of overproduction of scientific content, which already makes it almost impossible for an expert to follow the evolution of their own disciplinary field; it is not clear why the scientific community should promote a tool that increases the speed and quantity of papers, whereas it might well be interested in one that enabled science of better quality and greater statistical significance. Finally, the improvement of these tools could downgrade the ability to write a scientific paper from an essential requirement for doing science to an ancillary competence, putting a premium instead on the ability to verify data and the structure of texts, so as to keep human responsibility for these products of the intellect intact.
In the meantime, anyone planning an article written with the help of artificial intelligence should follow the recommendations that editors have shared in recent days:
- the sections created with the AI must be clearly highlighted, and the methodology used to generate them must be explained in the paper itself, including the name and version of the software used, in the name of transparency (an illustrative wording follows this list);
- the submission of papers entirely produced by AI, particularly systematic reviews of the literature, is strongly discouraged, partly because of the immaturity of the system and its tendency to perpetuate the statistical and selection biases present in its creators' instructions, unless the study is aimed precisely at evaluating the reliability of such systems (an objective that must, obviously, be declared in the article itself);
- the generation of images and their use in scientific papers is discouraged, because it is contrary to the ethical rules of scientific publishing, unless such images are themselves the subject of the investigation.
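By way of illustration, and purely as a hypothetical example rather than wording endorsed by any specific journal, a methods-section disclosure consistent with these recommendations might read: «A first draft of the introduction was generated with ChatGPT (GPT-3.5, OpenAI, January 2023 release); the text was then verified, corrected and approved by all the authors, who take full responsibility for its content.»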
Source — https://www.univadis.it/viewarticle/chatgpt-come-coautore-di-paper-scientifici-si-pu%25C3%25B2-fare-2023a100029a