There is no doubt that Artificial Intelligence (AI) will be increasingly present in humanity's daily life. Designed to emulate human cognitive capacity, AI easily surpasses us in speed, performing in mere seconds tasks that would take a human being days, weeks, or even months.
Beyond automating tasks, its capacity for continuous learning allows it to evolve and expand into new areas, ranging from image-generating AI, which produces extremely realistic illustrations, to conversational AI such as ChatGPT.
AI can bring many benefits to humanity; however, its indiscriminate use can lead to catastrophic consequences. Therefore, in this text, I'll address the dangers associated with using AI in scientific research, especially in the healthcare field.
Aspiring to understand more about technological advancement, I discovered the world of conversational AI. My first contact came through a coworker who warned me about the danger such platforms could pose to the future of scientific research, especially regarding academic plagiarism, since many students have begun using them to write their academic papers (YANG, 2023).
His concern rested on the fact that texts produced by AI draw on a vast amount of information available on the internet without properly referencing the data used, making it impossible to determine the original source or whether the text was plagiarized from another author. In this context, he asked himself: "Could current plagiarism-detection programs identify this 'patchwork writing'?" Despite these concerns, I initially didn't believe a technology capable of producing texts with such a level of sophistication existed. To my surprise, it was more than possible.
I then began to learn more about conversational AIs and was surprised by their writing ability: they can emulate scientific writing so convincingly that it becomes impossible to tell whether a text was written by a human or a machine. With that in mind, I began my journey exploring ChatGPT, which I soon discovered was a source of great potential, but also of serious dangers.
From the beginning, I wondered where all that information came from. When, out of curiosity, I asked it to create an example article and inquired about the sources it had used, the answer was always the same: "As a language model, I do not have access to the specific list of sources". This answer immediately brought my coworker's warning to mind, and what had been just a "nagging doubt" soon became a real "elephant in the room".
Testing how the chatbot might assist me during my research, I asked it to provide data on the participation of nursing students in projects and research groups. In response, it listed several sources I could use to gather this information: "National Survey of the Profile of Nursing Graduates (2018)", "Higher Education Census (INEP)", and "National Registry of Research Groups (CNPq)" – apparently nothing to be suspicious of.
I then asked it to specify the data behind this information, and received the following excerpt in response:
I immediately thought, "Excellent! This is exactly the data I needed for my research and couldn't find anywhere". Obviously, as a researcher, I did the right thing: I searched for this data on Google to find and review the original research. However, there were no results. So, I started asking the chatbot for more information, requesting everything from the exact title of the "research" to the access link, but none of these attempts yielded positive results.
It was at this point that I encountered something extremely dangerous: all the links the chatbot sent me were either broken or redirected me to another article or page with no connection to the "research" mentioned. Even the titles of the articles, when searched on Google, yielded no results. So, I decided to ask it directly whether it was fabricating the presented data, and its response was:
Since I knew that the "Profile of Nursing in Brazil" survey really existed, I began a 'reverse engineering' process to verify the information the chatbot had presented to me. However, the only survey available on the website of the Federal Nursing Council – COFEN (http://www.cofen.gov.br/perfilenfermagem/index.html) is not from 2018 and does not present data on the participation of nursing students in research projects during their undergraduate studies. Nor did any published study cite such a document.
Given the level of detail in the answer, one might even wonder whether the research had once existed but been removed from the internet. So, I took the liberty of contacting the researchers responsible for the study and asked about the information presented. By email, they sent me all the publications derived from the original research, as well as the final report and the databases available on the COFEN website. After much reading, none of the information generated by the AI was found in any of the materials, indicating a high probability that the data had been fabricated by the chatbot.
It's worth noting that whenever I requested reference lists, all of them, without exception, consisted entirely of fictitious studies. The chatbot credits real authors, experts in the specific research area, with studies they never conducted. In other cases, the titles were invented or mixed, that is, the titles of two or more studies were combined to generate a new one. At first glance, however, the references look real, since the chatbot, thanks to its training, takes care to format them according to citation standards, presenting a journal name, volume, pages, and even a Digital Object Identifier (DOI), which was invariably false or belonged to a different study.
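For readers who want to screen a chatbot-generated reference list before trusting it, the checks described above (does the cited link resolve? is the DOI actually registered?) can be partly automated. The snippet below is a minimal sketch of that idea, assuming Python with the `requests` library and the public Crossref REST API (api.crossref.org); the first DOI is the real one from the Owens (2023) article cited in this text, while the second is a deliberately invented placeholder. A DOI missing from Crossref is not, by itself, proof of fabrication – some publishers register DOIs elsewhere – so any automated result still needs manual review.

```python
import requests

def doi_is_registered(doi: str) -> bool:
    """Return True if the DOI is found in the Crossref registry."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

def link_resolves(url: str) -> bool:
    """Return True if the cited link answers with a non-error status (redirects followed)."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Real DOI taken from the reference list of this text (OWENS, 2023): expected True.
print(doi_is_registered("10.1038/d41586-023-00500-8"))

# Invented placeholder DOI, of the kind a chatbot might fabricate: expected False.
print(doi_is_registered("10.9999/fabricated-survey-2018"))
```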
Although this might seem like an isolated case, in a recent news article in the journal Nature about the use of ChatGPT, Brian Owens warned of concerns about the reliability of chatbot tools, highlighting their tendency to create completely fictitious literature lists (OWENS, 2023). This piece reinforces an earlier article on the journal's website criticizing the listing of ChatGPT as a co-author of scientific papers and explaining the structural and ethical problems involved (STOKEL-WALKER, 2023).
Given that several studies retracted due to misconduct are still cited today (OLIVEIRA, 2015), such weaknesses point to a dangerous future for the healthcare field: they raise the prospect of studies built on fictitious information, whose impact could be immeasurable and whose retractions might never reach the general public.
In this context, since scientific data form the basis for treatments and for public health policy planning, their indiscriminate and malicious use can cause permanent damage, undermining the reliability of professional practice and the standing of the profession. It also affects the formation of these professionals' identity, as some, for now, have chosen to replace their role as researchers with the ready-made answers of an AI.
However, it's worth noting that these problems, although serious, are to be expected from an AI still in its learning phase. ChatGPT, for example, was launched in November 2022, so it is still learning and may overcome its current shortcomings in the very near future. Therefore, despite its weaknesses, continued use can help transform it into a valuable tool for teaching and research.
In conclusion, students are advised to wait before venturing into the world of conversational AI until they have developed the critical skills needed to distinguish false information from true. Teachers and researchers are advised to exercise caution and to perform their roles rigorously, always questioning the data presented by these tools. It is also understood that conversational AIs such as ChatGPT will continue to play a fundamental role in scientific research, becoming, in the not-so-distant future, essential tools in researchers' daily lives and, consequently, powerful allies in consolidating the teaching-research-extension model.
References
OLIVEIRA, M. B. A epidemia de más condutas na ciência: o fracasso do tratamento moralizador. Scientiae Studia, São Paulo, v. 13, n. 4, p. 867-897, 2015. DOI: 10.1590/S1678-31662015000400007. Available from: https://www.scielo.br/j/ss/a/hMs7Y4FVZvbjwHkmYKTn3Ny/?lang=pt#. Accessed on: 11 Mar. 2023.
OWENS, B. How Nature readers are using ChatGPT. Nature, London, v. 615, p. 20, 20 Feb. 2023. DOI: 10.1038/d41586-023-00500-8. Available from: https://www.nature.com/articles/d41586-023-00500-8. Accessed on: 11 Mar. 2023.
STOKEL-WALKER, C. ChatGPT listed as author on research papers: many scientists disapprove. Nature, London, v. 613, p. 620-621, 18 Jan. 2023. DOI: 10.1038/d41586-023-00107-z. Available from: https://www.nature.com/articles/d41586-023-00107-z. Accessed on: 11 Mar. 2023.
YANG, M. New York City schools ban AI chatbot that writes essays and answers prompts. The Guardian, New York, 6 Jan. 2023. Available from: https://www.theguardian.com/us-news/2023/jan/06/new-york-city-schools-ban-ai-chatbot-chatgpt. Accessed on: 11 Mar. 2023.
How to cite (ABNT Style):
ALMEIDA, Y. S. Os perigos do ChatGPT (e outras IAs) para a pesquisa na saúde. Enfermagem Pesquisadora, Rio de Janeiro, 18 mar. 2023. Available from: https://enfermagempesquisadora.com.br/os-perigos-do-chat-gpt-e-outras-ia-para-a-pesquisa-na-saude/.
This work is licensed under a Creative Commons Attribution 4.0 International License.