
INCORPORATING TECHNOLOGIES
Artificial Intelligence & Scholarship

Author: Makena Neal & ChatGPT

The following is a collection of publicly sourced articles related to the recent rise of artificial intelligence in everyday applications.

Note: each [hyperlinked] title/subtitle is followed by abstracts generated by ChatGPT in response to prompts from M. Neal. A response to the prompt "Write an abstract for [article link]" is labeled [URL] abstract; a response to "Write an abstract for [article text]" is labeled [text] abstract. These near-identical prompts and their responses are shared consecutively in the hope that reading the paired abstracts will showcase the variation in ChatGPT's output despite similar input. A minimal sketch of this prompting setup follows.
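
For readers who want to try the same two-prompt comparison programmatically rather than in the ChatGPT interface, here is a minimal sketch. It assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY in the environment; the model name, URL, and filename are illustrative placeholders, not the actual tools or sources used for the abstracts in this post.

# A minimal sketch of the two-prompt comparison described in the note above.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# set in the environment. The model name, URL, and filename are illustrative
# placeholders, not the actual tools or sources used for this post.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def abstract_for(source: str) -> str:
    """Ask the model to write an abstract for `source` (a link or full text)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": f"Write an abstract for {source}"}],
    )
    return response.choices[0].message.content


# [URL] abstract: the prompt contains only the article's link.
url_abstract = abstract_for("https://example.com/ai-article")

# [text] abstract: the prompt contains the article's full text instead.
with open("article.txt") as f:
    text_abstract = abstract_for(f.read())

print(url_abstract, text_abstract, sep="\n\n---\n\n")

One caveat worth flagging: the API does not fetch web pages, so a link-only prompt asks the model to work from the URL string alone, which plausibly contributes to the differences between the paired abstracts below.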


Tools such as ChatGPT threaten transparent science; here are our ground rules for their use

As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.

[URL] abstract: The recent release of the AI chatbot ChatGPT by OpenAI has raised concerns among the scientific community regarding the ethical use of Large Language Models (LLMs) in research. The chatbot's ability to produce fluent language has made it easier for students and researchers to pass off LLM-generated text as their own. As a result, publishers and editors have taken a stance on the matter, setting principles for ethical usage of LLMs in research. Nature, along with all Springer Nature journals, has outlined two principles: 1) LLMs will not be accepted as credited authors on research papers as they cannot take responsibility for the work, and 2) researchers using LLMs must document their use in the methods or acknowledgements sections of their work. Currently, editors and publishers can detect LLM-generated output but advancements in AI may make it more challenging. The need for transparency and integrity in research remains a key concern, and the scientific community must ensure that methods and authorship are kept transparent and trustworthy.

[text] abstract: Artificial Intelligence (AI) has gained the ability to generate fluent language, making it increasingly difficult to distinguish from text written by humans. The AI chatbot ChatGPT, developed by OpenAI in San Francisco, has brought the capabilities of large language models (LLMs) to a wider audience, with millions of people using it for various purposes such as generating research papers. However, there are concerns about the potential for LLMs to produce unreliable results and for students and scientists to deceitfully pass off the text as their own. To address these ethical concerns, scientific publishers such as Nature and Springer Nature have established guidelines for the use of LLMs in research, including that no LLM tool will be credited as an author on a paper and that researchers using LLMs must document their use in the methods or acknowledgements sections. Although current methods can detect text generated by LLMs, future developments may improve the LLMs' outputs. As such, researchers should consider how the transparency and trustworthiness of the scientific process can be maintained when using AI tools.

Abstracts written by ChatGPT fool scientists

Researchers cannot always differentiate between AI-generated and original abstracts.


Last updated 02/01/2023
Posted by: Makena Neal, AI & Education