
Navigating the AI Frontier: A guide for ethical academic writing

By Timothy Ros, Anita Samuel / October 2024

TYPE: OPINION

Artificial intelligence (AI) has been subtly influencing various aspects of our world for decades. From personalized recommendations on platforms like Netflix to the auto-sensing capabilities in modern vehicles, AI’s presence is deeply embedded in everyday life [1, 2]. The public introduction of ChatGPT in November 2022 marked a significant turning point, making the power of AI accessible to a broader audience. This sparked both excitement and a wave of ethical concerns regarding its potential applications [3, 4]. Generative AI tools like ChatGPT can produce a wide range of content, from complete poems and medical diagnoses to presentation scripts and marketing posts [5]. In academic contexts, ChatGPT is transforming writing processes by generating ideas, refining text, expanding content, summarizing lengthy documents, and offering deeper insights [6]. However, ChatGPT’s ability to generate entire articles from simple prompts raises critical ethical concerns for publishers. When authors can use AI to analyze data, create outlines, edit content, and even draft complete manuscripts, issues of authorship, plagiarism, and academic integrity become paramount [7, 8]. This editorial explores best practices and guidelines for the responsible use of AI in academic writing.

Defining Terms

Let's untangle some jargon:

  • Artificial intelligence (AI): The broad concept of machines mimicking human intelligence.
  • Generative AI (GAI): A subset of AI focused on creating new content (text, images, music).
  • Large language models (LLMs): A type of GAI model specializing in understanding and generating human language.

There are two broad categories of AI that must be considered: embedded AI and generative AI (GAI). Tools such as Grammarly, MS Word, MS PowerPoint, etc., have AI capabilities embedded in them. GAI, on the other hand, is fully dependent on its AI capabilities. The concerns surrounding AI in academic writing primarily relate to the use of GAI, while the use of tools with embedded AI is more generally accepted. It is important to draw this distinction.

Challenges of GAI

There are three primary concerns in using GAI:

  1. Privacy: Generative AI models are trained on large datasets and information shared with them by users. These data could also include personal and sensitive information, raising concerns about privacy [9].
  2. Validity: GAI tools are prone to "hallucinations" or generating inaccurate information.
  3. Bias: GAI tools can inherit biases from the data they are trained on and from the algorithms that guide the models [10].

Given these issues, publishers have developed various guidelines on the appropriate use of AI in academic manuscripts. While publishers have developed their individual guidelines, the policies share some commonalities.

  • AI tools cannot be included as authors on manuscripts. For example, ChatGPT cannot be included as an author on an article.
  • The AI tools used must be transparently declared in the manuscript. Information to be provided includes the name and version of the GAI tool and what it was used for.
  • Authors must review all content generated by AI tools to ensure its validity and attest to this.
  • Just as research data is carefully maintained when conducting and writing up research, the use of AI tools must be accurately recorded, and interactions must be saved for future reference.

Staying Ethical: Publication Guidelines and Best Practices

Although publishers have developed AI policies, authors remain unclear on how to accurately credit the use of AI tools.

We recommend using this template to acknowledge the use of GAI tools:

[Name of tool, version] was used on [date of use of AI] for [how the tool was used]. The authors reviewed the AI-generated content and [revised/adapted/used] the output. 

For example:

This manuscript was developed with the assistance of OpenAI ChatGPT-4o, which was used for generating initial drafts and conducting preliminary literature searches. The authors reviewed the AI outputs.

Another challenge for authors is understanding where to include the statement in the manuscript. There are three possible placements:

  • Immediately following the use of the AI tool, at the relevant points within the manuscript
  • As a comprehensive paragraph in the Methods section listing all the tools and how they were used
  • At the end of the manuscript

Below are a few examples:

This case scenario was generated by Google Gemini. The authors reviewed the output for accuracy and adapted it.

In this table, ChatGPT-4o was used to develop a list of sustainable practices in urban planning. The authors reviewed this list and adapted it to the final version in this table.

OpenAI ChatGPT-4o generated a list of best practices for using generative AI in academic writing on August 15, 2024. The authors reviewed and adapted the list. Google Gemini then optimized the drafted version of this section. The authors reviewed this output for accuracy.

Conclusion

AI is fundamentally reshaping the landscape of academic writing. LLMs present a remarkable opportunity to streamline writing processes and improve research productivity. However, as we integrate these powerful tools into academic work, it is essential to uphold ethical principles, ensure transparency, and protect the integrity of scholarly publishing. By adhering to these guidelines, researchers can leverage the capabilities of AI to advance knowledge while preserving the rigorous standards that underpin academic excellence. 

References

[1] Gilbey, J. The master algorithm: How the quest for the ultimate learning machine will remake our world. Times Higher Education. September 17, 2015.

[2] Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach (4th ed.). Pearson, 2021.

[3] OpenAI. Introducing ChatGPT. November 30, 2022.

[4] Vincent, J. ChatGPT proves AI is finally mainstream – and things are only going to get weirder. The Verge. December 2, 2022.

[5] Bommasani, R. et al. On the opportunities and risks of foundation models. 2021. arXiv:2108.07258 [cs.LG]

[6] Szücs, B. Not replacing but enhancing: Using ChatGPT for academic writing. Times Higher Education. June 14, 2023.

[7] Anderson, M. and Anderson, S. L. (Eds). Machine Ethics. Cambridge University Press, 2011. 

[8] Floridi, L. and Cowls, J. A unified framework of five principles for AI in society. Harvard Data Science Review. 1, 1 (2019).

[9] Rajappa, S. An introduction to the privacy and legal concerns of generative AI. Forbes. March 29, 2024.

[10] IBM Data and AI Team. Shedding light on AI bias with real world examples. October 16, 2023.

Disclaimer
The opinions and assertions expressed herein are those of the author(s) and do not necessarily reflect the official policy or position of the Uniformed Services University or the Department of Defense. This work was prepared by civilian employees of the US Government as part of their official duties and therefore is in the public domain and does not possess copyright protection.
In this article, we used Google Gemini to develop an outline for the article. This outline was reviewed by the authors and used as a starting point for drafting the manuscript. Grammarly was used as an editing tool to provide grammar checks within the article.

About the Authors

Dr. Timothy J. Ros is an experienced academic leader and educator with a diverse background in business and educational leadership. He currently serves as the School of Business Chair, MBA Program Director, and Assistant Professor of Business at McKendree University, where he has made significant contributions to curriculum development and research in digital pedagogy. Dr. Ros holds a Doctor of Education in educational leadership, a Master of Education in adult education and lifelong learning, and a master’s in business administration with a focus on International Business. His extensive military career, culminating as a Command Sergeant Major in the United States Army, has equipped him with unique leadership skills and a deep commitment to education. Dr. Ros is also an active researcher with affiliations in projects focused on leveraging artificial intelligence in education and reducing transactional distance in blended graduate courses. He is a recognized leader in both academic and military communities.

Anita Samuel, Ph.D., is Assistant Dean for Graduate Education, Associate Professor at the School of Medicine, and Vice Chair of Distance Learning at the Department of Health Professions Education for the Uniformed Services University of Health Sciences, Maryland. Her areas of expertise are online learning, educational technology, and adult education. She is the Editor-in-Chief of ACM eLearn Magazine.

© Copyright is held by the owner/author(s). 1535-394X/2024/10-3694981 $15.00 https://doi.org/10.1145/3703094.3694981

