Abstract
ChatGPT is a groundbreaking AI application that has attracted significant attention since its release. Despite its promise, its ethical implications have sparked considerable debate. This study examines the key concerns surrounding the ethical governance of ChatGPT through a bibliometric analysis and a cluster-based content analysis of the relevant scientific literature. The bibliometric analysis identifies influential authors, countries, and pivotal publications, revealing three primary categories of ethical issues associated with ChatGPT: human-related ethics; academic integrity and technical literacy; and artificial intelligence (AI) technology ethics and derived ethical concerns. The content analysis further refines these categories by synthesizing frequently occurring keywords. Building on this framework, the study discusses the major ethical challenges facing ChatGPT and outlines future research priorities. It also investigates the knowledge base underlying ChatGPT's ethical governance, exploring highly cited and high-link-strength literature through co-citation analysis, thereby mapping the research landscape and highlighting areas of growing scholarly interest. The study offers valuable insights for policymakers, researchers, and technology practitioners, emphasizing the need for more stringent policies, comprehensive guidelines, and robust ethical design in the development of ChatGPT and similar AI technologies.
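The co-citation link strength referenced above can be sketched in a few lines: two works are co-cited whenever they appear together in one citing paper's reference list, and a work's total link strength sums its co-citation counts with all other works. The paper names and reference lists below are purely illustrative assumptions, not data from this study; tools such as VOSviewer compute the same measure at scale.

```python
from itertools import combinations
from collections import Counter

# Hypothetical citing papers, each mapped to the works it cites.
citing_papers = {
    "paper_A": ["Lund 2023", "Thorp 2023", "Stokel-Walker 2023"],
    "paper_B": ["Lund 2023", "Thorp 2023"],
    "paper_C": ["Lund 2023", "Alkaissi 2023"],
}

# Count co-citations: each unordered pair of works sharing a reference list
# gains one unit of link strength.
pair_counts = Counter()
for refs in citing_papers.values():
    for a, b in combinations(sorted(set(refs)), 2):
        pair_counts[(a, b)] += 1

# A work's total link strength sums its co-citation counts with all others.
link_strength = Counter()
for (a, b), n in pair_counts.items():
    link_strength[a] += n
    link_strength[b] += n

print(pair_counts[("Lund 2023", "Thorp 2023")])  # 2
print(link_strength.most_common(1))              # [('Lund 2023', 4)]
```

Works with high total link strength anchor the co-citation clusters from which the thematic categories above are derived.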

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright (c) 2025 Bo Wang (Author); Rozaini binti Rosli (Co-Author)