Blog post | Part of series: Artificial intelligence in educational research and practice

Generative artificial intelligence and higher education: A double-edged sword

Achala Gupta, Lecturer at the University of Southampton

The rise of generative artificial intelligence (GenAI) tools has sparked significant debate within the academic community (for a systematic mapping review see Yusuf et al., 2024). These technologies, capable of creating text and images, present both benefits and disadvantages for education (see Labadze et al., 2023). In 2023 I carried out a research project to explore these GenAI-induced opportunities and challenges further. The underlying focus of this exploration was academic integrity: the ethical standards and honesty expected in academic work, which GenAI usage appears to interfere with (see Yusuf et al., 2024). Funded by BERA, the project involved conducting semi-structured, formal one-to-one interviews (n=10) and two focus groups (n=5) with academic integrity officers across faculties at a Russell Group university in England. Drawing on the data produced, this blog post provides an overview of the key findings, which are elaborated on in my recently published project report, ‘When Generative Artificial Intelligence Meets Academic Integrity’.

Research participants identified specific opportunities GenAI could offer for enhancing higher education practices; in every case, however, the notion of opportunity was coupled with concern about particular problems that GenAI usage entails. For example, participants discussed GenAI’s usefulness for introducing new phenomena and simplifying complex ones, its value in expanding the scope of higher education in subjects such as engineering, and students’ use of it as a personal tutor – the latter widening the scope of shadow education research, which explores the role of non-formal educational institutions such as tuition and coaching centres in preparing students for academic excellence (Gupta, 2022a, 2022b). At the same time, they highlighted the possibility of GenAI providing educators and students with an erroneous knowledge base (Bender et al., 2021). The participants therefore recommended critical engagement with this technology rather than complete reliance on GenAI outputs for educational purposes.

‘Excessive use of, and reliance on, GenAI tools in everyday learning were seen to de-skill students in core areas such as communication and exercising integrity in academic work.’

The most notable challenge highlighted by participants was that GenAI appears to undermine the principles and values of universities by creating conditions in which criticality and creativity in engaging with academic subjects are compromised. In addition, excessive use of, and reliance on, GenAI tools in everyday learning were seen to de-skill students in core areas such as communication and exercising integrity in academic work. Interestingly, while participants saw GenAI as problematic for higher education and felt that its usage needs addressing, they also felt it was difficult to secure consensus among staff, whose views may differ according to, for instance, their personal and professional attitudes, discipline and experiences in higher education.

Unsurprisingly, the impact of GenAI on higher education seemed to vary significantly by discipline. Its use was deemed less helpful in teaching practice-oriented courses (such as undergraduate medicine), whereas in disciplines such as engineering it was seen as a valued mechanism for saving students’ and educators’ time (for instance, by carrying out computational tasks). Furthermore, its imminent usage (whether promoted by educators or adopted unprompted by students) was viewed more positively in engineering – where GenAI, like other forms of artificial intelligence, is embedded in the relevant industry – than in the social sciences more broadly.

‘It is crucial to maintain a focus on academic integrity, ensuring that the values of honesty, respect, originality and responsibility are upheld in this GenAI-led digital age.’

In conclusion, GenAI usage has implications for contemporary notions of academic integrity and for the social construction of the digital age in higher education. There is perhaps a need to review what constitutes academic misconduct given the pervasiveness of GenAI in everyday life, and to ensure that students understand the ethical use of GenAI tools in education. Moreover, educators are encouraged to incorporate discussions about the ethics of GenAI – and of AI more broadly – into their curricula, fostering a culture of responsible AI use among students. Finally, GenAI is here to stay, and its impact on education will only grow. As we embrace these socially transformative technological innovations, it is crucial to maintain a focus on academic integrity, ensuring that the values of honesty, respect, originality and responsibility are upheld in this GenAI-led digital age.

This blog post draws on the author’s project report, When generative artificial intelligence meets academic integrity: Educational opportunities and challenges in a digital age, published (and the research funded) by the British Educational Research Association. The report is published under a Creative Commons licence – BY-NC-ND.


References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

Gupta, A. (2022a). Social legitimacy of private tutoring: An investigation of institutional and affective educational practices in India. Discourse: Studies in the Cultural Politics of Education, 43(4), 571–584. https://doi.org/10.1080/01596306.2020.1868978

Gupta, A. (2022b, January 24). The ‘shadow education’ phenomenon. BERA Blog. https://www.bera.ac.uk/blog/the-shadow-education-phenomenon

Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(56). https://doi.org/10.1186/s41239-023-00426-1

Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 21. https://doi.org/10.1186/s41239-024-00453-6

Yusuf, A., Pervin, N., Román-González, M., & Md Noor, N. (2024). Generative AI in education and research: A systematic mapping review. Review of Education, 12(2). https://doi.org/10.1002/rev3.3489