Blog post

Let’s not blame students for the shortcomings of assessment strategies of universities that turn a blind eye to artificial intelligence: A pre-crisis warning

Fawad Khaleel, Head of Global Online at Edinburgh Napier University, and Patrick Harte, Head of Postgraduate Programmes at Edinburgh Napier University

This blog post highlights the evolution of artificial intelligence (AI) and how the higher education (HE) sector needs to adjust its assessment strategies and academic integrity policies to reflect these changes. Current assessment strategies and academic integrity policies are increasingly impacting on student experiences, with the number of cases rising sharply in Scottish universities, as detailed in Table 1.


Table 1: Academic integrity cases processed between 2020 and 2022

University                      Cases 2020–21    Cases 2021–22
University of Stirling          606              1,827
University of Glasgow           786              1,066
Heriot-Watt University          1,529            2,545
Glasgow Caledonian University   422              742
University of Aberdeen          210              409
University of Strathclyde       358              491
Abertay University              36               184*

* Abertay University redacted its 2021/22 data in the FOI response; the figure shown is for 2022/23.

Our research, based on a Freedom of Information inquiry across 16 Scottish universities, suggests that investigating academic dishonesty cases (ultimately through oral examination) costs an institution 2,697 hours per 1,000 cases processed (disaggregated as 933 hours of academic time and 1,764 hours of administrative time) (see Khaleel et al., 2024). This monetary impact alone should cause alarm, but so far it remains a hidden cost of academic dishonesty.

There is a significant body of academic discourse on the technological developments in generative AI (see Bin-Nashwan et al., 2023), some focusing on AI’s adaptability to learning, teaching and assessment (see Baidoo-Anu & Owusu Ansah, 2023), with others concentrating on the threats AI poses to academic integrity (see Sullivan et al., 2023). While the impact of student use of AI is extensively debated, assessment strategies are not changing in response. In the seven Scottish and 24 English universities we reviewed (2021–24), institutional assessment strategies were dominated by a logic based on word count.

Word count serves as the academic proxy for students’ depth of critical thinking and as the instrument that distinguishes the level of study, the credit value of the unit and the weighting of assessment (Cheetham et al., 2023). This logic must be questioned: subject experts within Business Schools, for instance, do not understand the potential of AI, its future trajectory, or its accessibility beyond the now generic ChatGPT. Subject experts are experts in their respective disciplines, not AI nor its exponential rate of development. This deficit in technological understanding results in untenable optimism, indefensible pessimism, or a completely rational confusion within HE communities.

‘Subject experts are experts in their respective disciplines, not AI nor its exponential rate of development. This deficit in technological understanding results in untenable optimism, indefensible pessimism, or a completely rational confusion within HE communities.’

Many studies choose to blame students (see Parnther, 2022), particularly international students (see Hayes & Introna, 2010), for attempting to game the system when they are simply using the most contemporary resources available to them – the ‘new Google’. However, it is reasonable to suggest that the increase in breaches of academic integrity is not a matter of student misconduct but an issue founded on dated assessment design, obsolete assessment strategies (see Shepard, 2000) and a quality logic that uses the doctrine of precedent to regulate learning, teaching and assessment (LTA) practice (Taras, 2010). Archaic perspectives on academic integrity compound the issue.

Many higher education institutions (HEIs) have academic integrity processes based on Turnitin similarity reporting and plagiarism policing (see Belli et al., 2020). These processes do not consider the different capacities in which students engage with AI. Acceptable examples of this engagement could include the acknowledged use of AI to generate contextual research materials when drafting a report, to structure or plan an essay, or to generate material used in unmodified form.

We recommend that UK HEIs find an effective forum in which to collaborate and share good practice. At institutional level, HEIs need to co-construct clear and coherent policy and guidelines on acceptable uses of AI with students and academics. Students need clearly defined boundaries within which AI can be used productively and authentically – for instance, traffic light systems with an amber for ‘maybe’ simply cause ambiguity and confusion. To this end, we suggest the use of a coversheet that includes a reflective self-reporting section in which students report the extent to which AI was utilised – as currently operated by Newcastle University, Northampton University and the University of Birmingham, based on the templates designed by UCL.

This self-reporting initiates reflection and reflexivity (Feucht et al., 2017) and enables deep understanding of learning processes and experiences vital for students’ personal and professional development. The self-reporting requirement may also improve active participation and engagement with the ethical use of AI. In addition, the data collected through self-reporting may reveal trends and patterns that allow for more tailored and effective interventions.


References

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62. https://doi.org/10.61969/jai.1337500

Belli, S., Raventós, C. L., & Guarda, T. (2020). Plagiarism detection in the classroom: Honesty and trust through the Urkund and Turnitin software. In Á. Rocha, C. Ferrás, C. Montenegro Marin, & V. Medina García, (Eds.), Information technology and systems. ICITS 2020. Advances in intelligent systems and computing. Springer. https://doi.org/10.1007/978-3-030-40690-5_63

Bin-Nashwan, S. A., Sadallah, M., & Bouterra, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 75, 102370. https://doi.org/10.1016/j.techsoc.2023.102370

Cheetham, J., Bunyan, N., & Samaca Uscategui, S. (2023). Calculating student assessment workloads and equivalences. Centre for Innovation in Education. https://www.liverpool.ac.uk/media/livacuk/centre-for-innovation-in-education/diy-guides/calculate-student-assessment-workload-equivalences/calculate-student-assessment-workload-equivalences.pdf

Feucht, F., Lunn Brownlee, J., & Schraw, G. (2017). Moving beyond reflection: Reflexivity and epistemic cognition in teaching and teacher education. Educational Psychologist, 52(4), 234–241. https://www.tandfonline.com/doi/full/10.1080/00461520.2017.1350180

Hayes, N., & Introna, L. D. (2010). Cultural values, plagiarism, and fairness: When plagiarism gets in the way of learning. Ethics & Behavior, 15(3), 213–231. https://doi.org/10.1207/s15327019eb1503_2

Khaleel, F., Harte, P., & Borthwick Saddler, S. (2024, March 1). The financial impact of AI on institutions through breaches of academic integrity. Higher Education Policy Institute blog. https://www.hepi.ac.uk/2024/03/01/the-financial-impact-of-ai-on-institutions-through-breaches-of-academic-integrity/

Parnther, C. (2022). International students and academic misconduct: Considering culture, community, and context. Journal of College and Character, 23(1).  https://doi.org/10.1080/2194587X.2021.2017978

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. https://doi.org/10.3102/0013189X029007004

Sullivan, M., Kelly, A., & McLaughlin, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.17

Taras, M. (2010). Assessment for learning: Understanding theory to improve practice. Journal of Further and Higher Education, 31(4), 363–371. https://doi.org/10.1080/03098770701625746