
Moving the AI in education conversation forward: Dialogue to practice

Rachael Blazewicz-Bell, Senior Lecturer at Nottingham Trent University, and Aleksander Blaszko, Lecturer at Nottingham Trent University

Artificial intelligence (AI) is transforming education. Effective integration of AI into higher education (HE) can equip students and academics with the attributes needed to thrive in a competitive employment market. However, our discussions with HE colleagues and students across the sector suggest that both groups remain divided and confused about the best strategies for using AI in HE.

Adapting higher education for the age of AI: Building AI-literacy and ethical practices

Evidence indicates that academics need to reassess methods of assessment, given AI’s significant impact. For instance, Fleckenstein and colleagues (2024) highlight that students use GenAI to write in a way that goes undetected by tutors. This is supported by Waltzer, Pilegard and Heyman (2024), who found that tutors in their study failed to identify 30 per cent of AI-generated assessment content. This poses a challenge to the way in which the sector can accurately and sustainably certify students’ knowledge and understanding, creating the need for AI-aware, innovative and creative assessment methods.

‘Our future conversations should be around sharing ideas on how the academic community can adapt existing practices, so that both academics and students can use GenAI responsibly, ethically and effectively.’

Current conversations, however, are often limited to academic misconduct and instructions not to use GenAI. Given that academics are unable to identify so much AI-generated content, our future conversations should be around sharing ideas on how the academic community can adapt existing practices, so that both academics and students can use GenAI responsibly, ethically and effectively.

With this in mind, we believe that it is important that institutions engage in research-based practice to co-create:

  • AI policies and training
  • informed AI curricula
  • AI-aware assessment types
  • scales to declare the extent to which AI has been used within a piece of work.

Similar to Yan’s (2023) arguments in his ‘Beyond the Hype’ BERA Blog post, it is crucial that the HE sector revisits and prioritises digital competencies to ensure students and academics are GenAI-literate. Implementing institution-wide policies and an AI-aware curriculum, and embedding AI in creative planning, assessment and module delivery, can establish AI literacy. This approach has the potential to develop critical thinking and digital skills, helping members of the academic community to use AI responsibly. This matters because, while GenAI can offer HE stakeholders meaningful functions, Singh (2023) argues that it is vital those stakeholders can curate these functions for ethical and accurate use, yet some are not sufficiently AI-literate to do so. Are these stakeholders, therefore, set up to fail without an AI-aware curriculum, policies and training?

Embracing AI assessment scales: A path to ethical integration and enhanced learning

One solution is the use of AI assessment scales (AIAS), which enable students to declare the level of AI use when submitting a piece of work. Perkins and colleagues (2024) have promoted an AIAS framework that guides the transparent integration of GenAI into assessment through a five-level declaration of the extent of AI use within an assessment. This helps to determine the appropriate level of AI use, based on learning outcomes, in the build-up to assessment submission. Their pilot study demonstrated promising results: reduced academic misconduct, improved student performance and higher module pass rates. This suggests that the AIAS both addresses ethical concerns and enhances learning, emphasising the need to move towards leveraging AI to improve pedagogy and support learning.

Conclusion

Moving away from merely talking about GenAI in education, and starting to share practice that prioritises AI literacy, can build an academic community equipped to navigate the AI landscape responsibly and to thrive in an AI-driven world. We can open the door to an exciting era in HE by first enabling students to use GenAI, and then following up by integrating AI into our own academic practices.


References

Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209. https://doi.org/10.1016/j.caeai.2024.100209

Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of Generative AI in educational assessment. Journal of University Teaching & Learning Practice, 21(6). https://doi.org/10.53761/q3azde36

Roe, J., & Perkins, M. (2022). What are Automated Paraphrasing Tools and how do we address them? A review of a growing threat to academic integrity. International Journal for Educational Integrity, 18(15), 10–15. https://doi.org/10.1007/s40979-022-00109-w

Singh, M. (2023). Maintaining the integrity of the South African university: The impact of ChatGPT on plagiarism and scholarly writing. South African Journal of Higher Education, 37(5), 203–220. https://doi.org/10.20853/37-5-5941

Waltzer, T., Pilegard, C., & Heyman, G. D. (2024). Can you spot the bot? Identifying AI-generated writing in college essays. International Journal for Educational Integrity, 20(11). https://doi.org/10.1007/s40979-024-00158-3

Yan, L. (2023). Beyond the hype: The practical and ethical implications of generative AI in education. BERA Blog. https://www.bera.ac.uk/blog/beyond-the-hype-the-practical-and-ethical-implications-of-generative-ai-in-education