Blog post
The complementary use of audience response systems and online tests to implement repeat testing
Getting students to engage with subjects which have a reputation for being inaccessible or difficult is a constant challenge many teachers face, whether at school or in higher education. The situation is even worse in hierarchical subjects, where poor understanding of topics covered early in the syllabus causes ever-increasing problems for students’ subsequent learning. This makes it particularly important to find didactic tools which encourage students’ consistent engagement with a subject, provide ongoing feedback on their understanding and the effectiveness of their learning strategies, and enhance their confidence.
Cognitive psychology provides ample theoretical argument and experimental evidence for the positive impact repeat testing can have on students’ ability to recall, process and apply new knowledge. Moreover, repeat testing is expected to develop students’ metacognitive skills by helping them to identify misconceptions or gaps in their knowledge, adjust their study habits and employ more effective learning strategies.
However, there is so far hardly any guidance on how to implement repeat testing in actual learning environments. One notable exception is Roediger et al.’s (2011) article on implementing repeat testing in a social studies class in a US middle school. Yet, given the large class sizes in higher education, it is essential to implement repeat testing in a way which does not place unrealistic demands on staff workloads and still ensures students receive swift feedback.
This is possible by combining two well-established tools for computer-aided assessment – audience response systems (ARS) and online tests (Stratling, 2017). While the prior literature suggests that these tools are typically employed individually, using them complementarily makes it fairly straightforward to implement spaced repeat testing even in very large classes. Moreover, as the two forms of assessment have different strengths and weaknesses, combining them also helps mitigate some shortcomings of each.
Using ARS throughout lectures gives students the opportunity to test their understanding immediately and provides lecturers with instant feedback on the quality of their explanations.
Requiring students to participate in online tests after class not only gives them the opportunity to retest their understanding, but also encourages them to engage in revision and lets them gauge the effectiveness of their learning strategies. However, to ensure that students make meaningful use of the online tests, it is essential that their participation is monitored and minimum pass marks are set (e.g. answering at least 70% of the questions correctly).
Requiring students to take the online tests by a set date before the next lecture allows lecturers to identify topic areas many students struggle with. This enables more focused revision at the beginning of the next lecture, which can be supported by a further round of ARS questions.
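To make the workflow in the two preceding paragraphs concrete, the sketch below shows one way the exported results of an online test could be screened before the next lecture: each student’s overall score is checked against the 70% pass mark, and topics with low success rates are flagged for revision. This is a minimal illustration in Python, not part of the published study; the sample records, field names and the 60% revision threshold are all hypothetical, and in practice the records would be loaded from the test platform’s export.

from collections import defaultdict

PASS_MARK = 0.70           # minimum pass mark suggested above
REVISION_THRESHOLD = 0.60  # hypothetical cut-off for flagging a topic

# Hypothetical records, one per answer; a real export from the
# online test platform would be loaded here instead.
results = [
    {"student": "s01", "topic": "elasticity", "correct": True},
    {"student": "s01", "topic": "market structures", "correct": False},
    {"student": "s02", "topic": "elasticity", "correct": False},
    {"student": "s02", "topic": "market structures", "correct": True},
]

def success_rates(records, key):
    """Return {key value: [number correct, total answered]}."""
    tally = defaultdict(lambda: [0, 0])
    for r in records:
        tally[r[key]][0] += int(r["correct"])
        tally[r[key]][1] += 1
    return tally

# 1. Monitor participation: has each student reached the pass mark?
for student, (correct, total) in success_rates(results, "student").items():
    status = "pass" if correct / total >= PASS_MARK else "retake"
    print(f"{student}: {correct}/{total} ({status})")

# 2. Identify weak topic areas to revise at the start of the next lecture.
for topic, (correct, total) in success_rates(results, "topic").items():
    if correct / total < REVISION_THRESHOLD:
        print(f"Revise next lecture: {topic} ({correct}/{total} correct)")

Aggregating by topic rather than by individual question keeps the revision signal coarse enough to act on within the first few minutes of a lecture.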
[Figure: Repeat Testing Model]
Surveys of post-experience MBA students on a Business Economics module that implemented this type of repeat testing suggest that even these mature and highly experienced learners perceived all three testing stages as beneficial to their learning. There was no evidence of “testing fatigue” from the regular repeat testing.
Students reported that both the ARS and the online tests incentivised them to revise more diligently. In addition, the tests helped students to develop a more positive attitude towards the subject.
As hoped, repeat testing had a particularly positive impact on students who had little prior knowledge of economics and lacked confidence in their ability to cope with the subject at the start of the programme. Surprisingly, even though both the ARS and the online quizzes use multiple-choice or multiple-answer questions, students who favour a deep approach to learning perceived repeat testing as more beneficial to their learning, and reported a stronger impact on their metacognitive skills, than students who are more inclined towards surface learning.
While these results might be related to the quality of the multiple-choice questions and the feedback provided, they might also suggest that even students who aim to develop a comprehensive, critical understanding of a subject can benefit from organised learning activities which encourage revision and provide regular feedback on their learning progress.
Bibliography
Roediger, H.L., Agarwal, P.K., McDaniel, M.A. & McDermott, K.B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17(4): 382-395.
Stratling, R. (2017). The complementary use of audience response systems and online tests to implement repeat testing: A case study. British Journal of Educational Technology, 48(2): 370-384.