Is a "number of correct answers" identifier in online MCQ tests a good thing?

Several studies have examined the design and efficiency of MCQ tests, and the move to online teaching and learning has enriched this discussion. In this case study, I share my experience of using MCQ tests as an online assessment and the role of a "number of correct answers" identifier in improving students’ grades.

Background

MCQ tests are known for their objectivity and ease of marking. They give quick access to results, let students check their learning against the intended outcomes, and test their analytical, numerical and language proficiency skills (Zhao, 2006). There are numerous benefits to using MCQ tests (Fuentes, 2020), and I will not go through the details here, except to say that they are very effective at testing students’ knowledge in a challenging way rather than relying on memorising and reciting content.

However, designing these tests is not an easy task, especially when it comes to ensuring academic integrity. I have used online MCQ tests for the past three years in one of my final-year undergraduate economics modules, Labour Economics, with a cohort that has grown from 49 to 169 students. It has been interesting to identify the possible effects of changes in the design of the test on student grades and outcomes.

MCQ test design

The module has two main summative assessments: a mid-term test and a final exam. Before the move to online learning in 2020/21, I assessed students roughly half-way through the term using a test of essay-based questions, which of course had its merits. Since then, however, I have used an MCQ test instead. When I designed the test, I needed to ensure that it was an assessment for learning (Sambell, McDowell, and Montgomery, 2013). I targeted the following key objectives:

  1. The test is designed to fit the curriculum and to mirror the intended learning outcomes (ILOs) (Ramsden, 1992). The rationale is to test the declarative and functional knowledge students have gained, including their understanding of the empirical investigation and evidence behind the theoretical and empirical topics (Biggs, 2003). As such, the test is valid: it measures whether students have met the ILOs. Accordingly, I give students a clearly stated ‘assessment criteria’ document before the test and discuss it with them to ensure they fully understand what they are being assessed on and how.
  2. Timeliness is achieved: students are graded on their achievement not too early in the learning process, and they receive constructive feedforward on their performance that they can reflect on before the final exam.
  3. The test is manageable for final-year students, fitting alongside their other modules’ assessments (especially their dissertations) due in terms two and three.
  4. The test is authentic: students are tested on what they may encounter as researchers or economists after graduation, when they will need to examine and analyse economic data and models and evaluate their validity.
  5. The test is inclusive: it includes questions of varying difficulty, ensuring that every student can demonstrate their level of achievement.

Students were also given mock tests to ensure they were familiar with the QMP platform on which we run the test.

Implementations and challenges

Before taking the MCQ test, students are trained using Moodle online quizzes as a form of formative assessment, focused on checking students’ immediate understanding of the materials (Elasra, 2021). The quizzes are designed to be short, consisting of true/false questions, and they also prepare students for the MCQ test using a different question style. The average quiz score is 80%.

When designing MCQ tests, the number of questions and choices has been debated in the pedagogic literature. Based on a probabilistic analysis of the role of guessing in MCQ tests, Zhao (2006) showed that the optimal number of choices is four and that increasing the number of questions reduces the chance of scoring highly by guessing (Zhao considered tests of 8, 18 and 48 questions). Accordingly, I designed the test to allow an average of four minutes per question, with varying levels of difficulty and four choices per question. Given the time available for the test, the maximum number of questions was 16.
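
To make Zhao’s point concrete, the sketch below computes the probability that a student answering every question by pure guessing reaches a pass mark, under a binomial model with one correct choice per question out of four. The 40% pass mark is my illustrative assumption, not Zhao’s exact setup.

```python
from math import ceil, comb

def prob_pass_by_guessing(n_questions, n_choices=4, pass_mark=0.4):
    """Probability that pure random guessing reaches the pass mark,
    assuming one correct choice per question (binomial model).
    The 40% pass mark is an illustrative assumption."""
    p = 1 / n_choices
    need = ceil(pass_mark * n_questions)  # correct answers needed to pass
    return sum(comb(n_questions, k) * p**k * (1 - p)**(n_questions - k)
               for k in range(need, n_questions + 1))

# Test lengths considered by Zhao (2006): longer tests make a passing
# score by guessing alone much less likely.
for n in (8, 18, 48):
    print(f"{n:2d} questions: {prob_pass_by_guessing(n):.4f}")
```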

When I designed the test for the first time, in 2020/21, it had 16 questions, and each question could have more than one correct answer. Students had to select all the correct answers to earn the 5 marks per question. The challenge with this design is that students were taken aback by the multiple choices and by having to select every correct answer or receive zero marks. Some students found this difficult, and some easy. Unfortunately, 15% failed the test. Perhaps, given that it was their first time taking an online MCQ test on a new platform, they did not prepare well. That drove the average score down to 55%. On a positive note, however, 19% of the students achieved a first on the test and 50% got a 2.1 or above overall. As feedback, all students were advised that being unfamiliar with such a design simply means they need to revise the way they study the materials and challenge their understanding.
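
One way to see why this design is demanding is to compare the chance of guessing a question correctly with that for a standard single-answer MCQ. The sketch below assumes a guesser picks uniformly at random among the non-empty subsets of the four choices, which is my simplification rather than anything from the platform.

```python
# Under the 2020/21 all-or-nothing design, the answer key is some
# non-empty subset of the 4 choices and only the exact subset scores.
# A guesser choosing a non-empty subset uniformly at random succeeds
# with probability 1/(2**4 - 1) per question, versus 1/4 for a
# standard single-answer question.
subsets = 2**4 - 1
print(f"All-or-nothing, multiple answers: 1/{subsets} = {1/subsets:.3f}")
print(f"Standard single-answer MCQ:        1/4 = {0.25:.3f}")
```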

Learning from my experience of the first trial, I wanted to test the hypothesis that (1) reducing the number of questions, (2) identifying the number of correct choices in each question (for example, "which one or two of the following are correct"), and (3) removing negative marking would lead to better outcomes. I also wanted to tackle academic integrity challenges and the bad academic habit of cheating, so I allowed for randomization of the choices. In 2021/22, the test had 14 questions; for each question, students had to choose one or two of four possible choices. Choosing all the correct choices gave 5 marks; choosing one of two correct choices gave 2 marks. The results were excellent this time, maybe too excellent! The average score was 89%, with no failures. Although students were happy, that result can only make me question the difficulty of the test.
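
The partial-credit rule can be written as a short scoring function. This is a sketch of my reading of the scheme; in particular, how a selection that mixes a correct and an incorrect choice is treated is my assumption, and the platform’s rule may differ.

```python
def question_score(selected: set, correct: set) -> int:
    """Marks for one question under the 2021/22 scheme: 5 marks for
    selecting exactly the correct choices, 2 marks for finding one of
    two correct choices, 0 otherwise (no negative marking). Giving
    partial credit to a mixed selection is an assumption on my part."""
    if selected == correct:
        return 5                                   # full marks
    if len(correct) == 2 and len(selected & correct) == 1:
        return 2                                   # partial credit
    return 0                                       # no negative marking

# Example: a question whose correct choices are B and D.
print(question_score({"B", "D"}, {"B", "D"}))  # 5
print(question_score({"B"}, {"B", "D"}))       # 2
print(question_score({"A", "C"}, {"B", "D"}))  # 0
```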

Learning from this round, I wanted to test whether increasing the number of questions while maintaining the level of difficulty would change the outcomes. I also wanted to enhance the randomization process. It could be more time-consuming, but I found it more efficient to build a test bank so that students get random sets of questions with randomized choices. In 2022/23, students were given the same instructions, but the number of questions was 15. The results were still excellent, with an average score of 84%.
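
As an illustration of the randomization idea, the sketch below samples questions from a bank and shuffles the choices within each question. The bank structure and function names are hypothetical; QMP handles this internally on the real platform.

```python
import random

def build_paper(test_bank, n_questions=15, seed=None):
    """Draw a random subset of questions from the bank and shuffle the
    choices within each one, so that no two students are likely to see
    the same paper. A sketch of the idea, not QMP's implementation."""
    rng = random.Random(seed)
    paper = []
    for q in rng.sample(test_bank, n_questions):
        choices = q["choices"][:]      # copy so the bank is unchanged
        rng.shuffle(choices)
        paper.append({"stem": q["stem"], "choices": choices})
    return paper

# Hypothetical bank; a real one would hold many more questions.
bank = [{"stem": f"Question {i}", "choices": ["A", "B", "C", "D"]}
        for i in range(1, 41)]
print(len(build_paper(bank, n_questions=15)))  # 15
```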

Lessons learnt and conclusions

  1. Formative reflective assessments, such as online quizzes, play a positive role in testing students' learning before MCQ tests. Students also find them very helpful in preparing for the test; one wrote in their feedback that the ‘quizzes’ were among the most useful things in the module.
  2. Setting up a test bank makes writing MCQ tests more time-efficient, although it becomes more challenging if the module content changes from one year to the next. Used well, with randomization, it also reduces the risk of bad academic habits.
  3. Negative marking significantly worsens students’ grades and may not be a very helpful approach: although students may have achieved some of the learning outcomes, negative marking simply does not work in their favour. A counter-argument is that it helps identify the top students in the cohort and pushes those at the lower end of the distribution to rethink their study practices. This remains an open question.
  4. The total number of questions also deserves further consideration: in my case, moving from 14 to 15 questions while maintaining the level of difficulty still produced a high average score (84%, against 89% the year before).

A general conclusion from the different strategies above could suggest that, even with more questions, identifying the number of correct choices in each question improves the results significantly. (Formal empirical testing would be valuable, but it is beyond the scope of this article.) Does that mean we should use this approach when setting MCQ tests? I personally have no answer to that question, and one of the reasons for sharing my experience through this case study is to hear from others how they have implemented similar approaches and whether or not they found them effective.

References

Biggs, J. 2003. "Aligning teaching for constructing learning." Higher Education Academy.

Elasra, Amira. 2021. "Using online quiz assessments". Economics Network case study. https://doi.org/10.53593/n3378a

Fuentes, Stefania Paredes. 2020. "Moving Multiple-Choice Tests Online: Challenges and Considerations." Economics Network case study. https://doi.org/10.53593/n3343a

Ramsden, P. 1992. Learning to teach in higher education. London: Routledge.

Sambell, K., McDowell, L. and Montgomery, C. 2013. Assessment for Learning in Higher Education. London: Routledge.

Zhao, Y. 2006. "How to Design and Interpret a Multiple-Choice-Question Test: A Probabilistic Approach." International Journal of Engineering Education. Vol. 22, No. 6, pp. 1281-1286.
