The Economics Network

Improving economics teaching and learning for over 20 years

3.3 Coursework

Modules typically have one or two pieces of graded coursework to keep marking loads at a manageable level. One limitation of this approach is that students only study content they perceive as relevant to the assessments. It can also produce very inconsistent patterns of learning, as studying in non-contact time takes place in short intensive bursts, i.e. in the weeks/days prior to the submission deadlines.

To incentivise students to work more consistently, some type of continuous assessment is usually required. How is this possible on modules with large student numbers without creating an unmanageable marking load? There are a number of possibilities.

The module requires the submission of numerous assessments that are marked/graded by software rather than the tutors.

This approach is increasingly popular, with lecturers creating a series of multiple-choice tests/quizzes using on-line products such as MyLab Economics or Aplia. The students typically complete the tests outside the classroom and the software automatically grades the assessment and provides feedback. The use of on-line products creates two key issues. Firstly, who pays the cost of the license? The tutor will typically have to convince their department/school or faculty to finance the cost of any licenses as it is difficult to insist that students pay. Secondly, how do you limit the potential for cheating? Many on-line products have features to deter copying. These include:

  • The use of pooling. With pooling, the tutor creates a pool of different versions of each multiple-choice question, typically five to ten alternatives. It is usually easy to find a number of similar questions in the various question banks. The software randomly selects each question from its pool, so it is highly unlikely that any two students will get the same questions on the test. Some care needs to be taken to make sure the difficulty of each question in a given pool is similar.
  • Randomising the order in which the questions appear on the test.
  • Randomising the order of the answer options to any given multiple-choice question.
  • Setting time limits for completion of the test. Once the student begins the on-line test, they have a certain amount of time, e.g. 30 minutes, to complete the questions.
  • The release of grades/feedback after the deadline.
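To make the mechanics concrete, the deterrents above can be sketched as a short routine that assembles one student's test from question pools. This is a hypothetical illustration of the general technique, not the internals of any particular product such as MyLab Economics or Aplia; the function and data names are invented.

```python
import random

def build_test(pools, seed=None):
    """Assemble one student's test from pools of equivalent questions.

    pools: a list of pools; each pool is a list of question variants, and
    each variant is a dict with a question 'text' and a list of 'options'.
    (Hypothetical structure for illustration only.)
    """
    rng = random.Random(seed)
    test = []
    for pool in pools:
        q = rng.choice(pool)          # pooling: pick one variant per pool
        options = q["options"][:]
        rng.shuffle(options)          # randomise the answer-option order
        test.append({"text": q["text"], "options": options})
    rng.shuffle(test)                 # randomise the question order
    return test

# Two pools, each holding two variants of comparable difficulty
pools = [
    [{"text": "Q1a", "options": ["A", "B", "C", "D"]},
     {"text": "Q1b", "options": ["A", "B", "C", "D"]}],
    [{"text": "Q2a", "options": ["A", "B", "C", "D"]},
     {"text": "Q2b", "options": ["A", "B", "C", "D"]}],
]
print([q["text"] for q in build_test(pools, seed=1)])
```

Because each student's test is drawn independently, two students sitting the same quiz see different question variants, in a different order, with the answer options shuffled.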

Some recent research indicates that this type of approach can have a positive impact on learning. Chevalier, Dolton and Luhrmann (2017) find that graded quizzes increase the examination performance of economics students by 0.27 of a standard deviation. There is also no evidence of any displacement effects, i.e. grades and pass rates in other modules are not negatively affected. However, a potential drawback is that it may encourage low-quality activities, i.e. those that generate surface learning. There is evidence that students believe that rote learning and memorising course content are effective strategies for performing well in multiple-choice tests (Scouller, 1998). This focus on rote learning might also crowd out other higher-quality learning activities that lead to a deeper understanding of the material.

Some tutors also question the extent to which automated on-line tests can measure the higher-order skills of analysis, synthesis and evaluation. Section 4.1 of the handbook discusses the use of multiple-choice tests in more detail.

The module requires the submission of a number of assessments that tutors grade in a relatively low-cost manner

The following case briefly outlines an example.

Case study 3.3.1: A log-book exercise

Some economics departments use this type of assessment in both Intermediate Microeconomics and Intermediate Macroeconomics modules. Students have to write and submit four problem sheets that together contribute ten per cent of the module mark, i.e. 2.5 per cent for each individual problem sheet. The tutor releases each exercise a week in advance and the students write their answers during four specified seminars. The questions on the problem sheets typically involve the completion of a set of short numerical problems and/or the representation of solutions using diagrams. Each exercise is marked on a pass/fail basis to keep the grading process as simple as possible. The tutor posts model solutions on the virtual learning environment. The marking criteria are as follows.

Pass

Students receive a pass grade for work of a good standard in terms of both quantity and quality. While there may be some errors or omissions, the answers demonstrate evidence of a good understanding of the core aspects of the material. There is also evidence of good preparation.

Fail

Students receive a fail grade for one or more of the following reasons.

  • The work reflects a deep and fundamental misunderstanding of core aspects of the material.
  • There is an unacceptably high frequency of mistakes and errors in the work that indicate undue carelessness and/or a complete lack of preparation.
  • The amount of work completed during the session is unacceptably low, such that significant elements of the problem sheet are missing or incomplete.
  • Absence from the class.

An alternative approach is to require students to complete a number of ten-minute mini-assessments at the end of seminars. The questions on each mini-assessment are very short and similar in nature to the problems on the seminar sheets. Once again, this makes them easy to mark. The highest five marks from the six mini-assessments count towards the final grade.

The module requires the submission of a number of assessments but the tutor does not grade them

Students have to complete all or the majority of the problem sheets in order to be eligible to complete other assessments, e.g. the final exam. The tutor does not mark or grade the work and simply provides model answers. One obvious issue with this design is the quality of the work: how much effort will students exert on tasks that are not marked and graded? Checking whether students have submitted all the problem sheets also involves some administrative costs. One final issue is the credibility of the threat to exclude students from other assessments, i.e. is it consistent with university assessment regulations?

The module requires the submission of a number of assessments but the tutor only marks and grades a fraction of the work

In this assessment design, students have to submit a minimum number of assessments during the module, e.g. the answers to 4/5 problem sheets. The tutor posts guideline solutions for each exercise on the VLE. After the deadline for the last problem sheet, the tutor randomly chooses and grades just one. This mark counts towards the final grade.

The tutor needs to take care that each of the problem sheets is of approximately the same level of difficulty. This approach may seem very unusual, but in many ways it mirrors an examination, for which tutors provide very limited guidance about the topics: students' understanding of some of the module content they have learnt is never measured/graded. It also encourages consistent study habits, i.e. taking each problem sheet seriously because of the random selection of the one that is graded. It may even induce more consistent effort than marking every problem sheet: students may exert less effort on a problem sheet if it only carries a few marks.
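The random selection step can be illustrated in a few lines (a hypothetical sketch: the sheet labels are invented, and in practice the draw would be made once for the whole cohort after the final deadline):

```python
import random

# Hypothetical labels for the four submitted problem sheets
submitted = ["Problem sheet 1", "Problem sheet 2",
             "Problem sheet 3", "Problem sheet 4"]

rng = random.Random(2024)        # fixed seed only to make this sketch reproducible
graded = rng.choice(submitted)   # the single sheet that is marked and counts
print("Sheet selected for grading:", graded)
```

Because students do not know in advance which sheet will be drawn, each sheet carries the same expected weight, which is what sustains the incentive to take all of them seriously.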

The module requires the submission of a number of assessments that the students grade

With some practice, students may be able to mark the work of their peers effectively. Tutors can organise peer marking in seminars by providing answer guidelines and moderating a sample to check for consistency. Some research has found peer marking by students to be as reliable as that of lecturers. Mostert and Snowball (2013) discuss the use of on-line peer assessment in a first-year macroeconomics module with over 800 students.