
Academic Integrity in Remote Assessments

A brief prepared for EconTEAching, CTaLE, and The Economics Network

Note: For the time-constrained (or easily bored), sections 4 and 5 focus on practical elements of academic integrity and can be understood without necessarily exploring the other sections.

1. Introduction

Known by several names, and subject to a cornucopia of institution-specific definitions, academic malpractice broadly represents a range of actions carried out by students which breach expected standards of academic integrity. A full taxonomy of possible types of malpractice includes the usual suspects of plagiarism, collusion, and commissioning of work from a third party, and also includes the likes of exam-room cheating, purloining (stealing work from another student), and many others.[1] Irrespective of the specific offence, malpractice can be broadly conceptualised as the separation of the means and goal of university education (Harp and Taietz, 1966).

Beyond the simple idea of moral outrage at cheating, or arguments around fairness, there are numerous sound, pedagogically grounded reasons for trying to ensure that students do not engage in malpractice, not least that students who cheat are unlikely to meet learning outcomes, will fail to engage in core elements of experiential learning, and will receive formative feedback on work they have not actually carried out (Bertram Gallant, 2008; Bertram Gallant, 2017).

One of the main concerns around the shift from in-situ assessment to remote arrangements is that this will present greater opportunities for students to cheat. Indeed, one of the key psychological barriers to the adoption of more varied assessment is the belief that, liberated from the watchful eye of an exam invigilator, students will engage in a variety of forms of academic misconduct—thus negatively impacting the fidelity of assessment results. In many cases, the belief that in-situ exams are a ‘safer’ option than take-home assessment has perpetuated universities’ reliance on exams as the primary form of assessment.

This report serves two purposes: first, it critically examines the claim that students cheat more in remote assessment than in in-person exams; second, it provides a range of suggestions for understanding and minimising misconduct in remote settings—focussing very deliberately on the case of alternatives to in-situ exams. Although the report features some general advice on broader academic integrity issues, it is not (and is not intended to be) an exhaustive manual on the subject. For those who are interested, section 6 features signposting to several excellent sources of further information.

2. What are we aiming for?

An important point to bear in mind when considering academic integrity with respect to remote assessment is what we are aiming for. As with much of the concern and ‘hand-wringing’ about aspects of remote learning and assessment, it is necessary to understand that, even prior to the COVID-enforced situation, the status quo was probably not the best situation available.[2] This is particularly true of academic integrity concerns.

There exists an extensive literature dealing with students' knowledge (or lack of knowledge) and understanding of good academic practice and malpractice. In particular, students have been shown to exhibit significant degrees of anxiety around the bounds and limits of some behaviours, such as collusion and, to a lesser extent, plagiarism. Similarly, academics continue to set assessment tasks which lend themselves to the threat of student malpractice, as discussed in section 5.1. These represent failures of the pre-COVID status quo to appropriately deal with malpractice, so in this respect merely aiming for the previous standards is aiming for a distinctly second-best outcome.

That said, if the present scenario is offering a rare opportunity for academics to reflect on their practice, including the way they design assessment and think about academic integrity, then there is limited impediment to learning from previous errors and aiming for some kind of new first-best. The good news is that most of the issues and ideas discussed in this report are fairly general and are applicable to all assignments, whether long-planned essays, or hastily and grumblingly assembled alternatives to sit-down exams.

3. How common is cheating? (or “Dissecting the myth that exams are safer”)

A logical starting point, whether trying to evangelise teachers about the advantages of remote assessments, or otherwise to set at ease the minds of academics who are being forced to adopt remote measures, is to critically evaluate the claim that in-situ exams protect against student cheating. Despite the reputation of exams as bastions of academic honesty, there is extensive evidence to support the idea that exam-based cheating is more common than many academics would like to believe.[3]

For example, drawing on survey evidence from over 80,000 students, McCabe (2005) finds that 21% of respondents admitted engaging in some sort of exam-based misconduct, with similar magnitudes reported in UK-based surveys in Burnett (2020) and, with respect to in-class tests, in Nath and Lovaglia (2009).[4] Meanwhile, in earlier evidence, McCabe and Trevino (1996) report that over 50% of surveyed students admitted cheating in exams.

Beyond (or in addition to) exam-based misconduct, more general estimates on the incidence of malpractice vary significantly, with estimates of the proportion of students cheating being anywhere between 1% and 90% (Bertram Gallant, 2008; Nath and Lovaglia, 2009). Moreover, the limited observability of misconduct (we only observe what we catch), and the uncertain reliability of self-declaration in surveys, makes quantifications based on either of these approaches unreliable (at best). In particular, surveys of students generally rely on concrete and shared understandings of definitions and expectations with respect to different types of malpractice. There is extensive evidence suggesting that this is not the case, and that students are frequently confused or uncertain regarding the definitions or bounds of particular behaviours, particularly collusion (Barrett and Cox, 2005; Burnett, 2020; McCabe, 2005; McGowan, 2016).

3.1 Evidence around misconduct in remote exams

Reflecting the general view that remote exams are more susceptible to misconduct (e.g. Alessio et al., 2017; Fask et al., 2014; Rogers, 2006; Rowe, 2004), there is a branch of literature which specifically considers academic malpractice in this setting, generally in contrast to their in-situ counterparts. Bengtsson (2019), investigating faculty attitudes toward take-home exams, specifically finds academic integrity concerns to be a barrier to adoption (though crucially suggests that the incidence of cheating in such assessments is no higher than in in-situ exams). Such results are also echoed in Miller and Young-Jones (2012), which reviews the evidence and similarly finds minimal differences in misconduct between online and in-person exams, though some differences were found between online-only students (who were less likely to cheat in online tests) and students with a mix of assessment types.[5]

4. Dealing with academic integrity concerns in remote assignments

The reality of the remote teaching and assessment situation is that many of the forms of assignment which are set are going to be substantively the same as they were under the previous status quo. That students are learning remotely is not going to materially affect the process by which they complete essays, projects, or other written assignments.[6] While the remote teaching situation will affect the way students complete group or practical assignments, it is unlikely to impact on the threat of malpractice for such exercises. In this respect, all these types of assessment represent no greater malpractice threat than they would have done previously.

Indeed, the main types of assessment which are going to be impacted by the remote teaching scenario are those which were previously in-situ exams or in-class tests. These are the assignments which would have required placing (potentially significant) numbers of students within the same room—the precise circumstances which universities are trying to minimise in the present period.[7] Given that these need to be adjusted, the format of the adjusted assessment is going to determine the extent to which academic integrity becomes a concern; this is especially important to academics who favoured conventional exams for their purported ability to minimise cheating.

It should come as no surprise to any academic that, unless effective proctoring software is used (see section 5.4), any assignment set to be carried out remotely becomes a de facto open-book assessment. This needs to be borne in mind, as it obviously impacts upon the type of assessments we should be considering for remote completion.

5. Bloom’s Taxonomy as a framework for considering assessment

Although primarily put forward as a way of thinking about learning tasks, Bloom’s (revised) Taxonomy of Learning also provides a useful framework for thinking about assessments and the particular learning skills they are assessing (see Bengtsson (2019) or Burnett and Paredes Fuentes (2020) for extended summaries on this point).

Most assessment tasks can be classified in terms of the taxonomic skills they involve: tests or assignments which simply ask students to report their knowledge or memory of theory reside in the lower-order skills, whereas individual or group research projects might ask students to perform original research—a higher-order skill of creating original knowledge.

Figure 1: Bloom's Revised Taxonomy of Learning

[Bloom's Taxonomy presented as a layered 'cake'-style graphic]

Source: Shabatura (2013)

Although the types of assignment which ask students to uncritically repeat learning back to a lecturer (frequently through multiple-choice tests) are derided in some circles as ‘simply requiring a student to open a textbook or lecture notes’, they are still valued by many for asking students to demonstrate knowledge and engage with relevant course material. This is particularly the case where learning outcomes involve demonstrating that theories have been absorbed.

5.1 What can Bloom tell us about academic integrity in remote assessments?

It should not have escaped the reader’s notice that assessments which assess lower-order taxonomic skills are precisely those which would normally be held in situ (demonstrating knowledge and understanding through class tests or exams). Intuitively, this is because the task we are setting students would (in many cases) be trivially easy were they to have access to a textbook, the internet, or the wisdom of their peers (e.g. Harmon et al., 2010). It is also because, with answers of a ‘factual’ right/wrong nature, such as mathematical questions, it becomes very difficult to determine whether students have colluded on the answer. In these respects, when we ask students to remotely complete ‘lower-order’ assessment tasks the threat of academic malpractice comes from two sources:

  • Unauthorised copying where the assignment is intended to assess students’ retention of knowledge; and,
  • Collusion where students work together to complete assessments which are intended to be individual work.

Higher-order taxonomic skills are those which would normally be assessed by take-home assignments, where students would be expected to access further sources to address the task. This is not to say that there are no academic integrity concerns,[8] but the tools at our disposal, such as Turnitin, have evolved to detect exactly this sort of cheating.

Because it is in-situ tests which require rearranging or revising, and these disproportionately involve lower-order skills, we need to consider measures and/or adjustments which can allay some of the concerns we might have about academic malpractice. Whilst numerous studies have suggested remedies (e.g. Fask et al., 2014; Rogers, 2006), the same research suggests that there is limited consensus on the best approach, and that adoption of countermeasures is far from uniform. The subsections below, though not exhaustive, spell out some options and considerations—both in terms of format and content:

5.2 Adjust the assessment task to consider higher-order skills

One solution to the integrity issues inherent in remote exams assessing lower-order skills is to revise them to create a greater focus on higher-order elements, which are harder to copy from a book and easier to detect if tackled by a group rather than an individual. This might involve rewriting exams, or otherwise adjusting question weightings to elevate the importance of questions which ask students to provide interpretations of answers, to critically evaluate the assumptions of theoretical models or econometric specifications, to make judgements, and to justify their responses. This has two benefits:

  • It becomes harder for students to rely on off-the-shelf information, such as that provided through course materials or textbooks, thereby addressing the issue of unauthorised copying; and,
  • It becomes easier to detect collusion and/or plagiarism, since students are required to produce small blocks of text—precisely the sort of submissions that software such as Turnitin is designed to check (a minimal illustration of this principle follows below).
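
To illustrate the second point, the short Python sketch below shows why even small blocks of free text are amenable to screening: pairs of near-identical answers stand out under a crude word-overlap measure. It is purely illustrative: the submissions, names, and threshold are hypothetical, and it is emphatically not how Turnitin or any commercial service works internally.

# A minimal, illustrative sketch of the principle behind text-matching tools:
# pairwise similarity of short free-text answers. Hypothetical data throughout.
from collections import Counter
from itertools import combinations
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between simple word-count vectors of two answers."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical submissions to a short interpretive question.
submissions = {
    "student_1": "The coefficient suggests demand is price inelastic in the short run.",
    "student_2": "The coefficient suggests that demand is price inelastic in the short run.",
    "student_3": "A negative own-price elasticity below one indicates inelastic demand.",
}

# Flag pairs whose similarity exceeds an (arbitrary) threshold for human review.
for (name_a, ans_a), (name_b, ans_b) in combinations(submissions.items(), 2):
    score = cosine_similarity(ans_a, ans_b)
    if score > 0.8:
        print(f"Review {name_a} and {name_b}: similarity {score:.2f}")

Commercial tools compare submissions against vast external corpora as well as peers' work, but the underlying intuition is the same: prose answers leave a fingerprint in a way that single numerical or multiple-choice answers do not.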

5.3 Choice of assessment type

A shift of assessment content toward higher-order learning skills, or the perceived impracticality of issuing an exam for remote completion, may also warrant a wholesale change in the type of assessment. It was not uncommon in the summer 2020 assessment period to see exams issued as de facto essay assignments with 24-, 48-, or 72-hour turnaround windows, or even as standard essays with much longer completion windows. Though a general discussion of the relative merits of different assessment types sits outside the scope of this work, there are academic integrity implications associated with these assignments.

It is well established that there are methods and approaches for detecting malpractice in essays and other written assignments; Turnitin and other plagiarism-checking software will naturally detect evidence of collusion and conventional acts of plagiarism. More problematic is the issue of commissioning (or ‘contract cheating’) of academic work through essay writing services.

Estimates of the incidence of commissioning are extremely unreliable[9]; it can be very difficult to detect, and any attempt to survey students is going to suffer from the same biases associated with any attempt to have students self-declare their own miscreant behaviour. The same constraints also apply when trying to identify remedies to the problem: the effectiveness of various strategies is unknown, because it is very difficult to establish the impact of such interventions. Several options have, however, been put forward:

  • One reason put forward for students turning to essay mills is disengagement (Burnett, 2020). Authentic assessment, such as case-based assignments and problem-solving, has been suggested as a way to engage students (Harrison, 2020), while maintaining the specificity of assignments to your course is purported to reduce the likelihood that students turn to third parties to write their work.
  • Originality-checking software, such as Turnitin, is adapting to detect contract cheating. Although still in an early phase, these services operate by comparing submissions with the student’s earlier work. Whilst this sounds appealing, one must bear in mind that such technical solutions bear the hallmark of an ‘arms race’: as universities escalate their capacity for detection, students reciprocally escalate their measures to avoid detection (Mulcahy and Goodacre, 2004).
  • The completion window for assessments can make commissioning more or less likely. A short window can make it difficult to arrange for third-party authoring, though this must be balanced against the need to provide a fair window for students to complete work.

One last point to raise regarding the above section is that, although there are measures in place to deter malpractice, the reasons students engage in such activities are invariably manifold and, as such, addressing them requires more than simply tweaking assessment design. This is briefly considered in section 6.

5.4 Other considerations with assessments

A common (frequently justified, sometimes not) refrain from educators in mathematical or technical disciplines is that it is difficult to convert their exams to test higher-order skills, for example where the object is simply to test students’ ability to remember and execute a set of ‘workhorse’ operations. This might particularly occur where professional accreditation rests upon candidates simply demonstrating competence in carrying out specific operations. In these situations it is not always possible to tweak the nature of the test, so alternative measures must be found. Rogers (2006) and Rowe (2004) outline several techniques which can be employed, including:

Timing of exams: One solution is to consider how timings can be manipulated to minimise the threat of cheating. This might include setting a single short window for completion, or having students synchronously complete the test with specific timing per question. By placing constraints on time, both options serve to limit the opportunity for students to engage in malpractice. Unfortunately, there are issues with such approaches, including limited resilience in the case of IT failure and issues of time-zone differentials which might inadvertently discriminate against some students, meaning that this approach should be implemented with care.

Randomisation of questions: One option which can be employed in both multiple-choice and more general technical exams is the use of question randomisation to dissuade opportunistic collusion. Such randomisation might involve simply reordering questions or drawing randomly from a bank, or may extend to much more sophisticated methods, such as writing several subtly different versions of each question. As Rowe (2004) demonstrates, relatively small changes in question bank size can have significant impacts on the likelihood of students receiving the same set of questions. Randomisation is particularly effective where multiple-choice exams are concerned, where questions can be adjusted by altering a single sign or value in an equation, and where candidate answers all correspond to different versions of each question.
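
To give a rough sense of why bank size matters, the sketch below assumes each student is shown a fixed number of questions drawn uniformly at random from the bank, in which case two given students receive an identical paper with probability 1/C(n, k). The bank sizes and figures are illustrative assumptions, not values taken from Rowe (2004).

# Illustrative only: assumes each student sees k questions drawn uniformly at
# random from a bank of n, so two given students receive an identical paper
# with probability 1 / C(n, k). Figures are hypothetical, not from Rowe (2004).
from math import comb

QUESTIONS_PER_STUDENT = 10  # k: questions each student actually sees

for bank_size in (10, 15, 20, 30):  # n: total questions written for the bank
    p_identical = 1 / comb(bank_size, QUESTIONS_PER_STUDENT)
    print(f"Bank of {bank_size}: P(identical paper for two students) = {p_identical:.2g}")

Under these assumptions, moving from a bank of 10 questions (where every student necessarily sees the same paper) to a bank of 15 already cuts the chance of two students receiving an identical paper to roughly 1 in 3,000, before any subtly different question versions are introduced.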

Online proctoring: If producing multiple versions of an exam feels like too much effort (and there are many demands on our time in this period), there are ‘proctoring’ services available which are designed to monitor students whilst they complete the exam. Designed to mimic the surveillance of exam-room invigilators, such proctoring will physically monitor a student via a webcam, listen for sounds, and check whether a student accesses the internet whilst carrying out the test. There is some evidence to suggest that proctoring deters misconduct (Alessio et al., 2017),[10] however its growth in popularity has yielded numerous sources written to help students sidestep its effectiveness—for example, a Google search of ‘how to cheat proctoring software’ yields numerous websites featuring solutions ranging from providing a fake video stream[11] to simply plugging an external monitor into a laptop and having a helper provide assistance.

6. Conclusions, considerations, and further sources

This brief has provided an overview of some academic integrity concerns associated with remote assessment, and several corresponding remedies which can be explored to overcome these issues. Although far from exhaustive, it has aimed to remain focussed on the specific case of rearranging in-person exams, though many of the strategies featured will be more widely applicable.

As an epilogue (and as alluded to throughout), there is a danger that such prescriptive ideas dramatically over-simplify the task of dealing with academic malpractice. Students engage in misconduct for a multitude of reasons related to their own understanding (Burnett, 2020; Bretag et al., 2014), the academic culture in which they are inculcated (McCabe et al., 2001), the behaviour of peers and educators (McGowan, 2016), and a host of other socio-behavioural issues (Hayes and Introna, 2005). Unfortunately (for everyone involved), tweaking assessment design is unlikely to address this complex web of motivations in the long run.

7. References

Alessio, Helaine M., Malay, Nancy, Maurer, Karsten, Bailer, A. John, and Rubin, Beth (2017). Examining the effect of proctoring on online test scores. Online Learning Journal, volume 21, number 1

Barrett, Ruth, and Cox, Anna L. (2005). ‘At least they’re learning something’: the hazy line between collaboration and collusion. Assessment & Evaluation in Higher Education, volume 30, number 2, p.107-122

Bengtsson, Lars (2019). Take-Home Exams in Higher Education: A Systematic Review. Education Sciences, volume 9, number 4

Bertram Gallant, Tricia (2008). Academic Integrity in the Twenty-First Century: A Teaching and Learning Imperative. ASHE Higher Education Report, volume 33, number 5

Bertram Gallant, Tricia (2017). Academic Integrity as a Teaching & Learning Issue: From Theory to Practice. Theory Into Practice, volume 56, number 2, p.88-94

Burnett, Tim (2020). Understanding and developing implementable best practice in the design of academic integrity policies for international students studying in the UK. UK Council for International Student Affairs (UKCISA)

Burnett, Tim, and Paredes Fuentes, Stefania (2020). Assessment in the Time of Pandemic: A Panic-free Guide (online). The Economics Network.

Dobrovska, D. (2007). Avoiding plagiarism and collusion. Proceedings of the International Conference on Engineering Education – ICEE

Eastman, Jacqueline K., Iyer, Rajesh, and Reisenwitz, Timothy H. (2008). The Impact Of Unethical Reasoning On Different Types Of Academic Dishonesty: An Exploratory Study. Journal of College Teaching & Learning, volume 5, number 12, p.7-16

Fask, Alan, Englander, Fred, and Wang, Zhaobo (2014). Do Online Exams Facilitate Cheating? An Experiment Designed to Separate Possible Cheating from the Effect of the Online Test Taking Environment. Journal of Academic Ethics, volume 12, number 2, p.101-112

Harmon, Oskar K., Lambrinos, James, and Buffolino, Judy (2010). Assessment design and cheating risk in online instruction. Online Journal of Distance Learning Administration, volume 13, number 3

Harrison, Douglas (2020). Online Education and Authentic Assessment. [online] Inside Higher Ed. Url: https://www.insidehighered.com/advice/2020/04/29/how-discourage-student-cheating-online-exams-opinion

Hayes, Niall, and Introna, Lucas D. (2005). Cultural Values, Plagiarism, and Fairness: When Plagiarism Gets in the Way of Learning. Ethics and Behaviour, volume 15, number 3, p.213-231

McCabe, Donald L., and Trevino, Linda Klebe (1996). What We Know About Cheating In Colleges: Longitudinal Trends And Recent Developments. Change: The Magazine of Higher Learning, volume 28, number 1, p.29-33

McCabe, Donald L., Trevino, Linda Klebe, and Butterfield, Kenneth D. (2001). Cheating in Academic Institutions: A Decade of Research. Ethics and Behaviour, volume 11, number 3, p.219-232

McCabe, Donald L. (2005). Cheating among college and university students: A North American perspective. International Journal for Educational Integrity, volume 1, number 1

McGowan, Sue (2016). Breaches of Academic Integrity Using Collusion. In Bretag, Tracey (Ed) Handbook of Academic Integrity, Springer, p.221-248

Mulcahy, S. & Goodacre, C. (2004). Opening Pandora’s box of academic integrity: Using plagiarism detection software. In R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (pp. 688-696). Perth, 5-8 December.

Nath, Leda, and Lovaglia, Michael (2009). Cheating on Multiple-Choice Exams: Monitoring, Assessment, and an Optional Assignment. College Teaching, volume 57, number 1, p.3-8

Newton, Philip M. (2018). How Common Is Commercial Contract Cheating in Higher Education and Is It Increasing? A Systematic Review. Frontiers in Education, volume 3

Rogers, Camille (2006). Faculty Perceptions about e-Cheating. Journal of Computing Sciences in Colleges, volume 22, number 2, p.206-212

Rowe, Neil C. (2004). Cheating in Online Student Assessment: Beyond Plagiarism. Online Journal of Distance Learning Administration, volume 7

Shabatura, J. (2013) Using Bloom’s Taxonomy to Write Effective Learning Objectives (Online). University of Arkansas. Url: https://tips.uark.edu/using-blooms-taxonomy/

Footnotes

[1] For a fairly comprehensive list, consult Dobrovska (2007), Eastman et al. (2008), or (especially) McCabe (2005).

[2] I have been party to numerous discussions about how we will engage students under remote learning, with nary a moment’s consideration that, even before lockdown, there were well-publicised issues with engaging particular groups of students.

[3] An associated question, beyond the scope of this work, is whether many academics genuinely believe that exams are ‘cheat-proof’, or simply sustain the idea to avoid a critical examination of their assessment methods.

[4] Citing evidence from Hillbo and Lovaglia (1995).

[5] The authors are very keen to highlight potential distortions from survey sample-selection effects as mentioned in the preceding section.

[6] Except in the unlikely event that your online teaching activities end up actively facilitating collusion, in which case disregard this statement.

[7] One would like to think.

[8] On the contrary, it is usually with these types of assessment that we observe plagiarism or collusion.

[9] Newton (2018), a systematic review of studies, suggests that around 3.5% of students have commissioned work, with higher rates in more recent studies. Survey findings in Burnett (2020) suggest differences between home students (around 7%) and international students (over 25%).

[10] The reduction in malpractice here is inferred through changes in completion time and attrition, but not proven.

[11] Yes, this does sound very Mission Impossible.
