
The use of oral assessments: our experience of “Individual Evaluative Conversations”

Background and idea

The rapid development of Generative AI (GenAI) tools has raised pressing concerns around academic integrity (e.g. Chaudhry et al., 2023; Bower et al., 2024) and assessment validity more generally (Dawson et al., 2024). While some propose closed-book invigilated exams as the only viable solution (Reeves, 2025), there is growing interest in diversifying assessment formats to ensure robustness and authenticity (Sotiriadou et al., 2020). A diverse range of assessments provides an opportunity to assess a broader range of skills (see Walstad, 2001), address inclusivity concerns (Kaur et al., 2017) and encourage students to demonstrate applied on-the-spot reasoning and communication – skills consistently valued by employers.

While some assessments are designed to include use of GenAI, reflecting the reality of modern technology and the workplace environment, it is argued that others should be structurally designed to restrict such assistance (Zhai et al., 2024). This reflects the approach set out in the Digital Education Council’s (2025) framework, which emphasises both ensuring human competency and developing human-AI collaboration skills. Corbin et al. (2025) suggest that warnings alone against the use of GenAI may be insufficient to address these concerns. Assessments in a controlled environment, together with specifically designed take-home assessments (e.g. Suleymenova, 2024), can be used to maintain the focus on human competency.

The use of oral examinations is not new (Morrissett, 1958): they are widely used in continental Europe and across different disciplines, including business and economics (Rawls et al., 2015) and mathematics (Iannone et al., 2020). Bayley et al. (2024) advocate the use of oral exams with large cohorts. Moreover, Hazen (2020) finds that the benefits of oral examinations outweigh the costs when they are carefully implemented and concerns about bias and adequate preparation are addressed. Our case study builds on these findings by discussing the practicalities of implementing oral examinations in a mid-size cohort.

Inspired by Aricò (2021), we decided to implement individual in-person oral assessments (“Individual Evaluative Conversations”, or “IECs”) in a final-year optional module (“History of Economic Thought”, or “HET”) with approximately 90 students. This was part of a two-module pilot; our colleagues Dr Gunes Bebek and Dr Sebastian Cortes Corrales also introduced IECs in another final-year module (“Advanced Microeconomics”, which had a similar cohort size). The duration of the IEC in both modules was 15 minutes (as determined by university-wide guidance), with an assessment weight of 50% of the whole module mark (the other 50% being individual written coursework).

Module details

The HET module was delivered by two module leaders, Joe and Kamilya, with an approximately 50/50 split. For ease of reference, we have labelled the two parts of the module “Kamilya’s part” (taught in weeks 1-3 and 9-10) and “Joe’s part” (taught in weeks 4-8). Each week had 4 hours of lectures, with a total of 6 hours of seminars (small-group teaching sessions) held over the course of the semester. Each part was relatively independent (focusing on different periods and scholars), although coordinated; the skills developed in both parts were aligned. Both module leaders underlined the importance of lecture and seminar attendance, as well as engagement with independent learning and reading. The volume of reading was greater than in an average economics module, offsetting the more technical requirements of other courses. Readings were categorised as “essential”, “useful” and “optional” for each topic.

Implementation

In preparation for the IEC, each module leader designed 16-18 questions (approximately 2 per lecture, 35 questions in total). These were shared with students in advance, helping to scaffold the assessment and address some concerns from students with reasonable adjustment plans. At the start of the IEC, a student would randomly draw two questions to answer, one for each part. We also allowed students to veto one question from each part at the start of their IEC. Each initial question would be followed by a small number of follow-up questions on the same topic, which were not available in advance and depended on the student’s initial answer. These questions were used either for clarification or for exploring depth, essentially allowing students to correct any errors or to show their breadth and depth of knowledge. In other words, the follow-up questions could only improve the grade obtained from the initial answer; marks were not decreased on the basis of follow-up responses.
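For colleagues considering a similar setup, the draw-and-veto procedure is straightforward to prototype. Below is a minimal sketch in Python, following the rules described above (the function names, and the illustrative split of the 35 questions into parts of 17 and 18, are our assumptions rather than part of the original setup):

    import random

    def draw_question(pool, vetoed=None):
        # Draw one question number from a part's pool, excluding any vetoed question.
        return random.choice([q for q in pool if q != vetoed])

    def run_draw(part_a_pool, part_b_pool, veto_a=False, veto_b=False):
        # One question per part; at most one veto per part triggers a re-draw,
        # and the re-drawn question must be answered (no return to the original).
        qa = draw_question(part_a_pool)
        if veto_a:
            qa = draw_question(part_a_pool, vetoed=qa)
        qb = draw_question(part_b_pool)
        if veto_b:
            qb = draw_question(part_b_pool, vetoed=qb)
        return qa, qb

    # Illustrative example: 17 questions in part A, 18 in part B (35 in total),
    # with the student vetoing their first part A question.
    print(run_draw(range(1, 18), range(18, 36), veto_a=True))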

Scaffolding

Recognising that the IEC is a relatively unfamiliar form of assessment for our students (only the aforementioned “Advanced Microeconomics” module also implemented IECs on the Economics suite of programmes, in the same calendar year), we placed a strong emphasis on scaffolding this assessment. A complete list of initial questions was provided to students approximately 6 weeks before the IECs. The questions were clearly linked to specific topics, helping students adopt a structured approach to revision. More importantly, the last three seminars were dedicated to IEC practice. Questions similar to the final list were provided to students in advance of each seminar. During the seminar, students were split into groups of 2-3: one student would take the role of “student” answering the question, and the other(s) would take the role of “lecturer”, asking follow-up questions and providing evaluation (suggested follow-up questions and prompts were provided). Module leaders gave feedback by walking around the room and listening to conversations, asking students to summarise key points and then commenting on strengths and areas for improvement.

Both module leaders encouraged students to continue to practise in peer groups after the seminars and during revision. Additionally, they encouraged the use of GenAI to generate follow-up questions and to practise in the form of a dialogue. Two “mock” examples were shared, based on conversations with students from a previous year who had volunteered to play the student role.

Right to veto

To provide reassurance to students without compromising the module learning outcomes, we introduced a right of veto: each student could pass on one question from each of the two parts. In essence, this is equivalent to having a choice of questions in a written exam and is thus aligned with standard practice. Anecdotal feedback suggests that students really valued the right to veto; it provided significant reassurance.

Scheduling and practicalities

IECs were held over the course of 4 days. While scheduling may seem challenging, our experience shows that IECs are feasible for mid-size cohorts. In organising the IECs, fairness to all students was at the forefront of our thinking. We decided to schedule all IECs within one week, creating a timetable based on anonymised student numbers randomly assigned to 20-minute slots (15 minutes for the IEC plus a five-minute buffer). Our schedule is shown in Table 1 below (student IDs removed), and a short illustrative script for generating such a timetable follows the table. Our experience suggests that the second block after the break should be a little shorter, to aid the assessors’ concentration. Whilst our IECs ran over 4 days because a public holiday fell on the Monday, we would recommend holding such examinations over one full working week for a cohort of 80-100 students (if possible); a 5-day timetable would allow an earlier finish (or longer breaks) each day.

Table 1: IEC schedule

| Tuesday | Wednesday | Thursday | Friday |
|---------|-----------|----------|--------|
| 09:00 XYZ | 09:00 XYZ | 09:00 XYZ | 09:00 XYZ |
| 09:20 XYZ | 09:20 XYZ | 09:20 XYZ | 09:20 XYZ |
| 09:40 XYZ | 09:40 XYZ | 09:40 XYZ | 09:40 XYZ |
| 10:00 XYZ | 10:00 XYZ | 10:00 XYZ | 10:00 XYZ |
| 10:20 XYZ | 10:20 XYZ | 10:20 XYZ | 10:20 XYZ |
| 10:40 XYZ | 10:40 XYZ | 10:40 XYZ | 10:40 XYZ |
| 11:00 XYZ | 11:00 XYZ | 11:00 XYZ | 11:00 XYZ |
| BREAK (20 min) 11:20-11:40 | BREAK (20 min) 11:20-11:40 | BREAK (20 min) 11:20-11:40 | BREAK (20 min) 11:20-11:40 |
| 11:40 XYZ | 11:40 XYZ | 11:40 XYZ | 11:40 XYZ |
| 12:00 XYZ | 12:05 XYZ* | 12:00 XYZ | 12:00 XYZ |
| 12:20 XYZ | 12:30 XYZ* | 12:20 XYZ | 12:20 XYZ |
| 12:40 XYZ | 12:55 XYZ* | 12:40 XYZ | 12:40 XYZ |
| 13:00 XYZ | 13:20 XYZ* | 13:00 XYZ | 13:00 XYZ |
| 13:20 XYZ |  | 13:20 XYZ | 13:20 XYZ |
| LUNCH (40 min) 13:40-14:20 | LUNCH (45 min) 13:45-14:30 | LUNCH (40 min) 13:40-14:20 | LUNCH (40 min) 13:40-14:20 |
| 14:20 XYZ | 14:30 XYZ* | 14:20 XYZ | 14:20 XYZ |
| 14:40 XYZ | 14:55 XYZ* | 14:40 XYZ | 14:40 XYZ |
| 15:00 XYZ | 15:20 XYZ* | 15:00 XYZ | 15:00 XYZ |
| 15:20 XYZ | 15:45 XYZ* | 15:20 XYZ | 15:20 XYZ |
| 15:40 XYZ | 16:10 XYZ* | 15:40 XYZ | 15:40 XYZ |
| 16:00 XYZ | 16:35 XYZ* | 16:00 XYZ | 16:00 XYZ |
| 16:20 XYZ |  | 16:20 XYZ | 16:20 XYZ |
| BREAK (20 min) 16:40-17:00 |  | BREAK (20 min) 16:40-17:00 | BREAK (20 min) 16:40-17:00 |
| 17:00 XYZ |  | 17:00 XYZ | 17:00 XYZ |
| 17:20 XYZ |  | 17:20 XYZ | 17:20 XYZ |
| 17:40 XYZ |  | 17:40 XYZ | 17:40 XYZ |
| 18:00 XYZ |  | 18:00 XYZ | 18:00 XYZ |
| 18:20 XYZ |  | 18:20 XYZ | 18:20 XYZ |
| 18:40 XYZ |  | 18:40 XYZ |  |

*Note that the longer slots on Wednesday (marked with an asterisk) were for those students with approved extra time.
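For those wishing to replicate the scheduling, the random assignment of anonymised student numbers to slots can be scripted. The following is a minimal sketch in Python using our 20-minute spacing (15-minute IEC plus a five-minute buffer); breaks, lunch and extra-time slots are omitted for brevity, and the student identifiers and function names are placeholders of our own:

    import random
    from datetime import datetime, timedelta

    def build_timetable(student_ids, days, start="09:00", slots_per_day=24, slot_minutes=20):
        # Randomly assign anonymised student IDs to fixed-length slots
        # (15-minute IEC plus a 5-minute buffer = 20 minutes).
        ids = list(student_ids)
        random.shuffle(ids)  # random order avoids any systematic pattern in scheduling
        timetable = {day: [] for day in days}
        for day in days:
            t = datetime.strptime(start, "%H:%M")
            for _ in range(slots_per_day):
                if not ids:
                    return timetable
                timetable[day].append((t.strftime("%H:%M"), ids.pop()))
                t += timedelta(minutes=slot_minutes)
        return timetable

    # Illustrative example: ~90 anonymised student numbers over four days
    schedule = build_timetable([f"S{i:03d}" for i in range(1, 91)],
                               ["Tue", "Wed", "Thu", "Fri"])
    for day, slots in schedule.items():
        for time, sid in slots:
            print(day, time, sid)

In practice, the output of such a script would still need manual adjustment for breaks, lunch and extended slots for students with extra time, as in Table 1.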

All IECs were recorded for moderation purposes. A marking rubric, which was shared with students in revision week, was used to assess students, with both module leaders providing additional comments on the performance of each student. The rubric, a provisional mark and comments were subsequently shared with students as feedback three weeks after the IECs took place. This aligns with standard practice for coursework at Birmingham, but it would be entirely possible to provide feedback in a much shorter timeframe: comments and marks were agreed and finalised by the end of the IEC week.

We did our best to create a comfortable environment for students, booking two rooms away from busy parts of the university. One room was used to hold the IEC, and the other was a waiting space, where refreshments were provided. Students were asked to arrive at the waiting room in advance of their allocated timeslot and one of the lecturers went to collect them for the IEC. As students entered the examination room, we made sure that they were comfortable and then started the recording. While the rules were explained in the recorded revision lecture, we restated them at the start for each student and started the timer only once the student was ready. The rules are best summarised by the explanation at the start of the IEC:

Before we start the IEC, let me explain the rules. As you know, you have 15 minutes for this assessment and you will be asked two questions, one for each part. Each of these questions will be drawn from the previously published list of questions. To determine the question number, you will pick two pieces of paper, one from each pile. Each piece of paper has a question number on it. You will read the number and then check the question’s detail on this printout of the questions. As you know, you have the right to veto one question for each part. If you veto a question, you will re-draw the number, but you will have to answer this new question – you cannot go back to the original one. You have 15 minutes overall and you can start by answering either of the two questions. You can also use some of the time to write notes on the scrap paper provided here for you. You can prepare both questions at the start, or answer one and then prepare the second one. We will switch to the second question about 7 minutes 30 seconds into the IEC; you can see the timer here. Do you have any questions before we start? Please draw the question numbers.

Thus, we tried to be as flexible and relaxed as possible, allowing students to manage their time as they saw fit and creating a comfortable environment.

Reflections on our experience

While the IEC week was very time- and effort-intensive, both module leaders enjoyed it more than marking closed-book exams in previous years. The IECs felt like a more meaningful process, alleviating concerns about GenAI or simple memorisation. The follow-up questions were particularly useful, acting as prompts: students could recall additional readings, reflect in greater depth on a particular point, or show knowledge of a related topic that was not selected as an initial question. Students who made minor mistakes were given the opportunity to correct themselves where a follow-up question jogged their memory. Providing feedback also felt more meaningful, as we could address both content and aspects of presentation; the latter is particularly useful to final-year students entering the labour market.

Students raised no concerns, and indeed many told us that they enjoyed this different type of assessment, even if they found it somewhat stressful (perhaps due to unfamiliarity). Students also shared, anecdotally, that they had put substantial effort into revision for this module. One possible interpretation is that the human interaction (i.e. speaking to peers in seminars and to lecturers in preparation and during the IECs) added more depth to their reflection on the material. Unlike coursework or closed-book exams, such a setting removes the veil of anonymity and involves real-time, in-person judgement of performance. This experience is likely closer to a job interview or client presentation, and should thus contribute to the development of employability skills.

Use of the veto proved informative. Approximately 30% of students elected to veto a question from one lecturer’s part, and approximately 8% elected to veto both of their initial questions. Those who vetoed just a single question performed comparably with the cohort overall; those who vetoed both initial questions performed, on average, two grade boundaries below the cohort mean. We interpret this as an indication that students tended to use the veto when they were unsure or underprepared, rather than for strategic reasons. There was no indication that vetoing a question conferred any advantage, and we believe that offering the right to veto did not simplify the task in any way. It did, however, provide students with a psychological buffer, affording them the opportunity to avoid topics they were less confident in answering.

On the practical side, we would recommend having a separate laptop for recording, multiple timers in the room, and water and tissues to hand. In our experience, 15 minutes is sufficient time to evaluate the intended learning outcomes, so there is no need to rush or press students. It is important to note that creating a comfortable environment for IECs starts at the beginning of the semester, by fostering interaction in lectures and seminars and allowing space for students to express themselves. This assessment therefore relies in part on assessors being willing to be approachable and interactive with students throughout the course.

Potential challenges and resolutions

As with other forms of assessment, the IEC may require additional adjustments for students with approved reasonable adjustment plans. While we anticipated that this might present a considerable challenge, it did not. One of the standard adjustments, the provision of 25% extra time, could easily be accommodated by extending the IEC slot to 19 minutes (15 minutes plus 25%, rounded up). Students with this adjustment had the flexibility to use it, or not, as they required in the moment. Another accommodation was to have a wellbeing officer present in the room during the IECs of some students; this did not change the IEC in any other way. We believe that other accommodations are possible and can be designed in a bespoke way, in discussion with individual students and via the usual processes. In an extreme case, instead of providing an oral answer, students could be asked to provide a written response (in an extended timeframe) with the same question-selection procedure.

As with any non-anonymous assessment, bias is a primary concern. We considered different forms of bias: language, ethnicity, gender, presentation, body language, levels of confidence, and familiarity with students (those who attended sessions and engaged with lecturers versus those who did not). We tried to address this by putting various processes in place. Both lecturers were present in the room for all IECs, providing a gender-balanced panel, with one lecturer a native English speaker and the other not. Our pre-IEC conversation covered the biases to watch out for, and marks for each part were decided independently (and reconciled only once all IECs were completed) to avoid any undue influence on each other during the week. Rather than shying away from such an assessment, we did our best to evaluate students according to the marking criteria (which included the quality of presentation of ideas), acknowledging that an answer of the same quality can be delivered in very different ways.

Transferability and practical advice

Overall, while this type of assessment may not be suitable for all modules, we believe there is significant room for expanding the use of oral examinations in Economics programmes (and across UK higher education more widely). Economics aims to develop analytical skills alongside clear communication, the skills needed for policy advice, and the ability to think “on one’s feet” while drawing on prior knowledge; IECs offer a chance to assess this mixture of abilities. We believe they are particularly suitable for more advanced modules where depth of understanding is imperative. The assessment is also suitable for technical modules (such as Advanced Microeconomics, as colleagues at Birmingham have demonstrated), since a whiteboard or other materials can be offered to students for derivations.

In sum, we hope that our experience will encourage colleagues to explore (or re-discover) oral assessments, adding them to the range of assessment possibilities. They can be effective in addressing a number of challenges, from GenAI concerns to the need to boost employability.

References

Aricò, F. R. (2021). Evaluative conversations: Unlocking the power of viva voce assessment for undergraduate students. Assessment and feedback in a Post-Pandemic Era: A time for learning and inclusion, 47-56. https://insight.cumbria.ac.uk/id/eprint/6918/1/Sambell_ChangingAssessmentFor.pdf#page=50

Bayley, T., Maclean, K. D., & Weidner, T. (2024). Back to the Future: Implementing Large-Scale Oral Exams. Management Teaching Review. https://doi.org/10.1177/23792981241267744

Bower, M., Torrington, J., Lai, J. W., Petocz, P., & Alfano, M. (2024). How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence? Outcomes of the ChatGPT teacher survey. Education and Information Technologies, 29(12), 15403-15439. https://doi.org/10.1007/s10639-023-12405-0

Chaudhry, I. S., Sarwary, S. A. M., El Refae, G. A., & Chabchoub, H. (2023). Time to revisit existing student’s performance evaluation approach in higher education sector in a new era of ChatGPT—A case study. Cogent Education, 10(1), 2210461. https://doi.org/10.1080/2331186X.2023.2210461

Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: why structural assessment changes are needed for a time of GenAI. Assessment & Evaluation in Higher Education, 1-11. https://doi.org/10.1080/02602938.2025.2503964

Dawson, P., Bearman, M., Dollinger, M., & Boud, D. (2024). Validity matters more than cheating. Assessment & Evaluation in Higher Education, 49(7), 1005-1016. https://doi.org/10.1080/02602938.2024.2386662

Digital Education Council (2025). The Next Era of Assessment: A Global Review of AI in Assessment Design (in partnership with Pearson). The report can be requested via https://www.digitaleducationcouncil.com/

Hazen, H. (2020). Use of oral examinations to assess student learning in the social sciences. Journal of Geography in Higher Education, 44(4), 592-607. https://doi.org/10.1080/03098265.2020.1773418

Iannone, P., Czichowsky, C., & Ruf, J. (2020). The impact of high stakes oral performance assessment on students’ approaches to learning: a case study. Educational Studies in Mathematics, 103(3), 313-337. https://doi.org/10.1007/s10649-020-09937-4

Kaur, A., Noman, M., & Nordin, H. (2017). Inclusive assessment for linguistically diverse learners in higher education. Assessment & Evaluation in Higher Education, 42(5), 756-771. https://doi.org/10.1080/02602938.2016.1187250

Morrissett, I. (1958). An experiment with oral examinations. The Journal of Higher Education, 29(4), 185-190. https://doi.org/10.1080/00221546.1958.11776366

Rawls, J., Wilsker, A., & Rawls, R. S. (2015). Are You Talking to Me? On the Use of Oral Examinations in Undergraduate Business Courses. Journal of the Academy of Business Education, 16. https://research.ebsco.com/linkprocessor/plink?id=730ffd3e-7740-3657-8ec4-470d9c778a75

Reeves, C. (2025). “Universities face a reckoning on ChatGPT cheats”. The Guardian. https://www.theguardian.com/technology/2025/jun/17/universities-face-a-reckoning-on-chatgpt-cheats [Accessed 22 June 2025]

Sotiriadou, P., Logan, D., Daly, A., & Guest, R. (2020). The role of authentic assessment to preserve academic integrity and promote skill development and employability. Studies in Higher Education, 45(11), 2132-2148. https://doi.org/10.1080/03075079.2019.1582015

Suleymenova, K. (2024). An alternative to problem sets coursework: using hand-written annotations, students become markers. Ideas Bank, Economics Network. https://doi.org/10.53593/n3900a

Walstad, W. B. (2001). Improving assessment in university economics. The Journal of Economic Education, 32(3), 281-294. https://doi.org/10.1080/00220480109596109

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7
