
Extended Case Study: Peer Assessment in a Second Year Macroeconomics Unit

Introduction

In today’s competitive world, the quality of student education has assumed increasing importance (Hogg et al. 1999). Improved teaching methods and the development of professional skills are among the factors most likely to improve student quality. At the same time, seeing students become more interested and eager learners, developing their skills and, in essence, becoming more truly educated, can be an extremely rewarding experience for the instructor (Heppner and Johnston 1994).

Instructors wishing to improve the techniques of learning and evaluation need to keep in mind the professional skills, such as teamwork, that they must develop in their students. These skills are required if students are to function effectively as future employees in business or government work environments. Collaborative peer group learning is one method that develops such team skills, and several studies have shown that student learning and satisfaction improve significantly with peer assessment (Kwok et al. 2001, Weiler 2001, Ueltschy 2001).

Ueltschy (2001), in a study conducted at an AACSB-accredited midwestern university with 177 business students, both graduate and undergraduate, found that collaborative learning improves teaching and learning skills. The findings of that study show that the use of peer interactive technology in the classroom improves team-building skills, student participation, understanding and recall of important concepts, and the satisfaction of doing well in the course. In short, it improves the learning process itself.

According to Brown and Pendlebury (1992), peer assessment involves both giving and receiving assessment. In the work environment, staff constantly examine one another’s communication in meetings, written documents and dialogue. Businesses need people who can make effective use of collaboration tools, team dynamics and interpersonal communication, and companies are encouraged to realign incentives to reward those who participate in the growth of collaborative learning techniques (Weiler 2001).

This case study is organised as follows. Section 1 briefly describes the process of team learning and peer assessment. Section 2 examines the method used to evaluate the peer review process, while Section 3 addresses the data collection and analysis. Section 4 looks at the effects of the peer assessment on student outcomes, and concluding remarks are provided in Section 5.

Process of team learning and assessment

In the second semester of 2001, a team learning technique was used for the first time in the second year Macroeconomics tutorial classes in the School of Economics and Finance, Curtin University of Technology, Perth, Western Australia. Tutorial classes averaged around eighteen students, and during the first tutorial students were assigned to teams of three or four. A concurrent aim was to use those teams for a peer review process. Prior experience suggested that relatively small groups work best. The groups were formed by the tutor rather than by the students themselves, since self-selected groups tend to cluster along lines of friendship and ethnic background. Tutor-formed groups were seen as one way of ensuring that students from diverse national backgrounds gained practice in dealing with those whose nationality differed from their own, following Wilson and Shullery (2000), whose findings suggested positive experiences in classes where groups are formed by the tutor.

The team-learning model was designed to provide a flexible method of helping students to achieve the following objectives:

  • Building group cohesiveness
  • Promoting learning of essential concepts
  • Ensuring individual accountability
  • Teaching students the positive value of groups

In the unit in question, students were encouraged to apply economic modelling to real life situations. They attended a two-hour lecture and a one-hour tutorial each week. The tutorial work consisted of four or five problems and applications from a topic covered in the previous lecture. Traditionally, the tutorial consisted either of the whole class being asked to share their answers to the sequence of questions, or of one student per week being given responsibility for presenting their set of answers. More often than not, the first of these techniques resulted in sporadic answers from the few who had bothered to prepare, or in the brighter and more conscientious students dominating the responses. Under the second technique, again more often than not, the only student to prepare for the tutorial was the one responsible for presenting that week. In both cases the learning process was generally dysfunctional from both the student’s and the tutor’s point of view: most students contributed little, the tutor usually ended up supplying the correct answers, and the tutorial often became another lecture. Apart from the predictable apathy of students in response to these methods, and the sharp drop-off in tutorial attendance, the tutor generally ended up exhausted and frustrated by feeling obliged to do all of the work. Hence, while the general aim of the tutorial sessions was to learn important macroeconomic concepts and to build group cohesiveness and individual accountability (Hoger, 1998; Kadel and Keehner, 1995), the reality was often far removed from this goal.

The introduction of the team-based process, coupled with a substantial percentage of the final assessment being devoted to peer assessment of each individual’s contribution to the team (in terms of participation in the team and prior preparation for the tutorial session), was an attempt to improve attendance, to shift the focus from the tutor as instructor to the tutor as a learning resource, and to encourage a more student-centred learning process. Students would be able to hone their teamwork skills by discussing the work done by their peers and learning from each other.

In the first tutorial, students were briefed about the importance of team learning and were given an explanation of how the peer assessment would be carried out. Peer assessment sheets were provided to monitor the progress of others in the group (Appendix A). Two different methods were used by the two tutors responsible for the tutorials. One tutor assigned at least 30 minutes for group discussion of the tutorial questions. The tutor approached each group and asked a specific member if he or she would lead the group. The team leader was rotated every week. At the end of 30 minutes, the leader of the group would explain a particular tutorial solution to the entire class. This was done after receiving the group’s input. The tutor visited each group during class to answer questions and help students to work with each other effectively.

The second tutor simply worked with each team in turn, asking for answers to the set of questions and encouraging each member of the group to contribute. The available tutorial time was divided between the four or five groups, with no overall summary to the whole class. Each tutor, however, used the same scoring sheet, as set out in Appendix A. For each tutorial, a student who had prepared for class and participated well was assigned 3 marks, above-average preparation and participation was assigned 2 marks, an average level 1 mark, and absence 0 marks. The zero score for non-attendance was used to reinforce the notion that students need to be present to contribute to the group. In the unit under review, the peer review process comprised 15% of a student’s course grade, which, while it may be thought somewhat high, compares with other studies that have allotted as much as 20%.
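
The scoring rule above converts naturally into a small calculation. As a minimal sketch only, assuming that the weekly 0-3 peer scores are averaged across raters and across weeks and then scaled onto the 15 marks available (the aggregation actually used in the unit is set out in Appendix A), the Python below illustrates the arithmetic; the function name and the averaging rule are illustrative assumptions, not the unit's scheme.

# Illustrative sketch only: one plausible way to turn weekly 0-3 peer scores
# into a 15% course component. The averaging rule and the scaling are
# assumptions; the unit's actual aggregation is set out in Appendix A.

MAX_WEEKLY_SCORE = 3      # "prepared and participated well"
PEER_WEIGHT = 15.0        # percentage of the final course grade

def peer_component(weekly_scores_by_rater):
    """weekly_scores_by_rater: one inner list per tutorial week, holding the
    0-3 scores the student received from team-mates that week. Absent students
    simply receive 0 for that week."""
    weekly_means = [sum(scores) / len(scores) for scores in weekly_scores_by_rater]
    semester_mean = sum(weekly_means) / len(weekly_means)
    # Scale the 0-3 semester average onto the 15 marks available.
    return PEER_WEIGHT * semester_mean / MAX_WEEKLY_SCORE

# Example: a student scored by three team-mates over four tutorials (absent in week 3).
scores = [[3, 3, 2], [2, 2, 3], [0, 0, 0], [3, 2, 3]]
print(round(peer_component(scores), 1))   # prints 9.6 (out of 15)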

Methods

At the end of the fifteen-week semester, a questionnaire was administered to the students to evaluate their experience of the peer review process (Appendix B). The questionnaire comprised a combination of fixed-alternative and open-ended questions. Open-ended questions tend to be more personal and involving: students are better able to explain, elaborate and refine the meaning of their comments, and they can get to the heart of the matter more readily (Heppner and Johnston 1994). Questions 3, 5, 9 and 10 were of this open-ended type and dealt with reasons for liking or disliking the peer assessment, comments on fair or unfair assessment, how students actually evaluated their peers and, lastly, a request for further comments.

Fixed-alternative questions, on the other hand, are easier for the respondent to answer; they also make answers comparable and facilitate coding, tabulating and interpreting the data. Moreover, the use of structured questions helps to reduce any biases arising from the influence of the question designer. Questions 1, 2, 4, 6, 7 and 8 were structured questions. The sample size of 84 represented about 77% of the students who remained in the unit up until the week the questionnaire was administered.

Data and analysis

The following analysis addresses the issues only at an aggregate student level. For the purpose of this case study it was not thought necessary to examine whether the questionnaire responses varied systematically by sex, country of origin or any other possible grouping (fairly unsophisticated chi-square tests, however, suggested little systematic difference across these groupings).
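
For readers interested in the kind of check referred to above, a minimal sketch of a chi-square test of independence between a grouping variable and the Question 1 responses is given below. The counts are entirely hypothetical (the case study does not report responses broken down by group), and collapsing the five response categories to three is an illustrative choice to keep expected cell counts reasonable for a sample of 84.

# Sketch of the kind of "unsophisticated" chi-square test mentioned above:
# independence between a grouping variable (here, sex) and the Question 1
# responses. The counts are hypothetical, not the survey data.
from scipy.stats import chi2_contingency

# Rows: female, male; columns: support (strongly support + support),
# neutral, oppose (oppose + strongly oppose).
observed = [
    [24, 9, 7],   # hypothetical female counts
    [27, 9, 8],   # hypothetical male counts
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.2f}")
# A large p-value would be consistent with "little systematic difference"
# between the groups.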

Students’ general judgment on peer assessment

Question 1 asked students what they thought of peer assessment in general. A five-point scale, where 1 represented ‘strongly support’ and 5 represented ‘strongly oppose’, was used to measure the extent to which they approved or disapproved of peer assessment. Table 1 shows the responses to this question. With just over 60% in either the supporting or strongly supporting category, there is clearly substantial student support in general for the use of peer assessment as a component of total assessment.

Table 1: General Judgment on Peer Assessment

Response Number* Percent (%)
Strongly support peer assessment 18 21.4
Support peer assessment 33 39.3
Neutral 18 21.4
Oppose peer assessment 11 13.1
Strongly oppose peer assessment 4 4.8
Total 84 100.0

*Number of responses

Students’ judgment on peer assessment in the unit under review

Question 2 was a similar question but referred specifically to the unit under review.

The distribution of responses varied little from that in Table 1, suggesting that the students saw no peculiarities in the actual process administered in the unit under review.

Reasons for liking the peer review process

Table 2 summarises the reasons students gave for liking the peer review process. For ease of interpretation, the many varied responses were recoded into the four categories shown.

Table 2: Reasons for Liking PRP**

Reason Percent (%)
Improves communication, participation and group skills 60.4
Improves preparation 3.8
Rewards work, easy marks 26.4
Other 9.4
Total 100.0

** PRP refers to Peer Review Process

The reason nominated by just over 60% of students for liking the peer review process was that it encouraged them to improve their communication skills and their ability to work participatively in a team environment. A small percentage responded that their main reason was that it specifically improved their level of preparation for the tutorials. Roughly one quarter liked the process because it rewarded those who did the work; within that category, a couple of students commented that they liked the process because it supplied some easy marks. Clearly, students perceived the same set of advantages in this method of assessment and the concurrent team approach as that intended by the instructors who introduced the innovation.

Students were also asked what they did not like about the process. Of the 84 students who responded to the survey, 34 (40.5%) supplied an answer to this question. Among the most common reasons given for disliking the process were its perceived subjective nature and its potential to alienate friends. On the question of whether peer assessment is highly subjective, it could be argued that it is in fact more reliable than a single tutor’s assessment of teamwork, since it involves more than one person examining the work done; these repeated measures may make peer assessment more reliable than single or double marking by tutors. Nevertheless, the process under review included a provision for the tutor to monitor the assessments and, if a cartel was suspected, to make appropriate adjustments. The use of the spreadsheet (Appendix A) can help students to evaluate their peers’ performance more accurately and with minimal discrepancy (73.5% of students who responded said they made use of the spreadsheet; 26.5% said they did not). As it turned out, very few adjustments of the peer assessments by the tutors were judged necessary.

Perception of the fairness of the peer review process

In response to this question, approximately 92% of the students responding to the survey believed they had been treated fairly by their fellow team members. Of the few who perceived they had been treated unfairly, the main complaint was that team members who had frequently been absent were not in a proper position to judge their overall level of participation in the team.

Was the 15% the appropriate allocation?

About 65% of the students felt that 15% was the right weighting for the peer assessment, roughly 33% felt it was too much, and only 2.4% felt it was too little. From the unit designer’s point of view, this is an important consideration: too high an allocation of marks and the integrity of the assessment procedure is called into question; too low a percentage and there is little incentive for the student to participate actively.

Did the peer assessment process encourage the student to…?

  1. attend the tutorials more regularly (82% said yes, about 11% said no, with around 7% unable to decide);
  2. prepare more thoroughly for the tutorials (80.5% said yes, 13.4% said no, with 6.1% unable to decide);
  3. discuss the answers more with your colleagues (82.7% said yes, 13.6% said no, 3.7% could not decide);
  4. help develop your teamwork skills (73.5% said yes, 22% said no, with 6.1% unable to decide).

Generally, then, students perceived the process as providing an incentive to attend the tutorials, to prepare more thoroughly, and to interact with members of their team.

Additional comments on the process

Approximately 50% of the respondents to the survey supplied an additional comment. These were many and varied. Among those with enough similar responses to group together were comments that the process was a success and should be continued, that assigning marks to peers is difficult, and that the process was interesting, interactive and innovative. On the negative side, several comments were along the lines that it is the tutor’s job, not that of other students, to assess student performance, that the process was a waste of time, and that the job of the tutor is to teach rather than to be a student ‘resource’.

The effect of the peer assessment on student outcomes

During week ten of the semester under consideration, a general review of student opinion of the unit was undertaken. Since the same review was undertaken in the corresponding semester a year earlier, in 2000, and since the tutors and unit syllabus remained basically unchanged across those two semesters, it is possible to make some tentative comparisons of overall student outcomes under the two regimes. There was, however, a difference in the lecturing arrangements: in 2000 the lectures were shared equally between two lecturers, whereas in 2001 there was only one. In the Semester 2, 2000 assessment scheme there was no assessment attached to the tutorials; the 15% allocated to peer assessment in 2001 was instead spread across a higher essay mark (20% instead of 15%) and a higher percentage for the final examination (60% instead of 50%).

A student assessment tool called the Unit Experience Questionnaire, developed by the Curtin Business School, was administered to the students in October 2001. The questionnaire contains a good teaching scale, a clear goals and standards scale, an appropriate workload scale, an appropriate assessment scale and an overall satisfaction scale. The elements which make up these scales are supplied in Appendix C. Figure 1 below summarises the responses for the two semesters, Semester 2, 2000 and Semester 2, 2001.

Figure 1 Comparison of Unit Experience Questionnaire Results
Semester 2, 2000 and Semester 2, 2001

Image: GVC Extended Case Study-1.jpg

2001 UEQ = results of Unit Experience Questionnaire for 2001 Curtin students in the unit Economics (Macro) 202. N = 59

2000 UEQ = results of Unit Experience Questionnaire for 2000 Curtin students in the unit Economics (Macro) 202. N = 67

2000 CEQ = results of Course Experience Questionnaire for 1999 Curtin graduates from the School of Economics & Finance. N = 82

Note: any comparisons between the UEQ and Curtin CEQ data should be made with care, given the differences in questionnaire focus, level of students, year group of students, nature of data (unit vs course) and the low response rate of the CEQ.

The good teaching scale may have been affected by the differences in lecturing arrangements across the two semesters, but apart from that difference there is a high degree of similarity across the other scales (with the possible exception of the clear goals and standards scale, which is difficult to interpret). The appropriate workload scale, the appropriate assessment scale and the overall satisfaction scale, where differences across the two semesters might have been expected to show up, are all practically identical. Presumably the introduction of the substantial assessment for peer review, and the substantial changes to the operation of the tutorials, did not change students’ overall evaluation of how the unit ran.

There is, however, one further piece of evidence which may throw a different light on these conclusions. Table 3 below shows the distribution of final marks in the unit for the two semesters. The standout feature of the table is the markedly higher percentage of students scoring high marks in 2001: while 5% of students scored in the 80s or 90s in Semester 2, 2000, in Semester 2, 2001 this percentage had risen to 15.6%. It is possible that this reflects the influence of the team skills and peer assessment, and it may reflect the positive responses students gave regarding increased motivation to attend tutorials, to prepare more thoroughly and, in general, to adopt a more learning-centred attitude than in the past. Alternatively, of course, it may simply reflect a higher achieving cohort of students. However, analysis of some additional data available for the same basic group of students in other similar economics and finance units (details not reported here) did not suggest that this group presented as a higher achieving cohort than the norm. It is also possible that the peer assessment marks were easier to earn than marks under the former assessment regime; the general impression of the tutors and lecturers involved in the new approach was that the size of the peer assessment component and the final distribution of those marks would not support such a conclusion. It must be acknowledged, however, that in the absence of a control group of students assessed under the previous regime, no definitive conclusion can be drawn on this issue.

Table 3: Distribution of Final Marks for Unit

Marks Range Semester 2, 2000 (%) Semester 2, 2001 (%)
90-99 0.0 1.2
80-89 5.0 14.4
70-79 24.7 25.9
60-69 32.1 25.3
50-59 22.2 20.5
49 or below 16.0 12.7
Total 100.0 100.0
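
As a rough illustration of how the shift at the top of the marks distribution could be assessed, the sketch below applies a two-proportion z-test to the share of students scoring 80 or above (5.0% in 2000 versus 15.6% in 2001, from Table 3). The cohort sizes are assumptions, since the case study does not report them, so the result is illustrative only and, as noted above, cannot separate the peer assessment effect from cohort differences.

# Rough illustration: is the rise in the share of students scoring 80+ (5.0%
# in 2000 vs 15.6% in 2001, from Table 3) larger than sampling noise alone
# might produce? The cohort sizes below are assumptions, so the result is
# purely illustrative.
from math import sqrt
from statistics import NormalDist

n_2000, n_2001 = 81, 83            # hypothetical cohort sizes
p_2000, p_2001 = 0.050, 0.156      # share scoring 80 or above (Table 3)

# Two-proportion z-test with a pooled proportion.
pooled = (p_2000 * n_2000 + p_2001 * n_2001) / (n_2000 + n_2001)
se = sqrt(pooled * (1 - pooled) * (1 / n_2000 + 1 / n_2001))
z = (p_2001 - p_2000) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")
# Even a significant result cannot separate the peer-assessment effect from
# cohort differences, as the discussion above notes.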

Conclusion

This case study has reported an evaluation of a change to the assessment system in a second year economics unit, incorporating a reasonably significant percentage of assessment for peer review and, alongside it, a move from a tutor-centred tutorial regime to a team-oriented, more student-centred learning environment. Although far from conclusive, the evidence presented is encouraging for the change. Students appear to have adopted more appropriate learning strategies, and the tutors are decidedly happier, with a substantial reduction in the pressure generally associated with a tutorial system that discouraged student participation and indirectly encouraged tutors to do most of the work.

Appendices

Bibliography

Becker, W. E. and M. Watts (1999). “The state of economic education: How departments of economics evaluate teaching.” The American Economic Review 89(2): 344-349.

Brown, G. and M. Pendlebury (1992). Assessing active learning. Effective Learning and Teaching in Higher Education Module 11. P. Cryer. Sheffield, CVCP Universities' Staff Development and Training Unit. Part 1: 79-82.

Calegari, M. J., G. G. Geisler, et al. (1999). “Implementing teaching portfolios and peer reviews in tax courses.” The Journal of the American Taxation Association 21(2): 95-107.

Heppner, P. P. and J. A. Johnston (1994). “Peer consultation: Faculty and students working together to improve teaching.” Journal of Counseling and Development 72(5): 492-500.

Hoger, E. A. (1998). “A portfolio assignment for analyzing business communications.” Business Communication Quarterly 61(3): 64-66.

Hogg, R. V., H. J. Newton, et al. (1999). “Let's use CQI in our statistics programs / discussion / reply.” The American Statistician 53(1): 7-28.

Kadel, S. and J. Keehner (1995). Collaborative Learning: A Sourcebook for Higher Education, The Pennsylvania State University, National Centre on Postsecondary Teaching, Learning and Assessment.

Kwok, R. C. W., J. Ma, et al. (2001). “Collaborative assessment in education: An application of a fuzzy GSS.” Information & Management 39(3): 243-253.

Salemi, M. K., J. J. Siegfried, et al. (2001). “Research in economic education: Five new initiatives.” The American Economic Review 91(2): 440-445.

Ueltschy, L. C. (2001). “An exploratory study of integrating interactive technology into the marketing curriculum.” Journal of Marketing Education 23(1): 63-72.

Weiler, R. K. (2001). “How to sharpen virtual business.” Information Week (863): 132-133.

Wilson, B. and N. Shullery (2000). “Rotating responsibility reaps rewards.” Business Communication Quarterly 63(2): 68-72.

Contact details

Geoffrey Crockett
Dean, CBS Operations
Curtin University of Technology
GPO Box U1987
Perth Western Australia 6845
Phone: (+618) 9266 4090
Fax: (+618) 9266 2378
Email: crockettg at cbs.curtin.edu.au
