Economics Network IREE Virtual Edition

Using Writing to Enhance Student Learning in Undergraduate Economics

Steven A. Greenlaw
International Review of Economics Education, volume 1, issue 1 (2003), pp. 61-70


Abstract

Traditionally, Principles of Economics has been taught as a lecture class. Recent literature on pedagogy suggests that students learn more from an ‘active learning’ approach, which engages students in ways that lectures often do not. One method of promoting active learning is to incorporate student writing in the Principles course. To test this hypothesis, I taught two sections of macroeconomic principles, which were identical except that one included a series of writing assignments, while the other did not. The examinations for both sections were the same. I assessed the experiment using several measures and concluded that the writing-augmented section showed greater learning.

JEL Classification: A22

Introduction

Students often view education as a passive process, where instructors are the sources of knowledge, which will be dispensed to them if they simply attend class and take copious notes. Lecture‑based introductory courses seem to reinforce this notion. In contrast, the literature on general education and economics education suggests that active or participatory learning is more productive than passive learning. Hamlin and Janssen (1987) observe:

The concept of active learning is simple: rather than the teacher presenting facts to the students, the students play an active role in learning by exploring issues and ideas under the guidance of the instructor . . . Instead of memorizing, and being mesmerized by, a set of often loosely connected facts, the student learns a way of thinking, asking questions, searching for answers, and interpreting observations.

Writing to learn

One method of promoting active learning is with student writing, especially when instructors apply the principles of the ‘Writing Across the Curriculum’ (WAC) approach.[1] WAC views writing not simply as a product that reports ideas or summarises knowledge, but rather as a process that generates ideas and creates knowledge. From this perspective, sometimes called ‘Writing to Learn’, the purpose of using writing to teach economics is not primarily to make students better writers, but rather to give students a tool to learn economics better.

Students sometimes say, ‘I know it, but I just can’t explain it.’ But if you can’t explain an economic idea, you only know it at a superficial level. Writing forces you to think concretely, to figure out exactly what you mean.[2] When you write, any holes in the logic become readily apparent. In the context of WAC, writing is a tool of discovery,[3] a way of working through ideas that you don’t fully understand. In other words, writing is a positive-sum game. When you write, you don’t merely put down what you already know; rather you end up knowing more.

From 1990 to 1995, I taught Principles of Macroeconomics using this writing-intensive approach. Each term, I asked students to write eight to ten short papers, one on each of the major topics in the course. The writing assignments included reflective essays, as well as more technical assignments involving formal economic analysis. Over the course of the semester, the assignments became progressively more complex.[4] Early on they might simply ask for an opinion; later they asked students to summarise the ideas of an author or to explain the prediction of an economic model. Ultimately they asked students to apply an appropriate model to some specific economic situation or issue, and to evaluate the results. The following are several examples of assignments I have used:

Assignments such as these provide a basis for productive class discussions. Because of the reading and writing done before class, each student has something to say. The pedagogy works very well, though the instructor’s workload is heavy.

Investigating the writing-intensive approach

During the spring 1997 semester, I was asked to teach an extra section of Principles, which I did in the traditional, non‑writing-intensive way. For the first half of the semester, I loved it. I did not have to do all the work associated with the writing-intensive version. I simply showed up to class and talked about topics I know well and enjoy. As the semester progressed, however, I became disenchanted. Students did not participate as much as I had been used to. I sensed that the course was less effective when taught without the writing-intensive approach.

To investigate this further, I looked into the literature on economics pedagogy. While a number of authors (as noted above) have linked writing in undergraduate economics with learning, no established methodology exists for testing that link. Indeed, Hansen (1998, p. 83) observes, ‘Unfortunately, no body of research findings has yet emerged on the impact of writing on learning in economics.’

To test formally my hypothesis that a writing-intensive introductory course is more effective, I designed an action research project, which I carried out during the fall 1997 semester. Action research refers to research done by educators using the classroom as their laboratory.[5] The primary purpose of action research is to investigate issues of immediate concern and to incorporate the results into future teaching, rather than to develop generalisable results for the profession. The findings of action research can be much more reliably applied, since they come from the same environment in which the research was performed, in contrast to research done in a very different context.

The project involved my teaching two sections of Principles of Macroeconomics. Both sections were taught using essentially the same format, except that one section was taught as a writing-intensive (WI) course, while the other (the control group) was not. The WI section had 23 students, while the non-WI section had 31. They used the same texts and the same examinations. The course syllabuses were identical except for the weights on the course requirements: for the WI section, the mid‑term examinations were weighted less, so that credit could be given for the writing assignments. The final examination was given the same weight for both sections.

To analyse the effectiveness of each approach, I examined evidence from four sources: an attitude survey, the SIR‑II course evaluation instrument, written course evaluations and the course examinations.

1 Attitudes towards the subject

Typically, students develop a more favourable attitude towards economics as a result of taking the Principles course. Soper and Walstad (1983) have developed a nationally normed survey, which has been used fairly widely to assess attitudes towards economics. I gave this survey to both sections, first at the beginning of the semester and then again at the end. Student interest is known to be positively correlated with achievement, so if the WI approach were more effective for promoting student learning, one would expect to see a greater improvement in attitudes for that section.

The evidence, however, did not support the hypothesis. There was no significant difference between the aggregate scores of the two sections on the attitude survey. I can think of two possible explanations for this result. Perhaps attitudes are influenced more by the subject or professor than by the course format. Alternatively, since the post-test was given in conjunction with the final exam, but with no credit per se, perhaps students didn’t take it as seriously as one might have hoped.

2 Student course evaluations

The second piece of evidence I examined was each section’s responses to the 40-question SIR-II instrument, which our institution uses for student evaluation of courses. While not all of the questions on the SIR-II are relevant to my hypothesis, a number of them are. One would expect that students who studied and put more effort into the course (Q34), or who prepared for class meetings by completing the written assignments (Q35), would learn more in the course. These points might be reflected by the students in the WI section rating the course higher on these questions. Similarly, if the writing assignments enhanced student learning, one might expect the WI section to rate the course higher on questions about the helpfulness of assignments (Q21), the extent to which student learning increased in the course (Q29), and the extent to which students believed they had made progress towards course outcomes (Q30).

To analyse the SIR‑II scores, I divided the survey results into three categories: those where the WI section’s scores were ‘noticeably’ higher than the control group’s; those where there was no noticeable difference between the two sections’ scores; and those where the WI section’s scores were noticeably lower.[6] Those categories are shown in Table 1.

Table 1 Comparison of SIR-II scores between WI and control class sections (items grouped into three categories: higher for the WI section; no noticeable difference between the WI and control sections; lower for the WI section)

On the majority of the SIR‑II items, there was no noticeable difference between the two sections. Roughly half of these were items on which one would expect no difference, such as the instructor’s command of the subject matter (Q3), the instructor’s way of summarising or emphasising important points in class (Q5), the instructor’s command of spoken English (Q7) and the information given to students about how they would be graded (Q16).

On the other half of these items, I had expected the WI section to score higher, but it did not. These items included making progress towards course outcomes (Q30), increasing students’ interest in the subject area (Q31), helping students to think independently (Q32) and involving students actively in what they were learning (Q33).

On about one-quarter of the SIR‑II items, the WI section’s scores were noticeably lower than those of the traditionally taught section. Most of these items, however, assessed factors that were, in fact, the same for both sections. Examples of these items were: clarity of exam questions (Q17), the exam’s coverage of important aspects of the course (Q18) and the overall quality of the textbook (Q20). As such, these scores are not relevant to assessing the hypothesis. Ironically, the WI section’s scores were lower on Q40, the overall evaluation of the course. I comment on this below.

On the remaining one-fifth of the SIR‑II items, the WI section’s scores were noticeably higher than those of the traditionally taught section. These items included the instructor’s effective use of examples to clarify course material (Q8), effective use of challenging questions (Q9), and helpfulness and responsiveness to students (Q11). More importantly, the WI section’s scores were also higher on three key issues: students studying and putting effort into the course (Q34), students preparing for each class (Q35) and students’ learning increasing (Q29). In addition, the SIR‑II scores indicated that the workload was heavier for the WI section (Q38) and the pace of the course was somewhat slower (Q37), both of which were expected.

The discrepancies in the SIR‑II scores, discussed above, may reflect several factors. It is not uncommon for instructors to have a more favourable experience with one section of a course than another. For whatever reason, I enjoyed teaching the students in the control group more than those in the WI section. The former group and I seemed to get along better. To the extent that the SIR‑II scores, especially Q40, reflect how much students ‘like’ the teacher, that may explain why some scores were not noticeably higher for the WI section. Additionally, there is a great deal of anecdotal evidence that WI courses tend to get lower course evaluations than non-WI courses.[7] The reasons are not clear, but it may be because students are uncomfortable about having their writing criticised, and may take it personally. This may particularly explain the results for Q12, respect for students, and Q15, willingness to listen to student questions. On balance, while the evidence from the SIR-II scores cannot be assessed with formal statistical tests, it is at least suggestive support for the hypothesis of this research.

3 Written student comments

The two class sections were also asked to respond in prose to three supplementary questions on the SIR‑II. They were: What did you like about the course? What did you dislike? What suggestions can you make for improvement? The written comments were similar for the two sections. Both were generally favourable towards the course. Both liked the format, but disliked the examinations. The differences between the two sections were subtler. There was a higher response rate from the WI class. Also, the WI class wrote longer, more complex comments.

4 Course examinations

The last type of evidence was each class section’s scores on the course examinations. This evidence was more favourable to the hypothesis. I gave three exams during the term. If the students in the two classes were of equal ability, there should have been no significant difference in scores on the first exam, but a growing difference in favour of the WI section on subsequent exams. To examine this, I performed a t-test of differences in mean scores between the two sections on each of the three exams, the results of which are shown in Table 2.

Table 2 Comparison of examination scores between the WI section and the control group

                WI mean   Control group mean   Difference   t-score   Critical t (0.05)   p-value   Comment
First exam         74.5                 71.9          2.6     0.866               1.669      0.19   No significant difference
Second exam        79.6                 72.9          6.7     2.136               1.669      0.02   Significant at the 95% level
Final exam[8]      66.6                 59.8          6.8     2.412               1.669     0.009   Significant at the 99% level

As hypothesised, although the mean score on the first exam did not differ significantly between the two classes, the WI class scored significantly higher on the subsequent exams. The effect, nearly 7 points on a 100-point scale, was substantial. Furthermore, while the difference did not increase significantly between the second and final examinations, the p-values declined noticeably, giving greater statistical confidence that the effect is real. Ayersman (1999) observed that action research projects are unlikely to generate statistically significant results because of their generally small samples. Yet this one did, supporting my hypothesis.
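The t-scores in Table 2 are consistent with a standard two-sample test of difference in means. As a minimal sketch, the calculation below uses a pooled-variance t-statistic with the section sizes from the study (23 and 31); the standard deviations are hypothetical, since the article reports only the section means.

```python
from math import sqrt

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t-statistic using a pooled variance estimate.

    Returns (t, degrees_of_freedom)."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se, df

# Second-exam means from Table 2; the standard deviations (11.0, 11.5)
# are illustrative assumptions, not values reported in the article.
t, df = pooled_t(79.6, 11.0, 23, 72.9, 11.5, 31)
# Compare t against the one-tailed critical value for df degrees of freedom
# to decide whether the WI section's mean is significantly higher.
```

With 23 and 31 students, the test has 52 degrees of freedom; a one-tailed test at the 0.05 level is appropriate here because the hypothesis predicts the direction of the difference.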

Conclusions

This study set out to examine the hypothesis that intensive use of writing assignments can enhance learning in an introductory economics course, as compared to a traditional lecture-based approach. The evidence can be summarised as follows. The attitude survey showed no difference between the two approaches. The SIR‑II results were mixed, but generally in favour of the writing-intensive approach, as were the written comments. The examinations strongly supported the writing-intensive approach. Indeed, by the final examination, the average student in the writing-intensive class scored 2/3 of a letter grade higher (on a traditional 100-point scale, where each letter grade corresponds to 10 points) than the average student in the traditional course.

While the results were not unequivocal, on balance the evidence from this study supports the notion that writing enhances student learning, at least for Principles of Economics. I am persuaded not to return to the traditional lecture-based approach.

Contact details

Steven A. Greenlaw
Mary Washington College
Fredericksburg, VA 22401
USA

Tel: +1 (540) 654-1483
Fax: +1 (540) 654-1074
Email: sgreenla@umw.edu

References

Ayersman, D. J. (1999) Personal conversation.

Bean, J. C. (1996) Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom, San Francisco: Jossey-Bass.

Cohen, A. and Spencer, J. (1993) ‘Using writing across the curriculum: is taking the plunge worth it?’, Journal of Economic Education, vol. 24, pp. 219–30.

Hamlin, J. and Janssen, S. (1987) ‘Active learning in large introductory sociology courses’, Teaching Sociology, vol. 15, pp. 45–54.

Hansen, W. L. (1998) ‘Integrating the practice of writing into economics instruction’, in W. E. Becker and M. E. Watts (Eds), Teaching Economics to Undergraduates: Alternatives to Chalk and Talk, Cheltenham: Edward Elgar.

Knoblauch, C. H. and Brannon, L. (1983) ‘Writing as learning through the curriculum’, College English, vol. 45, pp. 465–74.

McCloskey, D. (2000) Economical Writing, Prospect Heights, IL: Waveland Press.

Myers, M. (1985) The Teacher-Researcher: How to Study Writing in the Classroom, San Francisco: Bay Area Writing Project.

Petr, J. L. (1990) ‘Student writing as a guide to student thinking’, in P. Saunders and W. Walstad (eds), The Principles of Economics Course: A Handbook for Instructors, New York: McGraw-Hill.

Reed, W. M., Ayersman, D. J. and Hoffman, N. E. (1997) ‘The story of a changing role: teacher research in action’, in N. Hoffman and W. Reed (eds), Lessons from Restructuring Experiences: Stories of Change in Professional Development Schools, Albany: SUNY Press.

Soper, J. and Walstad, W. (1983) ‘On measuring economic attitudes’, Journal of Economic Education, vol. 14, pp. 4–17.

Notes

[1] This point is stressed by Petr (1990). For several articles describing the theory and application of ‘Writing Across the Curriculum’, see the summer 1993 issue (vol. 24, no. 3) of the Journal of Economic Education, especially Cohen and Spencer.

[2] Several authors have highlighted the relationship between writing and thinking about a topic, most notably Petr (1990), Bean (1996) and McCloskey (2000).

[3] Knoblauch and Brannon (1983).

[4] Hansen (1998) argues that this is the preferred approach to incorporating writing into the undergraduate economics curriculum. ‘Students are called on to produce a series of short papers on well-defined and progressively more complex tasks. By receiving quick feedback on both their treatment of the subject matter and the skill they display in presenting the material, students are given opportunities to improve both their economic understanding and their ability to write with confidence and skill about economics’ (p. 84).

[5] I am indebted to David Ayersman for this definition of action research. See also Myers (1985) or Reed et al. (1997).

[6] By ‘noticeably’, I do not mean significant in a formal statistical sense. Since we only receive summary scores for each section, I did not have individual students’ scores, and so I was unable to assess statistical significance. I defined a noticeable difference as a minimum of 0.10 points difference between the scores of the two sections.

[7] This observation comes from Carol Manning, Director of the Writing Program at Mary Washington College.

[8] The first two exams were composed of 30 questions each, while the final exam had 60. Scores for each test were computed using a 100-point scale.
