Economics Network CHEER Virtual Edition

Volume 9, Issue 1, 1995

Controlled Experiments Using BOSSCAT

Alistair Dawson
Staffordshire University Business School

Herein I return to a theme which may be familiar to the reader, namely the use of computer teaching packages as experimental test-beds by undergraduate students. Rather than speculating about what might be done and how, this note is concerned with the performance of a group [about 250 strong] of second-year Business Studies and related students undertaking [one surmises] their first controlled experiments on a teaching package. This was the first occasion on which I have set such assignments with BOSSCAT, although Dawson (1990, 1992) describes similar tasks actually undertaken by students with CHANCELLOR-TWO.

The students were enrolled on a module in problem-solving, the first half of which was more philosophical in nature and concerned with the human aspects of problems and solutions. The second half was based on the idea of giving students specified tasks to perform on computer teaching packages - in the present case, to undertake virtual market research using BOSSCAT. The module features only group assessment - two assignments [the other a personnel case study], each weighted 50% - and no examination.

With one lecture and one computer workshop weekly per student for the second half only of a semester, the lectures had to compare and contrast types of social research method [for example, experiments with participant observation], introduce the structure of BOSSCAT, highlight the major implications of that structure for controlled experimenting, and draw attention to problems such as experimenter effects in real life. The manual discusses some of these issues and gives examples of experiments - but of course getting students to read manuals is no easy task. Supplementary hand-outs were provided in the hope of speeding up learning.

To keep the experiments simple yet not devoid of interest, and to avoid plagiaristic use of the manual, three basic types were to be undertaken [one per group] and a brief report presented as if for the owner of the business:

  1. to investigate the cross-price sensitivity of demand for both dolls
  2. to investigate the own-advertising sensitivity of demand for both dolls
  3. to investigate the cross-advertising sensitivity of demand for both dolls

In each case students were also to report on the degree of linearity of their estimates, and to contrast experimenting in BOSSCAT with conducting similar research in the real world. The experiments were sparsely delineated so as to avoid virtually completing the design on students' behalf. First, teams had to select their own "base run" [economic modellers' jargon for a "placebo"], and there is a low probability that two teams doing [say] the cross-price experiments will choose the same base. Likewise, each was left to decide the magnitude and direction of price variations about the placebo, and also how many variations to run. Finally, they could choose from several simple ways to test for linearity.
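By way of illustration - a sketch in Python, with invented numbers rather than genuine BOSSCAT output, and variable names of my own devising - the calculation a team might make looks something like this: each variation is compared with the same base run to yield an arc cross-price sensitivity, and a roughly constant slope across the variations serves as one simple linearity check.

    # Hypothetical illustration - not BOSSCAT output. Cross-price
    # sensitivity of demand for doll A with respect to the price of
    # doll B, each variation compared with the same base run.

    base_price_B = 10.0    # price of doll B in the base run
    base_sales_A = 500.0   # sales of doll A in the base run

    # Each run changes ONLY the price of doll B; every other control
    # stays at its base-run value.
    runs = [(8.0, 560.0), (9.0, 530.0), (11.0, 470.0), (12.0, 440.0)]

    for price_B, sales_A in runs:
        # Arc [mid-point] sensitivity relative to the base run.
        dq = (sales_A - base_sales_A) / ((sales_A + base_sales_A) / 2)
        dp = (price_B - base_price_B) / ((price_B + base_price_B) / 2)
        print(f"P_B = {price_B:5.2f}  sales of A = {sales_A:5.0f}  "
              f"sensitivity = {dq / dp:+.2f}")

    # One simple linearity check: if demand for A is linear in the
    # price of B, the slope [sales change per unit price change,
    # measured against the base] is constant across the variations.
    slopes = [(s - base_sales_A) / (p - base_price_B) for p, s in runs]
    print("slopes:", slopes)    # here all -30.0, i.e. exactly linear

Note that even with an exactly linear demand schedule [constant slopes] the arc sensitivities differ across variations, which is itself one of the points students could be expected to notice.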

Naturally the choice of a base proved one of the harder parts - any base would have done in which the team did not run out of finished goods stocks and so inadvertently observe supply rather than demand, and there are lots of such bases which can be constructed in BOSSCAT. It took a lot of persuasion to convince teams that, since they were actually in the position of a firm of well-informed consultants who had built and were now running a computer model [an unusually good one!] of a business, rather than running the firm itself, conventional considerations like profitability, cash flow and reserves were irrelevant when running the model - though not, of course, when running the firm. Consultants, they needed reminding, sell information and advice, not dolls.

Notwithstanding the desire to leave as much as possible for teams to discover about controlled experimenting, some "structure" needed to be imposed: (i) to ensure that the tasks set were equally difficult for all groups, (ii) to lighten the burden of grading and providing feedback, and (iii) to ensure that teams could not design experiments arriving at identical numerical values and so share data rather than generate their own.

Each team was given a unique task by the simple expedient of drawing up a grid of 16 combinations of values of the "game parameters" for each of the three experiment types listed above, making a maximum of 48 differentiated assignments - assuming that teams would consist of four or five players, there ought to be enough assignments to "go around". To avoid unduly complex tasks [in view of the limited time available to teams] the market multiplier was always set to unity [firms operating like pure monopolies]. The price-sensitivity and advertising-sensitivity took on combinations of the values 0.5, 0.83, 1.13 and 1.5. In all, 44 of the 48 tasks were undertaken.
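For concreteness, the grid can be written out mechanically. The sketch below [Python again, with descriptive labels of my own rather than BOSSCAT's actual parameter names] simply enumerates the 3 x 16 = 48 differentiated assignments.

    # Enumerating the assignment grid: 3 experiment types x 16
    # combinations of the two sensitivity parameters [the market
    # multiplier is held at unity throughout] = 48 distinct tasks.
    from itertools import product

    sensitivities = [0.5, 0.83, 1.13, 1.5]
    experiment_types = ["cross-price", "own-advertising",
                        "cross-advertising"]

    assignments = [(e, p, a)
                   for e in experiment_types
                   for p, a in product(sensitivities, sensitivities)]
    print(len(assignments))    # 48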

In lectures, in the manual and in workshops I stressed the importance of designing for maximum control, of not running out of stock, and of remembering that the students were in a position similar to that of the proprietors of an economic forecasting model: they could try even quite bizarre experiments without doing any harm to the firm. Only the firm itself had to worry about cash flow, profit and borrowings. Not quite everybody [accounting students especially] found it easy living with this notion.

How did they fare?

The table below indicates the two-way analysis between those who did [not] undertake a base run and those who did [not] obtain the qualitatively correct results. [Whether they presented them clearly is another point!]

                                        Use of a base run

                                        yes            no

  Qualitatively correct        yes      24             0
  results obtained             no       13 (a)         7 (b)

(a) Mostly due to stock-outs caused by insufficient labour or mis-allocation of labour between the dolls.

(b) Includes one account so unclear it is hard to say what went wrong and one attempt totally mis-directed - they played the business game.

Of the thirteen groups who started off in the correct style but went amiss along the way:

  1. three varied more than one control at a time [one of them also ran out of stock]
  2. seven had stock-outs and so were observing supply rather than demand - but one of these actually recognised the problem in the report
  3. four made no use whatsoever of their base run when trying to measure their experiments' effects, falling back on variations within single time series rather than comparing such series directly with the base run [see the sketch below].

Subset [3] was the hardest to comprehend. They had, after all, avoided the "obvious" errors committed by [1] and [2], and thus might have been expected to arrive at correct inferences.
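A small invented example may show what their approach loses. Suppose [purely hypothetically - these are not BOSSCAT figures] that the model contains some background movement in demand. Differences within the experimental series then mix that movement with the treatment effect, whereas differencing against the base run period by period isolates it:

    # Hypothetical numbers showing why within-series comparison
    # misleads. Background demand drifts upward by 10 units a period
    # whatever the team does to its controls.
    trend      = [0, 10, 20, 30]
    base_run   = [500 + t for t in trend]        # all controls at base values
    price_rise = [500 + t - 40 for t in trend]   # identical run, price raised

    # Within-series reading: sales "grew" by 10 a period, so the
    # price rise looks harmless.
    print([b - a for a, b in zip(price_rise, price_rise[1:])])  # [10, 10, 10]

    # Base-run reading: the price rise cost 40 units of sales in
    # every period - the controlled-experiment answer.
    print([e - b for e, b in zip(price_rise, base_run)])        # [-40, -40, -40]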

Subsequent conversation with a student offered a clue. Half her group had done the experimenting perfectly and had then given the output over to the rest to write the report. When the second half asked for assistance, she and her fellows were too busy completing assignments to contribute further.

Given that, of [say] a maximum of six one-hour supervised laboratory sessions, most groups used two just to familiarise themselves with the package; that most groups were not timetabled for identical slots in the laboratories; and that the Christmas vacation intervened just when they had got on top of BOSSCAT and were in the business of designing their experiments, they seem by and large to have done quite well. Will they carry the lessons learnt into life?

What have I learned? First, that "student-centred learning" [surprise, surprise] yet again needed more staff input than had been anticipated. Ideally one hands out the material, gives a lecture or two, and runs a few workshops at which students are soon on top of the job and one's presence is almost surplus to requirements; in due course the reports arrive for grading. It did not pan out that way: some teams were still only familiarising themselves with basic aspects of BOSSCAT at the third or fourth session. Second, that in this sort of exercise it might be sensible to put an upper bound on the number of experiments that teams are permitted to report. A few ran scores of them whereas a handful would have sufficed, as they might have realised had they thought more systematically from the outset.

Packages

BOSSCAT: A BUSINESS GAME, Harrison Macey, 217 Silver Road, Norwich, NR3 4TL.
CHANCELLORS ONE AND TWO, Martin Binks and Andrew Jennings, McGraw-Hill, 1986.
