In the first of the sessions David Hendry talked about and demonstrated the new PcGive 9.0 for Windows (see CHEER Volume 11 Issue 1 for a review of the package). This was followed by a presentation by Jurgen Doornik on his matrix programming language, Ox. The session was completed by Thomas Krichel, who talked about developments in the WoPEc project, which is dedicated to the electronic publication of working papers in economics. Again, turn to CHEER Volume 11 Issue 1 for an article by Krichel. Visit the UK web site or one of the mirror sites in the USA or Japan.
The second session had the theme "Innovations in Computer Based Learning in Economics". Ros O'Leary gave a presentation which provided an update on resources for economists on the Internet.
Ros began by giving a brief overview of the development of the Internet and in particular the World-Wide Web. She discussed and illustrated the use of search engines (such as Alta Vista and Yahoo!) and Information Gateways (such as WebEc). Unfortunately the telephone connection went down and she had to resort to a "Blue Peter" solution from the cache, but she carried it all off with such aplomb that it was hard to see the switch. Bill Goffe's Resources for Economists was of course also highlighted as a good place to start. Newspaper sites such as The Times Archive, Financial Times [Note: you have to be registered to use the FT web site] and the Electronic Telegraph were mentioned as good places to look for material on current issues of economic policy.
Next Ros looked at Biz/ed, which is "a comprehensive source of education centred business and economics quality information on a single world wide web site". Three particular items were pointed to: the mirror of the Penn World tables, the links to Economics Department Home Pages and the on-line glossary of economics terms.
Other sites highlighted were MIDAS [Now MIMAS - Web Editor], The Data Archive, The Journal of Business and Economic Statistics' ftp site, the Oxford Economic Growth site, The Institute for Fiscal Studies' site and the Fairmodel site.
Ros showed us David Demery's pages at Bristol [This is now at http://bris.ac.uk/Depts/Economics/unit_res/unit_res.htm- Web Editor] as an example of how lecturers are now using the web to communicate information to students about their courses, and she discussed the use of mail lists and discussion lists, mentioning some of the relevant lists organised through Mailbase at Newcastle.
In the other talk in this session Arnie Katz spoke about Tracking and Evaluating Computer Based Learning. Arnie, from the University of Pittsburgh, is Visiting Fellow at the Institute for Learning and Research Technology at Bristol. He is particularly interested in finding out how students learn with computer based material and has built into his software routines for tracking students as they work with it (see CHEER Volume 10 Issue 1 for details). He contrasted his approach to evaluating learning software (which is process oriented) with the more usual approaches of the analysis of student performances and assessments of the software after they have used it. He has collected a huge amount of data and believes that his results (which examine both consecutive errors and success patterns as students work with the program) show clear evidence of a learning curve for students using the software.
Arnie discussed the ideas of John Anderson of Carnegie Mellon University on the Theory of Skill Development, which he believes to be of relevance to Computer Based Learning. The theory distinguishes various stages of learning, from Novice to Expert, and places emphasis on learning by doing with feedback from experts. It makes a number of predictions, such as "repetition increases the speed of proceduralization" and "expert advice increases the accuracy of compilation", which can be tested with the Smithtown data.
Arnie encouraged other software authors to incorporate "rich recording facilities" (the latest version of WinEcon now does) and discussed some of the other problems he is having in analyzing his data. Because later generations of the software have introduced improvements and innovations, he faces a problem of analysing learning patterns in a changing environment - something which would appear to have wider applications in economics. He is keen to find econometricians who would be willing to work with him in developing models for this kind of situation.
The final session focused on independent reviews and demonstrations of the latest versions of three well-known pieces of software: STATA, STAMP and Microfit.
Simon Peters began by reviewing STATA 5.0. He started by asking why certain applied economics researchers prefer STATA to some of the other well-known alternatives (Limdep, SAS and SPSS). STATA runs on a variety of platforms and operating systems, from PCs running DOS and Windows, through Macs, to Crays such as the one at Manchester, where he is based. Datasets and code are portable across all these platforms. The PC version has a menu-driven interface, but essentially the program is command driven. He talked about the command language syntax and suggested that it is relatively easy for Masters-level students to learn, say by comparison with Gauss. There is on-line help, keyword lookup and an on-line interactive tutorial.
Simon gave some examples of the kinds of things that researchers do with STATA, working with large cross-section or panel data sets from the Data Archive. Users like STATA's data handling features: the program keeps a working database, and the Compress command makes saving information efficient. However, there is no equivalent of SAS's data warehousing. Users mainly work with probit, logit, multinomial logit and other models for dealing with discrete data.
A possible problem for applied economists wanting to work with the program is the lack of suitable references to turn to; the program was originally developed by epidemiologists and statisticians and most of the papers relating to STATA are in the statistics literature. Another concern is the accuracy of some routines. Some people have mentioned problems of convergence in the Heckman routine.
Because it is used mainly by statisticians the range of misspecification tests available is not always considered sufficient by economists, but it can do Wald and Likelihood Ratio tests. In addition the program's "linktest" is a form of RESET test.
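The idea behind such a link/RESET-style check can be sketched as follows. This is not Stata's actual implementation, just a hypothetical illustration (in Python with NumPy, using made-up data): re-estimate the model with the squared fitted values added as a regressor, and treat a large t-statistic on that extra term as evidence of functional-form misspecification.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients (lstsq for numerical stability)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def linktest_stat(X, y):
    """RESET-style check in the spirit of Stata's linktest: refit with the
    squared fitted values added and return the t-statistic on that extra
    regressor; a large |t| suggests the linear form is inadequate."""
    yhat = X @ ols(X, y)
    Xa = np.column_stack([X, yhat ** 2])      # augmented design matrix
    beta_a = ols(Xa, y)
    resid = y - Xa @ beta_a
    n, k = Xa.shape
    s2 = resid @ resid / (n - k)              # residual variance estimate
    cov = s2 * np.linalg.inv(Xa.T @ Xa)       # OLS covariance matrix
    return beta_a[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y_lin = 1 + 2 * x + rng.normal(size=500)            # correctly specified model
y_quad = 1 + 2 * x + x ** 2 + rng.normal(size=500)  # omitted nonlinearity
print(linktest_stat(X, y_lin), linktest_stat(X, y_quad))
```

On the misspecified data the statistic is large because the squared fitted values pick up the omitted quadratic term; on the correctly specified data it stays near zero.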
User support is good and there is a website, a mail list <firstname.lastname@example.org>, a technical bulletin/journal and an annual workshop. There is also an on-line course on using the program which can be taken on the Internet.
What is better in version 5 of the program? Among the improvements Simon mentioned are better and more transparent memory management, some new panel data estimation commands (for unbalanced panels) and improvements to duration model procedures. Although the manual still tends to be couched in "statistician-speak", it does contain more references to the econometrics literature.
In the second presentation of the session I looked at STAMP 5.0, the current version of the program for structural time series modelling produced by Andrew Harvey and his co-workers (most notably Siem Jan Koopman). I began by providing a brief introduction both to structural time series models and to the development of the STAMP program, from the first version, which appeared in the late 1980s (where Simon Peters was one of the co-authors), through to version 5, which came out in 1995. The current version has been designed (with the help of Jurgen Doornik) so that its menu structure and graphing facilities are the same as those of PcGive (the old DOS version). When the next version appears sometime later this year, as well as introducing some new procedures for dealing with weekly, daily and hourly data, missing observations and outlier/structural break detection etc., STAMP will run under Windows via the GiveWin front end. My talk concluded with a quick demonstration of how the program works, running through an example in which both a basic structural model and an ECM regression model with evolving seasonal patterns were fitted to quarterly energy data. Interested readers can find full reviews of STAMP 5.0 in CHEER Volume 9 Issue 3 and the July 1996 issue of the Economic Journal.
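For readers unfamiliar with structural time series models, the basic structural model of the kind fitted in such demonstrations decomposes a series into trend, seasonal and irregular components (the notation below follows Harvey's standard formulation, not the STAMP session itself):

```latex
y_t = \mu_t + \gamma_t + \varepsilon_t
      % observed series = trend + seasonal + irregular
\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t
      % local level with a stochastic slope
\beta_t = \beta_{t-1} + \zeta_t
      % the slope itself follows a random walk
\gamma_t = -\sum_{j=1}^{s-1} \gamma_{t-j} + \omega_t
      % stochastic seasonal (s = 4 for quarterly data)
```

where the disturbances $\varepsilon_t$, $\eta_t$, $\zeta_t$ and $\omega_t$ are mutually uncorrelated white-noise terms; setting their variances to zero recovers deterministic trend and seasonal components as special cases.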
Les completed the session with a review and demonstration of Microfit 4.0 for DOS (see also the review by Judge and Harris elsewhere in this issue of CHEER). He began by saying that he was disappointed that there was still no sign of the Windows version. He mentioned some of the new procedures and improvements in the latest version of the program, listing all the new single-equation and multi-equation techniques available, and discussing the improvements in data handling, graphics and the on-line hypertext Help. For most of the presentation the audience could see the program running, and Les took us through the various menus and screen boxes to show how easy it is to use. He said that he was surprised to see the program still offering Cochrane-Orcutt adjustments for autocorrelation and introducing a Hodrick-Prescott filter, both of which he felt were undesirable. The new range of GARCH procedures was easy to use and should prove especially popular with those working with financial data. However, in the part of the program for applied cointegration analysis some tests were not available, and he felt the program compared unfavourably with PcFiml.
The workshop was run by Professor Jose J Gonzalez from the Institute of Information Technology, Agder College, Norway (e-mail Jose.J.Gonzalez@hia.no). He began by looking at a fictitious company inventory simulator. Items within the model could be changed in order to run system simulations from projected "mental simulations". This was the standard model, used later on to introduce parameters which affected working/running patterns within the simulation, making it possible to run much more complex working simulations in this way. The user could project and run a model to see if it matched up with expected results or patterns.
Four windows were brought up on screen to view model data. A template non-working model, "ABC Manufacturing", was created. This involved creating the working blocks on screen and linking the various details together to form a static model in equilibrium. Later on, the idea was to "shock" the model in order to give it real-life characteristics and see how it reacted to the pressures of day-to-day input. The user is then able to analyse the content and consequences of their actions.
It is possible in Powersim to keep enhancing the model and to introduce new variables to fine-tune possible scenarios. In any given simulation stop times can be increased or decreased to give desired or actual production rates and possible consequences. This general model is then turned into a Simulator. Specific Project Models could be set up, with access restricted to those in a particular project only.
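The stock/flow logic described above can be sketched outside Powersim. The following toy inventory model is a hypothetical illustration in Python (the names and parameters are invented, and this is not Powersim's own modelling language): production replaces expected demand and closes a fraction of the inventory gap each period, and a demand shock knocks the stock away from its target before the adjustment loop pulls it back.

```python
def simulate_inventory(steps=40, target=100.0, adjust_time=4.0,
                       demand=10.0, shock_at=10, shocked_demand=15.0):
    """Toy stock/flow model: start in equilibrium (the static model),
    then shock demand at step `shock_at` and watch the response."""
    inventory = target          # stock starts at its target level
    expected = demand           # producer's smoothed demand forecast
    path = []
    for t in range(steps):
        d = shocked_demand if t >= shock_at else demand
        production = expected + (target - inventory) / adjust_time
        inventory += production - d               # stock change = inflow - outflow
        expected += (d - expected) / adjust_time  # adaptive expectations
        path.append(inventory)
    return path

path = simulate_inventory()
print(round(min(path), 1), round(path[-1], 1))  # dip after the shock, then recovery
```

The run shows the qualitative behaviour described in the workshop: inventory dips after the unanticipated demand jump, then the gap-closing rule restores it toward the target as expectations catch up.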
The flexibility of Powersim enables it to import and export files and to connect to other tools. Once a model has been built it can be brought in to run alongside others in parallel mode. Individuals in separate projects can work on elements in their own areas; these disparate pieces can then be brought together in a larger model.
The Powersim software lets the user design and create the visual elements that appear within the interface, which can itself be customised. Powersim supports the MDI (Multiple Document Interface) protocol. There is the potential for multimedia elements to be introduced - a model could run on a website or on a company intranet. It could be used to draw in interest for a company's work by involving a client. Models can be customised using ActiveX or Java.
One of the main attractions of this software is that it offers a good way of bringing new or intended items or people together before investing large sums of money. Plans can be tested before large amounts of capital are committed. Powersim enables those operating it to visualise projected business plans and allows the user to experience the consequences of their decision making.
Powersim can be used with Windows '95, 3.1 or NT. 4Mb of RAM is needed (8Mb recommended), along with 10-12Mb of free hard disk space. The Powersim web page may be found at www.powersim.no.
The basics of the GAUSS language were covered, leaving participants in a position to start writing their own programs. The workshop also outlined the full capabilities of the software, highlighted methods for tackling problems using GAUSS, and used specific code and examples. Topical worked examples in the assisted practicals were particularly helpful to the learning process, and group interaction helped to clarify ideas and formulate solutions to the problems. The accompanying comprehensive manual provided all of the information needed for the workshop, ranging from examples of code to how to write effective programs understandable to others. The workshop significantly shortened the time needed to become familiar with GAUSS and gave participants the confidence to delve further into the language to produce more technical programs. The workshop was suitable for anyone wanting to learn GAUSS.
The tutorial began with an introduction to the power of PcGive, exploring its facilities and highlighting the differences and improvements of this version compared with previous versions of PcGive (particularly the way in which the new GiveWin interface operates). The tutorial example was an extensive money demand model developed by the presenters (Doornik and Hendry), which aided understanding and highlighted the relevance of the package.
The latter part of the afternoon was spent developing the integration of PcFiml into the GiveWin framework. This final stage appeared to be a little rushed; however, at the end of the day the group was brought together and given a run-through of some of the facilities of PcFiml. Though some of the econometrics was advanced, the results were easily interpretable by those with a good understanding of the subject. To those without, the workshop offered very little help, but then it was assumed that anyone wanting to develop an understanding of the latest version of a package such as PcGive will have the skills necessary to find it useful. Most of those who attended the workshop appeared to take away with them the desire to work with GiveWin and PcFiml in the future.
The workshop focused primarily on introducing WinEcon and looked particularly at how to customise the student user interface using WinEcon Lecturer. While I was initially disappointed by this emphasis, no doubt through my own misconception of the workshop objectives, in retrospect the session turned out to be very useful, since we are now at the stage of investigating the delivery options controlled through WinEcon Lecturer. I was impressed by the high standard of presentation, advice and guidance offered by the presenter, Li Lin Cheah; then again, you would expect one of the senior programmers of WinEcon to be able to answer all questions about the interface and functionality. There was, however, an expectation that participants were highly computer literate, and some academics may struggle with the terminology used.
There was a sad absence of case studies illustrating how departments had implemented WinEcon. Such information and discussion would have been useful for establishing our own strategies for course design and management. The discussion time at the end of the workshop did go some way to meeting this requirement. However, it was clear that most of the participants were at very different stages of investigating the package and were generally looking for guidance rather than sharing their own experiences, and discussion tended to dwell on more specific niggles.
From a technical point of view, there was too little time for fielding questions on setting WinEcon up successfully on networks. For example, it might have been useful to explain the mechanism by which shared files (econdata.tbk and econlog.tbk) are used to "talk" between WinEcon Lecturer and the student WinEcon interface. This would help in establishing a strategy for how a department might operate as a whole when individual staff customise the sections, modules and tests within WinEcon that apply to their specific teaching areas. For instance, Lecturer A might use WinEcon Lecturer to change the modules and tests included in, say, Microeconomics, but Lecturer B, who teaches Macroeconomics, might want to exclude students from "seeing" the Microeconomics modules entirely. Without some prior collaboration, I envisage problems of both A and B trying to modify the econdata.tbk file with different settings, undoing each other's changes. There needs, therefore, to be some degree of planning and management over who is allowed to control which areas of the courseware. These operational issues are not dealt with in the workshop, and consequently each institution and department using WinEcon will need to discover such implementation complexities for themselves.
There are educational support issues that I feel the WinEcon workshop did not address. Of course, these may not have been in the objectives, but they are of the utmost importance if departments and individuals are to integrate WinEcon appropriately, effectively and successfully. Such issues address particularly the need for support with implementation of an academic curriculum design nature rather than just what buttons to press. Perhaps these comments are advocacy for the WinEcon team to produce guidelines and case studies in terms of both technical and curriculum aspects of implementing WinEcon.
Note: In response to feedback from the programme of WinEcon workshops, the WinEcon team are now looking proactively to collect case studies to share with users, both via the web and in future workshops. For further details please read the short piece below by Simon Price, of the TLTP Economics Consortium. There will also be an opportunity to input into the best route ahead for WinEcon at the CALECO 97 conference in Bristol on 25th and 26th September.
The Consortium's recent series of regional workshops entitled "Implementing WinEcon at Your University", for practical reasons, focused on the technical skills and the knowledge required for implementation. However, the most frequent request appearing on the workshop feedback forms was to hear more about the experiences of people who had implemented WinEcon. In particular, participants were seeking information on a variety of implementation methods so that they could select the most appropriate for their institution.
A few evaluation and implementation publications are listed on the WinEcon Web site or have been published in CHEER. However, the Consortium would very much like to increase the breadth and depth of this coverage by collaborating with CHEER in a call for articles, papers and letters on the implementation of WinEcon. Real-world experiences are sought - both good and bad - so please respond and enable others to repeat successes and avoid repeating mistakes.
The workshop began with a brief introduction to Ox from Jurgen, who explained that there are actually three ways of running Ox source code: at a 32-bit Windows command line (oxlw.exe), at an MS-DOS command line (oxl.exe), and within a Windows dialog interacting with GiveWin (oxrun.exe). Using a simple example program he described the basic syntax of the language and indicated some of the benefits of working with it, particularly for running Monte Carlo simulations. Much of the rest of the day was spent at the computer obtaining hands-on experience of working through the examples and exercises provided, with additional guidance from Jurgen. From time to time we would stop and gather round for Jurgen to tell us more about Ox's features and functions.
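To give a flavour of the sort of Monte Carlo experiment Ox is typically used for, here is a hypothetical sketch (in Python rather than Ox, with made-up parameters): simulate a model many times, estimate it on each replication, and summarise the sampling distribution of the estimator.

```python
import random

def monte_carlo_slope(reps=2000, n=50, beta=2.0, seed=1):
    """Repeatedly simulate y = beta*x + e and estimate the slope by OLS
    (through the origin); return the Monte Carlo mean of the estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [beta * xi + rng.gauss(0, 1) for xi in x]
        sxx = sum(xi * xi for xi in x)
        sxy = sum(xi * yi for xi, yi in zip(x, y))
        estimates.append(sxy / sxx)       # OLS slope estimate for this draw
    return sum(estimates) / reps

print(monte_carlo_slope())  # should lie close to the true beta of 2.0
```

In Ox the same loop would be vectorised over matrices, which is precisely where the speed gains over an interpreted scalar loop come from.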
Ox does not come with an editor, although one is under development (OxEdit) and we were able to work with it on the Oxford computers. In fact, when we first sat down at the computer screens we had to spend some time setting up the program because some of the necessary files were not in the correct directories. Participants' familiarity with Windows '95 was rather varied, and for a short while the workshop became one in Windows '95 rather than Ox, and we got behind schedule. Despite this setback (which did not appear to be Jurgen's fault) participants managed to make good progress during the day, and Jurgen answered all questions with clarity and good humour. Although a workshop like this could not make someone into an Ox programmer overnight (practice is the only way!), it succeeded in its objective of giving participants a flavour of the way that Ox works and an incentive to spend the time necessary to become proficient in its use.