2.3 Analysing the results of questionnaires
I shall assume that the questionnaires were completed and submitted for analysis in paper form. Online questionnaires are discussed in section 4.1. Here is a summary of the key stages in the process of analysing the data with useful tips – more extensive discussion follows:
- Prepare a simple grid to collate the data provided in the questionnaires.
- Design a simple coding system – careful design of questions and the form that answers take can simplify this process considerably.
- It is relatively straightforward to code closed questions. For example, if answers are ranked according to a numerical scale, you will probably use the same scale as code.
- To evaluate open questions, review responses and try to categorise them into a sufficiently small set of broad categories, which may then be coded. (There is an example of this below.)
- Enter data on to the grid.
- Calculate the proportion of respondents answering for each category of each question.
- Many institutions calculate averages and standard deviations for ranked questions. Statistically, this is not necessarily a very sound approach (see the discussion on ‘evaluating data’ below).
- If your data allow you to explore relationships in the data – for example, between the perceived difficulties that students experience with the course and the degree programme to which they are attached – a simple Chi-squared test may be appropriate.
- For a review of this test and an example, see Munn and Drever (1999) and Burns (2000) – the page references are indexed.
- You may wish to pool responses to a number of related questions. In this case, answers must conform to a consistent numerical code, and it is often best simply to sum the scores over questions, rather than compute an average score.
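The Chi-squared test mentioned above can be computed by hand from a contingency table. The sketch below uses invented counts (students reporting difficulty with the course, split across two hypothetical degree programmes) purely for illustration; the statistic is compared against the standard 5% critical value for one degree of freedom.

```python
# Hypothetical contingency table: counts of students reporting difficulty,
# broken down by degree programme. All figures are invented for illustration.
observed = {
    ("A", "difficulty"): 30, ("A", "no difficulty"): 20,
    ("B", "difficulty"): 10, ("B", "no difficulty"): 40,
}

programmes = ["A", "B"]
answers = ["difficulty", "no difficulty"]

n = sum(observed.values())
row_totals = {a: sum(observed[(p, a)] for p in programmes) for a in answers}
col_totals = {p: sum(observed[(p, a)] for a in answers) for p in programmes}

# Chi-squared statistic: sum over cells of (observed - expected)^2 / expected,
# where the expected count assumes programme and difficulty are independent.
chi2 = 0.0
for p in programmes:
    for a in answers:
        expected = col_totals[p] * row_totals[a] / n
        chi2 += (observed[(p, a)] - expected) ** 2 / expected

# Critical value for 1 degree of freedom at the 5% significance level
CRITICAL_5PCT_DF1 = 3.841
print(f"chi-squared = {chi2:.2f}, significant: {chi2 > CRITICAL_5PCT_DF1}")
```

With these figures the statistic comfortably exceeds the critical value, suggesting a relationship between programme and perceived difficulty; in practice, consult the texts cited above before drawing conclusions.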
Preparing a grid
You will have a large number of paper questionnaires. To make it easier to interpret and store the responses, it is best to transfer data on to a single grid, which should comprise no more than two or three sheets depending on the number of questions and student respondents. A typical grid allocates one row to each respondent and one column to each question.
If the answers to a question are represented on the questionnaire as points on a scale from 1 to 5, usually you will enter these numbers directly into the grid. If the answers take a different form, you may wish to translate them into a numerical scale. For example, if students are asked to note their gender as male/female, you may ascribe a value of 1 to every male response and 0 to every female response – this will be helpful when it comes to computing summary statistics and necessary if you are interested in exploring correlations in the data. It will make it much easier to analyse the data if there is an entry for every question. To achieve this, you will need to devise codes for ‘missing data’, ‘don’t know’ answers and answers that do not follow instructions – for example, where a respondent selects more than one category.
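A minimal sketch of such a grid, with invented question names and an assumed code of 9 for missing or invalid answers, might look like this:

```python
# A minimal sketch of a data grid: one row per respondent, one column per
# question. Question names, ratings and codes are all invented for illustration.
MISSING = 9  # code for 'missing data', 'don't know' or invalid answers

grid = [
    # gender: 1 = male, 0 = female; q1..q3: ratings on a 1-5 scale
    {"gender": 1, "q1": 4, "q2": 5, "q3": 3},
    {"gender": 0, "q1": 2, "q2": MISSING, "q3": 4},  # q2 left blank
    {"gender": 0, "q1": 5, "q2": 4, "q3": MISSING},  # two boxes ticked for q3
]

# Because every cell holds a code, summary statistics are easy to compute
# while skipping missing values explicitly.
valid_q2 = [row["q2"] for row in grid if row["q2"] != MISSING]
print(valid_q2)  # [5, 4]
```

The same layout transfers directly to a spreadsheet, with one worksheet row per respondent.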
Coding open questions is not straightforward. You must first read through all of the comments made in response to the open questions and try to group them into meaningful categories. For example, if students are asked to ‘state what they least like about the course’, there are likely to be some very broad themes. A number may not find the subject matter interesting; others will have difficulties accessing reading material. It may be useful to have an ‘other’ category for those responses that you are unable to categorise meaningfully.
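The grouping step is necessarily a matter of judgement, but once you have settled on broad themes, a rough first pass can be automated. The sketch below uses invented keywords and responses; in practice you would read every comment and refine the categories by hand.

```python
# A rough sketch of grouping free-text answers into broad categories by
# keyword matching. Categories, keywords and responses are invented.
categories = {
    "subject matter": ["boring", "interesting", "topic", "subject"],
    "reading material": ["reading", "library", "textbook", "access"],
}

def categorise(response):
    text = response.lower()
    for category, keywords in categories.items():
        if any(word in text for word in keywords):
            return category
    return "other"  # responses that fit no meaningful category

responses = [
    "I found the subject matter quite boring",
    "Could not get access to the core textbook",
    "The lectures start too early",
]
print([categorise(r) for r in responses])
# ['subject matter', 'reading material', 'other']
```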
Often, it is sufficient and best simply to calculate the proportions of all respondents answering in each category. (An Excel spreadsheet is much quicker than a calculator!) A category for respondents who either don’t know or didn’t answer is important here, as it provides useful information on the strength of feeling over a particular question.
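Computing these proportions takes only a few lines. The figures below are invented, with 9 again assumed as the code for ‘don’t know / no answer’:

```python
from collections import Counter

# Responses to one question on a 1-5 scale; 9 codes 'don't know / no answer'.
# All figures are invented for illustration.
answers = [5, 4, 4, 3, 9, 2, 4, 5, 9, 3]

counts = Counter(answers)
n = len(answers)
proportions = {code: counts[code] / n for code in sorted(counts)}
print(proportions)  # e.g. 30% of respondents answered 4
```

Note that the ‘don’t know’ code is reported alongside the substantive answers rather than silently dropped.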
Questionnaire results are often used to compute mean scores for individual questions or groups of questions. For example, the questionnaire may ask students to rate their lecturer on a five-point scale, with 5 denoting excellent, 4 good, 3 average, 2 poor and 1 very poor. The mean score is then used as an index of the overall quality of a lecturer, with high scores indicating good quality. This is not a particularly useful or legitimate approach, as it assumes that you are working on an evenly spaced scale – that, for example, the difference between ‘very poor’ and ‘poor’ is the same as the difference between ‘good’ and ‘excellent’.
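A mean can also mask the shape of the responses entirely. The two invented sets of ratings below have the same mean but describe very different experiences of the course, which is one reason reporting proportions per category is often preferable:

```python
# Two invented sets of ratings on the 1-5 scale: identical means,
# very different distributions.
polarised = [5, 5, 5, 1, 1, 1]   # students split between the extremes
uniform   = [3, 3, 3, 3, 3, 3]   # everyone rates the course 'average'

def mean(xs):
    return sum(xs) / len(xs)

print(mean(polarised), mean(uniform))  # 3.0 3.0
```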
Often analysts add up scores over a number of related questions. For example, you may ask students ten questions related to a lecturer’s skills, all ranked from 1 to 5 with 5 indicating a positive response, and add up the scores to derive some index of the overall ability of the lecturer. Again, except in carefully designed questionnaires, this approach is inappropriate: it assumes that each question is relevant and of equal importance. When scores are compared across different lecturers and modules, this assumption is unlikely to hold. If you are interested in summative indices of quality, it may be best simply to ask the students to rate the lecturer themselves on a ranked scale.
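The equal-importance assumption is easy to demonstrate with invented figures: two hypothetical lecturers can produce identical summed indices from sharply different profiles.

```python
# Two hypothetical lecturers, each rated on ten 1-5 questions.
# The summed indices are identical, yet the profiles differ sharply -
# the sum treats every question as equally important.
lecturer_a = [5, 5, 5, 5, 5, 1, 1, 1, 1, 1]  # excellent on some skills, very poor on others
lecturer_b = [3] * 10                         # middling on everything

print(sum(lecturer_a), sum(lecturer_b))  # 30 30
```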