Session 4: Monday 27th October 10:00-12:00 & Tuesday 28th October 17:30-19:30

Survey & Quantitative Research; Sampling

Professor Paul Dowling

Quantification is used in different ways in research activity and stands in different relations to theory. Luria’s research (1976, discussed in the previous session and in Dowling & Brown, 2010) is theory-driven in the sense that both cognitive and sociocultural development are defined in advance. The definition of sociocultural development enables critical case sampling—societies in transition—and the definition of cognitive development facilitates the production of stimulus items for use in clinical interviews. The results of the interviews allow the classification of individuals, who constitute the unit of analysis. The number of individuals in each category was then counted, producing tables such as the one reproduced as Figure 8.1 in Dowling & Brown (2010, p. 110). As we point out in Dowling & Brown (2010), the analysis—the assigning of each individual’s discourse to a particular category—is qualitative and is illustrated in the elaborated description of interview transcripts. Counting the number of individuals in each category elides differences between individuals classified in the same category and enables the presentation of a picture of the distribution of cognitive development in terms of sociocultural development. The result makes visible the relation between these two variables, represented here as a tabulation, though it might alternatively have been presented as a graph of one form or another.
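
To make the move from qualitative classification to quantitative display concrete, here is a minimal sketch in Python (using the pandas library) of how such a cross-tabulation might be produced once each individual has been assigned to a category. The group and category labels are purely illustrative and are not Luria’s own.

```python
import pandas as pd

# Illustrative data only: each row is one interviewed individual who has
# already been classified qualitatively (the labels are not Luria's own).
individuals = pd.DataFrame({
    "sociocultural_group": ["traditional", "traditional", "collective-farm",
                            "collective-farm", "young-educated", "young-educated"],
    "cognitive_category":  ["situational", "situational", "situational",
                            "categorical", "categorical", "categorical"],
})

# Counting individuals per cell produces the kind of table reproduced as
# Figure 8.1 in Dowling & Brown (2010): the individual is the unit of
# analysis, and the tabulation displays the relation between the two variables.
table = pd.crosstab(individuals["sociocultural_group"],
                    individuals["cognitive_category"])
print(table)
```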


Luria’s initial theorization of cognitive development might alternatively have been used as the basis for an experiment designed to explore the potential of pedagogic intervention to develop cognition, independently of the level of sociocultural development of the societal setting of the research. The theory of cognitive development would enable the production of test items for use in the classification of experimental participants. It would also inform the design of a pedagogic intervention. In fact, two interventions would need to be developed: one that, it is hoped, would lead to cognitive development—the experimental treatment—and another that does not incorporate the developmental features—the control treatment. ‘Control’ refers to the control of non-experimental variables, such as natural development that may occur during the course of the study but that is not related to the experimental treatment. Two groups of participants would be constructed, and these would need to be matched in terms of non-experimental variables, such as age, gender, socioeconomic class (relating to sociocultural development), and perhaps school performance level. Two versions of the test would also need to be produced, one to be administered in advance of the treatment—a pre-test—and one afterwards—the post-test. The experiment would now be run: pre-test administered to both groups; experimental group given the experimental pedagogic treatment and control group given the control treatment; post-test administered to both groups. A third test—a delayed post-test—might be administered to both groups some time after the post-test in order to measure the extent to which any cognitive development is sustained (although evidence of negative cognitive development would raise serious questions about the original theory and/or the validity and reliability of the tests). The results, in terms of test scores, would enable the researcher to quantify individual cognitive development over the course of the programme and so to compare the development of the experimental and control groups using descriptive statistical measures—possibly the mean and standard deviation of individual gains—to quantify group development effects. An inferential statistical technique might then be deployed in order to assess the statistical significance of the result, that is to say, to estimate the probability that any difference found between the groups might have arisen randomly (even if the two groups were given exactly the same treatment, it is highly unlikely that this would have resulted in zero difference in development between them). Conventionally, if this probability—the p-value—is below 0.05 (or, for a sterner test, 0.01), then the ‘null hypothesis’ that there is no difference in the cognitive development of the two groups can be rejected and we can claim that the results are statistically significant. It should be mentioned that statistical significance does not entail significance per se: if the number of participants is sufficiently large, even very small, trivial differences may prove to be statistically significant.
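
As an illustration of the final, statistical stage of such an experiment, the following is a minimal sketch in Python using scipy. The scores are simulated purely for demonstration, and the independent-samples t-test on gain scores is only one of several inferential techniques that might reasonably be chosen.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated test scores for illustration only; a real study would use the
# pre- and post-test results of matched experimental and control groups.
pre_exp,  post_exp  = rng.normal(50, 10, 30), rng.normal(58, 10, 30)
pre_ctrl, post_ctrl = rng.normal(50, 10, 30), rng.normal(52, 10, 30)

# Individual cognitive development over the programme, quantified as gains.
gain_exp  = post_exp - pre_exp
gain_ctrl = post_ctrl - pre_ctrl

# Descriptive statistics for each group's development.
print("experimental:", gain_exp.mean(), gain_exp.std(ddof=1))
print("control:     ", gain_ctrl.mean(), gain_ctrl.std(ddof=1))

# One possible inferential test: an independent-samples t-test on the gains.
# The p-value estimates how probable a difference at least this large would
# be if the null hypothesis of no difference between the groups were true.
t, p = stats.ttest_ind(gain_exp, gain_ctrl)
print(f"t = {t:.2f}, p = {p:.4f}")  # conventionally, p < 0.05 -> reject the null
```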


Luria’s research and my imaginary experiment are both examples of theory-driven research. Not all quantitative research is of this kind, however. The research into attitudes to dance by Patricia Sanderson that is a focus in this session starts off, in her 2000 paper, as theory-generating research and is followed up in the 2008 paper as theory-driven research that deploys the theory generated in the earlier paper. Indeed, the first piece of research begins with a qualitative study, using focus groups. The focus group discussions were analysed qualitatively to generate a set of 70 statements about different aspects of dance. The results included statements like: ‘Ballet dancing is for women’ and ‘To enjoy dance the movements have to be exciting’. These statements were then presented to a sample of secondary school students, who were asked to register their agreement or disagreement with each statement using a 5-point, Likert-type attitude scale. At this point, each statement constitutes a separate variable, so there are 70 variables. The process of data analysis in both qualitative and quantitative research often involves the reduction of the number of variables. In this case, the 70 variables are very close to empirical utterances; indeed, Sanderson indicates that most of the original students’ statements remained intact in the analysis process, so that, for the most part, they constituted what were deemed to be representative selections: they were empirical instances that do not carry much in the way of meaning beyond themselves; they do not, in and of themselves, conceptualise attitudes to dance; they do not amount to a serious move into the theoretical. In any event, 70 variables are too many to handle in social research. So Sanderson recruits a statistical variable-reduction technique, exploratory factor analysis. This is a technique that we can describe as combining deductive and abductive forms of argument, in this case, to arrive at a much reduced number—four—of variables or factors that organize the original statements. The questionnaire was then revised and administered to a larger group of students in order to assess the validity and reliability of the factors. Subsequently, a questionnaire based on this work was administered to another sample of students and the findings of this survey are presented in Sanderson (2008).
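
The following is a minimal sketch, in Python using scikit-learn, of the general shape of an exploratory factor analysis. Sanderson would have used a dedicated statistics package, and the randomly generated responses here stand in for real questionnaire data purely to show the mechanics of extracting four factors.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative stand-in for the survey data: one row per respondent, one
# column per Likert item (responses coded 1-5). In Sanderson's study there
# were 70 items; random columns are generated here purely to show the
# mechanics, so the resulting loadings are meaningless in themselves.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(300, 70)).astype(float)

# Extract four factors (the number Sanderson arrived at); a varimax
# rotation makes the pattern of loadings easier to interpret.
fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(responses)

# components_[i, j] indicates how strongly item j loads on factor i.
# Items loading heavily on the same factor are read together and named
# as a single attitude variable.
loadings = fa.components_
print(loadings.shape)  # (4, 70)
```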


The use of Likert-type scales is common in social research. It is important to note that the individual items on a Likert-type questionnaire do not themselves constitute variables but indicators of variables, and that the analysis of the survey results involves collating the responses to items that indicate the same variable rather than reporting the responses to each item individually. Of course, not all surveys involve Likert-type scales or factor analysis, and some include provision for open as opposed to pre-coded responses, which will require qualitative analysis prior to quantification.
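
A brief sketch of what this collation might look like in practice: the item grouping and the reverse-coded item below are hypothetical, intended only to show how several item responses are combined into a single variable score.

```python
import numpy as np

# Illustrative responses (coded 1-5) from five respondents to four items
# that are taken, after the factor analysis, to indicate the same
# underlying attitude variable.
items = np.array([
    [5, 4, 2, 5],
    [3, 3, 3, 2],
    [1, 2, 5, 1],
    [4, 5, 1, 4],
    [2, 2, 4, 3],
], dtype=float)

# Suppose the third item is negatively worded: reverse-code it so that a
# high score always indicates the same direction of attitude.
items[:, 2] = 6 - items[:, 2]

# The variable reported in the analysis is the collated score across the
# items that indicate it (here, the mean), not the individual item responses.
scale_scores = items.mean(axis=1)
print(scale_scores)
```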

It is not clear how the samples in Sanderson’s studies were established. In this kind of research, there is commonly an intention to generalize from the sample to the population from which the sample is drawn. In order to facilitate this, the sample must be representative of that population. Market and polling research will sometimes employ quota sampling in order to achieve this. This entails classifying the population—for example, in terms of occupational group, gender, and so forth—and estimating the number in each classification. The sample should then comprise proportionate numbers of individuals in each group. In practice, respondents will be included or not depending upon their answers to questions about their occupation and gender (if not apparent) and the distribution of data already collected. This strategy is often not practicable in social research, which is likely to recognize a far wider range of categories. The response to this may be to establish a random sample. Here, a sampling frame is first drawn up. This may be a list approximating to the relevant population, such as the electoral register (as an approximation to the list of the adult population of a state), a list of registered students in an institution, and so forth. Producing a random sample then entails that each member of the list has an equal chance of being selected for the sample. For a ten percent sample, for example, take a random number between 1 and 10, start with the member corresponding to that number, and then select every tenth member (strictly, this is a systematic sample, but it is commonly treated as equivalent to a random sample provided that the ordering of the list is unrelated to the variables of interest). In practice, a simple random sample may be difficult to achieve (e.g. because of a wide geographical distribution of the sampling frame) and/or the researcher may want to ensure that certain categories are equally represented. Under such circumstances, compromises are made in the form of clustering or stratification. The general assumption is that if the sample has been selected randomly, then there is no reason to suppose that the distribution of variables within the sample will differ substantially from their distribution within the population. Inferential statistical techniques can then be used to assess whether or not findings based on the sample are statistically significant.
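
The ten percent example can be sketched as follows; the sampling frame here is invented, standing in for a list such as the register of students in an institution.

```python
import random

# Illustrative sampling frame: in practice this would be, say, the list of
# registered students in an institution or an electoral register.
frame = [f"member_{i:04d}" for i in range(1, 1001)]

# A ten percent sample along the lines described above: choose a random
# start between 1 and 10, then take every tenth member thereafter, so that
# each member of the frame has an equal chance of selection.
start = random.randint(1, 10)   # random starting point
sample = frame[start - 1::10]   # every tenth member from that point on
print(len(sample), sample[:3])
```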


Not all survey (or experimental) research uses random sampling. One aspect of Sanderson’s sampling (the selection of schools) was what she refers to as purposeful. This entails that participants are selected because their knowledge or experience is relevant to the research. Grever et al. (2011) also appear to have deployed purposeful sampling to some degree and in some respects, though they are not clear about their strategies. In any event, Grever et al. are not looking to generalize to a population, but rather to illustrate the existence of particular views on the content of school history curricula.


In qualitative research, the intention is generally not to generalize from a sample to a population, so random sampling is usually neither relevant nor, given the difficulties of recruiting a random sample, desirable. The approach used in the grounded theory study by Tina Johnston (see session 4) is theoretical sampling, whereby sampling decisions are based on the preliminary analysis of data already collected. Douglas and Carless, in their narrative study, seem to have selected purposefully from the participants in an ongoing study. Sampling strategies used in ethnographic studies may commonly include a form of theoretical sampling and opportunistic sampling, which may entail speaking with whoever is available, though more purposeful approaches may also be used. Denovan and Macaskill also deployed purposive sampling in their phenomenological study. In general, it is probably true to say that sampling strategies in social research are most commonly one or a combination of: random, purposeful, theoretical, or opportunistic sampling. The choice of sampling strategy, as is the case with other methodological decisions, must be made so as to provide access to appropriate data and to enable the researcher to make the argument that they intend to make, although the precise nature of that argument will not, of course, be known until after the completion of data analysis.

Preliminary Reading

SANDERSON, P. (2000). ‘The Development of Dance Attitude Scales.’ Educational Research, 42(1): 91-99.

Further Reading

DOWLING, P.C. & BROWN, A.J. (2010). Doing Research/Reading Research: re-interrogating education. London: Routledge. Ch. 8 & pp. 26-31.
FABRIGAR, L.R. & WEGENER, D.T. (2012). Exploratory Factor Analysis. Oxford: Oxford University Press.
GREVER, M., PELZER, B. & HAYDN, T. (2011). ‘High School Students’ Views on History.’ Journal of Curriculum Studies, 43(2): 207-229.
OSBORNE, J.W. (2014). Best Practices in Exploratory Factor Analysis. See http://jwosborne.com.
SANDERSON, P. (2008). ‘The arts, social inclusion and social class: the case of dance.’ British Educational Research Journal, 34(4).