Gathering accurate data is vital as government and other agencies turn increasingly to surveys to gauge public feeling about policy issues. But even the most reputable surveys, though they do not set out to mislead, can never be perfect. All have a statistical margin of error, and the chosen sample may not be a true reflection of the population whose views are being sought.
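To give a rough sense of scale (a standard illustration assuming simple random sampling, not a figure from any particular survey): the conventional 95 per cent margin of error for a reported proportion p from a sample of n respondents is about 1.96 x sqrt(p(1 - p)/n), so a poll of 1,000 people reporting an even split carries an uncertainty of roughly plus or minus three percentage points - and that is before any problems of question wording or unrepresentative sampling are taken into account.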
Questions are usually tested in small-scale trials before the survey is launched, in an attempt to clear up ambiguities and give survey compilers more confidence in the validity of their data. But sometimes there is simply no time to pre-test questions - for example, when surveys are commissioned to gauge public feeling about fast-moving events. Even when there is time, flaws show up only if people say openly that they do not understand a question or feel unable to tick a box as instructed. And, despite pre-testing, questions can still remain ambiguous.
There are many potential pitfalls: surveys can fail to give enough clues about how to respond to questions; words or concepts can be open to more than one meaning; and "closed answers" can force people to make a choice even when none of the options fits their experience. Faulty layout can also confuse, as the controversy surrounding votes in the US presidential election in Florida has shown. And cultural factors can distort findings: attempting to obtain views on sexual matters from first-generation Muslim immigrants, for example, can prove problematic.
Increasingly, psychological techniques are allowing survey designers to identify weaknesses in questions. This cognitive testing methodology, pioneered in the US, enables researchers to explore in depth how potential respondents interpret questions.
The techniques pay explicit attention to the mental processes people use to answer survey questions. By showing what people are really thinking when they fill in questionnaires, they lead to better survey design, which in turn can help policy-makers and service-providers to be more responsive to the public's wants, needs and priorities.
Next month, Hull will host an international conference to explore how this approach can be applied to the field of health-related quality-of-life data. Surveys might never yield perfect results, but at least we are beginning to confront some of their more fundamental shortcomings.
Keith Meadows is head of the health and survey unit, applied statistics centre, University of Hull. Hull will host "Assessing health-related quality of life - what can the cognitive sciences contribute?" from December 3-5.
- Interviewed by Helen Hague