Using San Francisco’s public transport to work out the value of research

Jonathan Grant and Alexandra Pollitt look at how discrete choice modelling might be able to work out what type of impact is most valued

August 28, 2016

When the Bay Area Rapid Transit (BART) system was being built in San Francisco in the 1970s, the economist Daniel McFadden wanted to see if he could predict demand for the new train service.

He collected data on the observed travel behaviour of about 700 commuters and, using an economic model, predicted that about 6 per cent of them would use the new BART system. When he and his team later looked at actual uptake, they found that their prediction was accurate to within a fraction of a percentage point.

So was born the obscure branch of economics known as discrete choice modelling, for which McFadden shared the Nobel prize with James Heckman in 2000. He and others went on to apply the method to many areas of public policy – for example, health and social care, the environment and security.

The great strength of discrete choice modelling is that it links choices that people make to the characteristics of the alternatives – as well as the characteristics of the people themselves. 


For example, it is Friday evening and you are trying to decide between a Chinese and an Indian takeaway. You make your choice by trading off factors such as the speed and reliability of delivery, the price, and how much you like the different dishes and flavours. The same is true when you decide whether to take the train or the car to see a friend – one may be faster or more expensive, one is private and the other is shared with fellow travellers.

You can also ask people to make choices between “hypothetical” goods or services in surveys, which allows you to examine their preferences for new products. By collecting data about people’s real or hypothetical choices you can begin to understand their preferences for different characteristics or “attributes” – such as Indian flavours or privacy in a car.
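To make this concrete, the sketch below fits a simple McFadden-style conditional logit model to simulated takeaway choices in Python. Everything in it – the attributes, the numbers and the “true” preference weights – is invented purely for illustration; it is not data from any real study.

# A minimal sketch of a conditional ("McFadden") logit model on simulated data.
# Each simulated person chooses between two takeaways described by three
# attributes; we then recover the weight placed on each attribute from the
# observed choices alone.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
n_people, n_alts, n_attrs = 500, 2, 3

# Attributes of each alternative for each person: delivery time (minutes),
# price (pounds) and a 0-10 taste rating.
# Shape: (people, alternatives, attributes).
X = np.stack([
    rng.uniform(20, 60, (n_people, n_alts)),   # delivery time
    rng.uniform(10, 30, (n_people, n_alts)),   # price
    rng.uniform(0, 10, (n_people, n_alts)),    # taste rating
], axis=2)

true_beta = np.array([-0.08, -0.15, 0.5])      # invented "true" preference weights
utility = X @ true_beta + rng.gumbel(size=(n_people, n_alts))
choice = utility.argmax(axis=1)                # each person's observed choice

def neg_log_likelihood(beta):
    v = X @ beta                               # systematic utility of each alternative
    log_prob = v - logsumexp(v, axis=1, keepdims=True)
    return -log_prob[np.arange(n_people), choice].sum()

fit = minimize(neg_log_likelihood, np.zeros(n_attrs), method="BFGS")
print("true weights:     ", true_beta)
print("estimated weights:", fit.x.round(2))

The recovered weights can then be used exactly as McFadden used them: to predict how choices would shift if an attribute of one alternative – say, the price or journey time – were changed.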


In a recent study, we applied this methodology to different types of research impact. 

From previous research we know that biomedical and health research produces a significant economic return and that the types of impacts are diverse and often unpredictable. In the classical model of biomedical research translation, a finding is patented then commercialised.

However, research can also change professional practice, create jobs, influence education, or lead to other social or economic impacts. What we don’t know is how people value these different types of benefit. Do they prefer governments to invest in research that leads to job creation over research that reduces the cost of healthcare? Do researchers have different preferences from the general public? Do different types of researchers have different preferences from one another? And if so, by how much?

Answering these questions matters for two reasons. 

First, funders of research are increasingly making decisions about what to fund based on the actual or likely impact of the research. In the recent research excellence framework, for example, nearly 7,000 impact case studies were reviewed and rated by researchers and research users, resulting in the allocation of about £320 million of research funding per year.

But how were the decisions about ratings made? Were they consistent? Were they fair? Did they reflect what the taxpayer (the funder) actually wants from research? 


The second reason is that there is an active debate on whether metrics can be used to reward and allocate research funds based on impact. Such metrics need to be developed from an empirically derived evidence base to be fair and transparent, and at present that evidence does not exist. 

In a recently published study, we and colleagues have taken a small second step towards addressing this issue (a second step because a similar study was undertaken in Canada by Fiona Miller and colleagues in 2013). We asked a representative sample of the general public, as well as current and former Medical Research Council grant-holders, to choose between different types of research impact (for example, research leading to better care being provided at the same cost, helping to create new jobs across the UK or increasing life expectancy).


We used a preference elicitation technique known as best-worst scaling: we presented these impacts (and others) in randomly allocated batches of eight and asked respondents to choose the one they most and least favoured. We then removed those two options and asked them to pick their second most and second least favoured.

This task was repeated a number of times by each respondent. We then applied a similar modelling technique to that used by Daniel McFadden to see what type of research impact people preferred.
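For readers who want to see the mechanics, here is a minimal sketch of how a best-worst scaling exercise can be simulated and scored with simple best-minus-worst counts. The impact labels, batch size and preference weights are invented for illustration; the study itself used batches of eight, asked for second-best and second-worst picks as well, and analysed the responses with a full choice model rather than the counting score shown here.

# A minimal sketch of a best-worst scaling exercise on simulated data.
# Each simulated respondent sees several small batches of impact types and
# picks the one they value most and the one they value least; each impact is
# then scored by (times picked best - times picked worst) / times shown.
import numpy as np

rng = np.random.default_rng(1)
impacts = [
    "better care at the same cost", "new jobs across the UK",
    "increased life expectancy", "private sector investment",
    "training of future academics", "training of future clinicians",
    "lower cost of healthcare", "changed professional practice",
]
true_weight = rng.normal(size=len(impacts))    # hidden preference strengths

shown = np.zeros(len(impacts))
best = np.zeros(len(impacts))
worst = np.zeros(len(impacts))

for respondent in range(300):
    for task in range(6):                      # several choice tasks per respondent
        batch = rng.choice(len(impacts), size=4, replace=False)
        noisy = true_weight[batch] + rng.gumbel(size=4)   # random-utility choices
        shown[batch] += 1
        best[batch[noisy.argmax()]] += 1
        worst[batch[noisy.argmin()]] += 1

scores = (best - worst) / shown                # best-worst count score per impact
for i in np.argsort(scores)[::-1]:
    print(f"{scores[i]:+.2f}  {impacts[i]}")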

Not surprisingly, our findings showed that the general public and researchers value different types of research impacts in different ways. For example, private sector investment is valued more by the general public than researchers, and researchers prefer the training of future academics over the training of future medical professionals, in contrast to the general public. 

Perhaps more importantly, we have demonstrated that it is possible to apply a technique developed for San Francisco’s Bay Area Rapid Transit system to the assessment of research impact. A technique that has been used to estimate the value of time (in transport economics) and the value of quality of life (in health economics) could be applied to develop a “value of impact”.

This is a small step, but if the approach is validated, improved and developed, it may be possible to create a metric that captures different stakeholders’ valuations of research and that could be used for both assessment and allocation purposes.

It may be a crazy idea, but actually it’s no crazier than thinking that you could predict demand for a local transport system based on an economic model. And look at the impact that research has had.


Jonathan Grant is director of the Policy Institute at King’s College London, and Alexandra Pollitt is a research fellow at the same institution.

