Research intelligence - The bottom line on the research ripple effect

Paul Jump meets the economist working to quantify the value-added contributions of science spending

May 26, 2011

Governments around the world increasingly acknowledge the economic case for investing in science. But in 2009, when US President Barack Obama announced his stimulus bill, there were plenty of people who questioned the value of putting extra money into science.

"The concern was that (science spending) wouldn't create jobs," explained Julia Lane, director of the Science of Science and Innovation Policy programme at the National Science Foundation (NSF). "It was a challenge to describe how many people were supported by science funding, and it took (universities) a lot of time to document things. There had to be a better way," she told Times Higher Education.

As it happens, there was. In a speech in 2005, John Marburger, science adviser to Mr Obama's predecessor, George W. Bush, had complained about the lack of solid theoretical foundations and empirical data to inform science spending.

This led to the creation of a working group of 17 funding agencies, including the NSF and the National Institutes of Health, under the White House Office of Science and Technology Policy.


In the same year, the NSF established its Science of Science and Innovation Policy programme; a year later, it recruited as its head Professor Lane, by her own description a "reasonably successful" labour economist.

The New Zealander found that there was a great deal to be done. The field was still very new. She compared it to labour economics 40 years ago, when the "weak and marginalised" discipline - which now features a number of Nobel prizewinners - consisted of "a bunch of people telling stories about industrial relations".


The first big task was to gather information. She found that most of the reporting systems used by universities and research funders around the world were primitive and not set up to record the long-term impact of the research they supported.

"Agencies are charged with identifying the best science and making sure that no shenanigans go on with the money - and the entire data infrastructure reflects that."

Another hindrance was the insistence of funders that principal investigators identify the outcomes of specific awards, she said. "I have lots of grants from different places. It would be impossible to identify what each one contributed."

In building a new system, Professor Lane sought to involve institutions and academics because "if you have a bunch of federal bureaucrats figuring out what the measures ought to be, they are going to be wrong." She was also concerned that a top-down approach would distort the science being funded.

Star rising

To ensure maximum take-up of the new scheme, "Star Metrics" (Science and Technology for America's Reinvestment: Measuring the Effect of Research on Innovation, Competitiveness and Science), she strove to ensure that it would reduce, rather than increase, the "ridiculous" amount of time academics spent on administration, and that universities would not need any new systems.


The first phase of Star Metrics, which examined how science funding supported jobs, mined universities' existing financial and human resources records for information on how grant awards were spent.

"The system takes 20 hours to set up, and after that you just push a button," Professor Lane explained. "We know the amount of money that went, for instance, to the rat- cage industry. From economic census data, we know that 60 per cent of that is salaries. We also know the average salary (in that industry), so we can say how many jobs were supported."

After a successful pilot in 2009, the programme went nationwide, with the vast majority of universities happy to take part.


But most exciting for Professor Lane is phase two, which aims to capture the wider social and economic impact of science funding on things such as health, better-trained workers and technological advances.

A good place to start capturing such information, she suggested, might be documents such as annual reports and biosketches, which scientists are only too happy to submit to funders "because they want to strut their stuff". Other seams to mine include databases of patents, which often cite underlying research. Such information could be combined with financial data about the value of the patents.
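The article does not describe Star Metrics' actual data pipeline, but the kind of linkage Lane sketches (joining patents that cite funded research to estimates of those patents' value) might look something like the following hypothetical Python fragment. Every table and field name here is invented for illustration.

```python
# Hypothetical sketch of linking funded research to patents that cite
# it, then attaching a patent-value estimate. Table and field names
# are invented; the real Star Metrics data model is not shown in the
# article.
import pandas as pd

grants = pd.DataFrame({
    "grant_id": ["NSF-001", "NIH-002"],
    "pi": ["Smith", "Jones"],
})

# Patents whose citation data mention an underlying grant.
patent_citations = pd.DataFrame({
    "patent_no": ["US1234567", "US7654321"],
    "cited_grant_id": ["NSF-001", "NSF-001"],
})

# Financial estimates of each patent's value (e.g. from licensing data).
patent_values = pd.DataFrame({
    "patent_no": ["US1234567", "US7654321"],
    "est_value_usd": [350_000, 1_200_000],
})

linked = (patent_citations
          .merge(grants, left_on="cited_grant_id", right_on="grant_id")
          .merge(patent_values, on="patent_no"))

# Total estimated patent value traceable to each grant.
print(linked.groupby("grant_id")["est_value_usd"].sum())
```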

Professor Lane said it was "good government" to assess the impact of science spending, to keep abreast of emerging trends and to ask whether money was being spent wisely and effectively.

She hoped Star Metrics would also throw light on issues such as whether it was better to give longer or shorter grants, or to fund young researchers rather than established stars, although she thought it would take years to develop accurate impact measures.


"But we can start doing it. And as you build more data, you will get lots of really smart people coming into the field. The idea that smart people won't be able to figure out how to (do this) is just not feasible."

paul.jump@tsleducation.com

