Stanford president’s downfall: a wider indictment of US research?

While Marc Tessier-Lavigne has fallen on his sword, the circumstances of his departure point to much deeper problems with scholarly norms and incentives

July 25, 2023

Over the months leading up to his resignation, it gradually became clear to Marc Tessier-Lavigne that the culture he apparently tolerated in his research laboratory – speed and slop too often overtaking precision and value – had made his position as Stanford University president untenable.

For the rest of US higher education, the question the debacle suddenly leaves behind is whether, for similar reasons, the nation’s entire academic research enterprise may also have grown unsustainable.

Professor Tessier-Lavigne had no choice but to step down once the details of his case – failing to correct known errors in papers on which he was a listed author – became clear, experts said.

But the cultural norms that his lab reflected are commonplace, said Ferric Fang, a professor of laboratory medicine and microbiology at the University of Washington-Seattle, who has become an expert in scientific fraud.

“It’s unfortunate this happened to him, because it could have happened to a whole lot of people,” Professor Fang said. “The big story is that hyper-competitive science is widespread, and it creates corruption of the scientific record – it creates findings which are not robust, it causes people to cut corners and make mistakes or do dishonest things.

“Having Marc Tessier-Lavigne fall on his sword is not the story.”

The issues are well known: federal research funding doesn’t come close to meeting demand, creating incentives for shortcuts and even deception. University evaluators don’t take the time to read scientific papers, and often rely instead on volume counts and computerised metrics in tenure and promotion decisions. A similar dynamic infects grant-awarding processes. High-prestige journals look for the sensational in the resulting flood of submissions, reducing incentives for science that is most valuable to critical fields of study.

Professor Tessier-Lavigne, a neuroscientist, was hired to the Stanford presidency in February 2016, while serving as president of The Rockefeller University in New York. A few months earlier, reports began appearing on PubPeer – an online platform for scientific exchange – raising questions about possible data manipulation in papers published in Science and Nature on which Professor Tessier-Lavigne was a co-author.

Those concerns were revived in November 2022 – amplified by the student newspaper, The Stanford Daily – prompting a university investigation that eventually encompassed 12 journal articles involving Professor Tessier-Lavigne published between 1999 and 2012.

The Stanford investigation found no evidence that Professor Tessier-Lavigne had manipulated any data in his papers or knew at the time about anyone else doing that. But the investigative panel said it did find several instances where he had “failed to decisively and forthrightly correct mistakes in the scientific record” after learning about them. It also described him as running a lab with a culture that rewarded scientists who produced favourable results and penalised those who did not.

On one level, said Judith Wilde, professor of policy and government at George Mason University and an expert on campus presidential searches, the Tessier-Lavigne case shows that universities are not getting good advice on hiring.

Stanford’s 2016 presidential selection process appears to have been unusual, an exception to the overwhelming tendency of universities to use outside search firms, Professor Wilde said. Yet it reflected a tendency – with or without outside professional assistance – to fail to turn up issues that could develop into serious problems.

“The universities are relying on search firms to do it,” Professor Wilde said of comprehensive background checks. “The search firms tell them everything’s fine, although they do a very rudimentary job.”

More likely than detecting problems, she said, either a search firm or a university-led process could hide them. The firms tend to keep proposing the same candidates to new university clients, while faculty familiar with a candidate – the best potential source of useful information – are kept silent by the fear of legal or employment repercussions, she said.

Jim Sirianni, managing director for education at Storbeck Search, a leading higher education search firm, suggested that such criticisms overlooked the complexity of the job. On one level, Dr Sirianni acknowledged, the amount of information publicly available on academic scientists far exceeds that for candidates in most other professions. Yet that volume of information is so great that it is hard to assess it all.

Dr Sirianni, a former director of the High School Summer College at Stanford, said he could not explain why the Stanford search committee that chose Professor Tessier-Lavigne had failed to match the student newspaper in uncovering his publication problems.

But, he added, “it quite literally could be four or even five decades of activity as a scholar or scientist” that would need to be reviewed.

For Professor Fang, much of that is beside the point. There can never be enough vetting and enforcement to stop corruption in the sciences if the incentives for bad behaviour remain strong, he said.

Holden Thorp, editor-in-chief of Science, said while it was “absolutely fair” to complain that academic publishing too often rewarded glitz over substance – something he is trying to change by seeking papers outside the usual channels – universities also had to make a greater effort to base their career rewards on meaningful contributions to the sciences.

“Everything is inherently lazy,” the former chancellor of the University of North Carolina at Chapel Hill said of the metrics for promotion in academia.

“It’s based on quantitative factors and external forces like rankings and citations and all of this stuff, rather than a holistic evaluation of not only the research, but also the effect of that research on the world and on the people participating in conducting it.”

paul.basken@timeshighereducation.com

POSTSCRIPT:

Print headline: Stanford exit fallout grows 

Readers' comments (7)

This is unusually ignorant. The issue is not "US research." The problems are international, and especially serious in medical research, where Tessier-Lavigne's use of fraudulent images and failure to replace them after the errors were identified is all too common. Basken ignores that while Tessier-Lavigne has finally left his presidency, he keeps his tenured position. That should not be permitted. Basken cannot decide on his focus. Holden Thorp is an administrator, not a researcher.
Is having lots of co-authors, many of whom have only a passing acquaintance with the research, on papers also something of a problem?
Indeed, and sadly, mid-career researchers and profs are the worst. Sometimes for good reasons, because it makes the studies more 'attractive' and may attract future funding (which is usually the 'reason' for including them), but sometimes not, because many just tag along.
For as long as universities focus only on the "volume counts and computerised metrics in tenure and promotion decisions", researchers will cut corners. Time to look at quality, not quantity of the research. But that is a long way off.
There's really something wrong with the scientific research 'system'. We've known these problems exist for a very long time (including those in the comments), globally, but nothing is ever done about it. The unhealthy competition (which also gives birth to free-riders) has made this space increasingly unpleasant to be in.
A low-quality publication cited many times is still a low-quality paper. Isn't the best safeguard against this the process of peer review? What happened there? Metrics are a fine idea, as long as the right thing is being measured. That is where to focus if metrics are to have any real value. Measure what matters, not what is easy to measure, because the latter is lazy and misleading.
“It’s based on quantitative factors and external forces like rankings and citations and all of this stuff, rather than a holistic evaluation of not only the research, but also the effect of that research on the world and on the people participating in conducting it.” Interesting statement coming from a publication (THES!) that pushes a very strong agenda in the international university ranking schemes ...
