Today, a tiny office in the sprawling edifice of the National Institutes of Health released a strategic plan. The 58-page document, complete with bullet points and clip art, spells out a direction for behavioral and social science research (including psychology, economics, and sociology) for the next four years. And while it doesn't immediately shift funds around, the plan is a bat signal for social scientists across the nation: It shows what the NIH is interested in and (likely) where grants will follow. And that could ultimately shape the direction of behavioral and social science itself.
The plan comes from the Office of Behavioral and Social Science Research, an arm of the NIH that directs social science efforts within each of the agency's 27 institutes. The last time the office issued a report was a decade ago. But there's been a fundamental shift in social science research since then, says the office's director, Bill Riley, in large part because of the advent of smartphones and sensors and the rich, deep data they've yielded about people. The new plan aims to take those changes into account.
Mostly, the goals are about making social science more useful: coming up with public health interventions informed by research, and narrowing the gap between developing an effective therapy for, say, anxiety, and actually treating people that way. But the plan also includes a nod to the problems raised by replication advocates in the past two years: you know, the researchers who suggest that the foundations of psychology and other sciences aren't as solid as everyone believes. As a blueprint for the future of social science, the plan is a revealing look at how the NIH thinks about those issues.
For one, the plan calls for researchers to nail down and agree on nomenclature for different concepts, so researchers aren't just talking past each other. Often in behavioral science, people talk about different phenomena but really mean the same thing, says Riley. Or the opposite happens: Chemists don't squabble about what oxygen is, but if psychologists convene a conference on a fuzzier concept like trust, says Colin Camerer, an economist at Caltech, they'll spend the first two days disagreeing about what the word actually means.
That ambiguity gets tricky when researchers are trying to share and compare datasets, especially the massive ones scientists work with nowadays. (If you're trying to compare variables in two datasets both named "resilience," how do you know they're really the same thing?) To fix these problems, the plan suggests, scientists should settle on rigorously defined terms. We need to figure out what we mean when we say depression, and how to define it, either by using the same measures or by calibrating within the same framework, Riley says.
Social scientists like Camerer are impressed that the NIH recognizes the opportunities offered by new sources of data, from Twitter to text messaging to more detailed brain scans. "It's absolutely fantastic," Camerer says. "The NIH is trying to lead, not follow." And he says some of the NIH's priorities in the plan, like defining terms, would do wonders for reproducibility. Focusing on bigger datasets would also help make research more robust, says Jonathan Schooler, a psychologist at UC Santa Barbara, since running experiments on too-small groups of people can often lead to unrepeatable studies.
But Camerer isn't as enthused about having every researcher use a single metric to measure something, even if it would make comparisons easier. It's a classic problem of standardization, he says: "The danger is that you get stuck with one measure that isn't that great," and then everyone ends up with mediocre data. Instead, he suggests, a list of three acceptable measures would be better than mandating just one.
Other researchers are concerned that the plan doesn't put reproducibility issues front and center; instead, it folds those issues into a discussion about better managing data. It's disappointing, because replication is all I hear anyone talk about these days, says Hal Pashler, a cognitive scientist at UC San Diego. He and Schooler think the NIH should more aggressively fund research into meta-science and encourage scientists to perform replications, or foster big multi-lab collaborations. And the plan makes no mention of issues like publication bias or preregistration, other key parts of the research process that affect a study's reproducibility.
But even if the NIH's plan comes up short, replication advocates aren't too worried. Yes, the NIH could do more to incentivize redoing analyses by offering up more funding for it. But thanks to the reproducibility initiatives of the past two years, other grant-making organizations are now much more eager to fund that work. And there's nothing like funding to shape the direction of scientific research.