META-REP Conference 2024

Keynotes

Which research is worth doing well?

Keynote by Daniel Lakens

Abstract: In recent years scientists in many disciplines have had to raise the bar to improve the reliability of their findings. The increasing realization that fields should perform more replication studies, as well as design studies with larger samples, reduces the number of research questions we can reliably study, compared to our beliefs about this a decade ago. Improving the quality of individual studies takes more resources, and further reduces the number of studies we can perform. This raises questions concerning research prioritization. Which studies should fields perform, and how should they decide upon these studies? Are individual scientists able to make these choices, or does research prioritization require collective decision making, for example at consensus conferences? Would increasing coordination in the selection of research questions facilitate scientific progress, or hinder it? And how should we shape reward structures in academia to move towards collective agreements about research prioritization, in fields where it is deemed to be beneficial?

Bio: Daniel Lakens, PhD, is an Associate Professor of Metascience and chair of the Ethical Review Board at the Human-Technology Interaction group at Eindhoven University of Technology in The Netherlands. Lakens’ work focuses on improving research methods and statistical inferences in the social sciences. He has published more than 100 peer-reviewed articles, including highly cited papers on effect sizes, sequential analyses, equivalence testing, and sample size justification. He won the Ammodo Science Award for fundamental research in the Social Sciences in 2023 and is internationally recognized for his contributions to improving research practices in psychological science. He co-edited the first Registered Reports in science with Brian Nosek in 2014, convinced the board of the Dutch science funder NWO to create dedicated grants to fund replication studies, and was actively involved in the design and analysis of the Reproducibility Project: Psychology. He received the Leamer-Rosenthal Prize for Open Social Science in the category “Leader in Education” in 2017. His popular massive open online course and accompanying textbook “Improving Your Statistical Inferences” have been used by tens of thousands of people looking to improve their statistical skills. His pragmatic teaching approach focuses on practical, immediately implementable knowledge that will improve the reliability and efficiency of your research.

How forecasts of replicability and other structured deliberation protocols can improve peer review

Keynote by Fiona Fidler

Abstract: Now in its sixth year, the repliCATS project (Collaborative Assessments for Trustworthy Science) has evaluated over 4,000 published social science articles across 8 disciplines, including psychology, economics, and education, as well as many preprints. For each paper, a diverse group of experts forecasts the likely replicability of the research findings and makes a variety of other judgements about the credibility of the evidence presented, using a structured deliberation protocol. This talk will present our approach to evaluating research and, for cases where we have the outcomes of actual replication studies, data on the accuracy of our forecasts. I will also discuss how structured expert elicitation, deliberation, and decision protocols like those used in repliCATS might improve peer review more generally.

Bio: Fiona Fidler is a professor at the University of Melbourne and the Head of its History and Philosophy of Science Program. She is broadly interested in how experts, including scientists, make decisions and change their minds. Her past research has examined how methodological change occurs in different disciplines, including psychology, medicine, and ecology, and has developed methods for eliciting reliable expert judgements to improve decision making. She originally trained as a psychologist and maintains a strong interest in psychological methods. She has been active in establishing the Metascience community, including founding the Association for Interdisciplinary Metaresearch and Open Science (AIMOS) in 2019. She is co-director (with Simine Vazire) of the MetaMelb Research Initiative at the University of Melbourne and lead PI of the repliCATS project (Collaborative Assessments for Trustworthy Science).
