I am a Post-doc at the Department of Finance, Stockholm School of Economics, and at the Institute for Social Research (SOFI), Stockholm University. I received my Ph.D. from the Stockholm School of Economics in 2018.

I do research in empirical and experimental microeconomics, using econometric techniques to study how real-life decisions are affected by psychological biases, non-standard preferences, and lack of information. My current work involves topics such as college choice and household finance.

I believe that social science research needs to become more open and reproducible, which is why I am a BITSS Catalyst and take part in several replication projects.

Work in Progress

  • Predicting Replication

    with Anna Dreber et al.

    We measure how accurately replication of experimental results can be predicted by a black-box statistical model. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train a predictive model and study which variables drive predictable replication. The model predicts binary replication with a cross-validated accuracy rate of 70% (AUC of 0.79) and relative effect size with a Spearman ρ of 0.38. The accuracy level is similar to the market-aggregated beliefs of peer scientists (Camerer et al., 2016; Dreber et al., 2015). The predictive power is validated in a pre-registered out-of-sample test of the outcome of Camerer et al. (2018b), where 71% (AUC of 0.73) of replications are predicted correctly and effect size correlations amount to ρ = 0.25. Basic features, such as the sample and effect sizes in original papers and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The models presented in this paper are simple tools to produce cheap, prognostic replicability metrics. These models could be useful in institutionalizing the evaluation of new findings and guiding resources to those direct replications that are likely to be most informative.
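
    A minimal sketch of this kind of exercise, on synthetic data with hypothetical study-level features (the paper's actual feature set and model may differ): a cross-validated classifier scored by accuracy and AUC, with Spearman's ρ for effect-size predictions.

    ```python
    # Illustrative only: synthetic data and hypothetical features,
    # not the paper's actual pipeline.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n = 200  # synthetic stand-in for the pooled replication studies

    # Hypothetical study-level features: original sample size, original
    # effect size, and whether the effect is a two-variable interaction.
    X = np.column_stack([
        rng.integers(20, 500, n),
        rng.uniform(0.0, 1.0, n),
        rng.integers(0, 2, n),
    ])
    y = rng.integers(0, 2, n)  # 1 = replicated (synthetic labels)

    # Out-of-fold probabilities from a black-box classifier.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]

    print("accuracy:", accuracy_score(y, proba > 0.5))
    print("AUC:", roc_auc_score(y, proba))

    # Relative effect sizes would be a separate prediction task,
    # scored by rank correlation:
    rel_effect = rng.uniform(0.0, 1.0, n)  # synthetic outcomes
    rho, _ = spearmanr(proba, rel_effect)
    print("Spearman rho:", rho)
    ```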

  • Relative Returns to Swedish College Fields

    This paper studies the economic returns to fields of college education, using a unique Swedish data set of applicant preferences and university admissions. Payoffs eight years after initial application vary substantially by field, though not nearly as much as in Norway (Kirkebøen et al., 2016). Medicine, engineering, and business degree holders enjoy positive returns of around $10,000 per year compared to most other fields; humanities, the worst-paying field, incurs losses of almost as much. These results are causally identified using a regression discontinuity approach and confirmed by admission lotteries. In contrast to findings from many other countries, Swedish students do not necessarily prefer the fields in which their potential earnings would give them a comparative advantage.
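
    As a rough illustration of the identification strategy (variable names, functional form, and bandwidth are hypothetical, not taken from the paper), a sharp regression discontinuity compares applicants just above and below an admission cutoff using a local linear regression:

    ```python
    # Illustrative sharp RD sketch on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    score = rng.uniform(-1.0, 1.0, n)        # admission score minus cutoff
    admitted = (score >= 0).astype(float)    # sharp admission at the cutoff
    # Synthetic log earnings with a true jump of 0.10 at the cutoff.
    earnings = 2.0 + 0.10 * admitted + 0.3 * score + rng.normal(0.0, 0.5, n)

    # Local linear regression within a bandwidth around the cutoff,
    # allowing a separate slope on each side.
    h = 0.25
    in_bw = np.abs(score) <= h
    X = sm.add_constant(
        np.column_stack([admitted, score, admitted * score])[in_bw]
    )
    fit = sm.OLS(earnings[in_bw], X).fit()
    print(fit.params[1])  # estimated earnings jump at the admission cutoff
    ```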

  • Sibling Influence on College Choice

    How are college choices shaped by the experiences of peers? Using registry data on applications to Swedish universities, I study how an individual's education experience influences the college applications of their siblings. Tie-breaking lotteries at admission margins provide causal identification. I find that successful admission to a specific institution-program combination increases the likelihood that a sibling ranks that combination as their most preferred option from 2 to 3 percent. Siblings are five times as likely to follow one another to the same institution as to the same field. The effect is stronger when both siblings are male but does not vary with parental education or with the popularity of the program. The observed spillovers seem to be driven mainly by a demand for convenience rather than by the transmission of information between siblings.
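
    Under the lottery design, identification reduces to comparing marginal applicants who won and lost a tie-breaking draw. A minimal sketch with made-up rates matching the magnitudes above (a 2 percent baseline rising to 3 percent):

    ```python
    # Illustrative lottery comparison on synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # 1 = sibling ranks the program first; rates are invented.
    lottery_winners = rng.binomial(1, 0.03, 2000)
    lottery_losers = rng.binomial(1, 0.02, 2000)

    effect = lottery_winners.mean() - lottery_losers.mean()
    t_stat, p_value = stats.ttest_ind(lottery_winners, lottery_losers)
    print(f"effect = {effect:.3f}, p = {p_value:.3f}")
    ```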

Publications

  • Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

    Nature Human Behaviour, 2018

    with Colin F. Camerer et al.

    Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

  • Evaluating Replicability of Laboratory Experiments in Economics

    Science, 2016

    with Colin F. Camerer et al.

    The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
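
    For reference, the power requirement described above can be sketched as a standard sample-size calculation; the effect size here (Cohen's d = 0.5) is an arbitrary illustration, whereas each replication used its own original effect estimate:

    ```python
    # Sample size per group for 90% power at the 5% level, two-sample t-test.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,         # hypothetical original effect (Cohen's d)
        alpha=0.05,              # 5% significance level
        power=0.90,              # at least 90% power
        alternative="two-sided",
    )
    print(round(n_per_group))    # about 85 observations per group
    ```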

  • Using Prediction Markets to Forecast Research Evaluations

    Royal Society Open Science, 2015

    with Marcus Munafò et al.

    The 2014 Research Excellence Framework (REF2014) was conducted to assess the quality of research carried out at higher education institutions in the UK over a six-year period. However, the process was criticized for being expensive and bureaucratic, and it was argued that similar information could be obtained more simply from various existing metrics. We were interested in whether a prediction market on the outcome of REF2014 for 33 chemistry departments in the UK would provide information similar to that obtained during the REF2014 process. Prediction markets have become increasingly popular as a means of capturing what is colloquially known as the ‘wisdom of crowds’, and enable individuals to trade ‘bets’ on whether a specific outcome will occur or not. They have been shown to successfully predict various outcomes in a number of domains (e.g. sport, entertainment and politics), but have rarely been tested against outcomes based on expert judgements such as those that formed the basis of REF2014.
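
    For intuition about the mechanics, below is a minimal sketch of one standard automated market-maker design for such markets, the logarithmic market scoring rule (LMSR); the paper's market may have used a different implementation, so this is illustrative only:

    ```python
    # LMSR sketch: prices are interpretable as market probabilities.
    import math

    def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
        """Cost function; a trade costs cost(after) - cost(before)."""
        return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

    def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
        """Instantaneous YES price, the market's implied probability."""
        e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
        return e_yes / (e_yes + e_no)

    # Buying 50 YES shares moves the implied probability up from 0.50.
    trade_cost = lmsr_cost(50.0, 0.0) - lmsr_cost(0.0, 0.0)
    print(f"cost = {trade_cost:.2f}, new price = {lmsr_price(50.0, 0.0):.2f}")
    ```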