I am a post-doc at the Department of Finance, Stockholm School of Economics, and at the Institute for Social Research (SOFI), Stockholm University. I received my Ph.D. from the Stockholm School of Economics in 2018.

I do research in empirical and experimental microeconomics, using econometric techniques to study how real-life decisions are affected by psychological biases, non-standard preferences, and incomplete information. My current work covers topics such as college choice and household finance.

I believe that social science research needs to become more open and reproducible, which is why I am a BITSS Catalyst and take part in several replication projects.

Work in Progress

  • Siblings' Effects on College and Major Choices: Evidence from Chile, Croatia and Sweden

    with Andrés Barrios-Fernández et al.

    While it is widely believed that family and social networks can influence important life decisions, identifying their causal effects is notoriously difficult. This paper presents causal evidence from three countries at different stages of economic development that the educational trajectories of older siblings can significantly influence the college and major choices of younger siblings. We exploit institutional features of the centralized college assignment systems in Chile, Croatia, and Sweden to generate quasi-random variation in the educational paths taken by older siblings. Using a regression discontinuity design, we show that younger siblings in each country are significantly more likely to apply to, and enroll in, the same college and major that their older sibling was assigned to. These results persist for siblings far apart in age, who are unlikely to attend higher education at the same time. We propose three broad classes of mechanisms that can explain why the trajectory of an older sibling causally affects the college and major choice of a younger sibling. We find that spillovers are stronger when older siblings enroll and succeed in majors that on average have higher-scoring peers, lower dropout rates, and higher earnings for graduates. The evidence shows that the decisions, and even the random luck, of close family members and peers can have significant effects on important life decisions such as the choice of specialization in higher education. The results also suggest that college access programs, such as affirmative action, may have important spillover effects through family and social networks.
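
    As a minimal, hedged sketch of the regression discontinuity logic used in this design, the simulated example below (all variable names, magnitudes, and the 5-percentage-point effect are hypothetical, not the paper's estimates) centers an older sibling's admission score at a program cutoff and reads the sibling spillover off the jump at the threshold:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Simulated, purely illustrative data: the running variable is the
    # older sibling's admission score, centered at a program's cutoff.
    n = 5_000
    score = rng.uniform(-1, 1, n)
    admitted = (score >= 0).astype(float)  # older sibling crosses the cutoff
    # Assume crossing the cutoff raises the younger sibling's probability
    # of enrolling in the same program by 5 percentage points.
    p_enroll = 0.10 + 0.05 * admitted + 0.02 * score
    enrolls = rng.binomial(1, p_enroll)

    # Local linear RD: keep observations within a bandwidth of the cutoff
    # and allow different slopes on each side; the coefficient on
    # `admitted` estimates the discontinuity.
    h = 0.25
    w = np.abs(score) <= h
    X = sm.add_constant(np.column_stack(
        [admitted[w], score[w], admitted[w] * score[w]]))
    fit = sm.OLS(enrolls[w], X).fit(cov_type="HC1")
    print(f"jump at cutoff: {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")
    ```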

  • Relative Returns to Swedish College Fields

    This paper studies the economic returns to fields of college education, using a unique Swedish data set of applicant preferences and university admissions. Payoffs eight years after initial application vary substantially by field, but not nearly as much as in Norway (Kirkebøen et al., 2016). Medicine, engineering, and business degree holders enjoy positive returns of around $10,000 per year compared to most other fields. Humanities, the worst-paying field, incurs losses of almost as much. These results are causally identified using a regression discontinuity approach and confirmed by admission lotteries. In contrast to findings from many other countries, Swedish students do not necessarily prefer the fields in which their potential earnings suggest a comparative advantage.
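
    The admission lotteries mentioned above admit an even simpler sketch: among tied applicants, seats are assigned at random, so the difference in later earnings between lottery winners and losers estimates the field's payoff. The numbers below are made up for illustration, not the paper's estimates.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical lottery among equally ranked applicants to one
    # oversubscribed program: winning a seat is random by construction.
    n = 800
    won_seat = rng.binomial(1, 0.5, n).astype(bool)
    earnings = 40_000 + 10_000 * won_seat + rng.normal(0, 15_000, n)

    diff = earnings[won_seat].mean() - earnings[~won_seat].mean()
    t, p = stats.ttest_ind(earnings[won_seat], earnings[~won_seat])
    print(f"earnings gain from admission: {diff:.0f} (t = {t:.2f}, p = {p:.3g})")
    ```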

Publications

  • Predicting the replicability of social science lab experiments

    PLOS ONE, 2019

    with Anna Dreber et al.

    We measure how accurately the replication of experimental results can be predicted by black-box statistical models. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train predictive models and study which variables drive predictable replication. The models predict binary replication with a cross-validated accuracy rate of 70% (AUC of 0.77) and estimate relative effect sizes with a Spearman ρ of 0.38. This accuracy level is similar to market-aggregated beliefs of peer scientists (Camerer et al., 2016; Dreber et al., 2015). The predictive power is validated in a pre-registered out-of-sample test on the outcomes of Camerer et al. (2018), where 71% (AUC of 0.73) of replications are predicted correctly and effect-size correlations amount to ρ = 0.25. Basic features, such as the sample and effect sizes of the original papers and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The models presented in this paper are simple tools for producing cheap, prognostic replicability metrics. They could be useful in institutionalizing the evaluation of new findings and in guiding resources to the direct replications that are likely to be most informative.
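
    A rough sketch of this pipeline, under loud assumptions: the features, the data-generating process, and the logistic model below are stand-ins for illustration, not the paper's actual specification, and the Spearman correlation here is computed against the binary outcome rather than against observed relative effect sizes as in the paper.

    ```python
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(2)

    # Hypothetical study-level features: original p-value, original
    # sample size, and whether the effect is a two-variable interaction.
    n = 130
    X = np.column_stack([
        rng.uniform(0.0, 0.05, n),   # original p-value
        rng.integers(20, 400, n),    # original sample size
        rng.integers(0, 2, n),       # interaction effect (0/1)
    ])
    # Illustrative ground truth: smaller p-values, larger samples, and
    # main effects replicate more often.
    logit = -2.0 - 60 * X[:, 0] + 0.01 * X[:, 1] - 0.5 * X[:, 2]
    replicated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Out-of-fold predicted probabilities from 5-fold cross-validation.
    model = LogisticRegression(max_iter=1000)
    proba = cross_val_predict(model, X, replicated, cv=5,
                              method="predict_proba")[:, 1]
    print("accuracy:", accuracy_score(replicated, proba > 0.5))
    print("AUC:", roc_auc_score(replicated, proba))
    print("Spearman rho:", spearmanr(replicated, proba)[0])
    ```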

  • Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015

    Nature Human Behaviour, 2018

    with Colin F. Camerer et al.

    Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
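
    As a toy companion to the abstract's numbers (and much simpler than the Bayesian mixture analysis the paper actually performs), one can put a flat Beta(1, 1) prior on the replication rate and update it with the 13-of-21 result:

    ```python
    from scipy import stats

    # Beta-binomial update: 13 successful replications out of 21.
    k, n = 13, 21
    posterior = stats.beta(1 + k, 1 + (n - k))
    print("posterior mean:", posterior.mean())             # ~0.61
    print("95% credible interval:", posterior.ppf([0.025, 0.975]))
    ```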

  • Evaluating Replicability of Laboratory Experiments in Economics

    Science, 2016

    with Colin F. Camerer et al.

    The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and all had a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
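
    A minimal sketch of the kind of power calculation behind the "at least 90%" statement, using a hypothetical standardized effect size (d = 0.5 is illustrative; the actual targets were each original study's estimated effect):

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Sample size per group needed for 90% power to detect a
    # standardized effect of d = 0.5 in a two-sided test at alpha = 0.05.
    d = 0.5  # hypothetical original effect size (Cohen's d)
    n_per_group = TTestIndPower().solve_power(effect_size=d,
                                              power=0.90, alpha=0.05)
    print(f"required n per group: {n_per_group:.1f}")  # ~85
    ```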

  • Using Prediction Markets to Forecast Research Evaluations

    Royal Society Open Science, 2015

    with Marcus Munafò et al.

    The 2014 Research Excellence Framework (REF2014) was conducted to assess the quality of research carried out at higher education institutions in the UK over a six-year period. However, the process was criticized for being expensive and bureaucratic, and it was argued that similar information could be obtained more simply from various existing metrics. We were interested in whether a prediction market on the outcome of REF2014 for 33 chemistry departments in the UK would provide information similar to that obtained through the REF2014 process. Prediction markets have become increasingly popular as a means of capturing what is colloquially known as the ‘wisdom of crowds’, and they enable individuals to trade ‘bets’ on whether a specific outcome will occur. Such markets have been shown to predict outcomes successfully in a number of domains (e.g. sport, entertainment and politics), but they have rarely been tested against outcomes based on expert judgements such as those that formed the basis of REF2014.
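
    As a hedged illustration of the market mechanics, the sketch below implements a logarithmic market scoring rule (LMSR), a common automated market-maker design; I am not claiming it is the specific mechanism used in this study. Prices of the binary contract can be read directly as the crowd's implied probability.

    ```python
    import numpy as np

    B = 100.0  # liquidity parameter (illustrative)

    def lmsr_cost(q_yes: float, q_no: float) -> float:
        """LMSR market maker's cost function C(q)."""
        return B * np.log(np.exp(q_yes / B) + np.exp(q_no / B))

    def lmsr_price(q_yes: float, q_no: float) -> float:
        """Instantaneous YES price = implied probability of the outcome."""
        return np.exp(q_yes / B) / (np.exp(q_yes / B) + np.exp(q_no / B))

    # A trader who buys 50 YES shares pays the change in the cost
    # function, and the implied probability moves up accordingly.
    q_yes, q_no = 0.0, 0.0
    print("price before:", lmsr_price(q_yes, q_no))        # 0.50
    cost = lmsr_cost(q_yes + 50, q_no) - lmsr_cost(q_yes, q_no)
    print("cost of 50 YES shares:", round(cost, 2))        # ~28.09
    print("price after:", lmsr_price(q_yes + 50, q_no))    # ~0.62
    ```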