640 results for "Athey, Susan"
Search Results
2. Estimating Treatment Effects with Causal Forests: An Application
- Author
- Athey, Susan and Wager, Stefan
- Published
- 2021
- Full Text
- View/download PDF
3. Robust Offline Policy Learning with Observational Data from Multiple Sources
- Author
- Carranza, Aldo Gael and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
We consider the problem of using observational bandit feedback data from multiple heterogeneous data sources to learn a personalized decision policy that robustly generalizes across diverse target settings. To achieve this, we propose a minimax regret optimization objective to ensure uniformly low regret under general mixtures of the source distributions. We develop a policy learning algorithm tailored to this objective, combining doubly robust offline policy evaluation techniques and no-regret learning algorithms for minimax optimization. Our regret analysis shows that this approach achieves the minimal worst-case mixture regret up to a moderated vanishing rate of the total data across all sources. Our analysis, extensions, and experimental results demonstrate the benefits of this approach for learning robust decision policies from multiple data sources., Comment: arXiv admin note: substantial text overlap with arXiv:2305.12407
- Published
- 2024
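
The objective sketched in this abstract can be written compactly. In my notation (not the paper's), with m sources, policy class $\Pi$, and $V_k(\pi)$ the value of policy $\pi$ under source $k$:

```latex
\[
  \hat\pi \;\in\; \arg\min_{\pi \in \Pi}\; \max_{\lambda \in \Delta^{m}}\;
    \sum_{k=1}^{m} \lambda_k\, R_k(\pi),
  \qquad
  R_k(\pi) \;=\; \max_{\pi' \in \Pi} V_k(\pi') \;-\; V_k(\pi),
\]
```

Since the inner maximum of a linear function over the simplex is attained at a vertex, uniformly low regret over all mixtures reduces to low regret under the worst single source, which is what the doubly robust, no-regret scheme targets.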
4. Estimating Wage Disparities Using Foundation Models
- Author
- Vafa, Keyon, Athey, Susan, and Blei, David M.
- Subjects
- Computer Science - Machine Learning, Economics - Econometrics, Statistics - Methodology, Statistics - Machine Learning
- Abstract
One thread of empirical work in social science focuses on decomposing group differences in outcomes into unexplained components and components explained by observable factors. In this paper, we study gender wage decompositions, which require estimating the portion of the gender wage gap explained by career histories of workers. Classical methods for decomposing the wage gap employ simple predictive models of wages which condition on a small set of simple summaries of labor history. The problem is that these predictive models cannot take advantage of the full complexity of a worker's history, and the resulting decompositions thus suffer from omitted variable bias (OVB), where covariates that are correlated with both gender and wages are not included in the model. Here we explore an alternative methodology for wage gap decomposition that employs powerful foundation models, such as large language models, as the predictive engine. Foundation models excel at making accurate predictions from complex, high-dimensional inputs. We use a custom-built foundation model, designed to predict wages from full labor histories, to decompose the gender wage gap. We prove that the way such models are usually trained might still lead to OVB, but develop fine-tuning algorithms that empirically mitigate this issue. Our model captures a richer representation of career history than simple models and predicts wages more accurately. In detail, we first provide a novel set of conditions under which an estimator of the wage gap based on a fine-tuned foundation model is $\sqrt{n}$-consistent. Building on the theory, we then propose methods for fine-tuning foundation models that minimize OVB. Using data from the Panel Study of Income Dynamics, we find that history explains more of the gender wage gap than standard econometric models can measure, and we identify elements of history that are important for reducing OVB.
- Published
- 2024
5. LABOR-LLM: Language-Based Occupational Representations with Large Language Models
- Author
- Athey, Susan, Brunborg, Herman, Du, Tianyu, Kanodia, Ayush, and Vafa, Keyon
- Subjects
- Computer Science - Machine Learning, Computer Science - Computation and Language, Economics - Econometrics
- Abstract
Vafa et al. (2024) introduced a transformer-based econometric model, CAREER, that predicts a worker's next job as a function of career history (an "occupation model"). CAREER was initially estimated ("pre-trained") using a large, unrepresentative resume dataset, which served as a "foundation model," and parameter estimation was continued ("fine-tuned") using data from a representative survey. CAREER had better predictive performance than benchmarks. This paper considers an alternative where the resume-based foundation model is replaced by a large language model (LLM). We convert tabular data from the survey into text files that resemble resumes and fine-tune the LLMs using these text files with the objective to predict the next token (word). The resulting fine-tuned LLM is used as an input to an occupation model. Its predictive performance surpasses all prior models. We demonstrate the value of fine-tuning and further show that by adding more career data from a different population, fine-tuning smaller LLMs surpasses the performance of fine-tuning larger models.
- Published
- 2024
6. Data-driven Error Estimation: Upper Bounding Multiple Errors with No Technical Debt
- Author
- Krishnamurthy, Sanath Kumar, Athey, Susan, and Brunskill, Emma
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
We formulate the problem of constructing multiple simultaneously valid confidence intervals (CIs) as estimating a high probability upper bound on the maximum error for a class/set of estimate-estimand-error tuples, and refer to this as the error estimation problem. For a single such tuple, data-driven confidence intervals can often be used to bound the error in our estimate. However, for a class of estimate-estimand-error tuples, nontrivial high probability upper bounds on the maximum error often require class complexity as input -- limiting the practicality of such methods and often resulting in loose bounds. Rather than deriving theoretical class complexity-based bounds, we propose a completely data-driven approach to estimate an upper bound on the maximum error. The simple and general nature of our solution to this fundamental challenge lends itself to several applications including: multiple CI construction, multiple hypothesis testing, estimating excess risk bounds (a fundamental measure of uncertainty in machine learning) for any training/fine-tuning algorithm, and enabling the development of a contextual bandit pipeline that can leverage any reward model estimation procedure as input (without additional mathematical analysis).
- Published
- 2024
7. The value of non-traditional credentials in the labor market
- Author
- Athey, Susan and Palikot, Emil
- Subjects
- Economics - General Economics
- Abstract
This study investigates the labor market value of credentials obtained from Massive Open Online Courses (MOOCs) and shared on business networking platforms. We conducted a randomized experiment involving more than 800,000 learners, primarily from developing countries and without college degrees, who completed technology or business-related courses on the Coursera platform between September 2022 and March 2023. The intervention targeted learners who had recently completed their courses, encouraging them to share their credentials and simplifying the sharing process. One year after the intervention, we collected data from LinkedIn profiles of approximately 40,000 experimental subjects. We find that the intervention leads to a 17 percentage point increase in credential sharing. Further, learners in the treatment group were 6% more likely to report new employment within a year, with an 8% increase in jobs related to their certificates. This effect was more pronounced among LinkedIn users with lower baseline employability. Across the entire sample, the treated group received a higher number of certificate views, indicating an increased interest in their profiles. These results suggest that facilitating credential sharing and reminding learners of the value of skill signaling can yield significant gains. When the experiment is viewed as an encouragement design for credential sharing, we can estimate the local average treatment effect (LATE) of credential sharing (that is, the impact of credential sharing on the workers induced to share by the intervention) for the outcome of getting a job. The LATE estimates are imprecise but large in magnitude; they suggest that credential sharing more than doubles the baseline probability of getting a new job in scope for the credential.
- Published
- 2024
8. The Year in Review: Economics at the Antitrust Division 2023–2024
- Author
- Athey, Susan, Gross, Alex, Marinescu, Ioana, and Shanefelter, Jennifer
- Published
- 2024
- Full Text
- View/download PDF
9. Digital interventions and habit formation in educational technology
- Author
- Agrawal, Keshav, Athey, Susan, Kanodia, Ayush, and Palikot, Emil
- Subjects
- Economics - General Economics
- Abstract
As online educational technology products have become increasingly prevalent, rich evidence indicates that learners often find it challenging to establish regular learning habits and complete their programs. Concurrently, online products geared towards entertainment and social interactions are sometimes so effective in increasing user engagement and creating frequent usage habits that they inadvertently lead to digital addiction, especially among youth. In this project, we carry out a contest-based intervention, common in the entertainment context, on an educational app for Indian children learning English. Approximately ten thousand randomly selected learners entered a 100-day reading contest. They would win a set of physical books if they ranked sufficiently high on a leaderboard based on the amount of educational content consumed. Twelve weeks after the end of the contest, when the treatment group had no additional incentives to use the app, they continued their engagement with it at a rate 75% higher than the control group, indicating a successful formation of a reading habit. In addition, we observed a 6% increase in retention within the treatment group. These results underscore the potential of digital interventions in fostering positive engagement habits with educational technology products, ultimately enhancing users' long-term learning outcomes.
- Published
- 2023
10. Machine Learning Who to Nudge: Causal vs Predictive Targeting in a Field Experiment on Student Financial Aid Renewal
- Author
- Athey, Susan, Keleher, Niall, and Spiess, Jann
- Subjects
- Economics - Econometrics, Computer Science - Machine Learning, Statistics - Methodology, Statistics - Machine Learning
- Abstract
In many settings, interventions may be more effective for some individuals than others, so that targeting interventions may be beneficial. We analyze the value of targeting in the context of a large-scale field experiment with over 53,000 college students, where the goal was to use "nudges" to encourage students to renew their financial-aid applications before a non-binding deadline. We begin with baseline approaches to targeting. First, we target based on a causal forest that estimates heterogeneous treatment effects and then assigns students to treatment according to those estimated to have the highest treatment effects. Next, we evaluate two alternative targeting policies, one targeting students with low predicted probability of renewing financial aid in the absence of the treatment, the other targeting those with high probability. The predicted baseline outcome is not the ideal criterion for targeting, nor is it a priori clear whether to prioritize low, high, or intermediate predicted probability. Nonetheless, targeting on low baseline outcomes is common in practice, for example because the relationship between individual characteristics and treatment effects is often difficult or impossible to estimate with historical data. We propose hybrid approaches that incorporate the strengths of both predictive approaches (accurate estimation) and causal approaches (correct criterion); we show that targeting intermediate baseline outcomes is most effective in our specific application, while targeting based on low baseline outcomes is detrimental. In one year of the experiment, nudging all students improved early filing by an average of 6.4 percentage points over a baseline average of 37% filing, and we estimate that targeting half of the students using our preferred policy attains around 75% of this benefit.
- Published
- 2023
11. The Heterogeneous Earnings Impact of Job Loss Across Workers, Establishments, and Markets
- Author
- Athey, Susan, Simon, Lisa K., Skans, Oskar N., Vikstrom, Johan, and Yakymovych, Yaroslav
- Subjects
- Economics - General Economics
- Abstract
Using generalized random forests and rich Swedish administrative data, we show that the earnings effects of job displacement due to establishment closures are extremely heterogeneous across and within (observable) worker types, establishments, and markets. The decile with the largest predicted effects loses 50 percent of annual earnings the year after displacement, and losses accumulate to 200 percent over 7 years. The least affected decile experiences only marginal losses of 6 percent in the year after displacement. Prior to displacement, workers in the most affected decile were lower paid and had negative earnings trajectories. Workers with large predicted effects are more sensitive to adverse market conditions than other workers. When restricting attention to simple targeting rules, the subgroup consisting of older workers in routine-task intensive jobs has the highest predictable effects of displacement., Comment: Version 2 adds out-of-sample estimates using closures after the end of the training sample period, robustness checks on heterogeneity within and across establishments (related to AKM etc), results on Swedish insurance policies across the CATE-distribution
- Published
- 2023
12. Proportional Response: Contextual Bandits for Simple and Cumulative Regret Minimization
- Author
- Krishnamurthy, Sanath Kumar, Zhan, Ruohan, Athey, Susan, and Brunskill, Emma
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
In many applications, e.g. in healthcare and e-commerce, the goal of a contextual bandit may be to learn an optimal treatment assignment policy at the end of the experiment. That is, to minimize simple regret. However, this objective remains understudied. We propose a new family of computationally efficient bandit algorithms for the stochastic contextual bandit setting, where a tuning parameter determines the weight placed on cumulative regret minimization (where we establish near-optimal minimax guarantees) versus simple regret minimization (where we establish state-of-the-art guarantees). Our algorithms work with any function class, are robust to model misspecification, and can be used in continuous arm settings. This flexibility comes from constructing and relying on "conformal arm sets" (CASs). CASs provide a set of arms for every context, encompassing the context-specific optimal arm with a certain probability across the context distribution. Our positive results on simple and cumulative regret guarantees are contrasted with a negative result, which shows that no algorithm can achieve instance-dependent simple regret guarantees while simultaneously achieving minimax optimal cumulative regret guarantees.
- Published
- 2023
13. Battling the coronavirus ‘infodemic’ among social media users in Kenya and Nigeria
- Author
- Offer-Westort, Molly, Rosenzweig, Leah R., and Athey, Susan
- Published
- 2024
- Full Text
- View/download PDF
14. Qini Curves for Multi-Armed Treatment Rules
- Author
- Sverdrup, Erik, Wu, Han, Athey, Susan, and Wager, Stefan
- Subjects
- Statistics - Methodology
- Abstract
Qini curves have emerged as an attractive and popular approach for evaluating the benefit of data-driven targeting rules for treatment allocation. We propose a generalization of the Qini curve to multiple costly treatment arms that quantifies the value of optimally selecting among both units and treatment arms at different budget levels. We develop an efficient algorithm for computing these curves and propose bootstrap-based confidence intervals that are exact in large samples for any point on the curve. These confidence intervals can be used to conduct hypothesis tests comparing the value of treatment targeting using an optimal combination of arms with using just a subset of arms, or with a non-targeting assignment rule ignoring covariates, at different budget levels. We demonstrate the statistical performance in a simulation experiment and an application to treatment targeting for election turnout., Comment: Forthcoming in the Journal of Computational and Graphical Statistics
- Published
- 2023
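
For intuition, a single-arm Qini-type gain curve can be computed from randomized held-out data in a few lines. This is a minimal Python sketch of the general idea, assuming a known randomization probability p; the paper's multi-armed, costed generalization and its bootstrap confidence intervals are substantially more involved:

```python
import numpy as np

def qini_curve(tau_hat, w, y, p=0.5):
    """Single-arm Qini-type gain curve (illustrative, not the paper's method).

    tau_hat: predicted treatment effects used to rank units (treat highest first)
    w: 0/1 treatment indicators from a randomized experiment; y: outcomes
    p: known treatment probability.
    """
    order = np.argsort(-tau_hat)
    w, y = w[order], y[order]
    # IPW-transformed outcome: its average over any subset estimates
    # the average treatment effect within that subset.
    gamma = y * (w / p - (1 - w) / (1 - p))
    n = len(y)
    frac = np.arange(1, n + 1) / n          # fraction of units treated
    gain = np.cumsum(gamma) / n             # value of treating only the top fraction
    return frac, gain
```

Plotting gain against frac traces out the value of the targeting rule at every budget level.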
15. Federated Offline Policy Learning
- Author
- Carranza, Aldo Gael and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Computer Science - Distributed, Parallel, and Cluster Computing, Economics - Econometrics, Statistics - Machine Learning
- Abstract
We consider the problem of learning personalized decision policies from observational bandit feedback data across multiple heterogeneous data sources. In our approach, we introduce a novel regret analysis that establishes finite-sample upper bounds on distinguishing notions of global regret for all data sources on aggregate and of local regret for any given data source. We characterize these regret bounds by expressions of source heterogeneity and distribution shift. Moreover, we examine the practical considerations of this problem in the federated setting where a central server aims to train a policy on data distributed across the heterogeneous sources without collecting any of their raw data. We present a policy learning algorithm amenable to federation based on the aggregation of local policies trained with doubly robust offline policy evaluation strategies. Our analysis and supporting experimental results provide insights into tradeoffs in the participation of heterogeneous data sources in offline policy learning.
- Published
- 2023
16. Torch-Choice: A PyTorch Package for Large-Scale Choice Modelling with Python
- Author
- Du, Tianyu, Kanodia, Ayush, and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Computer Science - Mathematical Software, Economics - Econometrics
- Abstract
torch-choice is an open-source library for flexible, fast choice modeling with Python and PyTorch. torch-choice provides a ChoiceDataset data structure to manage databases flexibly and memory-efficiently. The paper demonstrates constructing a ChoiceDataset from databases of various formats and the functionalities of ChoiceDataset. The package implements two widely used models, namely the multinomial logit and nested logit models, and supports regularization during model estimation. The package incorporates the option to take advantage of GPUs for estimation, allowing it to scale to massive datasets while being computationally efficient. Models can be initialized using either R-style formula strings or Python dictionaries. We conclude with a comparison of the computational efficiencies of torch-choice and mlogit in R as (1) the number of observations increases, (2) the number of covariates increases, and (3) the item set expands. Finally, we demonstrate the scalability of torch-choice on large-scale datasets.
- Published
- 2023
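
A minimal usage sketch of the package described above. The class and argument names are recalled from the torch-choice documentation and may differ in the current release, so treat them as assumptions to verify; the data are random placeholders:

```python
import torch
from torch_choice.data import ChoiceDataset
from torch_choice.model import ConditionalLogitModel
from torch_choice import run  # training helper; location may vary by version

# Toy data: 1,000 choices among 4 items, one price covariate per (session, item).
n, num_items = 1000, 4
dataset = ChoiceDataset(
    item_index=torch.randint(num_items, (n,)),  # which item was chosen
    price_obs=torch.randn(n, num_items, 1),     # session-item observable
)

# Multinomial logit: one shared price coefficient, item-specific intercepts.
model = ConditionalLogitModel(
    coef_variation_dict={'price_obs': 'constant', 'intercept': 'item'},
    num_param_dict={'price_obs': 1, 'intercept': 1},
    num_items=num_items,
)
run(model, dataset, num_epochs=500)  # estimates coefficients, optionally on GPU
```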
17. Machine-learning-based high-benefit approach versus conventional high-risk approach in blood pressure management.
- Author
- Inoue, Kosuke, Athey, Susan, and Tsugawa, Yusuke
- Subjects
- Causal forest, blood pressure, cardiovascular events, heterogeneous treatment effect, high-benefit approach, Adult, Humans, Blood Pressure, Hypertension, Nutrition Surveys, Machine Learning, Antihypertensive Agents, Randomized Controlled Trials as Topic
- Abstract
BACKGROUND: In medicine, clinicians treat individuals under an implicit assumption that high-risk patients would benefit most from the treatment (high-risk approach). However, treating individuals with the highest estimated benefit using a novel machine-learning method (high-benefit approach) may improve population health outcomes. METHODS: This study included 10 672 participants who were randomized to a systolic blood pressure (SBP) target of either <120 mmHg (intensive treatment) or <140 mmHg (standard treatment). We compared the high-benefit approach (treating individuals with estimated benefit >0) versus the high-risk approach (treating individuals with SBP ≥130 mmHg). Using the transportability formula, we also estimated the effect of these approaches among 14 575 US adults from the National Health and Nutrition Examination Surveys (NHANES) 1999-2018. RESULTS: We found that 78.9% of individuals with SBP ≥130 mmHg benefited from the intensive SBP control. The high-benefit approach outperformed the high-risk approach [average treatment effect (95% CI), +9.36 (8.33-10.44) vs +1.65 (0.36-2.84) percentage points; difference between these two approaches, +7.71 (6.79-8.67) percentage points, P-value <0.001].
- Published
- 2023
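
A toy Python sketch of the contrast drawn in this abstract, using a T-learner as a simple stand-in for the paper's causal-forest benefit estimates; the variable names and the 130 mmHg threshold are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def estimated_benefit(X, w, y, X_new):
    """T-learner: separate outcome models for treated and control; the
    difference in predictions is the estimated individual benefit
    (sign convention: y coded so that larger values are better)."""
    mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[w == 1], y[w == 1])
    mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[w == 0], y[w == 0])
    return mu1.predict(X_new) - mu0.predict(X_new)

# High-benefit rule: treat whoever has positive estimated benefit.
# treat_hb = estimated_benefit(X, w, y, X_new) > 0
# High-risk rule: treat whoever crosses a fixed risk-factor threshold.
# treat_hr = sbp_new >= 130
```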
18. Synthetic Difference In Differences Estimation
- Author
- Clarke, Damian, Pailañir, Daniel, Athey, Susan, and Imbens, Guido
- Subjects
- Economics - Econometrics
- Abstract
In this paper, we describe a computational implementation of the synthetic difference-in-differences (SDID) estimator of Arkhangelsky et al. (2021) for Stata. Synthetic difference-in-differences can be used in a wide class of circumstances where treatment effects on some particular policy or event are desired, and repeated observations on treated and untreated units are available over time. We lay out the theory underlying SDID, both when there is a single treatment adoption date and when adoption is staggered over time, and discuss estimation and inference in each of these cases. We introduce the sdid command, which implements these methods in Stata, and provide a number of examples of use, discussing estimation, inference, and visualization of results., Comment: Corrected typos; corrected references
- Published
- 2023
19. Battling the Coronavirus Infodemic Among Social Media Users in Kenya and Nigeria
- Author
- Offer-Westort, Molly, Rosenzweig, Leah R., and Athey, Susan
- Subjects
- Computer Science - Social and Information Networks, Statistics - Applications
- Abstract
How can we induce social media users to be discerning when sharing information during a pandemic? An experiment on Facebook Messenger with users from Kenya (n = 7,498) and Nigeria (n = 7,794) tested interventions designed to decrease intentions to share COVID-19 misinformation without decreasing intentions to share factual posts. The initial stage of the study incorporated: (i) a factorial design with 40 intervention combinations; and (ii) a contextual adaptive design, increasing the probability of assignment to treatments that worked better for previous subjects with similar characteristics. The second stage evaluated the best-performing treatments and a targeted treatment assignment policy estimated from the data. We precisely estimate null effects from warning flags and related article suggestions, tactics used by social media platforms. However, nudges to consider information's accuracy reduced misinformation sharing relative to control by 4.9% (estimate = -2.3 pp, s.e. = 1.0, Z = -2.31, p = 0.021, 95% CI = [-4.2, -0.35]). Such low-cost scalable interventions may improve the quality of information circulating online., Comment: 52 pages including appendix, 9 figures
- Published
- 2022
20. Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning
- Author
- Athey, Susan, Byambadalai, Undral, Hadad, Vitor, Krishnamurthy, Sanath Kumar, Leung, Weiwen, and Williams, Joseph Jay
- Subjects
- Economics - Econometrics, Computer Science - Machine Learning, Statistics - Machine Learning, G.3, I.2.6
- Abstract
We design and implement an adaptive experiment (a "contextual bandit") to learn a targeted treatment assignment policy, where the goal is to use a participant's survey responses to determine which charity to expose them to in a donation solicitation. The design balances two competing objectives: optimizing the outcomes for the subjects in the experiment ("cumulative regret minimization") and gathering data that will be most useful for policy learning, that is, for learning an assignment rule that will maximize welfare if used after the experiment ("simple regret minimization"). We evaluate alternative experimental designs by collecting pilot data and then conducting a simulation study. Next, we implement our selected algorithm. Finally, we perform a second simulation study anchored to the collected data that evaluates the benefits of the algorithm we chose. Our first result is that the value of a learned policy in this setting is higher when data is collected via a uniform randomization rather than collected adaptively using standard cumulative regret minimization or policy learning algorithms. We propose a simple heuristic for adaptive experimentation that improves upon uniform randomization from the perspective of policy learning at the expense of increasing cumulative regret relative to alternative bandit algorithms. The heuristic modifies an existing contextual bandit algorithm by (i) imposing a lower bound on assignment probabilities that decays slowly so that no arm is discarded too quickly, and (ii) after adaptively collecting data, restricting policy learning to select from arms where sufficient data has been gathered.
- Published
- 2022
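
Modification (i) above is easy to state in code: clip the bandit's proposed assignment probabilities at a slowly decaying floor, then renormalize. A minimal sketch; the floor schedule c/(K t^alpha) is an illustrative choice, not the paper's exact specification:

```python
import numpy as np

def floor_probabilities(p, t, c=0.2, alpha=0.5):
    """p: proposed arm probabilities (sum to 1); t: current period (1-indexed).
    One pass of clip-and-renormalize can leave entries marginally below the
    floor, which is acceptable for a sketch."""
    p = np.asarray(p, dtype=float)
    floor = c / (len(p) * t ** alpha)
    p = np.maximum(p, floor)
    return p / p.sum()
```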
21. Effective and scalable programs to facilitate labor market transitions for women in technology
- Author
- Athey, Susan and Palikot, Emil
- Subjects
- Economics - General Economics
- Abstract
We describe the design, implementation, and evaluation of a low-cost (approximately $15 per person) and scalable program, called Challenges, aimed at helping women in Poland transition to technology-sector jobs. This program helps participants develop portfolios demonstrating job-relevant competencies. We conduct two independent evaluations, one of the Challenges program and the other of a traditional mentoring program -- Mentoring -- where experienced tech professionals work individually with mentees to support them in their job search. Exploiting the fact that both programs were oversubscribed, we randomized admissions and measured their impact on the probability of finding a job in the technology sector. We estimate that Mentoring increases the probability of finding a technology job within four months from 29% to 42%, and Challenges from 20% to 29%; the treatment effects do not attenuate over 12 months. Since both programs are capacity constrained in practice (only 28% of applicants can be accommodated), we evaluate the effectiveness of several alternative prioritization rules based on applicant characteristics. We find that a policy that selects applicants based on their predicted treatment effects increases the average treatment effect across the two programs to 22 percentage points. We further analyze how alternative prioritization rules compare to the selection that mentors used. We find that mentors selected applicants who were more likely to get a tech job even without participating in the program, and the treatment effect for applicants with similar characteristics to those selected by mentors is about half of the effect attainable when participants are prioritized optimally.
- Published
- 2022
22. Smiles in Profiles: Improving Fairness and Efficiency Using Estimates of User Preferences in Online Marketplaces
- Author
- Athey, Susan, Karlan, Dean, Palikot, Emil, and Yuan, Yuan
- Subjects
- Economics - General Economics
- Abstract
Online platforms often face challenges being both fair (i.e., non-discriminatory) and efficient (i.e., maximizing revenue). Using computer vision algorithms and observational data from a micro-lending marketplace, we find that choices made by borrowers creating online profiles impact both of these objectives. We further support this conclusion with a web-based randomized survey experiment. In the experiment, we create profile images using Generative Adversarial Networks that differ in a specific feature and estimate its impact on lender demand. We then counterfactually evaluate alternative platform policies and identify particular approaches to influencing the changeable profile photo features that can ameliorate the fairness-efficiency tension.
- Published
- 2022
23. Personalized Recommendations in EdTech: Evidence from a Randomized Controlled Trial
- Author
- Agrawal, Keshav, Athey, Susan, Kanodia, Ayush, and Palikot, Emil
- Subjects
- Economics - General Economics
- Abstract
We study the impact of personalized content recommendations on the usage of an educational app for children. In a randomized controlled trial, we show that the introduction of personalized recommendations increases the consumption of content in the personalized section of the app by approximately 60%. We further show that the overall app usage increases by 14%, compared to the baseline system where human content editors select stories for all students at a given grade level. The magnitude of individual gains from personalized content increases with the amount of data available about a student and with preferences for niche content: heavy users with long histories of content interactions who prefer niche content benefit more than infrequent, newer users who like popular content. To facilitate the move to personalized recommendation systems from a simpler system, we describe how we make important design decisions, such as comparing alternative models using offline metrics and choosing the right target audience.
- Published
- 2022
24. The Effectiveness of Digital Interventions on COVID-19 Attitudes and Beliefs
- Author
- Athey, Susan, Grabarz, Kristen, Luca, Michael, and Wernerfelt, Nils
- Subjects
- Economics - General Economics
- Abstract
During the course of the COVID-19 pandemic, a common strategy for public health organizations around the world has been to launch interventions via advertising campaigns on social media. Despite this ubiquity, little has been known about their average effectiveness. We conduct a large-scale program evaluation of campaigns from 174 public health organizations on Facebook and Instagram that collectively reached 2.1 billion individuals and cost around $40 million. We report the results of 819 randomized experiments that measured the impact of these campaigns across standardized, survey-based outcomes. We find on average these campaigns are effective at influencing self-reported beliefs, shifting opinions close to 1% at baseline with a cost per influenced person of about $3.41. There is further evidence that campaigns are especially effective at influencing users' knowledge of how to get vaccines. Our results represent, to the best of our knowledge, the largest set of online public health interventions analyzed to date.
- Published
- 2022
25. Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles
- Author
- Carranza, Aldo Gael, Krishnamurthy, Sanath Kumar, and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Mathematics - Statistics Theory, Statistics - Methodology, Statistics - Machine Learning
- Abstract
Contextual bandit algorithms often estimate reward models to inform decision-making. However, true rewards can contain action-independent redundancies that are not relevant for decision-making. We show it is more data-efficient to estimate any function that explains the reward differences between actions, that is, the treatment effects. Motivated by this observation, building on recent work on oracle-based bandit algorithms, we provide the first reduction of contextual bandits to general-purpose heterogeneous treatment effect estimation, and we design a simple and computationally efficient algorithm based on this reduction. Our theoretical and experimental results demonstrate that heterogeneous treatment effect estimation in contextual bandits offers practical advantages over reward estimation, including more efficient model estimation and greater flexibility to model misspecification.
- Published
- 2022
26. CAREER: A Foundation Model for Labor Sequence Data
- Author
- Vafa, Keyon, Palikot, Emil, Du, Tianyu, Kanodia, Ayush, Athey, Susan, and Blei, David M.
- Subjects
- Computer Science - Machine Learning, Economics - Econometrics
- Abstract
Labor economists regularly analyze employment data by fitting predictive models to small, carefully constructed longitudinal survey datasets. Although machine learning methods offer promise for such problems, these survey datasets are too small to take advantage of them. In recent years large datasets of online resumes have also become available, providing data about the career trajectories of millions of individuals. However, standard econometric models cannot take advantage of their scale or incorporate them into the analysis of survey data. To this end we develop CAREER, a foundation model for job sequences. CAREER is first fit to large, passively-collected resume data and then fine-tuned to smaller, better-curated datasets for economic inferences. We fit CAREER to a dataset of 24 million job sequences from resumes, and adjust it on small longitudinal survey datasets. We find that CAREER forms accurate predictions of job sequences, outperforming econometric baselines on three widely-used economics datasets. We further find that CAREER can be used to form good predictions of other downstream variables. For example, incorporating CAREER into a wage model provides better predictions than the econometric models currently in use.
- Published
- 2022
27. The Year in Review: Economics at the Antitrust Division, 2022–2023
- Author
- Athey, Susan, Chicu, Mark, Krishna, Malika, and Marinescu, Ioana
- Published
- 2023
- Full Text
- View/download PDF
28. Semiparametric Estimation of Treatment Effects in Randomized Experiments
- Author
- Athey, Susan, Bickel, Peter J., Chen, Aiyou, Imbens, Guido W., and Pollmann, Michael
- Subjects
- Statistics - Methodology, Economics - Econometrics
- Abstract
We develop new semiparametric methods for estimating treatment effects. We focus on settings where the outcome distributions may be thick tailed, where treatment effects may be small, where sample sizes are large, and where assignment is completely random. This setting is of particular interest in recent online experimentation. We propose using parametric models for the treatment effects, leading to semiparametric models for the outcome distributions. We derive the semiparametric efficiency bound for the treatment effects for this setting, and propose efficient estimators. In the leading case with constant quantile treatment effects, one of the proposed efficient estimators has an interesting interpretation as a weighted average of quantile treatment effects, with the weights proportional to minus the second derivative of the log of the density of the potential outcomes. Our analysis also suggests an extension of Huber's model and trimmed mean to include asymmetry., Comment: Forthcoming in Journal of the Royal Statistical Society Series B: Statistical Methodology
- Published
- 2021
- Full Text
- View/download PDF
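
In the constant quantile treatment effect case, the weighted-average representation mentioned in this abstract takes roughly the following form (the notation is mine, not the paper's; f and F denote the density and distribution function of the potential outcomes):

```latex
\[
  \hat\tau \;=\; \int_0^1 w(u)\, \hat\tau(u)\, du,
  \qquad
  w(u) \;\propto\; -\,(\log f)''\!\big(F^{-1}(u)\big),
  \qquad
  \int_0^1 w(u)\, du \;=\; 1,
\]
```

where $\hat\tau(u)$ is the estimated quantile treatment effect at quantile $u$. Intuitively, for thick-tailed densities $-(\log f)''$ shrinks toward zero in the tails, so extreme quantiles receive little weight.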
29. Federated Causal Inference in Heterogeneous Observational Data
- Author
- Xiong, Ruoxuan, Koenecke, Allison, Powell, Michael, Shen, Zhu, Vogelstein, Joshua T., and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Economics - Econometrics, Quantitative Biology - Quantitative Methods, Statistics - Methodology
- Abstract
We are interested in estimating the effect of a treatment applied to individuals at multiple sites, where data is stored locally for each site. Due to privacy constraints, individual-level data cannot be shared across sites; the sites may also have heterogeneous populations and treatment assignment mechanisms. Motivated by these considerations, we develop federated methods to draw inference on the average treatment effects of combined data across sites. Our methods first compute summary statistics locally using propensity scores and then aggregate these statistics across sites to obtain point and variance estimators of average treatment effects. We show that these estimators are consistent and asymptotically normal. To achieve these asymptotic properties, we find that the aggregation schemes need to account for the heterogeneity in treatment assignments and in outcomes across sites. We demonstrate the validity of our federated methods through a comparative study of two large medical claims databases.
- Published
- 2021
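
A minimal Python sketch of the federated pattern this abstract describes: each site computes summary statistics locally, and only those summaries cross the wire. The AIPW scores and inverse-variance aggregation below are one natural illustration; the paper's schemes additionally account for heterogeneity in treatment assignment and outcomes across sites:

```python
import numpy as np

def local_summary(y, w, e, mu1, mu0):
    """Site-local AIPW scores -> (ATE estimate, variance of that estimate).
    e: propensity scores; mu1/mu0: outcome-model predictions. Only the two
    returned scalars leave the site, never the raw data."""
    scores = mu1 - mu0 + w * (y - mu1) / e - (1 - w) * (y - mu0) / (1 - e)
    return scores.mean(), scores.var(ddof=1) / len(scores)

def federated_ate(summaries):
    """Inverse-variance aggregation of [(estimate, variance), ...]."""
    est = np.array([s[0] for s in summaries])
    var = np.array([s[1] for s in summaries])
    wts = (1.0 / var) / (1.0 / var).sum()
    return float(wts @ est), float((wts ** 2 * var).sum())
```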
30. Economics: Applying Math to Real-World Problems
- Author
- Athey, Susan
- Published
- 2003
- Full Text
- View/download PDF
31. Towards Costless Model Selection in Contextual Bandits: A Bias-Variance Perspective
- Author
- Krishnamurthy, Sanath Kumar, Propp, Adrienne Margaret, and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Model selection in supervised learning provides costless guarantees as if the model that best balances bias and variance was known a priori. We study the feasibility of similar guarantees for cumulative regret minimization in the stochastic contextual bandit setting. Recent work [Marinov and Zimmert, 2021] identifies instances where no algorithm can guarantee costless regret bounds. Nevertheless, we identify benign conditions where costless model selection is feasible: gradually increasing class complexity, and diminishing marginal returns for best-in-class policy value with increasing class complexity. Our algorithm is based on a novel misspecification test, and our analysis demonstrates the benefits of using model selection for reward estimation. Unlike prior work on model selection in contextual bandits, our algorithm carefully adapts to the evolving bias-variance trade-off as more data is collected. In particular, our algorithm and analysis go beyond adapting to the complexity of the simplest realizable class and instead adapt to the complexity of the simplest class whose estimation variance dominates the bias. For short horizons, this provides improved regret guarantees that depend on the complexity of simpler classes.
- Published
- 2021
32. Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits
- Author
- Zhan, Ruohan, Hadad, Vitor, Hirshberg, David A., and Athey, Susan
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Statistics - Methodology
- Abstract
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be used to evaluate other treatment assignment policies to guide future innovation or experiments. However, policy evaluation is challenging if the target policy differs from the one used to collect data, and popular estimators, including doubly robust (DR) estimators, can be plagued by bias, excessive variance, or both. In particular, when the pattern of treatment assignment in the collected data looks little like the pattern generated by the policy to be evaluated, the importance weights used in DR estimators explode, leading to excessive variance. In this paper, we improve the DR estimator by adaptively weighting observations to control its variance. We show that a t-statistic based on our improved estimator is asymptotically normal under certain conditions, allowing us to form confidence intervals and test hypotheses. Using synthetic data and public benchmarks, we provide empirical evidence for our estimator's improved accuracy and inferential properties relative to existing alternatives.
- Published
- 2021
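
A hedged sketch of the estimator family this abstract describes, for the value of one target arm under adaptively collected data: form per-observation AIPW scores, then average them with weights that damp observations whose assignment probabilities were tiny. The square-root weights are one variance-stabilizing choice discussed in this line of work:

```python
import numpy as np

def adaptively_weighted_value(y, a, e, mu, arm):
    """y: observed rewards; a: logged arms; e: logged probability that `arm`
    was assigned at each step; mu: reward-model predictions for `arm`."""
    gamma = mu + (a == arm) * (y - mu) / e   # per-step AIPW scores
    h = np.sqrt(e)                           # adaptive weights: damp small e
    return float((h * gamma).sum() / h.sum())
```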
33. Policy Learning with Adaptively Collected Data
- Author
- Zhan, Ruohan, Ren, Zhimei, Athey, Susan, and Zhou, Zhengyuan
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Economics - Econometrics
- Abstract
Learning optimal policies from historical data enables personalization in a wide variety of applications including healthcare, digital recommendations, and online education. The growing policy learning literature focuses on settings where the data collection rule stays fixed throughout the experiment. However, adaptive data collection is becoming more common in practice, from two primary sources: 1) data collected from adaptive experiments that are designed to improve inferential efficiency; 2) data collected from production systems that progressively evolve an operational policy to improve performance over time (e.g. contextual bandits). Yet adaptivity complicates the optimal policy identification ex post, since samples are dependent, and each treatment may not receive enough observations for each type of individual. In this paper, we make initial research inquiries into addressing the challenges of learning the optimal policy with adaptively collected data. We propose an algorithm based on generalized augmented inverse propensity weighted (AIPW) estimators, which non-uniformly reweight the elements of a standard AIPW estimator to control worst-case estimation variance. We establish a finite-sample regret upper bound for our algorithm and complement it with a regret lower bound that quantifies the fundamental difficulty of policy learning with adaptive data. When equipped with the best weighting scheme, our algorithm achieves minimax rate optimal regret guarantees even with diminishing exploration. Finally, we demonstrate our algorithm's effectiveness using both synthetic data and public benchmark datasets., Comment: Improved the upper bound; added simulations
- Published
- 2021
34. Adapting to Misspecification in Contextual Bandits with Offline Regression Oracles
- Author
- Krishnamurthy, Sanath Kumar, Hadad, Vitor, and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Computationally efficient contextual bandits are often based on estimating a predictive model of rewards given contexts and arms using past data. However, when the reward model is not well-specified, the bandit algorithm may incur unexpected regret, so recent work has focused on algorithms that are robust to misspecification. We propose a simple family of contextual bandit algorithms that adapt to misspecification error by reverting to a good safe policy when there is evidence that misspecification is causing a regret increase. Our algorithm requires only an offline regression oracle to ensure regret guarantees that gracefully degrade in terms of a measure of the average misspecification level. Compared to prior work, we attain similar regret guarantees, but we do not rely on a master algorithm and do not require more robust oracles like online or constrained regression oracles (e.g., Foster et al. (2020a); Krishnamurthy et al. (2020)). This allows us to design algorithms for more general function approximation classes., Comment: ICML 2021
- Published
- 2021
35. Using Wasserstein Generative Adversarial Networks for the design of Monte Carlo simulations
- Author
- Athey, Susan, Imbens, Guido W., Metzger, Jonas, and Munro, Evan
- Published
- 2024
- Full Text
- View/download PDF
36. Tractable contextual bandits beyond realizability
- Author
- Krishnamurthy, Sanath Kumar, Hadad, Vitor, and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Mathematics - Statistics Theory, Statistics - Machine Learning
- Abstract
Tractable contextual bandit algorithms often rely on the realizability assumption - i.e., that the true expected reward model belongs to a known class, such as linear functions. In this work, we present a tractable bandit algorithm that is not sensitive to the realizability assumption and computationally reduces to solving a constrained regression problem in every epoch. When realizability does not hold, our algorithm ensures the same guarantees on regret achieved by realizability-based algorithms under realizability, up to an additive term that accounts for the misspecification error. This extra term is proportional to T times a function of the mean squared error between the best model in the class and the true model, where T is the total number of time-steps. Our work sheds light on the bias-variance trade-off for tractable contextual bandits. This trade-off is not captured by algorithms that assume realizability, since under this assumption there exists an estimator in the class that attains zero bias., Comment: 35 pages, 6 figures
- Published
- 2020
37. Combining Experimental and Observational Data to Estimate Treatment Effects on Long Term Outcomes
- Author
- Athey, Susan, Chetty, Raj, and Imbens, Guido
- Subjects
- Statistics - Methodology, Economics - Econometrics
- Abstract
There has been an increase in interest in experimental evaluations to estimate causal effects, partly because their internal validity tends to be high. At the same time, as part of the big data revolution, large, detailed, and representative administrative data sets have become more widely available. However, the credibility of estimates of causal effects based on such data sets alone can be low. In this paper, we develop statistical methods for systematically combining experimental and observational data to obtain credible estimates of the causal effect of a binary treatment on a primary outcome that we only observe in the observational sample. Both the observational and experimental samples contain data about a treatment, observable individual characteristics, and a secondary (often short term) outcome. To estimate the effect of a treatment on the primary outcome while addressing the potential confounding in the observational sample, we propose a method that makes use of estimates of the relationship between the treatment and the secondary outcome from the experimental sample. If assignment to the treatment in the observational sample were unconfounded, we would expect the treatment effects on the secondary outcome in the two samples to be similar. We interpret differences in the estimated causal effects on the secondary outcome between the two samples as evidence of unobserved confounders in the observational sample, and develop control function methods for using those differences to adjust the estimates of the treatment effects on the primary outcome. We illustrate these ideas by combining data on class size and third grade test scores from the Project STAR experiment with observational data on class size and both third and eighth grade test scores from the New York school system., Comment: 2 figures
- Published
- 2020
38. Alpha-1 adrenergic receptor antagonists to prevent hyperinflammation and death from lower respiratory tract infection
- Author
- Koenecke, Allison, Powell, Michael, Xiong, Ruoxuan, Shen, Zhu, Fischer, Nicole, Huq, Sakibul, Khalafallah, Adham M., Trevisan, Marco, Sparen, Pär, Carrero, Juan J, Nishimura, Akihiko, Caffo, Brian, Stuart, Elizabeth A., Bai, Renyuan, Staedtke, Verena, Thomas, David L., Papadopoulos, Nickolas, Kinzler, Kenneth W., Vogelstein, Bert, Zhou, Shibin, Bettegowda, Chetan, Konig, Maximilian F., Mensh, Brett, Vogelstein, Joshua T., and Athey, Susan
- Subjects
- Quantitative Biology - Tissues and Organs, Quantitative Biology - Quantitative Methods
- Abstract
In severe viral pneumonia, including Coronavirus disease 2019 (COVID-19), the viral replication phase is often followed by hyperinflammation, which can lead to acute respiratory distress syndrome, multi-organ failure, and death. We previously demonstrated that alpha-1 adrenergic receptor ($\alpha_1$-AR) antagonists can prevent hyperinflammation and death in mice. Here, we conducted retrospective analyses in two cohorts of patients with acute respiratory distress (ARD, n=18,547) and three cohorts with pneumonia (n=400,907). Federated across two ARD cohorts, we find that patients exposed to $\alpha_1$-AR antagonists, as compared to unexposed patients, had a 34% relative risk reduction for mechanical ventilation and death (OR=0.70, p=0.021). We replicated these methods on three pneumonia cohorts, all with similar effects on both outcomes. All results were robust to sensitivity analyses. These results highlight the urgent need for prospective trials testing whether prophylactic use of $\alpha_1$-AR antagonists ameliorates lower respiratory tract infection-associated hyperinflammation and death, as observed in COVID-19., Comment: 31 pages, 10 figures
- Published
- 2020
- Full Text
- View/download PDF
39. Survey Bandits with Regret Guarantees
- Author
- Krishnamurthy, Sanath Kumar and Athey, Susan
- Subjects
- Computer Science - Machine Learning, Economics - Econometrics, Statistics - Machine Learning
- Abstract
We consider a variant of the contextual bandit problem. In standard contextual bandits, when a user arrives we get the user's complete feature vector and then assign a treatment (arm) to that user. In a number of applications (like healthcare), collecting features from users can be costly. To address this issue, we propose algorithms that avoid needless feature collection while maintaining strong regret guarantees., Comment: 17 pages, 10 figures
- Published
- 2020
40. Stable Prediction with Model Misspecification and Agnostic Distribution Shift
- Author
- Kuang, Kun, Xiong, Ruoxuan, Cui, Peng, Athey, Susan, and Li, Bo
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
For many machine learning algorithms, two main assumptions are required to guarantee performance. One is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified. In real applications, however, we often have little prior knowledge on the test data and on the underlying true model. Under model misspecification, agnostic distribution shift between training and test data leads to inaccuracy of parameter estimation and instability of prediction across unknown test data. To address these problems, we propose a novel Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a variable decorrelation regularizer and a weighted regression model. The variable decorrelation regularizer estimates a weight for each sample such that variables are decorrelated on the weighted training data. Then, these weights are used in the weighted regression to improve the accuracy of estimation of the effect of each variable, thus helping to improve the stability of prediction across unknown test data. Extensive experiments clearly demonstrate that our DWR algorithm can significantly improve the accuracy of parameter estimation and stability of prediction with model misspecification and agnostic distribution shift.
- Published
- 2020
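
A simplified Python sketch of the two-step DWR idea (my implementation, not the authors'): learn positive, mean-one sample weights that shrink the off-diagonal entries of the weighted covariance of the covariates, then run a weighted least-squares regression with them:

```python
import numpy as np
from scipy.optimize import minimize

def decorrelation_weights(X, ridge=1e-2):
    """Weights that approximately decorrelate the columns of X on the
    reweighted sample; the ridge term keeps weights near uniform."""
    n, _ = X.shape

    def loss(theta):
        w = np.exp(theta)
        w = n * w / w.sum()                     # positive, mean-one weights
        Xc = X - (w[:, None] * X).sum(0) / n    # weighted centering
        C = (w[:, None] * Xc).T @ Xc / n        # weighted covariance
        off = C - np.diag(np.diag(C))           # off-diagonal correlations
        return (off ** 2).sum() + ridge * ((w - 1) ** 2).mean()

    theta = minimize(loss, np.zeros(n), method='L-BFGS-B').x
    w = np.exp(theta)
    return n * w / w.sum()

# Weighted regression step, e.g.:
# w = decorrelation_weights(X)
# beta = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * y, rcond=None)[0]
```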
41. Falling living standards during the COVID-19 crisis: Quantitative evidence from nine developing countries.
- Author
- Egger, Dennis, Miguel, Edward, Warren, Shana S, Shenoy, Ashish, Collins, Elliott, Karlan, Dean, Parkerson, Doug, Mobarak, A Mushfiq, Fink, Günther, Udry, Christopher, Walker, Michael, Haushofer, Johannes, Larreboure, Magdalena, Athey, Susan, Lopez-Pena, Paula, Benhachmi, Salim, Humphreys, Macartan, Lowe, Layna, Meriggi, Niccoló F, Wabwire, Andrew, Davis, C Austin, Pape, Utz Johann, Graff, Tilman, Voors, Maarten, Nekesa, Carolyn, and Vernot, Corey
- Subjects
- Humans, Family Characteristics, Seasons, Domestic Violence, Government Programs, Developing Countries, Agriculture, Adult, Child, Employment, Income, Africa, Colombia, Asia, Female, Male, Economic Recession, Pandemics, Surveys and Questionnaires, COVID-19, SARS-CoV-2, Food Insecurity, Basic Behavioral and Social Science, Behavioral and Social Science
- Abstract
Despite numerous journalistic accounts, systematic quantitative evidence on economic conditions during the ongoing COVID-19 pandemic remains scarce for most low- and middle-income countries, partly due to limitations of official economic statistics in environments with large informal sectors and subsistence agriculture. We assemble evidence from over 30,000 respondents in 16 original household surveys from nine countries in Africa (Burkina Faso, Ghana, Kenya, Rwanda, Sierra Leone), Asia (Bangladesh, Nepal, Philippines), and Latin America (Colombia). We document declines in employment and income in all settings beginning March 2020. The share of households experiencing an income drop ranges from 8 to 87% (median, 68%). Household coping strategies and government assistance were insufficient to sustain precrisis living standards, resulting in widespread food insecurity and dire economic conditions even 3 months into the crisis. We discuss promising policy responses and speculate about the risk of persistent adverse effects, especially among children and other vulnerable groups.
- Published
- 2021
42. Optimal Experimental Design for Staggered Rollouts
- Author
- Xiong, Ruoxuan, Athey, Susan, Bayati, Mohsen, and Imbens, Guido
- Subjects
- Economics - Econometrics, Statistics - Methodology, Statistics - Machine Learning
- Abstract
In this paper, we study the design and analysis of experiments conducted on a set of units over multiple time periods where the starting time of the treatment may vary by unit. The design problem involves selecting an initial treatment time for each unit in order to most precisely estimate both the instantaneous and cumulative effects of the treatment. We first consider non-adaptive experiments, where all treatment assignment decisions are made prior to the start of the experiment. For this case, we show that the optimization problem is generally NP-hard, and we propose a near-optimal solution. Under this solution, the fraction entering treatment each period is initially low, then high, and finally low again. Next, we study an adaptive experimental design problem, where both the decision to continue the experiment and treatment assignment decisions are updated after each period's data is collected. For the adaptive case, we propose a new algorithm, the Precision-Guided Adaptive Experiment (PGAE) algorithm, that addresses the challenges at both the design stage and at the stage of estimating treatment effects, ensuring valid post-experiment inference accounting for the adaptive nature of the design. Using realistic settings, we demonstrate that our proposed solutions can reduce the opportunity cost of the experiments by over 50%, compared to static design benchmarks., Comment: Forthcoming in Management Science
- Published
- 2019
43. Confidence Intervals for Policy Evaluation in Adaptive Experiments
- Author
- Hadad, Vitor, Hirshberg, David A., Zhan, Ruohan, Wager, Stefan, and Athey, Susan
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Statistics - Methodology
- Abstract
Adaptive experiment designs can dramatically improve statistical efficiency in randomized trials, but they also complicate statistical inference. For example, it is now well known that the sample mean is biased in adaptive trials. Inferential challenges are exacerbated when our parameter of interest differs from the parameter the trial was designed to target, such as when we are interested in estimating the value of a sub-optimal treatment after running a trial to determine the optimal treatment using a stochastic bandit design. In this context, typical estimators that use inverse propensity weighting to eliminate sampling bias can be problematic: their distributions become skewed and heavy-tailed as the propensity scores decay to zero. In this paper, we present a class of estimators that overcome these issues. Our approach is to adaptively reweight the terms of an augmented inverse propensity weighting estimator to control the contribution of each term to the estimator's variance. This adaptive weighting scheme prevents estimates from becoming heavy-tailed, ensuring asymptotically correct coverage. It also reduces variance, allowing us to test hypotheses with greater power - especially hypotheses that were not targeted by the experimental design. We validate the accuracy of the resulting estimates and their confidence intervals in numerical experiments and show our methods compare favorably to existing alternatives in terms of RMSE and coverage.
- Published
- 2019
44. Using Wasserstein Generative Adversarial Networks for the Design of Monte Carlo Simulations
- Author
- Athey, Susan, Imbens, Guido, Metzger, Jonas, and Munro, Evan
- Subjects
- Economics - Econometrics, Statistics - Methodology
- Abstract
When researchers develop new econometric methods, it is common practice to compare the performance of the new methods to those of existing methods in Monte Carlo studies. The credibility of such Monte Carlo studies is often limited because of the freedom the researcher has in choosing the design. In recent years, a new class of generative models has emerged in the machine learning literature, termed Generative Adversarial Networks (GANs), which can be used to systematically generate artificial data that closely mimics real economic datasets, while limiting the degrees of freedom for the researcher and optionally satisfying privacy guarantees with respect to their training data. In addition, if an applied researcher is concerned with the performance of a particular statistical method on a specific data set (beyond its theoretical properties in large samples), she may wish to assess the performance, e.g., the coverage rate of confidence intervals or the bias of the estimator, using simulated data which resembles her setting. To illustrate these methods, we apply Wasserstein GANs (WGANs) to compare a number of different estimators for average treatment effects under unconfoundedness in three distinct settings (corresponding to three real data sets) and present a methodology for assessing the robustness of the results. In this example, we find that (i) there is not one estimator that outperforms the others in all three settings, so researchers should tailor their analytic approach to a given setting, and (ii) systematic simulation studies can be helpful for selecting among competing methods in this situation., Comment: 30 pages, 4 figures
- Published
- 2019
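To make the workflow in the entry above concrete, here is a hedged Monte Carlo harness: given any dataset sampler (in practice a fitted WGAN; here a toy Gaussian stand-in) and any ATE estimator, it records bias, RMSE, and confidence-interval coverage. The generator and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_study(sample_dataset, estimate_ate, true_ate, n_reps=500):
    """Record bias, RMSE, and 95% CI coverage of an ATE estimator over
    repeated draws from a generative model (e.g., a fitted WGAN)."""
    z = 1.96
    errs, hits = [], 0
    for _ in range(n_reps):
        X, w, y = sample_dataset()
        tau_hat, se = estimate_ate(X, w, y)
        errs.append(tau_hat - true_ate)
        hits += abs(tau_hat - true_ate) <= z * se
    errs = np.asarray(errs)
    return {"bias": errs.mean(),
            "rmse": np.sqrt((errs ** 2).mean()),
            "coverage": hits / n_reps}

def toy_generator(n=500, tau=1.0):
    # stand-in for a WGAN sampler; randomized design for simplicity
    X = rng.normal(size=(n, 3))
    w = rng.binomial(1, 0.5, size=n)
    y = X @ np.array([0.5, -0.25, 0.1]) + tau * w + rng.normal(size=n)
    return X, w, y

def diff_in_means(X, w, y):
    est = y[w == 1].mean() - y[w == 0].mean()
    se = np.sqrt(y[w == 1].var(ddof=1) / (w == 1).sum()
                 + y[w == 0].var(ddof=1) / (w == 0).sum())
    return est, se

print(coverage_study(toy_generator, diff_in_means, true_ate=1.0))
```

Swapping `toy_generator` for a sampler fitted to a real dataset, and `diff_in_means` for each candidate estimator, reproduces the kind of estimator comparison the paper carries out.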
45. Sufficient Representations for Categorical Variables
- Author
-
Johannemann, Jonathan, Hadad, Vitor, Athey, Susan, and Wager, Stefan
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning - Abstract
Many learning algorithms require categorical data to be transformed into real vectors before it can be used as input. Often, categorical variables are encoded as one-hot (or dummy) vectors. However, this mode of representation can be wasteful since it adds many low-signal regressors, especially when the number of unique categories is large. In this paper, we investigate simple alternative solutions for universally consistent estimators that rely on lower-dimensional real-valued representations of categorical variables that are "sufficient" in the sense that no predictive information is lost. We then compare preexisting and proposed methods on simulated and observational datasets.
- Published
- 2019
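One concrete instance of such a representation is a "means encoding," in which each category is replaced by the within-category means of the remaining features. The pandas sketch below illustrates that idea; it is an assumption-laden simplification, not the paper's exact construction.

```python
import pandas as pd

def means_encoding(df, cat_col, feature_cols):
    """Replace a categorical column by within-category means of other
    features -- a low-dimensional alternative to one-hot encoding."""
    means = df.groupby(cat_col)[feature_cols].mean()
    means.columns = [f"{cat_col}_mean_{c}" for c in feature_cols]
    return df.join(means, on=cat_col).drop(columns=[cat_col])

df = pd.DataFrame({"store": ["a", "a", "b", "b", "c"],
                   "price": [1.0, 1.2, 2.0, 2.2, 3.0],
                   "volume": [10, 12, 5, 6, 2]})
print(means_encoding(df, "store", ["price", "volume"]))
```

With many unique categories, this yields a fixed number of real-valued columns regardless of cardinality, which is precisely the savings over one-hot vectors that motivates the paper.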
46. Counterfactual Inference for Consumer Choice Across Many Product Categories
- Author
-
Donnelly, Rob, Ruiz, Francisco R., Blei, David, and Athey, Susan
- Subjects
Computer Science - Machine Learning ,Economics - Econometrics ,Statistics - Machine Learning - Abstract
This paper proposes a method for estimating consumer preferences among discrete choices, where the consumer chooses at most one product in a category, but selects from multiple categories in parallel. The consumer's utility is additive in the different categories. Her preferences about product attributes as well as her price sensitivity vary across products and are in general correlated across products. We build on techniques from the machine learning literature on probabilistic models of matrix factorization, extending the methods to account for time-varying product attributes and products going out of stock. We evaluate the performance of the model using held-out data from weeks with price changes or out-of-stock products. We show that our model improves over traditional modeling approaches that consider each category in isolation. One source of the improvement is the ability of the model to accurately estimate heterogeneity in preferences (by pooling information across categories); another source of improvement is its ability to estimate the preferences of consumers who have rarely or never made a purchase in a given category in the training data. Using held-out data, we show that our model can accurately distinguish which consumers are most price sensitive to a given product. We consider counterfactuals such as personally targeted price discounts, showing that using a richer model such as the one we propose substantially increases the benefits of personalization in discounts.
- Published
- 2019
- Full Text
- View/download PDF
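The modeling backbone of the entry above can be caricatured in a few lines: a logistic matrix factorization of purchase incidence with a per-consumer price-sensitivity term. The sketch below is a deliberately stripped-down stand-in for the paper's hierarchical model (no time-varying attributes, no stock-outs), with all names chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_choice_mf(user, item, price, bought, n_users, n_items,
                  k=5, lr=0.05, epochs=30, reg=0.01):
    """Logistic matrix factorization of purchase incidence with a
    user-specific price coefficient, trained by plain SGD."""
    U = 0.1 * rng.normal(size=(n_users, k))   # latent consumer preferences
    V = 0.1 * rng.normal(size=(n_items, k))   # latent product attributes
    beta = np.zeros(n_users)                  # per-consumer price sensitivity
    for _ in range(epochs):
        for u, i, p, y in zip(user, item, price, bought):
            util = U[u] @ V[i] + beta[u] * p
            prob = 1.0 / (1.0 + np.exp(-util))  # purchase probability
            g = prob - y                        # gradient of the logistic loss
            Uu, Vi = U[u].copy(), V[i].copy()
            U[u] -= lr * (g * Vi + reg * Uu)
            V[i] -= lr * (g * Uu + reg * Vi)
            beta[u] -= lr * (g * p + reg * beta[u])
    return U, V, beta
```

Pooling across categories enters through the shared consumer factors U: a consumer's purchases in one category inform her latent vector, which in turn shapes predictions in categories where she has little or no purchase history.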
47. Ensemble Methods for Causal Effects in Panel Data Settings
- Author
-
Athey, Susan, Bayati, Mohsen, Imbens, Guido, and Qu, Zhaonan
- Subjects
Economics - Econometrics - Abstract
This paper studies a panel data setting where the goal is to estimate causal effects of an intervention by predicting the counterfactual values of outcomes for treated units, had they not received the treatment. Several approaches have been proposed for this problem, including regression methods, synthetic control methods and matrix completion methods. This paper considers an ensemble approach, and shows that it performs better than any of the individual methods in several economic datasets. Matrix completion methods are often given the most weight by the ensemble, but this clearly depends on the setting. We argue that ensemble methods present a fruitful direction for further research in the causal panel data setting.
- Published
- 2019
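A hedged sketch of one natural ensembling scheme in this setting: fit each candidate method, then choose nonnegative stacking weights by least squares on held-out pre-treatment periods. The paper's exact weighting procedure may differ; the names below are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def stack_counterfactuals(preds_val, actual_val, preds_test):
    """Combine counterfactual predictors (regression, synthetic control,
    matrix completion, ...) via nonnegative least squares on validation
    periods, normalized to a convex combination.

    preds_val  : (n_val, n_methods) predictions for held-out periods
    actual_val : (n_val,) realized outcomes for those periods
    preds_test : (n_test, n_methods) predictions for the treated periods
    """
    w, _ = nnls(preds_val, actual_val)   # nonnegative stacking weights
    if w.sum() > 0:
        w = w / w.sum()
    return preds_test @ w, w
```

The returned weight vector makes the ensemble interpretable: in the paper's experiments, matrix completion methods often receive the largest weight, but the mix varies by dataset.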
48. Machine Learning Methods Economists Should Know About
- Author
-
Athey, Susan and Imbens, Guido
- Subjects
Economics - Econometrics ,Statistics - Machine Learning - Abstract
We discuss the relevance of the recent Machine Learning (ML) literature for economics and econometrics. First we discuss the differences in goals, methods and settings between the ML literature and the traditional econometrics and statistics literatures. Then we discuss some specific methods from the machine learning literature that we view as important for empirical researchers in economics. These include supervised learning methods for regression and classification, unsupervised learning methods, as well as matrix completion methods. Finally, we highlight newly developed methods at the intersection of ML and econometrics, methods that typically perform better than either off-the-shelf ML or more traditional econometric methods when applied to particular classes of problems, problems that include causal inference for average treatment effects, optimal policy estimation, and estimation of the counterfactual effect of price changes in consumer choice models.
- Published
- 2019
49. Estimating Treatment Effects with Causal Forests: An Application
- Author
-
Athey, Susan and Wager, Stefan
- Subjects
Statistics - Methodology - Abstract
We apply causal forests to a dataset derived from the National Study of Learning Mindsets, and consider resulting practical and conceptual challenges. In particular, we discuss how causal forests use estimated propensity scores to be more robust to confounding, and how they handle data with clustered errors., Comment: This note will appear in an upcoming issue of Observational Studies, Empirical Investigation of Methods for Heterogeneity, that compiles several analyses of the same dataset
- Published
- 2019
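The note's own analysis is carried out with the R package grf. For readers working in Python, here is an analogous hedged sketch assuming the econml library (our assumption, not the paper's code), in which the propensity and outcome models are cross-fitted internally before the forest is grown.

```python
import numpy as np
from econml.dml import CausalForestDML
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
e = 1.0 / (1.0 + np.exp(-X[:, 0]))        # confounded treatment assignment
T = rng.binomial(1, e)
tau = 0.5 + 0.5 * (X[:, 1] > 0)           # heterogeneous treatment effect
Y = X[:, 0] + tau * T + rng.normal(size=n)

cf = CausalForestDML(model_y=GradientBoostingRegressor(),
                     model_t=GradientBoostingClassifier(),
                     discrete_treatment=True,
                     n_estimators=500, random_state=0)
cf.fit(Y, T, X=X)            # nuisance models cross-fitted internally
print(cf.effect(X[:5]))      # conditional average treatment effects
print(cf.ate(X))             # average of the estimated CATEs
```

Clustered data of the kind discussed in the note (students nested in schools) additionally require cluster-aware resampling and standard errors; in R, grf exposes this through the `clusters` argument to `causal_forest`.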
50. The Year in Review: Economics at the Antitrust Division 2021–2022
- Author
-
Athey, Susan, Pittman, Russell, and Zhang, Fan
- Published
- 2022
- Full Text
- View/download PDF