36 results for "Chad Hazlett"
Search Results
2. Political leadership has limited impact on fossil fuel taxes and subsidies
- Author
-
Cesar B. Martinez-Alvarez, Chad Hazlett, Paasha Mahdavi, and Michael L. Ross
- Subjects
leadership, History, Fossil Fuels, Multidisciplinary, Taxes, political economy, climate change, Affordable and Clean Energy, Renewable Energy, Business and International Management, fossil fuel subsidies, Gasoline, carbon pricing
- Abstract
For countries to rapidly decarbonize, they need strong leadership, according to both academic studies and popular accounts. But leadership is difficult to measure, and its importance is unclear. We use original data to investigate the role of presidents, prime ministers, and monarchs in 155 countries from 1990 to 2015 in changing their countries’ gasoline taxes and subsidies. Our findings suggest that the impact of leaders on fossil fuel taxes and subsidies is surprisingly limited and often ephemeral. This holds true regardless of the leader’s age, gender, education, or political ideology. Rulers who govern during an economic crisis perform no better or worse than other rulers. Even presidents and prime ministers who were recognized by the United Nations for environmental leadership had no more success than other leaders in reducing subsidies or raising fuel taxes. Where leaders appear to play an important role—primarily in countries with large subsidies—their reforms often failed, with subsidies returning to prereform levels within the first 12 months 62% of the time, and within 5 years 87% of the time. Our findings suggest that leaders of all types find it exceptionally hard to raise the cost of fossil fuels for consumers. To promote deep decarbonization, leaders are likely to have more success with other types of policies, such as reducing the costs and increasing the availability of renewable energy.
- Published
- 2022
3. From 'Is It Unconfounded?' to 'How Much Confounding Would It Take?': Applying the Sensitivity-Based Approach to Assess Causes of Support for Peace in Colombia
- Author
-
Chad Hazlett and Francesca Parente
- Subjects
Sociology and Political Science
- Published
- 2023
4. Measuring Ethnic Bias: Can Misattribution-Based Tools from Social Psychology Reveal Group Biases that Economics Games Cannot?
- Author
-
Chad Hazlett, Daniel N. Posner, and Ashley Blum
- Subjects
Social psychology, behavioral games, misattribution, Sociology and Political Science, conflict, Political Science, Ethnic group, Clinical Research, ethnic bias, Behavioral and Social Science, ethnic preference, experimental economics, Public good, Mental Health, economics games, Political Science and International Relations
- Abstract
Economics games such as the Dictator and Public Goods Games have been widely used to measure ethnic bias in political science and economics. Yet these tools may fail to measure bias as intended because they are vulnerable to self-presentational concerns and/or fail to capture bias rooted in more automatic associative and affective reactions. We examine a set of misattribution-based approaches, adapted from social psychology, that may sidestep these concerns. Participants in Nairobi, Kenya, completed a series of common economics games alongside versions of these misattribution tasks adapted for this setting, each designed to detect bias toward noncoethnics relative to coethnics. Several of the misattribution tasks show clear evidence of (expected) bias, arguably reflecting differences in positive/negative affect and heightened threat perception toward noncoethnics. The Dictator and Public Goods Games, by contrast, are unable to detect any bias in behavior toward noncoethnics versus coethnics. We conclude that researchers of ethnic and other biases may benefit from including misattribution-based procedures in their tool kits to widen the set of biases to which their investigations are sensitive.
- Published
- 2021
5. Understanding, Choosing, and Unifying Multilevel and Fixed Effect Approaches
- Author
-
Leonard Wainstein and Chad Hazlett
- Subjects
Mathematical optimization, Sociology and Political Science, Computer science, Political Science and International Relations, social sciences methods
- Abstract
When working with grouped data, investigators may choose between “fixed effects” models (FE) with specialized (e.g., cluster-robust) standard errors, or “multilevel models” (MLMs) employing “random effects.” We review the claims given in published works regarding this choice, then clarify how these approaches work and compare by showing that: (i) random effects employed in MLMs are simply “regularized” fixed effects; (ii) unmodified MLMs are consequently susceptible to bias—but there is a longstanding remedy; and (iii) the “default” MLM standard errors rely on narrow assumptions that can lead to undercoverage in many settings. Our review of over 100 papers using MLM in political science, education, and sociology shows that these “known” concerns have been widely ignored in practice. We describe how to debias MLM’s coefficient estimates, and provide an option to more flexibly estimate their standard errors. Most illuminating, once MLMs are adjusted in these two ways, the point estimate and standard error for the target coefficient are exactly equal to those of the analogous FE model with cluster-robust standard errors. For investigators working with observational data who are interested only in inference on the target coefficient, either approach is equally appropriate and preferable to uncorrected MLM.
- Published
- 2020
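The debiasing remedy this abstract alludes to (adding group means of the regressor, the classic Mundlak device) can be illustrated in a simplified OLS setting. This is a sketch with simulated data, not the authors' code: the coefficient on x from OLS with group means included matches the within ("fixed effects") estimator exactly.

```python
# Sketch (simulated data, not from the paper): adding group means of the
# regressor (Mundlak device) makes the OLS coefficient on x equal to the
# within / "fixed effects" estimator, even when x is correlated with the
# group effects.
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 50, 20
g = np.repeat(np.arange(n_groups), n_per)                 # group labels
alpha = rng.normal(0, 1, n_groups)                        # group effects
x = rng.normal(0, 1, n_groups * n_per) + 0.5 * alpha[g]   # x correlated with group effect
y = 2.0 * x + alpha[g] + rng.normal(0, 1, n_groups * n_per)

# Within (FE) estimator: demean x and y within groups
def demean(v):
    means = np.bincount(g, weights=v) / np.bincount(g)
    return v - means[g]

beta_fe = np.sum(demean(x) * demean(y)) / np.sum(demean(x) ** 2)

# Mundlak regression: y on [1, x, group mean of x]
xbar = (np.bincount(g, weights=x) / np.bincount(g))[g]
X = np.column_stack([np.ones_like(x), x, xbar])
beta_mundlak = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(beta_fe, beta_mundlak)   # identical point estimates
```

The equality holds because within-group deviations are orthogonal to the group-mean column; the paper's result extends this logic to MLMs with regularized ("random") effects.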
6. An Omitted Variable Bias Framework for Sensitivity Analysis of Instrumental Variables
- Author
-
Carlos Cinelli and Chad Hazlett
- Published
- 2022
7. Wildfire Exposure Increases Pro-Environment Voting within Democratic but Not Republican Areas
- Author
-
Chad Hazlett and Matto Mildenberger
- Subjects
meteorology & atmospheric sciences, Sociology and Political Science, Climate change, Hazard, Democracy, Politics, Ballot, Voting, Political Science and International Relations, Development economics, Public support
- Abstract
One political barrier to climate reforms is the temporal mismatch between short-term policy costs and long-term policy benefits. Will public support for climate reforms increase as climate-related disasters make the short-term costs of inaction more salient? Leveraging variation in the timing of Californian wildfires, we evaluate how exposure to a climate-related hazard influences political behavior rather than self-reported attitudes or behavioral intentions. We show that wildfires increased support for costly, climate-related ballot measures by 5 to 6 percentage points for those living within 5 kilometers of a recent wildfire, decaying to near zero beyond a distance of 15 kilometers. This effect is concentrated in Democratic-voting areas, and it is nearly zero in Republican-dominated areas. We conclude that experienced climate threats can enhance willingness-to-act but largely in places where voters are known to believe in climate change.
- Published
- 2020
8. How very massive atrocities end: A dataset and typology
- Author
-
Bridget Conley and Chad Hazlett
- Subjects
Typology, Spanish Civil War, Sociology and Political Science, Political Science and International Relations, Criminology, Genocide, Safety Research
- Abstract
Understanding how the most severe mass atrocities have historically come to an end may aid in designing policy interventions to more rapidly terminate future episodes. To facilitate research in this area, we construct a new dataset covering all 43 very large mass atrocities perpetrated by governments or non-state actors since 1945 with at least 50,000 civilian fatalities. This article introduces and summarizes these data, including an inductively generated typology of three major ending types: those in which (i) violence is carried out to its intended conclusion (37%); (ii) the perpetrator is driven out of power militarily (26%); or (iii) the perpetrator shifts to a different strategy no longer involving mass atrocities against civilians (37%). We find that international actors play a range of important roles in endings, often involving encouragement and support for policy changes that reduce mass killings. Endings could be attributed principally to armed foreign interventions in only four cases, three of which involved regime change. Within the cases we study, no ending was attributable to a neutral peacekeeping mission.
- Published
- 2020
9. Trusted authorities can change minds and shift norms during conflict
- Author
-
Rebecca Littman, Rebecca Wolfe, Elizabeth R. Nugent, Chad Hazlett, Anthony Etim, Mohammed Bukar, Benjamin Crisman, Graeme Blair, and Jiyoung Kim
- Subjects
Forgiveness, reintegration, conflict, Nigeria, Social Sciences, Violence, Trust, Politics, Clinical Research, Perception, Behavioral and Social Science, Conflict resolution, Social Norms, violent extremism, Peace, Multidisciplinary, Religion, Leadership, Terrorism, Attitude change, Religious leader, Social psychology, norms
- Abstract
Significance: Violent extremist groups such as the Islamic State and Boko Haram have proliferated across the world in recent decades. While considerable scholarship addresses why people join violent extremist groups, much less attention has been paid to how former members reenter society. Yet successfully ending conflict requires reluctant communities to accept former members back home. In this research, we find that radio messages delivered by trusted authorities in Nigeria lead to large, positive changes in people’s willingness to accept former Boko Haram fighters back home and make people think their neighbors are more in favor of reintegration. Our results show that messages from leaders can create change on a mass scale at low cost, helping to end conflict and division.

The reintegration of former members of violent extremist groups is a pressing policy challenge. Governments and policymakers often have to change minds among reticent populations and shift perceived community norms in order to pave the way for peaceful reintegration. How can they do so on a mass scale? Previous research shows that messages from trusted authorities can be effective in creating attitude change and shifting perceptions of social norms. In this study, we test whether messages from religious leaders—trusted authorities in many communities worldwide—can change minds and shift norms around an issue related to conflict resolution: the reintegration of former members of violent extremist groups. Our study takes place in Maiduguri, Nigeria, the birthplace of the violent extremist group Boko Haram. Participants were randomly assigned to listen to either a placebo radio message or to a treatment message from a religious leader emphasizing the importance of forgiveness, announcing the leader’s forgiveness of repentant fighters, and calling on followers to forgive.
Participants were then asked about their attitudes, intended behaviors, and perceptions of social norms surrounding the reintegration of an ex–Boko Haram fighter. The religious leader message significantly increased support for reintegration and willingness to interact with the ex-fighter in social, political, and economic life (8 to 10 percentage points). It also shifted people’s beliefs that others in their community were more supportive of reintegration (6 to 10 percentage points). Our findings suggest that trusted authorities such as religious leaders can be effective messengers for promoting peace.
- Published
- 2021
10. Beyond Poverty as a Proxy: Reducing Inequality in Infant Mortality by Identifying and Targeting Higher-Risk Births
- Author
-
Antonio Pedro Ramos, Stephen R. Smith, and Chad Hazlett
- Subjects
Inequality, Poverty, Population, Infant mortality, Decile, Medicine, Demography
- Abstract
Background: Infant mortality remains high and unevenly distributed in much of sub-Saharan Africa. Though cheap and effective interventions can in principle prevent many of these deaths, given a finite budget for supplying such interventions in many countries, their life-saving benefit is critically constrained by the ability to target them to those individuals who would otherwise likely have died. While countries routinely use poverty- or wealth-based thresholds to target policies and interventions, including those related to maternal health and early mortality, we examine whether models that consider other easy-to-measure factors can substantially improve our ability to identify higher-risk births, and consequently the potential for life-saving intervention.
Methods: Using up to 25 variables from the demographic health survey for 22 sub-Saharan African countries, we employ established machine learning methods to estimate per-child risk of infant mortality, separately in each country. We use out-of-sample testing methods to provide an honest assessment of how useful these methods are for identifying high-risk births. We also compare this to a benchmark similar to standard practice, in which only (within-country) measures of wealth can be used for targeting.
Results: Targeting based on wealth alone is only slightly better than random targeting, with the poorest 10 percent of the population experiencing approximately 10 percent of the total infant mortality burden. By contrast, the 10 percent of the population at highest risk according to our model account for 15-30 percent of infant deaths, depending on the country. In other words, for a hypothetical intervention with some fixed effectiveness that can be administered to only a chosen 10 percent of the population, such a targeting approach would save 1.5-3 times as many lives as one based on wealth alone.
Interpretation: In the 22 sub-Saharan African countries studied, approaches that use commonly available types of data to flexibly predict high-risk births have the potential to substantially improve the targeting, and thus the maximum potential effectiveness, of life-saving interventions against early mortality.
Research in Context
Evidence before the study: In many low-income countries, interventions that aim to reduce infant mortality cannot be made universally available. While low-cost medical interventions are available to avoid many of the most common causes of death, these interventions can only save lives if they are targeted to individuals who would have otherwise died. Currently, most national programs and policies aiming to reduce infant mortality employ only poverty-related assessments to target those programs. Large gains in the effective targeting of such programs, and thus their life-saving potential, may be possible if risk can be assessed using more flexible statistical models that draw on a variety of easy-to-collect data, but existing models have not done so. Surprisingly, few prior studies have combined multiple risk factors to estimate early-life mortality risk in the general population, and none of these employ flexible (machine learning) models or compare the results to a traditional poverty- or wealth-based targeting benchmark.
Added value of this study: Working with publicly available data from 22 countries in sub-Saharan Africa, we first show that targeting early mortality interventions based only on income assessments cannot do much better than random distribution in most countries. We then employ established machine learning methods to show how, using a range of generally available data prior to each birth, a risk score can be computed for each birth that is more predictive of early mortality. Targeting the highest-risk decile of births, for example, can capture only about 10% of deaths using income alone, but 15-30% (depending on the country) with our more inclusive model.
Implications of all available evidence: We conclude that it is feasible, using standard machine learning methods and data similar to what is routinely collected, to greatly improve upon income-based targeting schemes for interventions seeking to minimize early mortality. In the cases studied, these could greatly improve the targeting and consequently the life-saving potential of such interventions.
- Published
- 2021
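The evaluation logic in this abstract (what share of deaths falls in the top risk decile under different targeting rules) can be sketched on synthetic data. This is an illustrative toy, not the paper's DHS analysis: the variable names and coefficients are invented, and the "risk model" here simply uses the true risk index rather than a trained classifier.

```python
# Illustrative sketch (synthetic data, not DHS): compare the share of infant
# deaths "captured" when the top risk decile is targeted by (a) a wealth
# proxy alone versus (b) a multi-factor risk score.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
wealth = rng.normal(size=n)                  # weakly related to mortality
other = rng.normal(size=(n, 3))              # stand-ins for other easy-to-measure factors
logit = -4 - 0.1 * wealth + other @ np.array([0.8, 0.6, 0.5])
p = 1 / (1 + np.exp(-logit))
death = rng.random(n) < p                    # simulated infant deaths

def capture_rate(score, frac=0.10):
    """Share of all deaths occurring among the top `frac` of the score."""
    cutoff = np.quantile(score, 1 - frac)
    return death[score >= cutoff].sum() / death.sum()

wealth_based = capture_rate(-wealth)   # target the poorest decile
risk_based = capture_rate(logit)       # target by the multi-factor risk index
print(wealth_based, risk_based)        # risk-based targeting captures far more deaths
```

In the simulation, wealth-based targeting captures close to the 10% baseline while risk-based targeting captures several times more, mirroring the paper's comparison in stylized form.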
11. Pre-existing conditions in Hispanics/Latinxs that are COVID-19 risk factors
- Author
-
Timothy S. Chang, Yi Ding, Malika K. Freund, Ruth Johnson, Tommer Schwarz, Julie M. Yabu, Chad Hazlett, Jeffrey N. Chiang, David A. Wulf, Daniel H. Geschwind, Manish J. Butte, Bogdan Pasaniuc, Anna L. Antonio, Maryam Ariannejad, Angela M. Badillo, Brunilda Balliu, Yael Berkovich, Michael Broudy, Tony Dang, Chris Denny, Eleazar Eskin, Eran Halperin, Brian L. Hill, Ankur Jain, Vivek Katakwar, Clara Lajonchere, Clara Magyar, Sheila Minton, Ghouse Mohammed, Ariff Muhamed, Pabba Pavan, Michael A. Pfeffer, Nadav Rakocz, Akos Rudas, Rey Salonga, Timothy J. Sanders, Paul Tung, Vu Vu, and Ailsa Zheng
- Subjects
Multidisciplinary, Coronavirus disease 2019 (COVID-19), Public health, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Disease severity, Internal medicine, Health care, Risk factor, virology
- Abstract
Coronavirus disease 2019 (COVID-19) has exposed health care disparities in minority groups including Hispanics/Latinxs (HL). Studies of COVID-19 risk factors for HL have relied on county-level data. We investigated COVID-19 risk factors in HL using individual-level electronic health records in a Los Angeles health system between March 9, 2020, and August 31, 2020. Of 9,287 HL tested for SARS-CoV-2, 562 were positive. HL constituted an increasing percentage of all COVID-19 positive individuals as disease severity escalated. Multiple risk factors identified in Non-Hispanic/Latinx whites (NHL-W), like renal disease, also conveyed risk in HL. Pre-existing nonrheumatic mitral valve disorder was a risk factor for HL hospitalization but not for NHL-W COVID-19 or HL influenza hospitalization, suggesting it may be a specific HL COVID-19 risk. Admission laboratory values also suggested that HL presented with a greater inflammatory response. COVID-19 risk factors for HL can help guide equitable government policies and identify at-risk populations.
- Published
- 2021
12. Credible learning of hydroxychloroquine and dexamethasone effects on COVID-19 mortality outside of randomized trials
- Author
-
Brian T. Montague, Bogdan Pasaniuc, Chad Hazlett, Kristine M. Erlandson, Onyebuchi A. Arah, and David Ami Wulf
- Subjects
Coronavirus disease 2019 (COVID-19), Mortality rate, Clinical Trials and Supportive Activities, Hydroxychloroquine, Randomized controlled trial, Clinical Research, Cohort, Dexamethasone
- Abstract
Objectives: To investigate the effectiveness of hydroxychloroquine and dexamethasone on coronavirus disease (COVID-19) mortality using patient data outside of randomized trials.
Design: Phenotypes derived from electronic health records were analyzed using the stability-controlled quasi-experiment (SCQE) to provide a range of possible causal effects of hydroxychloroquine and dexamethasone on COVID-19 mortality.
Setting and participants: Data from 2,007 COVID-19 positive patients hospitalized at a large university hospital system over the course of 200 days and not enrolled in randomized trials were analyzed using SCQE. For hydroxychloroquine, we examine a high-use cohort (n=766, days 1 to 43) and a later, low-use cohort (n=548, days 44 to 82). For dexamethasone, we examine a low-use cohort (n=614, days 44 to 101) and a high-use cohort (n=622, days 102 to 200).
Outcome measure: 14-day mortality, with a secondary outcome of 28-day mortality.
Results: Hydroxychloroquine could only have been significantly (p
Conclusions: The assumptions required for a beneficial effect of hydroxychloroquine on 14-day mortality are difficult to sustain, while the assumptions required for hydroxychloroquine to be harmful are difficult to reject with confidence. Dexamethasone, by contrast, was beneficial under a wide range of plausible assumptions, and was harmful only if a nearly impossible assumption is met. More broadly, the SCQE reveals what inferences can be credibly supported by evidence from non-randomized uses of experimental therapies, making it a useful tool when randomized trials have not yet produced clear evidence or to provide corroborative evidence from different populations.
- Published
- 2020
13. Prior diagnoses and medications as risk factors for COVID-19 in a Los Angeles Health System
- Author
-
Daniel H. Geschwind, Julie M Yabu, Ami Wulf, Chad Hazlett, Yi Ding, Jeffrey N. Chiang, Ruth Johnson, Bogdan Pasaniuc, Manish J. Butte, Tommer Schwarz, Timothy S. Chang, and Malika K. Freund
- Subjects
Anemia, Odds ratio, Mental health, Intensive care unit, vitamin D deficiency, Internal medicine, Dementia, Depression
- Abstract
Summary: With the continuing coronavirus disease 2019 (COVID-19) pandemic coupled with phased reopening, it is critical to identify risk factors associated with susceptibility and severity of disease in a diverse population to help shape government policies, guide clinical decision making, and prioritize future COVID-19 research. In this retrospective case-control study, we used de-identified electronic health records (EHR) from the University of California Los Angeles (UCLA) Health System between March 9th, 2020 and June 14th, 2020 to identify risk factors for COVID-19 susceptibility (severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) PCR test positive), inpatient admission, and severe outcomes (treatment in an intensive care unit or intubation). Of the 26,602 individuals tested by PCR for SARS-CoV-2, 992 were COVID-19 positive (3.7% of those tested), 220 were admitted to the hospital (22% of COVID-19 positive), and 77 had a severe outcome (35% of inpatients). Consistent with previous studies, males and individuals older than 65 years old had increased risk of inpatient admission. Notably, individuals self-identifying as Hispanic or Latino constituted an increasing percentage of COVID-19 patients as disease severity escalated, comprising 24% of those testing positive, but 40% of those with a severe outcome, a disparity that remained after correcting for medical co-morbidities. Cardiovascular disease, hypertension, and renal disease were premorbid risk factors present before SARS-CoV-2 PCR testing associated with COVID-19 susceptibility. Less well-established risk factors for COVID-19 susceptibility included pre-existing dementia (odds ratio (OR) 5.2 [3.2-8.3], p=2.6 × 10−10), mental health conditions (depression OR 2.1 [1.6-2.8], p=1.1 × 10−6), and vitamin D deficiency (OR 1.8 [1.4-2.2], p=5.7 × 10−6).
Renal diseases, including end-stage renal disease and anemia due to chronic renal disease, were the predominant premorbid risk factors for COVID-19 inpatient admission. Other less established risk factors for COVID-19 inpatient admission included previous renal transplant (OR 9.7 [2.8-39], p=3.2×10−4) and disorders of the immune system (OR 6.0 [2.3-16], p=2.7×10−4). Prior use of oral steroid medications was associated with decreased COVID-19 positive testing risk (OR 0.61 [0.45-0.81], p=4.3×10−4), but increased inpatient admission risk (OR 4.5 [2.3-8.9], p=1.8×10−5). We did not observe that prior use of angiotensin converting enzyme inhibitors or angiotensin receptor blockers increased the risk of testing positive for SARS-CoV-2, being admitted to the hospital, or having a severe outcome. This study involving direct EHR extraction identified known and less well-established demographic factors, prior diagnoses, and medications as risk factors for COVID-19 susceptibility and inpatient admission. Knowledge of these risk factors, including the marked ethnic disparities observed in disease severity, should guide government policies, identify at-risk populations, inform clinical decision making, and prioritize future COVID-19 research.
- Published
- 2020
14. Principle ERP reduction and analysis: Estimating and using principle ERP waveforms underlying ERPs across tasks, subjects and electrodes
- Author
-
Chad Hazlett, Sandra K. Loo, Charlotte DiStefano, Patricia Z. Tan, Shafali S. Jeste, Holly Truong, Damla Şentürk, and Emilie Campos
- Subjects
Autism Spectrum Disorder, Cognitive Neuroscience, Electroencephalography (EEG), Attention deficit hyperactivity disorder (ADHD), Waveform, ICA, PCA, Evoked Potentials, Neurosciences, Amplitude, Neurology, Signal Processing, Algorithms, ERP
- Abstract
Event-related potentials (ERP) waveforms are the summation of many overlapping signals. Changes in the peak or mean amplitude of a waveform over a given time period, therefore, cannot reliably be attributed to a particular ERP component of ex ante interest, as is the standard approach to ERP analysis. Though this problem is widely recognized, it is not well addressed in practice. Our approach begins by presuming that any observed ERP waveform - at any electrode, for any trial type, and for any participant - is approximately a weighted combination of signals from an underlying set of what we refer to as principle ERPs, or pERPs. We propose an accessible approach to analyzing complete ERP waveforms in terms of their underlying pERPs. First, we propose the principle ERP reduction (pERP-RED) algorithm for investigators to estimate a suitable set of pERPs from their data, which may span multiple tasks. Next, we provide tools and illustrations of pERP-space analysis, whereby observed ERPs are decomposed into the amplitudes of the contributing pERPs, which can be contrasted across conditions or groups to reveal which pERPs differ (substantively and/or significantly) between conditions/groups. Differences on all pERPs can be reported together rather than selectively, providing complete information on all components in the waveform, thereby avoiding selective reporting or user discretion regarding the choice of which components or windows to use. The scalp distribution of each pERP can also be plotted for any group/condition. We demonstrate this suite of tools through simulations and on real data collected from multiple experiments on participants diagnosed with Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder. Software for conducting these analyses is provided in the pERPred package for R.
- Published
- 2020
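The pERP-space decomposition described in this abstract can be sketched numerically. This is a minimal toy, not the pERP-RED algorithm itself: the two "pERPs" below are hypothetical Gaussian-shaped components, and the decomposition step is ordinary least squares of an observed waveform on the pERP set.

```python
# Minimal sketch of the pERP-space idea (hypothetical signals, not pERP-RED):
# given an estimated set of principle ERPs, any observed ERP waveform is
# decomposed into pERP amplitudes by least squares.
import numpy as np

t = np.linspace(0, 0.8, 200)                       # 800 ms epoch, 200 samples
perp1 = np.exp(-((t - 0.10) / 0.03) ** 2)          # early positive component
perp2 = -np.exp(-((t - 0.30) / 0.06) ** 2)         # later negative component
P = np.column_stack([perp1, perp2])                # pERP "dictionary"

rng = np.random.default_rng(2)
true_amps = np.array([1.5, 0.7])
observed = P @ true_amps + rng.normal(0, 0.05, t.size)  # noisy observed ERP

amps, *_ = np.linalg.lstsq(P, observed, rcond=None)     # pERP-space amplitudes
print(amps)   # close to the true amplitudes [1.5, 0.7]
```

These per-waveform amplitudes are what would then be contrasted across conditions or groups, rather than peak or mean amplitudes in hand-picked windows.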
15. Making sense of sensitivity: extending omitted variable bias
- Author
-
Carlos Cinelli and Chad Hazlett
- Subjects
Statistics and Probability, Robustness value, Covariate, Econometrics, Confounding, Point estimation, Applied Mathematics, Regression analysis, Omitted variable bias, Causal inference, Sensitivity analysis
- Abstract
Summary: We extend the omitted variable bias framework with a suite of tools for sensitivity analysis in regression models that does not require assumptions on the functional form of the treatment assignment mechanism nor on the distribution of the unobserved confounders, naturally handles multiple confounders, possibly acting non-linearly, exploits expert knowledge to bound sensitivity parameters, and can be easily computed using only standard regression results. In particular, we introduce two novel sensitivity measures suited for routine reporting. The robustness value describes the minimum strength of association that unobserved confounding would need to have, both with the treatment and with the outcome, to change the research conclusions. The partial R2 of the treatment with the outcome shows how strongly confounders explaining all the residual outcome variation would have to be associated with the treatment to eliminate the estimated effect. Next, we offer graphical tools for elaborating on problematic confounders, examining the sensitivity of point estimates and t-values, as well as ‘extreme scenarios’. Finally, we describe problems with a common ‘benchmarking’ practice and introduce a novel procedure to formally bound the strength of confounders on the basis of a comparison with observed covariates. We apply these methods to a running example that estimates the effect of exposure to violence on attitudes toward peace.
- Published
- 2020
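The two reporting statistics this abstract introduces can be computed directly from standard regression output. The sketch below follows the published formulas (robustness value from the treatment's t-value and degrees of freedom); the numerical inputs in the example are illustrative, not taken from the paper.

```python
# Sketch of the paper's two reporting statistics, computed from standard
# regression output: the treatment coefficient's t-value and the regression
# degrees of freedom.
import math

def partial_r2(t, dof):
    """Partial R2 of the treatment with the outcome."""
    return t**2 / (t**2 + dof)

def robustness_value(t, dof, q=1.0):
    """Minimum strength of association (with both treatment and outcome)
    that confounding needs to reduce the estimate by 100*q percent."""
    f = q * abs(t) / math.sqrt(dof)   # (partial) Cohen's f of the treatment, scaled by q
    return 0.5 * (math.sqrt(f**4 + 4 * f**2) - f**2)

# e.g. an (illustrative) treatment t-value of 4.18 with 783 degrees of freedom:
print(round(partial_r2(4.18, 783), 3),
      round(robustness_value(4.18, 783), 3))   # → 0.022 0.139
```

A robustness value of 0.139 would mean confounders explaining less than 13.9% of the residual variance of both treatment and outcome cannot fully explain away the estimate.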
16. Sensemakr: Sensitivity Analysis Tools for OLS in R and Stata
- Author
-
Carlos Cinelli, Chad Hazlett, and Jeremy Ferwerda
- Subjects
Causal inference, Confounding, Econometrics, Omitted variable bias, Regression analysis, Observational study, Point estimation
- Abstract
This paper introduces the package sensemakr for R and Stata, which implements a suite of sensitivity analysis tools for regression models developed in Cinelli and Hazlett (2020a). Given a regression model, sensemakr can compute sensitivity statistics for routine reporting, such as the robustness value, which describes the minimum strength that unobserved confounders need to have to overturn a research conclusion. The package also provides plotting tools that visually demonstrate the sensitivity of point estimates and t-values to hypothetical confounders. Finally, sensemakr implements formal bounds on sensitivity parameters by means of comparison with the explanatory power of observed variables. All these tools are based on the familiar "omitted variable bias" framework, do not require assumptions regarding the functional form of the treatment assignment mechanism nor the distribution of the unobserved confounders, and naturally handle multiple, non-linear confounders. With sensemakr, users can transparently report the sensitivity of their causal inferences to unobserved confounding, thereby enabling a more precise, quantitative debate as to what can be concluded from imperfect observational studies.
- Published
- 2020
17. Inference without randomization or ignorability: A stability-controlled quasi-experiment on the prevention of tuberculosis
- Author
-
David Ami Wulf, Werner Maokola, and Chad Hazlett
- Subjects
Statistics and Probability ,Epidemiology ,Antitubercular Agents ,HIV Infections ,01 natural sciences ,law.invention ,010104 statistics & probability ,03 medical and health sciences ,Random Allocation ,0302 clinical medicine ,Randomized controlled trial ,law ,parasitic diseases ,Isoniazid ,Medicine ,Humans ,Tuberculosis ,030212 general & internal medicine ,0101 mathematics ,business.industry ,Incidence (epidemiology) ,Confounding ,Percentage point ,Ignorability ,Causal inference ,Observational study ,business ,Quasi-experiment ,Demography - Abstract
When determining the effectiveness of a new treatment, randomized trials are not always possible or desirable. The stability-controlled quasi-experiment (SCQE) (Hazlett, 2019) is an observational approach that replaces the usual “no-unobserved confounding” assumption with one on the change in non-treatment outcome between successive cohorts, or the “baseline trend.” We extend this method to allow variance estimation and inference, and apply it for the first time by examining whether isoniazid preventive therapy (IPT) reduced tuberculosis (TB) incidence among 26,715 HIV patients in Tanzania. After IPT became available in the clinics we studied, a non-random 25% of patients received it. Within a year, fewer than 1% of patients on IPT developed TB, compared to 16% of the untreated. Regression adjustment using available covariates produces an equally large and highly significant estimate of -15 percentage points (pp) [95% CI: -16.6, -13.7]. While those estimates may generate confidence in IPT’s effectiveness, they cannot eliminate confounding. By contrast, SCQE reveals that the average treatment effect on the treated must be small and indistinguishable from zero, if we assume the baseline trend was flat over the study period. Rather, to argue that IPT was beneficial requires claiming that the (non-treatment) incidence rate rose by at least 0.5 pp per year. This is plausible, but far from certain. The SCQE approach has broad applicability and will sometimes lead to definitive claims of effectiveness. In this case, it usefully aids in protecting against over-confidence in claims that IPT was effective.
- Published
- 2019
18. A Persuasive Peace: Syrian Refugees' Attitudes Towards Compromise and Civil War Termination
- Author
-
Kristin Fabbe, Chad Hazlett, and Tolga Sinmazdemir
- Subjects
021110 strategic, defence & security studies ,Syrian refugees ,Sociology and Political Science ,Compromise ,media_common.quotation_subject ,Refugee ,05 social sciences ,0211 other engineering and technologies ,02 engineering and technology ,Criminology ,0506 political science ,Spanish Civil War ,Political science ,Political Science and International Relations ,050602 political science & public administration ,Safety Research ,media_common - Abstract
Civilians who have fled violent conflict and settled in neighboring countries are integral to processes of civil war termination. Contingent on their attitudes, they can either back peaceful settlements or support warring groups and continued fighting. Attitudes toward peaceful settlement are expected to be especially obdurate for civilians who have been exposed to violence. In a survey of 1,120 Syrian refugees in Turkey conducted in 2016, we use experiments to examine attitudes towards two critical phases of conflict termination – a ceasefire and a peace agreement. We examine the rigidity/flexibility of refugees’ attitudes to see if subtle changes in how wartime losses are framed or in who endorses a peace process can shift willingness to compromise with the incumbent Assad regime. Our results show, first, that refugees are far more likely to agree to a ceasefire proposed by a civilian as opposed to one proposed by armed actors from either the Syrian government or the opposition. Second, simply describing the refugee community’s wartime experience as suffering rather than sacrifice substantially increases willingness to compromise with the regime to bring about peace. This effect remains strong among those who experienced greater violence. Together, these results show that even among a highly pro-opposition population that has experienced severe violence, willingness to settle and make peace is remarkably flexible and dependent upon these cues.
- Published
- 2019
19. Wildfire Exposure Increases Pro-Climate Political Behaviors
- Author
-
Chad Hazlett and Matto Mildenberger
- Subjects
Census block ,Politics ,Geography ,Ballot ,Natural experiment ,media_common.quotation_subject ,Obstacle ,Development economics ,Climate change ,Hazard ,Democracy ,media_common - Abstract
Despite the climate threat's severity, global policy responses remain anemic. One political challenge has been the temporal mismatch between short-term climate policy costs and long-term climate policy benefits. Will this policymaking obstacle weaken as the impacts of climate change begin to materialize? Here we analyze the impact of a climate-related hazard on public support for costly climate reforms. Using a natural experiment based on randomness in the timing of California wildfires, we link, for the first time, threat exposure to realized political behavior rather than self-reported attitudes or behavioral intentions. We find that census block groups within 15 km of a wildfire have approximately 4 to 6 percentage points higher support for costly pro-climate ballot measures. The effects are stronger for block groups closest to wildfires, dropping by approximately 1.7 percentage points for every 10 km of distance. Moreover, the effect is concentrated among census block groups with large or medium concentrations of Democratic voters; by contrast, voters in Republican-dominated census block groups are largely unresponsive to wildfires. Our results suggest that experienced climate threats may only enhance willingness-to-act in areas where the public already holds pro-climate identities.
- Published
- 2019
20. Estimating Causal Effects of New Treatments Despite Self-Selection: The Case of Experimental Medical Treatments
- Author
-
Chad Hazlett
- Subjects
non-randomized trials ,Statistics and Probability ,Non-randomized trials ,Computer science ,media_common.quotation_subject ,Outcome (game theory) ,Article ,QA273-280 ,law.invention ,Clinical trials ,Randomized controlled trial ,law ,QA1-939 ,Point estimation ,Observational studies ,observational studies ,Selection (genetic algorithm) ,Simple (philosophy) ,media_common ,Selection bias ,clinical trials ,Actuarial science ,8.4 Research design and methodologies (health services) ,Expanded access ,Observational study ,Generic health relevance ,Statistics, Probability and Uncertainty ,Probabilities. Mathematical statistics ,Mathematics ,Health and social care services research - Abstract
Providing terminally ill patients with access to experimental treatments, as allowed by recent “right to try” laws and “expanded access” programs, poses a variety of ethical questions. While practitioners and investigators may assume it is impossible to learn the effects of these treatments without randomized trials, this paper describes a simple tool to estimate the effects of these experimental treatments on those who take them, despite the problem of selection into treatment, and without assumptions about the selection process. The key assumption is that the average outcome, such as survival, would remain stable over time in the absence of the new treatment. Such an assumption is unprovable, but can often be credibly judged by reference to historical data and by experts familiar with the disease and its treatment. Further, where this assumption may be violated, the result can be adjusted to account for a hypothesized change in the non-treatment outcome, or to conduct a sensitivity analysis. The method is simple to understand and implement, requiring just four numbers to form a point estimate. Such an approach can be used not only to learn which experimental treatments are promising, but also to warn us when treatments are actually harmful – especially when they might otherwise appear to be beneficial, as illustrated by example here. While this note focuses on experimental medical treatments as a motivating case, more generally this approach can be employed where a new treatment becomes available or has a large increase in uptake, where selection bias is a concern, and where an assumption on the change in average non-treatment outcome over time can credibly be imposed.
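The four-number arithmetic can be sketched as follows. The function name and the inputs are illustrative, and `delta` is the assumed change in the mean non-treatment outcome between cohorts, with `delta=0` encoding the stability assumption:

```python
def scqe_att(y_pre_mean, y_post_mean, p_treated, delta=0.0):
    """Stability-controlled point estimate of the average treatment effect
    on the treated (ATT), from four numbers: the mean outcome in the pre-
    and post-availability cohorts, the proportion treated in the post
    cohort, and the assumed shift delta in the non-treatment outcome."""
    if not 0 < p_treated <= 1:
        raise ValueError("p_treated must be in (0, 1]")
    return (y_post_mean - y_pre_mean - delta) / p_treated
```

Varying `delta` over a plausible range is the sensitivity analysis the note describes: the estimate is only as credible as the assumed stability of the non-treatment outcome.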
- Published
- 2018
21. Covariate balancing propensity score for a continuous treatment: Application to the efficacy of political advertisements
- Author
-
Chad Hazlett, Christian Fong, and Kosuke Imai
- Subjects
Statistics and Probability ,treatment effect ,Computer science ,generalized propensity score ,Statistics & Probability ,covariate balance ,01 natural sciences ,010104 statistics & probability ,Covariate ,Statistics ,050602 political science & public administration ,Econometrics ,0101 mathematics ,inverse-probability weighting ,Parametric statistics ,Inverse probability weighting ,05 social sciences ,Nonparametric statistics ,Advertising ,0506 political science ,Weighting ,Modeling and Simulation ,Causal inference ,Propensity score matching ,Observational study ,Statistics, Probability and Uncertainty - Abstract
Propensity score matching and weighting are popular methods when estimating causal effects in observational studies. Beyond the assumption of unconfoundedness, however, these methods also require the model for the propensity score to be correctly specified. The recently proposed covariate balancing propensity score (CBPS) methodology increases the robustness to model misspecification by directly optimizing sample covariate balance between the treatment and control groups. In this paper, we extend the CBPS to a continuous treatment. We propose the covariate balancing generalized propensity score (CBGPS) methodology, which minimizes the association between covariates and the treatment. We develop both parametric and nonparametric approaches and show their superior performance over the standard maximum likelihood estimation in a simulation study. The CBGPS methodology is applied to an observational study, whose goal is to estimate the causal effects of political advertisements on campaign contributions. We also provide open-source software that implements the proposed methods.
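For intuition about what such weighting does, here is a minimal sketch of the standard stabilized inverse-probability weights for a continuous treatment, w_i = f(T_i) / f(T_i | X_i), with both densities modeled as normal. This is the conventional generalized-propensity-score baseline that CBGPS is designed to make robust, not the balance-optimizing estimator itself, and all names are illustrative:

```python
import numpy as np

def stabilized_gps_weights(treatment, covariates):
    """Stabilized inverse-probability weights w_i = f(T_i) / f(T_i | X_i)
    for a continuous treatment, with a linear-normal model for T given X."""
    X = np.column_stack([np.ones(len(treatment)), covariates])
    beta, *_ = np.linalg.lstsq(X, treatment, rcond=None)  # E[T | X] via OLS
    resid = treatment - X @ beta
    sd_cond = resid.std(ddof=X.shape[1])   # conditional (residual) sd
    sd_marg = treatment.std(ddof=1)        # marginal sd

    def normal_pdf(z, mu, sd):
        return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    return normal_pdf(treatment, treatment.mean(), sd_marg) / \
           normal_pdf(treatment, X @ beta, sd_cond)
```

After weighting, the association between covariates and treatment should shrink toward zero, which is exactly the balance condition that CBGPS optimizes directly rather than obtaining as a by-product of a correctly specified density model.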
- Published
- 2018
22. A Persuasive Peace: Syrian Refugees' Attitudes Towards Compromise and Civil War Termination
- Author
-
Tolga Sinmazdemir, Kristin Fabbe, and Chad Hazlett
- Subjects
education.field_of_study ,Government ,Compromise ,media_common.quotation_subject ,Refugee ,Population ,Opposition (politics) ,Spanish Civil War ,Political science ,Political economy ,Sacrifice ,education ,Settlement (litigation) ,media_common - Abstract
Civilians who have fled violent conflict and settled in neighboring countries are integral to processes of civil war termination. Contingent on their attitudes, they can either back peaceful settlements or support warring groups and continued fighting. Attitudes toward peaceful settlement are expected to be especially obdurate for civilians who have been exposed to violence. In a survey of 1,120 Syrian refugees in Turkey conducted in 2016, we use experiments to examine attitudes towards two critical phases of conflict termination - a ceasefire and a peace agreement. We test the malleability of refugees' attitudes to see if subtle changes in how these processes are framed or who endorses them can render a ceasefire proposal more or less favorable, or produce attitudes that are more or less open to compromise with the incumbent regime of Assad. Our results show, first, that refugees are far more likely to agree to a ceasefire proposed by a civilian as opposed to one proposed by armed actors from either the Syrian government or the opposition. Second, we find that merely describing the refugee community's wartime experience as suffering rather than sacrifice increases willingness to compromise with the Syrian government to bring about peace. This effect remains strong among those experiencing greater violence. Together, these results show that even among a highly pro-opposition population that has experienced severe violence, willingness to settle and make peace is remarkably malleable, depending on factors such as who proposes a deal and how wartime losses are characterized.
- Published
- 2018
23. Trajectory Balancing: A General Reweighting Approach to Causal Inference With Time-Series Cross-Sectional Data
- Author
-
Chad Hazlett and Yiqing Xu
- Subjects
Mathematical optimization ,Computer science ,Kernel (statistics) ,Causal inference ,Covariate ,Feature (machine learning) ,Trajectory ,Stability (learning theory) ,Difference in differences ,Factor analysis - Abstract
We introduce trajectory balancing, a general reweighting approach to causal inference with time-series cross-sectional (TSCS) data. We focus on settings in which one or more units are exposed to treatment at a given time, while a set of control units remain untreated throughout a time window of interest. First, we show that many commonly used TSCS methods imply an assumption that a unit's non-treatment potential outcomes in the post-treatment period are linear in that unit's pre-treatment outcomes as well as time-invariant covariates. Under this assumption, we introduce the mean balancing method that reweights the control units such that the averages of the pre-treatment outcomes and covariates are approximately equal between the treatment and (reweighted) control groups. Second, we relax the linearity assumption and propose the kernel balancing method that seeks an approximate balance on a kernel-based feature expansion of the pre-treatment outcomes and covariates. The resulting approach inherits the property of handling time-varying confounders as in synthetic control and latent factor models, but has the advantages of: (1) improving feasibility and stability with reduced user discretion compared to existing approaches; (2) accommodating both short and long pre-treatment time periods with many or few treated units; and (3) achieving balance on the high-order "trajectory" of pre-treatment outcomes rather than their simple average at each time period. We illustrate this method with simulations and two empirical examples.
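The mean balancing step lends itself to a compact sketch: choose minimum-norm weights on the control units so their weighted means of pre-treatment outcomes and covariates equal the treated-group means. Names are illustrative, and a fuller implementation would also constrain the weights to be non-negative:

```python
import numpy as np

def mean_balancing_weights(X_treated, X_control):
    """Minimum-norm control-unit weights that sum to one and match the
    treated-group means of every column of X (pre-treatment outcomes
    and time-invariant covariates stacked together)."""
    target = np.append(X_treated.mean(axis=0), 1.0)         # column means + sum-to-one
    A = np.vstack([X_control.T, np.ones(X_control.shape[0])])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)          # min-norm exact solution
    return w
```

With more control units than balance constraints the system is underdetermined, so `lstsq` returns the exact minimum-norm solution and the weighted difference in means is zero by construction.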
- Published
- 2018
24. Global progress and backsliding on gasoline taxes and subsidies
- Author
-
Michael L. Ross, Paasha Mahdavi, and Chad Hazlett
- Subjects
Consumption (economics) ,Environmental Engineering ,010504 meteorology & atmospheric sciences ,Public economics ,Renewable Energy, Sustainability and the Environment ,business.industry ,020209 energy ,Fossil fuel ,Energy Engineering and Power Technology ,Subsidy ,02 engineering and technology ,Limiting ,International economics ,01 natural sciences ,Energy policy ,Electronic, Optical and Magnetic Materials ,Climate Action ,Fuel Technology ,Greenhouse gas ,0202 electrical engineering, electronic engineering, information engineering ,Economics ,Electrical and Electronic Engineering ,Gasoline ,business ,0105 earth and related environmental sciences - Abstract
To reduce greenhouse gas emissions in the coming decades, many governments will have to reform their energy policies. These policies are difficult to measure with any precision. As a result, it is unclear whether progress has been made towards important energy policy reforms, such as reducing fossil fuel subsidies. We use new data to measure net taxes and subsidies for gasoline in almost all countries at the monthly level and find evidence of both progress and backsliding. From 2003 to 2015, gasoline taxes rose in 83 states but fell in 46 states. During the same period, the global mean gasoline tax fell by 13.3% due to faster consumption growth in countries with lower taxes. Our results suggest that global progress towards fossil fuel price reform has been mixed, and that many governments are failing to exploit one of the most cost-effective policy tools for limiting greenhouse gas emissions. Reforms of energy markets are necessary for the low-carbon transition but are difficult to measure. New data evaluate implicit taxes and subsidies for gasoline in almost all countries at monthly intervals, showing mixed results that highlight the difficulty in implementing effective policy tools.
- Published
- 2017
25. Stress-testing the affect misattribution procedure: Heterogeneous control of affect misattribution procedure effects under incentives
- Author
-
Chad Hazlett and Adam J. Berinsky
- Subjects
Adult ,Male ,Motivation ,Social Psychology ,05 social sciences ,Control (management) ,Implicit-association test ,050109 social psychology ,Affect (psychology) ,Stress testing (software) ,050105 experimental psychology ,Prime (order theory) ,Affect ,Judgment ,Social desirability bias ,Incentive ,Attitude ,Pattern Recognition, Visual ,Humans ,0501 psychology and cognitive sciences ,Misattribution of memory ,Female ,Psychology ,Social psychology ,Psychomotor Performance - Abstract
The affect misattribution procedure (AMP) is widely used to measure sensitive attitudes towards classes of stimuli, by estimating the effect that affectively charged prime images have on subsequent judgements of neutral target images. We test its resistance to efforts to conceal one's attitudes, by replicating the standard AMP design while offering small incentives to conceal attitudes towards the prime images. We find that although the average AMP effect remains positive, it decreases significantly in magnitude. Moreover, this reduction in the mean AMP effect under incentives masks large heterogeneity: one subset of individuals continues to experience the ‘full’ AMP effect, while another reduces their effect to approximately zero. The AMP thus appears to be resistant to efforts to conceal one's attitudes for some individuals but is highly controllable for others. We further find that those individuals with high self-reported effort to avoid the influence of the prime are more often able to eliminate their AMP effect. We conclude by discussing possible mechanisms.
- Published
- 2017
26. Kernel-Based Regularized Least Squares in R ( KRLS ) and Stata ( krls )
- Author
-
Jens Hainmueller, Chad Hazlett, and Jeremy Ferwerda
- Subjects
Statistics and Probability ,Generalized linear model ,media_common.quotation_subject ,Statistics & Probability ,0211 other engineering and technologies ,02 engineering and technology ,010501 environmental sciences ,Machine learning ,computer.software_genre ,01 natural sciences ,machine learning ,regression ,classification ,prediction ,Stata ,R ,Covariate ,lcsh:Statistics ,lcsh:HA1-4737 ,0105 earth and related environmental sciences ,Mathematics ,media_common ,Pointwise ,Variables ,business.industry ,Statistics ,Estimator ,021107 urban & regional planning ,Regression analysis ,Kernel (statistics) ,Ordinary least squares ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,computer ,Algorithm ,Software - Abstract
The Stata package krls and the R package KRLS implement kernel-based regularized least squares (KRLS), a machine learning method described in Hainmueller and Hazlett (2014) that allows users to tackle regression and classification problems without strong functional form assumptions or a specification search. The flexible KRLS estimator learns the functional form from the data, thereby protecting inferences against misspecification bias. Yet it nevertheless allows for interpretability and inference in ways similar to ordinary regression models. In particular, KRLS provides closed-form estimates for the predicted values, variances, and the pointwise partial derivatives that characterize the marginal effects of each independent variable at each data point in the covariate space. The method is thus a convenient and powerful alternative to ordinary least squares and other generalized linear models for regression-based analyses.
- Published
- 2017
27. When Exit is an Option: Effects of Indiscriminate Violence on Attitudes Among Syrian Refugees in Turkey
- Author
-
Tolga Sinmazdemir, Chad Hazlett, and Kristin Fabbe
- Subjects
Politics ,Syrian refugees ,Natural experiment ,Refugee ,Political science ,Opposition (politics) ,Criminology ,Solidarity - Abstract
How does violence during conflict affect the political attitudes of civilians who leave the conflict zone? Using a survey of 1,384 Syrian refugees in Turkey, we employ a natural experiment owing to the inaccuracy of barrel bombs to examine the effect of having one's home destroyed on political and community loyalties. We find that refugees who lose a home to barrel bombing, while more likely to feel threatened by the Assad regime, are less supportive of the opposition, and instead more likely to say no armed group in the conflict represents them – opposite to what is expected when civilians are captive in the conflict zone and must choose sides for their protection. Respondents also show heightened volunteerism towards fellow refugees. Altogether, this suggests that when civilians flee the conflict zone, they withdraw support from all armed groups rather than choosing sides, instead showing solidarity with their civilian community.
- Published
- 2017
28. Kernel Regularized Least Squares: Reducing Misspecification Bias with a Flexible and Interpretable Machine Learning Approach
- Author
-
Jens Hainmueller and Chad Hazlett
- Subjects
Pointwise ,021110 strategic, defence & security studies ,Sociology and Political Science ,Computer science ,business.industry ,05 social sciences ,0211 other engineering and technologies ,Asymptotic distribution ,Estimator ,Inference ,02 engineering and technology ,Kernel Bandwidth ,Machine learning ,computer.software_genre ,Least squares ,0506 political science ,Kernel (statistics) ,Political Science and International Relations ,050602 political science & public administration ,Artificial intelligence ,business ,computer ,Parametric statistics - Abstract
We propose the use of Kernel Regularized Least Squares (KRLS) for social science modeling and inference problems. KRLS borrows from machine learning methods designed to solve regression and classification problems without relying on linearity or additivity assumptions. The method constructs a flexible hypothesis space that uses kernels as radial basis functions and finds the best-fitting surface in this space by minimizing a complexity-penalized least squares problem. We argue that the method is well-suited for social science inquiry because it avoids strong parametric assumptions, yet allows interpretation in ways analogous to generalized linear models while also permitting more complex interpretation to examine nonlinearities, interactions, and heterogeneous effects. We also extend the method in several directions to make it more effective for social inquiry, by (1) deriving estimators for the pointwise marginal effects and their variances, (2) establishing unbiasedness, consistency, and asymptotic normality of the KRLS estimator under fairly general conditions, (3) proposing a simple automated rule for choosing the kernel bandwidth, and (4) providing companion software. We illustrate the use of the method through simulations and empirical examples.
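The complexity-penalized least squares problem described here has a closed-form minimizer, which can be sketched in a few lines. This is a bare-bones illustration with a Gaussian kernel; the bandwidth default of σ² equal to the number of covariates is a common heuristic and an assumption of this sketch, not necessarily the package's exact rule:

```python
import numpy as np

def krls_fit(X, y, sigma2=None, lam=0.1):
    """Kernel regularized least squares: minimize sum_i (y_i - f(x_i))^2
    + lam * ||f||^2 over the Gaussian-kernel function space. The minimizer
    has fitted values K @ c with c = (K + lam * I)^{-1} y."""
    if sigma2 is None:
        sigma2 = X.shape[1]  # bandwidth heuristic: number of covariates
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq_dists / sigma2)                  # Gaussian kernel matrix
    c = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K @ c, c
```

Pointwise marginal effects then follow by differentiating K @ c with respect to each covariate, which is what gives the method its regression-style interpretability.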
- Published
- 2014
29. Kernel Balancing: A flexible non-parametric weighting procedure for estimating causal effects
- Author
-
Chad Hazlett
- Subjects
FOS: Computer and information sciences ,Matching (statistics) ,Average treatment effect ,Nonparametric statistics ,Machine Learning (stat.ML) ,Multivariate normal distribution ,Mathematics - Statistics Theory ,Statistics Theory (math.ST) ,Statistics - Applications ,Weighting ,Methodology (stat.ME) ,Joint probability distribution ,Statistics - Machine Learning ,Kernel (statistics) ,Propensity score matching ,Statistics ,FOS: Mathematics ,Applications (stat.AP) ,Statistics - Methodology ,Mathematics - Abstract
In the absence of unobserved confounders, matching and weighting methods are widely used to estimate causal quantities including the Average Treatment Effect on the Treated (ATT). Unfortunately, these methods do not necessarily achieve their goal of making the multivariate distribution of covariates for the control group identical to that of the treated, leaving some (potentially multivariate) functions of the covariates with different means between the two groups. When these "imbalanced" functions influence the non-treatment potential outcome, the conditioning on observed covariates fails, and ATT estimates may be biased. Kernel balancing, introduced here, targets a weaker requirement for unbiased ATT estimation, specifically, that the expected non-treatment potential outcome for the treatment and control groups are equal. The conditional expectation of the non-treatment potential outcome is assumed to fall in the space of functions associated with a choice of kernel, implying a set of basis functions in which this regression surface is linear. Weights are then chosen on the control units such that the treated and control group have equal means on these basis functions. As a result, the expectation of the non-treatment potential outcome must also be equal for the treated and control groups after weighting, allowing unbiased ATT estimation by subsequent difference in means or an outcome model using these weights. Moreover, the weights produced are (1) precisely those that equalize a particular kernel-based approximation of the multivariate distribution of covariates for the treated and control, and (2) equivalent to a form of stabilized inverse propensity score weighting, though it does not require assuming any model of the treatment assignment mechanism. An R package, KBAL, is provided to implement this approach. (Work originally included in PhD thesis, May 2014, MIT.)
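A simplified sketch of the idea: expand the covariates into kernel-based basis functions and choose minimum-norm control weights matching the treated-group means of the leading components. The truncation to r components, the function names, and the absence of a non-negativity constraint are all simplifications of this sketch, not features of KBAL:

```python
import numpy as np

def kernel_balancing_weights(X, treated, sigma2=None, r=5):
    """Weights on control units that equalize treated/control means of the
    top-r kernel principal components, so smooth functions of the covariates
    are approximately mean-balanced between the two groups."""
    if sigma2 is None:
        sigma2 = X.shape[1]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / sigma2)                       # Gaussian kernel matrix
    vals, vecs = np.linalg.eigh(K)                 # eigenvalues in ascending order
    phi = vecs[:, -r:] * np.sqrt(np.clip(vals[-r:], 0, None))  # top-r features
    target = np.append(phi[treated].mean(axis=0), 1.0)         # means + sum-to-one
    A = np.vstack([phi[~treated].T, np.ones((~treated).sum())])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)  # min-norm exact solution
    return w
```

Balancing on the kernel features, rather than on raw covariate means, is what lets the subsequent difference in means absorb non-linear functions of the covariates that would otherwise remain imbalanced.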
- Published
- 2016
30. Dorsal Anterior Cingulate Cortex Resolves Conflict from Distracting Stimuli by Boosting Attention toward Relevant Events
- Author
-
Chad Hazlett, A. Gopalakrishnan, Marty G. Woldorff, and Daniel H. Weissman
- Subjects
Adult ,Male ,Adolescent ,Brain activity and meditation ,Cognitive Neuroscience ,education ,Gyrus Cinguli ,behavioral disciplines and activities ,Error-related negativity ,Conflict, Psychological ,Cellular and Molecular Neuroscience ,Reaction Time ,medicine ,Humans ,Attention ,Anterior cingulate cortex ,Cued speech ,medicine.diagnostic_test ,Cognition ,Magnetic Resonance Imaging ,Emotional lateralization ,medicine.anatomical_structure ,Speech Perception ,Focusing attention ,Female ,Functional magnetic resonance imaging ,Psychology ,Neuroscience ,psychological phenomena and processes ,Cognitive psychology - Abstract
In everyday life, we often focus greater attention on behaviorally relevant stimuli to limit the processing of distracting events. For example, when distracting voices intrude upon a conversation at a noisy social gathering, we concentrate more attention on the speaker of interest to better comprehend his or her speech. In the present study, we investigated whether dorsal/caudal regions of the anterior cingulate cortex (dACC), thought to make a major contribution to cognitive control, boost attentional resources toward behaviorally relevant stimuli as a means for limiting the processing of distracting events. Sixteen healthy participants performed a cued global/local selective attention task while brain activity was recorded with event-related functional magnetic resonance imaging. Consistent with our hypotheses, greater dACC activity during distracting events predicted reduced behavioral measures of interference from those same events. dACC activity also differed for cues to attend to global versus local features of upcoming visual objects, further indicating a role in directing attention toward task-relevant stimuli. Our findings indicate a role for dACC in focusing attention on behaviorally relevant stimuli, especially when the achievement of our behavioral goals is threatened by distracting events.
- Published
- 2004
31. Functional Parcellation of Attentional Control Regions of the Brain
- Author
-
Anders M. Dale, Chad Hazlett, Harlan M. Fichtenholtz, Daniel H. Weissman, Marty G. Woldorff, and Allen W. Song
- Subjects
Adult ,Male ,Brain activity and meditation ,Cognitive Neuroscience ,Posterior parietal cortex ,Brain mapping ,Mental Processes ,Reference Values ,Parietal Lobe ,medicine ,Humans ,Attention ,Evoked Potentials ,Brain Mapping ,medicine.diagnostic_test ,Functional specialization ,Attentional control ,Parietal lobe ,Visual spatial attention ,Magnetic Resonance Imaging ,Frontal Lobe ,Space Perception ,Female ,Cues ,Nerve Net ,Functional magnetic resonance imaging ,Psychology ,Neuroscience ,Cognitive psychology - Abstract
Recently, a number of investigators have examined the neural loci of psychological processes enabling the control of visual spatial attention using cued-attention paradigms in combination with event-related functional magnetic resonance imaging. Findings from these studies have provided strong evidence for the involvement of a fronto-parietal network in attentional control. In the present study, we build upon this previous work to further investigate these attentional control systems. In particular, we employed additional controls for nonattentional sensory and interpretative aspects of cue processing to determine whether distinct regions in the fronto-parietal network are involved in different aspects of cue processing, such as cue-symbol interpretation and attentional orienting. In addition, we used shorter cue-target intervals that were closer to those used in the behavioral and event-related potential cueing literatures. Twenty participants performed a cued spatial attention task while brain activity was recorded with functional magnetic resonance imaging. We found functional specialization for different aspects of cue processing in the lateral and medial subregions of the frontal and parietal cortex. In particular, the medial subregions were more specific to the orienting of visual spatial attention, while the lateral subregions were associated with more general aspects of cue processing, such as cue-symbol interpretation. Additional cue-related effects included differential activations in midline frontal regions and pretarget enhancements in the thalamus and early visual cortical areas.
- Published
- 2004
32. Effects of practice on executive control investigated with fMRI
- Author
-
Marty G. Woldorff, Daniel H. Weissman, George R. Mangun, and Chad Hazlett
- Subjects
Adult ,Male ,Visual perception ,Brain activity and meditation ,Cognitive Neuroscience ,media_common.quotation_subject ,Experimental and Cognitive Psychology ,Brain mapping ,Functional Laterality ,Behavioral Neuroscience ,Perception ,Reaction Time ,medicine ,Humans ,Attention ,media_common ,Cerebral Cortex ,Cued speech ,Analysis of Variance ,Brain Mapping ,medicine.diagnostic_test ,Cognition ,Magnetic Resonance Imaging ,Functional imaging ,Inhibition, Psychological ,Practice, Psychological ,Visual Perception ,Female ,Cues ,Psychology ,Functional magnetic resonance imaging ,Neuroscience ,Photic Stimulation - Abstract
Various models of executive control predict that practice should modulate the recruitment of executive brain mechanisms. To investigate this issue, we asked 15 participants to perform a cued global/local attention task while brain activity was recorded with event-related functional magnetic resonance imaging (fMRI). Practice significantly reduced the recruitment of left inferior parietal regions that were engaged when participants oriented attention in response to global and local cue stimuli. In contrast, practice increased the recruitment of midline frontal regions that were engaged by interference between global and local forms during target processing. These findings support models of executive control in which practice increases the tendency for stimuli to automatically evoke task-relevant processes and responses.
- Published
- 2002
33. The epidemiology of lethal violence in Darfur: using micro-data to explore complex patterns of ongoing armed conflict
- Author
-
Chad Hazlett, Christian Davenport, Alex de Waal, and Joshua Kennedy
- Subjects
Warfare, Health (social science), Belligerent, Human factors and ergonomics, Poison control, Datasets as Topic, Criminology, Violence, Suicide prevention, Occupational safety and health, Sudan, History and Philosophy of Science, Law, Political science, Conflict resolution, Humans, Combatant, Epidemiologic Methods, Peacekeeping - Abstract
This article describes and analyzes patterns of lethal violence in Darfur, Sudan, during 2008-09, drawing upon a uniquely detailed dataset generated by the United Nations-African Union hybrid operation in Darfur (UNAMID), combined with data generated through aggregation of reports from open-source venues. These data enable detailed analysis of patterns of perpetrator/victim and belligerent groups over time, and show how violence changed over the four years following the height of armed conflict in 2003-05. During the reference period, violent incidents were sporadic and diverse and included: battles between the major combatants; battles among subgroups of combatant coalitions that were ostensibly allied; inter-tribal conflict; incidents of one-sided violence against civilians by different parties; and incidents of banditry. The conflict as a whole defies easy categorization. The exercise illustrates the limits of existing frameworks for categorizing armed violence and underlines the importance of rigorous microlevel data collection and improved models for understanding the dynamics of collective violence. By analogy with the use of the epidemiological data for infectious diseases to help design emergency health interventions, we argue for improved use of data on lethal violence in the design and implementation of peacekeeping, humanitarian and conflict resolution interventions.
- Published
- 2013
34. KRLS: A Stata Package for Kernel-Based Regularized Least Squares
- Author
-
Jeremy Ferwerda, Chad Hazlett, and Jens Hainmueller
- Subjects
Pointwise, Variables, Estimator, Regression analysis, Machine learning, Kernel (statistics), Covariate, Partial derivative, Artificial intelligence, Mathematics, Interpretability - Abstract
The Stata package krls implements kernel-based regularized least squares (KRLS), a machine learning method described in Hainmueller and Hazlett (2014) that allows users to tackle regression and classification problems without strong functional form assumptions or a specification search. The flexible KRLS estimator learns the functional form from the data, thereby protecting inferences against misspecification bias. Yet it nevertheless allows for interpretability and inference in ways similar to ordinary regression models. In particular, KRLS provides closed-form estimates for the predicted values, variances, and the pointwise partial derivatives that characterize the marginal effects of each independent variable at each data point in the covariate space. The method is thus a convenient and powerful alternative to OLS and other GLMs for regression-based analyses. We also provide a companion package and replication code that implements the method in R.
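The closed-form solution the abstract refers to is compact enough to sketch. The following is a minimal Python illustration of the core KRLS computation (Gaussian kernel plus a ridge penalty), not the krls package itself; the bandwidth default and the fixed regularization parameter here are illustrative assumptions, whereas the package chooses these in a data-driven way.

```python
import numpy as np

def krls_fit(X, y, sigma2=None, lam=0.5):
    """Solve the KRLS system (K + lam*I) c = y with a Gaussian kernel."""
    n, d = X.shape
    if sigma2 is None:
        sigma2 = float(d)  # illustrative default bandwidth, not the package's rule
    # pairwise squared distances -> Gaussian kernel matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / sigma2)
    c = np.linalg.solve(K + lam * np.eye(n), y)
    return c, sigma2

def krls_predict(Xnew, X, c, sigma2):
    """Predicted values are kernel-weighted sums of the fitted coefficients."""
    sq = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / sigma2) @ c
```

With light regularization the fitted surface tracks the training outcomes closely, which is the sense in which the estimator "learns the functional form from the data."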
- Published
- 2013
35. Kernel Regularized Least Squares: Moving Beyond Linearity and Additivity Without Sacrificing Interpretability
- Author
-
Chad Hazlett and Jens Hainmueller
- Subjects
Pointwise, Asymptotic distribution, Inference, Estimator, Kernel Bandwidth, Machine learning, Least squares, Kernel (statistics), Artificial intelligence, Mathematics, Parametric statistics - Abstract
We propose the use of Kernel Regularized Least Squares (KRLS) for social science modeling and inference problems. KRLS borrows from machine learning methods designed to solve regression and classification problems without relying on linearity or additivity assumptions. The method constructs a flexible hypothesis space that uses kernels as radial basis functions and finds the best-fitting surface in this space by minimizing a complexity-penalized least squares problem. We argue that the method is well-suited for social science inquiry because it avoids strong parametric assumptions, yet allows interpretation in ways analogous to generalized linear models while also permitting more complex interpretation to examine non-linearities, interactions, and heterogeneous effects. We also extend the method in several directions to make it more effective for social inquiry, by (1) deriving estimators for the pointwise marginal effects and their variances, (2) establishing unbiasedness, consistency, and asymptotic normality of the KRLS estimator under fairly general conditions, (3) proposing a simple automated rule for choosing the kernel bandwidth, and (4) providing companion software. We illustrate the use of the method through simulations and empirical examples.
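The pointwise marginal effects in extension (1) have a simple closed form under a Gaussian kernel: differentiating the fitted surface f(x) = sum_i c_i * exp(-||x - x_i||^2 / sigma^2) with respect to coordinate d gives df/dx_d = -(2/sigma^2) * sum_i c_i * k(x, x_i) * (x_d - x_{i,d}). A self-contained Python sketch follows; the bandwidth and penalty values are illustrative placeholders, not the automated rule the paper proposes.

```python
import numpy as np

def krls_marginal_effects(X, y, sigma2=2.0, lam=0.1):
    """Fit KRLS with a Gaussian kernel, then return the pointwise
    derivatives of the fitted surface at every sample (an n-by-d array)."""
    n, d = X.shape
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise differences
    K = np.exp(-(diffs ** 2).sum(axis=-1) / sigma2)
    c = np.linalg.solve(K + lam * np.eye(n), y)    # ridge-regularized coefficients
    # d f(x_j)/dx_d = -(2/sigma2) * sum_i c_i * K[j, i] * (x_{j,d} - x_{i,d})
    return -(2.0 / sigma2) * np.einsum('ji,jid->jd', K * c[None, :], diffs)
```

Averaging each column gives a quantity interpretable like a regression coefficient, which is the sense in which KRLS "allows interpretation in ways analogous to generalized linear models" while the full n-by-d array exposes non-linearities and heterogeneous effects.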
- Published
- 2012
36. Mechanisms of moving the mind's eye: planning and execution of spatial shifts of attention
- Author
-
Marty G. Woldorff and Chad Hazlett
- Subjects
Attentional shift, Adult, Male, Time Factors, Adolescent, Cognitive Neuroscience, Decision Making, Models, Psychological, Shift time, Visual field, Mental Processes, Space Perception, Reaction Time, Humans, Attention, Female, Cues, Visual Fields, Psychology, Social psychology, Cognitive psychology - Abstract
The usefulness of attentional orienting, both in the real world and in the laboratory, depends not only on the ability to attend to objects or other inputs but also on the ability to shift attention between them. Although understanding the basic characteristics of these shifts is a critical step toward understanding the brain mechanisms that produce them, the literature remains unresolved on a very basic and potentially revealing characteristic of these shifts—namely, whether attention takes longer to shift a farther distance across the visual field. We addressed this question using a series of behavioral tasks involving the voluntary orienting of attention to locations in the visual field. The findings support a model in which attentional shifts include separate “planning” and “execution” stages and in which only the planning stage requires more time for shifts of a greater distance. These results offer resolution to the longstanding debate concerning the effect of attentional shift distance on shift time and provide insight into the fundamental mechanisms of attentional shifting.
- Published
- 2004