15,449 results for "Bartlett, P. A."
Search Results
2. COVID-19 Global Pandemic Upheaval: CTE Teachers Response in the United States
- Author
- John Cannon, Mary Self, Allen Kitchel, Sally Arnett-Hartwick, Carol Billing, Kevin Elliott, Michelle Bartlett, Mari Borr, and Jeremy Jeffery
- Abstract
The United States, along with the rest of the world, has experienced an unprecedented disruption in daily life due to the COVID-19 pandemic. Almost everyone has experienced some sort of stay-at-home order, resulting in an economic catastrophe greater than the Great Recession of 2008 and on par with the Great Depression almost a century ago. Educational institutions at both the K-12 and post-secondary levels have not been immune from the shutdown, with many schools closed from mid-March through the end of the 2020 school year. Many schools moved classes to remote, distance delivery platforms. Career and Technical Education (CTE) teachers were tasked with creating engaging online learning activities for curricula that are taught in a hands-on, contextual learning environment. This paper presents preliminary results from research conducted by a collaborative group of nine researchers from across the United States with collectively over 200 years of career and technical education experience. The conceptual framework used for this study was Danielson's Framework for Teaching and Enhancing Professional Practice and Foundations of Career and Technical Education, including Constructivism. 3,267 participants representing all 50 states responded to the 37-item survey. The research objectives included describing the participants and identifying challenges to planning and delivering CTE content when schools were closed and instruction was moved to remote/distance/online platforms. Participants ranked their challenges as instructors and their perceptions of challenges experienced by their students. CTE teachers ranked replicating classroom or lab environments online and lack of experience teaching online as their biggest challenges. The perceptions of the participants concerning challenges for their students included motivation to guide and manage their own learning and students' access to a reliable internet connection. The emergence and prevalence of the COVID-19 pandemic added a layer of complexity to educational practice that was not foreseen and for which no intentional preparation had occurred. Understanding how CTE teachers and instructors responded to this call, and the challenges they and their students encountered, is important to efforts to improve practice in the future and to be in a better position should another crisis occur that forces learning to be delivered in formats other than the traditional face-to-face classroom. [Note: The page range (177-194) shown on the PDF is incorrect. The correct page range is 177-193.]
- Published
- 2024
3. Dealing with multiple intercurrent events using hypothetical and treatment policy strategies simultaneously
- Author
- Parra, Camila Olarte, Daniel, Rhian M., and Bartlett, Jonathan W.
- Subjects
- Statistics - Methodology
- Abstract
To precisely define the treatment effect of interest in a clinical trial, the ICH E9 estimand addendum describes that relevant so-called intercurrent events should be identified and strategies specified to deal with them. Handling intercurrent events with different strategies leads to different estimands. In this paper, we focus on estimands that involve addressing one intercurrent event with the treatment policy strategy and another with the hypothetical strategy. We define these estimands using potential outcomes and causal diagrams, considering the possible causal relationships between the two intercurrent events and other variables. We show that there are different causal estimand definitions and assumptions one could adopt, each having different implications for estimation, which is demonstrated in a simulation study. The different considerations are illustrated conceptually using a diabetes trial as an example.
- Published
- 2025
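Result 3's distinction is easier to see written out. The following is a minimal sketch in potential-outcomes notation; the symbols ($A$ for the randomized treatment, $I_1$ and $I_2$ for the two intercurrent events, $Y$ for the outcome) are illustrative assumptions on our part, not the authors' notation:

```latex
% Sketch (our notation, not the paper's):
% I_1 = intercurrent event handled by the treatment policy strategy
%       (left to occur as it naturally would under treatment a),
% I_2 = intercurrent event handled by the hypothetical strategy
%       (set to 0, i.e. imagined prevented),
% Y^{a, i_2 = 0} = potential outcome under these interventions.
\theta \;=\; \mathbb{E}\left[ Y^{\,a=1,\; i_2=0} \right]
       \;-\; \mathbb{E}\left[ Y^{\,a=0,\; i_2=0} \right]
```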
4. The Velocity Field Olympics: Assessing velocity field reconstructions with direct distance tracers
- Author
- Stiskalek, Richard, Desmond, Harry, Devriendt, Julien, Slyz, Adrianne, Lavaux, Guilhem, Hudson, Michael J., Bartlett, Deaglan J., and Courtois, Hélène M.
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics
- Abstract
The peculiar velocity field of the local Universe provides direct insights into its matter distribution and the underlying theory of gravity, and is essential in cosmological analyses for modelling deviations from the Hubble flow. Numerous methods have been developed to reconstruct the density and velocity fields at $z \lesssim 0.05$, typically constrained by redshift-space galaxy positions or by direct distance tracers such as the Tully-Fisher relation, the fundamental plane, or Type Ia supernovae. We introduce a validation framework to evaluate the accuracy of these reconstructions against catalogues of direct distance tracers. Our framework assesses the goodness-of-fit of each reconstruction using Bayesian evidence, residual redshift discrepancies, velocity scaling, and the need for external bulk flows. Applying this framework to a suite of reconstructions -- including those derived from the Bayesian Origin Reconstruction from Galaxies (BORG) algorithm and from linear theory -- we find that the non-linear BORG reconstruction consistently outperforms others. We highlight the utility of such a comparative approach for supernova or gravitational wave cosmological studies, where selecting an optimal peculiar velocity model is essential. Additionally, we present calibrated bulk flow curves predicted by the reconstructions and perform a density-velocity cross-correlation using a linear theory reconstruction to constrain the growth factor, yielding $S_8 = 0.69 \pm 0.034$. This result is in significant tension with Planck but agrees with other peculiar velocity studies., Comment: 25 pages, 16 figures. To be submitted to MNRAS, comments are welcome
- Published
- 2025
5. Raiders of the Lost Dependency: Fixing Dependency Conflicts in Python using LLMs
- Author
- Bartlett, Antony, Liem, Cynthia, and Panichella, Annibale
- Subjects
- Computer Science - Software Engineering, Computer Science - Artificial Intelligence
- Abstract
Fixing Python dependency issues is a tedious and error-prone task for developers, who must manually identify and resolve environment dependencies and version constraints of third-party modules and Python interpreters. Researchers have attempted to automate this process by relying on large knowledge graphs and database lookup tables. However, these traditional approaches face limitations due to the variety of dependency error types, large sets of possible module versions, and conflicts among transitive dependencies. This study explores the potential of using large language models (LLMs) to automatically fix dependency issues in Python programs. We introduce PLLM (pronounced "plum"), a novel technique that employs retrieval-augmented generation (RAG) to help an LLM infer Python versions and required modules for a given Python file. PLLM builds a testing environment that iteratively (1) prompts the LLM for module combinations, (2) tests the suggested changes, and (3) provides feedback (error messages) to the LLM to refine the fix. This feedback cycle leverages natural language processing (NLP) to intelligently parse and interpret build error messages. We benchmark PLLM on the Gistable HG2.9K dataset, a collection of challenging single-file Python gists. We compare PLLM against two state-of-the-art automatic dependency inference approaches, namely PyEGo and ReadPyE, w.r.t. the ability to resolve dependency issues. Our results indicate that PLLM can fix more dependency issues than the two baselines, with +218 (+15.97%) more fixes over ReadPyE and +281 (+21.58%) over PyEGo. Our deeper analyses suggest that PLLM is particularly beneficial for projects with many dependencies and for specific third-party numerical and machine-learning modules. Our findings demonstrate the potential of LLM-based approaches to iteratively resolve Python dependency issues., Comment: Under submission to TOSEM, 2025
- Published
- 2025
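The feedback cycle described in result 5 is straightforward to express in code. Below is a minimal, hypothetical sketch of that propose-test-refine loop; the helper names (`query_llm`, `try_install_and_run`) and their signatures are our stand-ins, not the paper's actual PLLM implementation:

```python
# Hedged sketch of the iterative "propose, build, feed back errors" loop
# the PLLM abstract describes. Every name here is hypothetical.
from typing import Optional

def query_llm(source: str, feedback: Optional[str]) -> dict:
    """Ask an LLM (with RAG over package metadata) for a candidate
    environment: a Python version plus pinned module versions."""
    raise NotImplementedError  # e.g. a call to a hosted LLM API

def try_install_and_run(source: str, env: dict) -> Optional[str]:
    """Build the suggested environment and run the gist.
    Return None on success, or the build/runtime error message."""
    raise NotImplementedError  # e.g. a container build plus pip install

def fix_dependencies(source: str, max_rounds: int = 10) -> Optional[dict]:
    feedback = None
    for _ in range(max_rounds):
        env = query_llm(source, feedback)          # (1) propose modules
        error = try_install_and_run(source, env)   # (2) test the suggestion
        if error is None:
            return env                             # working environment found
        feedback = error                           # (3) refine with the error
    return None
```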
6. The Physics of Life: Exploring Information as a Distinctive Feature of Living Systems
- Author
- Bartlett, Stuart, Eckford, Andrew W., Egbert, Matthew, Lingam, Manasvi, Kolchinsky, Artemy, Frank, Adam, and Ghoshal, Gourab
- Subjects
- Condensed Matter - Soft Condensed Matter, Astrophysics - Earth and Planetary Astrophysics, Computer Science - Information Theory, Nonlinear Sciences - Adaptation and Self-Organizing Systems, Quantitative Biology - Quantitative Methods
- Abstract
This paper explores the idea that information is an essential and distinctive feature of living systems. Unlike non-living systems, living systems actively acquire, process, and use information about their environments to respond to changing conditions, sustain themselves, and achieve other intrinsic goals. We discuss relevant theoretical frameworks such as "semantic information" and "fitness value of information". We also highlight the broader implications of our perspective for fields such as origins-of-life research and astrobiology. In particular, we touch on the transition to information-driven systems as a key step in abiogenesis, informational constraints as determinants of planetary habitability, and informational biosignatures for detecting life beyond Earth. We briefly discuss experimental platforms which offer opportunities to investigate these theoretical concepts in controlled environments. By integrating theoretical and experimental approaches, this perspective advances our understanding of life's informational dynamics and its universal principles across diverse scientific domains., Comment: 10 pages, 4 figures, 131 references
- Published
- 2025
7. Process-Supervised Reward Models for Clinical Note Generation: A Scalable Approach Guided by Domain Expertise
- Author
- Wang, Hanyin, Xu, Qiping, Liu, Bolun, Hussein, Guleid, Korsapati, Hariprasad, Labban, Mohamad El, Iheasirim, Kingsley, Hassan, Mohamed, Anil, Gokhan, Bartlett, Brian, and Sun, Jimeng
- Subjects
- Computer Science - Computation and Language
- Abstract
Process-supervised reward models (PRMs), which verify large language model (LLM) outputs step-by-step, have achieved significant success in mathematical and coding problems. However, their application to other domains remains largely unexplored. In this work, we train a PRM to provide step-level reward signals for clinical notes generated by LLMs from patient-doctor dialogues. Guided by real-world clinician expertise, we carefully designed step definitions for clinical notes and utilized Gemini-Pro 1.5 to automatically generate process supervision data at scale. Our proposed PRM, trained on the LLaMA-3.1 8B instruct model, demonstrated superior performance compared to Gemini-Pro 1.5 and an outcome-supervised reward model (ORM) across two key evaluations: (1) the accuracy of selecting gold-reference samples from error-containing samples, achieving 98.8% (versus 61.3% for ORM and 93.8% for Gemini-Pro 1.5), and (2) the accuracy of selecting physician-preferred notes, achieving 56.2% (compared to 51.2% for ORM and 50.0% for Gemini-Pro 1.5). Additionally, we conducted ablation studies to determine optimal loss functions and data selection strategies, along with physician reader studies to explore predictors of downstream Best-of-N performance. Our promising results suggest the potential of PRMs to extend beyond the clinical domain, offering a scalable and effective solution for diverse generative tasks.
- Published
- 2024
8. The Estimand Framework and Causal Inference: Complementary not Competing Paradigms
- Author
- Drury, Thomas, Bartlett, Jonathan W., Wright, David, and Keene, Oliver N.
- Subjects
- Statistics - Methodology, Statistics - Applications
- Abstract
The creation of the ICH E9 (R1) estimands framework has led to more precise specification of the treatment effects of interest in the design and statistical analysis of clinical trials. However, it is unclear how the new framework relates to causal inference, as both approaches appear to define what is being estimated and have a quantity labelled an estimand. Using illustrative examples, we show that both approaches can be used to define a population-based summary of an effect on an outcome for a specified population and highlight the similarities and differences between these approaches. We demonstrate that the ICH E9 (R1) estimand framework offers a descriptive, structured approach that is more accessible to non-mathematicians, facilitating clearer communication of trial objectives and results. We then contrast this with the causal inference framework, which provides a mathematically precise definition of an estimand, and allows the explicit articulation of assumptions through tools such as causal graphs. Despite these differences, the two paradigms should be viewed as complementary rather than competing. The combined use of both approaches enhances the ability to communicate what is being estimated. We encourage those familiar with one framework to appreciate the concepts of the other to strengthen the robustness and clarity of clinical trial design, analysis, and interpretation.
- Published
- 2024
9. A Statistical Analysis for Supervised Deep Learning with Exponential Families for Intrinsically Low-dimensional Data
- Author
- Chakraborty, Saptarshi and Bartlett, Peter L.
- Subjects
- Statistics - Machine Learning, Computer Science - Machine Learning, Mathematics - Statistics Theory
- Abstract
Recent advances have revealed that the rate of convergence of the expected test error in deep supervised learning decays as a function of the intrinsic dimension and not the dimension $d$ of the input space. Existing literature defines this intrinsic dimension as the Minkowski dimension or the manifold dimension of the support of the underlying probability measures, which often results in sub-optimal rates and unrealistic assumptions. In this paper, we consider supervised deep learning when the response given the explanatory variable is distributed according to an exponential family with a $\beta$-H\"older smooth mean function. We consider an entropic notion of the intrinsic data-dimension and demonstrate that with $n$ independent and identically distributed samples, the test error scales as $\tilde{\mathcal{O}}\left(n^{-\frac{2\beta}{2\beta + \bar{d}_{2\beta}(\lambda)}}\right)$, where $\bar{d}_{2\beta}(\lambda)$ is the $2\beta$-entropic dimension of $\lambda$, the distribution of the explanatory variables. This improves on the best-known rates. Furthermore, under the assumption of an upper-bounded density of the explanatory variables, we characterize the rate of convergence as $\tilde{\mathcal{O}}\left( d^{\frac{2\lfloor\beta\rfloor(\beta + d)}{2\beta + d}}n^{-\frac{2\beta}{2\beta + d}}\right)$, establishing that the dependence on $d$ is not exponential but at most polynomial. We also demonstrate that when the explanatory variable has a lower bounded density, this rate in terms of the number of data samples, is nearly optimal for learning the dependence structure for exponential families.
- Published
- 2024
10. An Atlas for 3d Conformal Field Theories with a U(1) Global Symmetry
- Author
- Bartlett-Tisdall, Samuel, Herzog, Christopher P., and Schaub, Vladimir
- Subjects
- High Energy Physics - Theory
- Abstract
We present a collection of numerical bootstrap computations for 3d CFTs with a U(1) global symmetry. We test the accuracy of our method and fix conventions through a computation of bounds on the OPE coefficients for low-lying operators in the free fermion, free scalar, and generalised free vector field theories. We then compute new OPE bounds for scalar operators in the Gross-Neveu-Yukawa model, $O(2)$ model, and large $N$ limit of the $O(N)$ model. Additionally, we present a number of exclusion plots for such 3d CFTs. In particular, we look at the space of even and odd parity scalar operators in the low-lying spectrum that are compatible with crossing symmetry. As well as recovering the known theories, we find some kinks that indicate new, unknown theories., Comment: 17 pages, 7 figures, 2 tables
- Published
- 2024
11. Supports for Multilingual Students Who Are Classified as English Learners. Overview Brief #15: Vulnerable Populations. Updated
- Author
- EdResearch for Action, Annenberg Institute for School Reform at Brown University, Results for America, Michigan State University (MSU), College of Education, University of Vermont, Madeline Mavrogordato, Caroline Bartlett, Rebecca Callahan, David DeMatthews, and Elena Izquierdo
- Abstract
The EdResearch for Action "Overview Series" summarizes the research on key topics to provide K-12 education decision makers and advocates with an evidence base to ground discussions about how to best serve students. This research brief breaks down what is known about multilingual students classified as English Learners (ML-ELs), how ML-ELs perform in K-12 education, and what challenges they face. Key insights provided include: (1) research-based practices--such as bilingual program models--district and school leaders can use to support the academic success and linguistic development of ML-ELs; and (2) one-size-fits-all practices to avoid that can limit many students' opportunities to engage with rigorous content. [This brief was produced in collaboration with the University of Texas at Austin, College of Education.]
- Published
- 2024
12. Preparing Inclusive Early Childhood Educators (PIECE): A Conceptualization of Multilingualism, English Learning, and Inclusivity
- Author
- Leanne M. Evans, Tatiana Joseph, Sara Jozwik, and Maggie Bartlett
- Abstract
The purpose of this article is to share the examination of inclusivity as a paradigm for fostering authenticity and agency (Moore, 2017) among teacher candidates. This framing challenges the notion of inclusion as a tool of meritocracy used to manage learners through expectations that uphold monolingualism, decenter racial histories, and rely on rigid behavior plans. In this work, the authors interrogate the impact inclusion as assimilation has on English learners' authentic ways of knowing and being. Thus, they present a conceptualization of spaces of difference (Agbenyaga & Klibthong, 2012) within the context of an Inclusive Early Childhood Teacher Education (IECTE) program and the objectives of the Preparing Inclusive Early Childhood Educators (PIECE) project. With its rigorous coursework, clinical experiences, multi-tiered mentorship, and practice-based professional development, the PIECE project aims to develop inclusive early childhood educators at the preservice and in-service levels. Infused throughout the PIECE project is an emphasis on cultivating the knowledge and skills needed to provide high-quality instruction that improves educational outcomes for English learners (ELs). Frameworks of transformative theory and intersectionality perspectives provided the authors with a grounding for the work within the PIECE project community of learners (i.e., teacher candidates, teacher educators, and school district partners). This article summarizes critical concepts of inclusivity centered in the PIECE project work. These concepts include (1) understanding oneself to look beyond; (2) disrupting notions of normalcy and naturalized language; and (3) reconceptualizing inclusivity as a social justice act.
- Published
- 2024
13. Removal of an Aural Foreign Body by Magnetism
- Author
- Prentice, Emily, Bartlett, Emily, and Ilgen, Jon
- Subjects
- Aural Foreign Body, Magnetic Bead
- Abstract
Case Presentation: A male patient in his thirties with a history of polysubstance use presented to the emergency department (ED) due to an abrasion on his left forehead caused by banging his head against a wall in self-injurious behavior. A non-contrast computed tomography of the head obtained to rule out intracranial injury incidentally demonstrated a radiodense foreign body in the left external ear canal. A round metallic foreign body was subsequently visualized on otoscopic examination. The aural foreign body (AFB) was identified as a metallic bead that the patient had placed into his own ear; however, he reported no associated discomfort, hearing changes, or discharge. Traditional approaches for removing AFBs were considered; however, due to the position and smooth surface of the bead, there was concern they would be unsuccessful. Recognizing the metallic nature of the AFB, the clinician held a ceramic donut magnet adjacent to the patient’s ear and subsequently extracted the AFB without complication or patient discomfort. Discussion: Aural foreign bodies account for a significant number of visits to EDs annually. Removal of AFBs can be challenging, often requiring specialized equipment or specialty referral for management. Using magnetism over short distances for the purpose of extracting metallic AFBs presents a low-cost, low-risk intervention. When used in applicable scenarios, this technique can decrease the need for specialty referral and can especially benefit patients seeking care in less-resourced settings.
- Published
- 2025
14. Sensitivity analysis methods for outcome missingness using substantive-model-compatible multiple imputation and their application in causal inference
- Author
- Zhang, Jiaxin, Dashti, S. Ghazaleh, Carlin, John B., Lee, Katherine J., Bartlett, Jonathan W., and Moreno-Betancur, Margarita
- Subjects
- Statistics - Methodology
- Abstract
When using multiple imputation (MI) for missing data, maintaining compatibility between the imputation model and substantive analysis is important for avoiding bias. For example, some causal inference methods incorporate an outcome model with exposure-confounder interactions that must be reflected in the imputation model. Two approaches for compatible imputation with multivariable missingness have been proposed: Substantive-Model-Compatible Fully Conditional Specification (SMCFCS) and a stacked-imputation-based approach (SMC-stack). If the imputation model is correctly specified, both approaches are guaranteed to be unbiased under the "missing at random" assumption. However, this assumption is violated when the outcome causes its own missingness, which is common in practice. In such settings, sensitivity analyses are needed to assess the impact of alternative assumptions on results. An appealing solution for sensitivity analysis is delta-adjustment using MI, specifically "not-at-random" (NAR)FCS. However, the issue of imputation model compatibility has not been considered in sensitivity analysis, with a naive implementation of NARFCS being susceptible to bias. To address this gap, we propose two approaches for compatible sensitivity analysis when the outcome causes its own missingness. The proposed approaches, NAR-SMCFCS and NAR-SMC-stack, extend SMCFCS and SMC-stack, respectively, with delta-adjustment for the outcome. We evaluate these approaches using a simulation study that is motivated by a case study, to which the methods were also applied. The simulation results confirmed that a naive implementation of NARFCS produced bias in effect estimates, while NAR-SMCFCS and NAR-SMC-stack were approximately unbiased. The proposed compatible approaches provide promising avenues for conducting sensitivity analysis to missingness assumptions in causal inference.
- Published
- 2024
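The delta-adjustment idea underlying result 14's sensitivity analyses can be illustrated with a deliberately simplified toy: impute the missing outcome under a missing-at-random model, then shift the imputed values by a sensitivity parameter delta to represent "not at random" departures. This single-imputation numpy/scikit-learn sketch illustrates plain delta-adjustment only, not the compatible NAR-SMCFCS or NAR-SMC-stack procedures the paper proposes:

```python
# Toy delta-adjustment: impute a partially observed outcome under MAR,
# then shift imputed values by delta to explore "not at random" scenarios.
# A single-imputation illustration only; real analyses would use multiple
# imputation (e.g. NARFCS) with substantive-model compatibility.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                          # fully observed covariate
y = 1.0 + 2.0 * x + rng.normal(size=n)          # outcome, partially missing
miss = rng.random(n) < 0.3                      # 30% of outcomes missing

imp = LinearRegression().fit(x[~miss].reshape(-1, 1), y[~miss])
for delta in (0.0, -0.5, -1.0):                 # delta = 0 recovers MAR
    y_imp = y.copy()
    y_imp[miss] = imp.predict(x[miss].reshape(-1, 1)) + delta
    print(f"delta = {delta:+.1f}  ->  mean(Y) = {y_imp.mean():.3f}")
```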
15. How contextuality and antidistinguishability are related
- Author
- Srikumar, Maiyuren, Bartlett, Stephen D., and Karanjai, Angela
- Subjects
- Quantum Physics
- Abstract
Contextuality is a key characteristic that separates quantum from classical phenomena and an important tool in understanding the potential advantage of quantum computation. However, when assessing the quantum resources available for quantum information processing, there is no formalism to determine whether a set of states can exhibit contextuality and whether such proofs of contextuality indicate anything about the resourcefulness of that set. Introducing a well-motivated notion of what it means for a set of states to be contextual, we establish a relationship between contextuality and antidistinguishability of sets of states. We go beyond the traditional notions of contextuality and antidistinguishability and treat both properties as resources, demonstrating that the degree of contextuality within a set of states has a direct connection to its level of antidistinguishability. If a set of states is contextual, then it must be weakly antidistinguishable and vice-versa. However, maximal contextuality emerges as a stronger property than traditional antidistinguishability., Comment: 8 pages, 2 figures
- Published
- 2024
16. A Statistical Analysis of Deep Federated Learning for Intrinsically Low-dimensional Data
- Author
- Chakraborty, Saptarshi and Bartlett, Peter L.
- Subjects
- Statistics - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Mathematics - Statistics Theory
- Abstract
Federated Learning (FL) has emerged as a groundbreaking paradigm in collaborative machine learning, emphasizing decentralized model training to address data privacy concerns. While significant progress has been made in optimizing federated learning, the exploration of generalization error, particularly in heterogeneous settings, has been limited, focusing mainly on parametric cases. This paper investigates the generalization properties of deep federated regression within a two-stage sampling model. Our findings highlight that the intrinsic dimension, defined by the entropic dimension, is crucial for determining convergence rates when appropriate network sizes are used. Specifically, if the true relationship between response and explanatory variables is characterized by a $\beta$-H\"older function and there are $n$ independent and identically distributed (i.i.d.) samples from $m$ participating clients, the error rate for participating clients scales at most as $\tilde{O}\left((mn)^{-2\beta/(2\beta + \bar{d}_{2\beta}(\lambda))}\right)$, and for non-participating clients, it scales as $\tilde{O}\left(\Delta \cdot m^{-2\beta/(2\beta + \bar{d}_{2\beta}(\lambda))} + (mn)^{-2\beta/(2\beta + \bar{d}_{2\beta}(\lambda))}\right)$. Here, $\bar{d}_{2\beta}(\lambda)$ represents the $2\beta$-entropic dimension of $\lambda$, the marginal distribution of the explanatory variables, and $\Delta$ characterizes the dependence between the sampling stages. Our results explicitly account for the "closeness" of clients, demonstrating that the convergence rates of deep federated learners depend on intrinsic rather than nominal high-dimensionality.
- Published
- 2024
17. Fast Best-of-N Decoding via Speculative Rejection
- Author
- Sun, Hanshi, Haider, Momin, Zhang, Ruiqi, Yang, Huitao, Qiu, Jiahao, Yin, Ming, Wang, Mengdi, Bartlett, Peter, and Zanette, Andrea
- Subjects
- Computer Science - Computation and Language
- Abstract
The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally unviable. In this work, we introduce Speculative Rejection, a computationally viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, like Best-of-N does, while being 16 to 32 times more computationally efficient., Comment: NeurIPS 2024
- Published
- 2024
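For readers unfamiliar with the baseline in result 17, here is plain Best-of-N decoding in a few lines of Python. `generate` and `reward` are hypothetical stand-ins for an LLM sampler and a reward model; Speculative Rejection itself (not reproduced here) additionally scores partial generations and aborts unpromising ones early, which is where its efficiency gain comes from:

```python
# Plain Best-of-N: sample N full responses, keep the one the reward
# model scores highest. This is the costly baseline that Speculative
# Rejection accelerates; generate() and reward() are hypothetical.
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 16) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda response: reward(prompt, response))
```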
18. syren-new: Precise formulae for the linear and nonlinear matter power spectra with massive neutrinos and dynamical dark energy
- Author
- Sui, Ce, Bartlett, Deaglan J., Pandey, Shivam, Desmond, Harry, Ferreira, Pedro G., and Wandelt, Benjamin D.
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Machine Learning, Computer Science - Neural and Evolutionary Computing
- Abstract
Current and future large scale structure surveys aim to constrain the neutrino mass and the equation of state of dark energy. We aim to construct accurate and interpretable symbolic approximations to the linear and nonlinear matter power spectra as a function of cosmological parameters in extended $\Lambda$CDM models which contain massive neutrinos and non-constant equations of state for dark energy. This constitutes an extension of the syren-halofit emulators to incorporate these two effects, which we call syren-new (SYmbolic-Regression-ENhanced power spectrum emulator with NEutrinos and $W_0-w_a$). We also obtain a simple approximation to the derived parameter $\sigma_8$ as a function of the cosmological parameters for these models. Our results for the linear power spectrum are designed to emulate CLASS, whereas for the nonlinear case we aim to match the results of EuclidEmulator2. We compare our results to existing emulators and $N$-body simulations. Our analytic emulators for $\sigma_8$, the linear and nonlinear power spectra achieve root mean squared errors of 0.1%, 0.3% and 1.3%, respectively, across a wide range of cosmological parameters, redshifts and wavenumbers. We verify that emulator-related discrepancies are subdominant compared to observational errors and other modelling uncertainties when computing shear power spectra for LSST-like surveys. Our expressions have similar accuracy to existing (numerical) emulators, but are at least an order of magnitude faster, both on a CPU and GPU. Our work greatly improves the accuracy, speed and range of applicability of current symbolic approximations to the linear and nonlinear matter power spectra. We provide publicly available code for all symbolic approximations found., Comment: 18 pages, 15 figures
- Published
- 2024
19. Thresholds for post-selected quantum error correction from statistical mechanics
- Author
- English, Lucas H., Williamson, Dominic J., and Bartlett, Stephen D.
- Subjects
- Quantum Physics
- Abstract
We identify regimes where post-selection can be used scalably in quantum error correction (QEC) to improve performance. We use statistical mechanical models to analytically quantify the performance and thresholds of post-selected QEC, with a focus on the surface code. Based on the non-equilibrium magnetization of these models, we identify a simple heuristic technique for post-selection that does not require a decoder. Along with performance gains, this heuristic allows us to derive analytic expressions for post-selected conditional logical thresholds and abort thresholds of surface codes. We find that such post-selected QEC is characterised by four distinct thermodynamic phases, and detail the implications of this phase space for practical, scalable quantum computation., Comment: 11 pages, comments welcome
- Published
- 2024
20. FutureFill: Fast Generation from Convolutional Sequence Models
- Author
- Agarwal, Naman, Chen, Xinyi, Dogariu, Evan, Feinberg, Vlad, Suo, Daniel, Bartlett, Peter, and Hazan, Elad
- Subjects
- Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
We address the challenge of efficient auto-regressive generation in sequence prediction models by introducing FutureFill - a method for fast generation that applies to any sequence prediction algorithm based on convolutional operators. Our approach reduces the generation time requirement from quadratic to quasilinear relative to the context length. Additionally, FutureFill requires a prefill cache sized only by the number of tokens generated, which is smaller than the cache requirements for standard convolutional and attention-based models. We validate our theoretical findings with experimental evidence demonstrating correctness and efficiency gains in a synthetic generation task.
- Published
- 2024
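To see why result 20's improvement matters, consider the naive decode loop for a convolutional sequence model: each new token requires an O(t) convolution against the whole history, so T tokens cost O(T^2). The sketch below shows only this quadratic baseline (with an illustrative scalar readout); FutureFill's chunked precomputation of future filter contributions is not reproduced here:

```python
# Naive autoregressive decoding with a learned convolutional filter.
# Each step computes y_t = sum_{s=0}^{k-1} filt[s] * u[t-1-s], an O(t)
# dot product, so generating T tokens costs O(T^2) in total. FutureFill
# (not shown) precomputes the filter's contribution to future positions,
# reducing this to quasilinear time with a small prefill cache.
import numpy as np

def naive_conv_decode(filt: np.ndarray, prompt: np.ndarray, steps: int) -> np.ndarray:
    u = list(prompt)
    for _ in range(steps):
        t = len(u)
        k = min(t, len(filt))
        y_t = float(np.dot(filt[:k], u[t - k:][::-1]))  # convolve with history
        u.append(y_t)  # feed the output back in as the next input
    return np.asarray(u)

print(naive_conv_decode(np.array([0.5, 0.25, 0.125]), np.array([1.0]), steps=5))
```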
21. Hybrid Aerial-Ground Vehicle Autonomy in GPS-denied Environments
- Author
- Bartlett, Tara
- Subjects
- Computer Science - Robotics
- Abstract
The DARPA Subterranean Challenge is leading the development of robots capable of mapping underground mines and tunnels up to 8km in length and identifying objects and people. Developing these autonomous abilities paves the way for future planetary cave and surface exploration missions. The Co-STAR team, competing in this challenge, is developing a hybrid aerial-ground vehicle, known as the Rollocopter. The current design of this vehicle is a drone with wheels attached. This allows the vehicle to roll, actuated by the propellers, and fly only when necessary, hence benefiting from the reduced power consumption of the ground mode and the enhanced mobility of the aerial mode. This thesis focuses on the development and increased robustness of the local planning architecture for the Rollocopter. The first development of this thesis is a local planner capable of collision avoidance. The local planning node provides the basic functionality required for the vehicle to navigate autonomously. The next stage was augmenting this with the ability to plan more reliably without localisation. This was then integrated with a hybrid mobility mode capable of rolling and flying to exploit the power and mobility benefits of the respective configurations. A traversability analysis algorithm for determining the terrain that the vehicle is able to traverse is in the late stages of development and will inform the decisions of the hybrid planner. A simulator was developed to test the planning algorithms and improve the robustness of the vehicle to different environments. The results presented in this thesis relate to the mobility of the Rollocopter and the range of environments that the vehicle is capable of traversing. Videos are included in which the vehicle successfully navigates through dust-ridden tunnels, horizontal mazes, and areas with rough terrain., Comment: This thesis was submitted to The University of Sydney in partial fulfilment of the requirements for the degree of Bachelor of Engineering Honours (Aeronautical)(Space)
- Published
- 2024
22. Real-time Coupled Centroidal Motion and Footstep Planning for Biped Robots
- Author
- Bartlett, Tara and Manchester, Ian R.
- Subjects
- Computer Science - Robotics
- Abstract
This paper presents an algorithm that finds a centroidal motion and footstep plan for a Spring-Loaded Inverted Pendulum (SLIP)-like bipedal robot model substantially faster than real-time. This is achieved with a novel representation of the dynamic footstep planning problem, where each point in the environment is considered a potential foothold that can apply a force to the center of mass to keep it on a desired trajectory. For a biped, up to two such footholds per time step must be selected, and we approximate this cardinality constraint with an iteratively reweighted $l_1$-norm minimization. Along with a linearizing approximation of an angular momentum constraint, this results in a quadratic program that can be solved for a contact schedule and center of mass trajectory with automatic gait discovery. A 2 s planning horizon with 13 time steps and 20 surfaces available at each time is solved in 142 ms, roughly ten times faster than comparable existing methods in the literature. We demonstrate the versatility of this program in a variety of simulated environments., Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024
- Published
- 2024
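The iteratively reweighted l1 heuristic named in result 22 is a generic trick for approximating cardinality constraints: solve a convex weighted-l1 problem, then re-weight each entry by the inverse of its current magnitude so that small entries are driven to exact zero. Below is a minimal, standalone cvxpy sketch of that heuristic on a toy sparse-recovery problem; it illustrates the general technique, not the paper's centroidal-dynamics QP:

```python
# Iteratively reweighted l1 minimization (Candes-Wakin-Boyd style):
# repeatedly solve a weighted l1 problem, updating weights w_i to
# 1 / (|x_i| + eps). Toy instance: recover a sparse x from A x = b.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
b = A @ x_true

w, eps = np.ones(50), 1e-3
for _ in range(5):                      # a handful of reweighting rounds
    x = cp.Variable(50)
    objective = cp.Minimize(cp.sum(cp.multiply(w, cp.abs(x))))
    cp.Problem(objective, [A @ x == b]).solve()
    w = 1.0 / (np.abs(x.value) + eps)   # penalize small entries more

print("recovered support:", np.flatnonzero(np.abs(x.value) > 1e-4))
```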
23. CHARM: Creating Halos with Auto-Regressive Multi-stage networks
- Author
- Pandey, Shivam, Modi, Chirag, Wandelt, Benjamin D., Bartlett, Deaglan J., Bayer, Adrian E., Bryan, Greg L., Ho, Matthew, Lavaux, Guilhem, Makinen, T. Lucas, and Villaescusa-Navarro, Francisco
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Statistics - Machine Learning
- Abstract
To maximize the amount of information extracted from cosmological datasets, simulations that accurately represent these observations are necessary. However, traditional simulations that evolve particles under gravity by estimating particle-particle interactions (N-body simulations) are computationally expensive and prohibitive to scale to the large volumes and resolutions necessary for the upcoming datasets. Moreover, modeling the distribution of galaxies typically involves identifying virialized dark matter halos, which is also a time- and memory-consuming process for large N-body simulations, further exacerbating the computational cost. In this study, we introduce CHARM, a novel method for creating mock halo catalogs by matching the spatial, mass, and velocity statistics of halos directly from the large-scale distribution of the dark matter density field. We develop multi-stage neural spline flow-based networks to learn this mapping at redshift z=0.5 directly with computationally cheaper low-resolution particle mesh simulations instead of relying on the high-resolution N-body simulations. We show that the mock halo catalogs and painted galaxy catalogs have the same statistical properties as obtained from $N$-body simulations in both real space and redshift space. Finally, we use these mock catalogs for cosmological inference using redshift-space galaxy power spectrum, bispectrum, and wavelet-based statistics using simulation-based inference, performing the first inference with accelerated forward model simulations and finding unbiased cosmological constraints with well-calibrated posteriors. The code was developed as part of the Simons Collaboration on Learning the Universe and is publicly available at \url{https://github.com/shivampcosmo/CHARM}., Comment: 12 pages and 8 figures. This is a Learning the Universe Publication
- Published
- 2024
24. Low-overhead magic state distillation with color codes
- Author
- Lee, Seok-Hyung, Thomsen, Felix, Fazio, Nicholas, Brown, Benjamin J., and Bartlett, Stephen D.
- Subjects
- Quantum Physics
- Abstract
Fault-tolerant implementation of non-Clifford gates is a major challenge for achieving universal fault-tolerant quantum computing with quantum error-correcting codes. Magic state distillation is the most well-studied method for this but requires significant resources. Hence, it is crucial to tailor and optimize magic state distillation for specific codes from both logical- and physical-level perspectives. In this work, we perform such optimization for two-dimensional color codes, which are promising due to their higher encoding rates compared to surface codes, transversal implementation of Clifford gates, and efficient lattice surgery. We propose two distillation schemes based on the 15-to-1 distillation circuit and lattice surgery, which differ in their methods for handling faulty rotations. Our first scheme uses faulty T-measurement, offering resource efficiency when the target infidelity is above a certain threshold ($\sim 35p^3$ for physical error rate $p$). To achieve lower infidelities while maintaining resource efficiency, our second scheme exploits a distillation-free fault-tolerant magic state preparation protocol, achieving significantly lower infidelities (e.g., $\sim 10^{-19}$ for $p = 10^{-4}$) than the first scheme. Notably, our schemes outperform the best existing magic state distillation methods for color codes by up to about two orders of magnitude in resource costs for a given achievable target infidelity., Comment: 42 pages (22 pages for main text), 21 figures, 3 tables; v2 - updated combined MSD scheme (without autocorrection qubits) thanks to Sam Roberts's suggestion & additional comparison with a previous color code MSD scheme in Fig. 14
- Published
- 2024
25. COmoving Computer Acceleration (COCA): $N$-body simulations in an emulated frame of reference
- Author
- Bartlett, Deaglan J., Chiarenza, Marco, Doeser, Ludvig, and Leclercq, Florent
- Subjects
- Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
$N$-body simulations are computationally expensive, so machine-learning (ML)-based emulation techniques have emerged as a way to increase their speed. Although fast, surrogate models have limited trustworthiness due to potentially substantial emulation errors that current approaches cannot correct for. To alleviate this problem, we introduce COmoving Computer Acceleration (COCA), a hybrid framework interfacing ML with an $N$-body simulator. The correct physical equations of motion are solved in an emulated frame of reference, so that any emulation error is corrected by design. This approach corresponds to solving for the perturbation of particle trajectories around the machine-learnt solution, which is computationally cheaper than obtaining the full solution, yet is guaranteed to converge to the truth as one increases the number of force evaluations. Although applicable to any ML algorithm and $N$-body simulator, this approach is assessed in the particular case of particle-mesh cosmological simulations in a frame of reference predicted by a convolutional neural network, where the time dependence is encoded as an additional input parameter to the network. COCA efficiently reduces emulation errors in particle trajectories, requiring far fewer force evaluations than running the corresponding simulation without ML. We obtain accurate final density and velocity fields for a reduced computational budget. We demonstrate that this method shows robustness when applied to examples outside the range of the training data. When compared to the direct emulation of the Lagrangian displacement field using the same training resources, COCA's ability to correct emulation errors results in more accurate predictions. COCA makes $N$-body simulations cheaper by skipping unnecessary force evaluations, while still solving the correct equations of motion and correcting for emulation errors made by ML., Comment: 23 pages, 13 figures. Accepted for publication in A&A
- Published
- 2024
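The frame-of-reference trick in result 25 can be summarized in one pair of equations. This is our schematic rendering of the idea, with $x_{\mathrm{ML}}(t)$ the emulated trajectory, $F$ the true gravitational force, and $\delta x$ the residual the simulator actually integrates; the notation is illustrative, not taken from the paper:

```latex
% Schematic (our notation): integrate the residual around the emulated
% trajectory, so a perfect emulator gives \delta x = 0 and any emulation
% error is corrected by the true equations of motion.
x(t) = x_{\mathrm{ML}}(t) + \delta x(t), \qquad
\ddot{\delta x}(t) = \frac{1}{m} F\!\big(x_{\mathrm{ML}}(t) + \delta x(t)\big)
                   - \ddot{x}_{\mathrm{ML}}(t)
```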
26. Navigating Virtual Halls: Stories of Online Transfer, Working Adult Learners' Journeys with Student Services
- Author
- LaShica Davis Waters and Michelle Bartlett
- Abstract
This qualitative narrative inquiry study explores lived experiences of seven working adult part-time online students, mostly community college transfers, in a Research 1 institution's online degree-completion program. Using in-depth interviews, four key themes are identified: access, engagement, inclusion, and university pride in students' interactions with student services. Access covers service availability, engagement the interaction quality, inclusion the feeling of being valued, and university pride the overall perception of their educational journey. Engagement further divides into positive connections and negative disconnections. Findings suggest positive experiences with student services, recommending enhanced service provisions for similar students to foster accessibility, engagement, inclusivity, and a positive institutional perception.
- Published
- 2024
27. The Impact of the Online Learning Readiness Self-Check Survey with Australian Tertiary Enabling Students
- Author
- Robert Whannell, Mitchell Parkes, Tim Bartlett-Taylor, and Ingrid Harrington
- Abstract
This study reports on two key aspects relating to the use of the Online Learning Readiness Self-Check (OLRSC) survey, which has been proposed as identifying non-traditional students' readiness for online learning, and their strengths and weaknesses in six key areas. The first aspect validates the use of the instrument based on data from 199 students engaged in an online tertiary enabling course at a regional university in Australia. Factor analysis verified the scale structure of the instrument; however, two items were removed prior to the final analysis due to low communality and/or high cross loading with other items. This is followed by an examination of whether the instrument might be useful for the early identification of students who are at risk of disengagement from the enabling program. While it was hypothesised that the instrument, which measured factors such as the quality of interaction with peers and instructors, their capacity to manage technology and how well they managed learning, should have been a useful tool to identify early disengagement, the hypothesis was not supported. No significant associations were identified between any of the instrument's scales and early withdrawal from the course or completion of the first unit of study. Future recommendations for educators are made with a view to improving student engagement.
- Published
- 2024
28. Bridging the Gap: Graduate Dispositional Employability and the Interconnected Relationship between Third Space Career and Learning Support Services
- Author
- Jennifer Luke and Cristy Bartlett
- Abstract
This conceptual paper explores the dynamic interplay between university career development and learning support services. A distinct focus on enhancing dispositional employability of both staff and the graduates they support is discussed. Integral to successful career preparedness, the essential attributes of dispositional employability include openness to work change; resilience via a sense of control over career decision-making; an optimistic and proactive approach to seeking future opportunities; motivation in career self-management, and confidence in linking work and personal identity. Additionally, the paper also discusses how career and learning support services are positioned within the third space of higher education which is outside of administrative and traditional academic spaces. The significance of collaborating whilst maintaining distinct career development and learning advisory services is highlighted, so as to enhance graduate employability via effective connection of academic learning with career readiness. Investigations of the literature in the field lead to a conceptual model, the Maturity Model of Integrated Career and Learning Services (MM-ICLS), and recommendations that encourage collaborative peer support and capacity building for these third space staff, and congruent student support that will strengthen the dispositional employability of graduates and empower student success.
- Published
- 2024
29. Going the Distance: The Teaching Profession in a Post-COVID World
- Author
- Lora Bartlett, Alisun Thompson, Judith Warren Little, and Riley Collins
- Abstract
In "Going the Distance," Lora Bartlett, Alisun Thompson, Judith Warren Little, and Riley Collins examine the professional conditions that support career commitment among K-12 educators--and the factors that threaten teacher retention. Drawing insight from the period of significant teacher turnover and burnout both during and beyond COVID-19 school shutdowns in the United States, the authors offer clear guidance for policies and practices that meet the needs of teachers and nourish a robust teaching workforce. The work presents vivid firsthand accounts of teaching during crisis that were captured as part of the Suddenly Distant Research Project, a longitudinal study of the experiences of seventy-five teachers in nine states over thirty months, from the school closures of spring 2020 through two full school years. The authors characterize the pandemic as a perspective-shifting experience that exposed existing structural problems and created new ones: a widespread sociopolitical framing of teaching as an occupation constrained by strict regulation and oversight, an overreliance on test-based accountability, a decline in public investment in education, and growing legislative constraints on what teachers could teach. Identifying contextual differences between teachers who left and those who persevered, the work calls for solutions--including increased teacher voice, collaborative workplace cultures, and reforming school accountability systems--that support teachers to pursue ambitious educational goals in ordinary times and equip them to respond rapidly and capably in times of crisis.
- Published
- 2024
30. Multiple Imputation of Partially Observed Covariates in Discrete-Time Survival Analysis
- Author
- Anna-Carolina Haensch, Jonathan Bartlett, and Bernd Weiß
- Abstract
Discrete-time survival analysis (DTSA) models are a popular way of modeling events in the social sciences. However, the analysis of discrete-time survival data is challenged by missing data in one or more covariates. Negative consequences of missing covariate data include efficiency losses and possible bias. A popular approach to circumventing these consequences is multiple imputation (MI). In MI, it is crucial to include outcome information in the imputation models. As there is little guidance on how to incorporate the observed outcome information into the imputation model of missing covariates in DTSA, we explore different existing approaches using fully conditional specification (FCS) MI and substantive-model compatible (SMC)-FCS MI. We extend SMC-FCS for DTSA and provide an implementation in the smcfcs R package. We compare the approaches using Monte Carlo simulations and demonstrate a good performance of the new approach compared to existing approaches.
- Published
- 2024
31. Inclusive Civic Education and School Democracy through Participatory Budgeting
- Author
- Tara Bartlett and Daniel Schugurensky
- Abstract
Calls for more civic education have risen alongside political polarization, social injustices, and threats to democracy. Although civic learning opportunities are disproportionately accessible and often not inclusive, one model has shown promising results. Through School Participatory Budgeting, students deliberate and decide how to allocate funds to improve their school community using a democratic process and learn democracy by doing. However, like other civic engagement programs, students with disabilities (SWD) are often underrepresented. To address this challenge, a pilot project focused on engaging SWD in every aspect of the process, including overrepresentation on the student steering committee. In a mixed methods case study, we explored the effects of participation in the process and found that the inclusive model increased civic knowledge, attitudes, skills, and practices for all students and fostered self-skills, relationships, and school leadership roles for SWD. These findings yield important lessons for future implementation of inclusive civic education practices.
- Published
- 2024
32. On the role of familiarity and developmental exposure in music-evoked autobiographical memories
- Author
- Kathios, Nicholas, Bloom, Paul Alexander, Singh, Anshita, Bartlett, Ella, Algharazi, Sameah, Siegelman, Matthew, Shen, Fan, Beresford, Lea, DiMaggio-Potter, Michaelle E, Bennett, Sarah, Natarajan, Nandhini, Ou, Yongtian, Loui, Psyche, Aly, Mariam, and Tottenham, Nim
- Subjects
- Biological Psychology, Cognitive and Computational Psychology, Psychology, Applied and Developmental Psychology, Neurosciences, Behavioral and Social Science, Clinical Research, 2.3 Psychological, social and economic factors, Mental health, Music, autobiographical memory, reminiscence bump, aging, Cognitive Sciences, Experimental Psychology, Applied and developmental psychology, Biological psychology, Cognitive and computational psychology
- Abstract
Music-evoked autobiographical memories (MEAMs) are typically elicited by music that listeners have heard before. While studies that have directly manipulated music familiarity show that familiar music evokes more MEAMs than music listeners have not heard before, music that is unfamiliar to the listener can also sporadically cue autobiographical memory. Here we examined whether music that sounds familiar even without previous exposure can produce spontaneous MEAMs. Cognitively healthy older adults (N = 75, ages 65-80 years) listened to music clips that were chosen by researchers to be either familiar or unfamiliar (i.e., varying by prior exposure). Participants then disclosed whether the clip elicited a MEAM and later provided self-reported familiarity ratings for each. Self-reported familiarity was positively associated with the occurrence of MEAMs in response to familiar, but not the unfamiliar, music. The likelihood of reporting MEAMs for music released during youth (i.e., the "reminiscence bump") relative to young adulthood (20-25 years) included both music released during participants' adolescence (14-18 years) and middle childhood (5-9 years) once self-reported familiarity was accounted for. These developmental effects could not be accounted for by music-evoked affect. Overall, our results suggest that the phenomenon of MEAMs hinges upon both perceptions of familiarity and prior exposure.
- Published
- 2024
33. The East Bay Diesel Exposure Project: a biomonitoring study of parents and their children in heavily impacted communities.
- Author
- Sultana, Daniel, Kauffman, Duyen, Castorina, Rosemary, Paulsen, Michael, Bartlett, Russell, Ranjbar, Kelsey, Gunier, Robert, Aguirre, Victor, Rowen, Marina, Garban, Natalia, DeGuzman, Josephine, She, Jianwen, Patterson, Regan, Simpson, Christopher, Bradman, Asa, and Hoover, Sara
- Subjects
- 1-nitropyrene, Biomonitoring, Children, Diesel exhaust, Human exposure, Urinary metabolites, Humans, Biological Monitoring, Female, Male, Child, Pyrenes, Adult, Vehicle Emissions, Air Pollution, Indoor, Environmental Exposure, San Francisco, Parents, Air Pollutants, Dust, Child, Preschool, Middle Aged, Seasons, Environmental Monitoring, Adolescent, Bays
- Abstract
BACKGROUND: Diesel exhaust (DE) exposures pose concerns for serious health effects, including asthma and lung cancer, in California communities burdened by multiple stressors. OBJECTIVE: To evaluate DE exposures in disproportionately impacted communities using biomonitoring and compare results for adults and children within and between families. METHODS: We recruited 40 families in the San Francisco East Bay area. Two metabolites of 1-nitropyrene (1-NP), a marker for DE exposures, were measured in urine samples from parent-child pairs. For 25 families, we collected single-day spot urine samples during two sampling rounds separated by an average of four months. For the 15 other families, we collected daily spot urine samples over four consecutive days during the two sampling rounds. We also measured 1-NP in household dust and indoor air. Associations between urinary metabolite levels and participant demographics, season, and 1-NP levels in dust and air were evaluated. RESULTS: At least one 1-NP metabolite was present in 96.6% of the urine samples. Detection frequencies for 1-NP in dust and indoor air were 97% and 74%, respectively. Results from random effect models indicated that levels of the 1-NP metabolite 6-hydroxy-1-nitropyrene (6-OHNP) were significantly higher in parents compared with their children (p-value = 0.005). Urinary 1-NP metabolite levels were generally higher during the fall and winter months. Within-subject variability was higher than between-subject variability (~60% of total variance versus ~40%, respectively), indicating high short-term temporal variability. IMPACT: Biomonitoring, coupled with air monitoring, improves understanding of hyperlocal air pollution impacts. Results from these studies will inform the design of effective exposure mitigation strategies in disproportionately affected communities.
- Published
- 2024
34. Scant evidence for thawing quintessence
- Author
- Wolf, William J., García-García, Carlos, Bartlett, Deaglan J., and Ferreira, Pedro G.
- Subjects
- Astrophysics - Cosmology and Nongalactic Astrophysics, General Relativity and Quantum Cosmology, High Energy Physics - Phenomenology, High Energy Physics - Theory
- Abstract
New constraints on the expansion rate of the Universe seem to favor evolving dark energy in the form of thawing quintessence models, i.e., models for which a canonical, minimally coupled scalar field has, at late times, begun to evolve away from potential energy domination. We scrutinize the evidence for thawing quintessence by exploring what it predicts for the equation of state. We show that, in terms of the usual Chevallier-Polarski-Linder parameters ($w_0$, $w_a$), thawing quintessence is, in fact, only marginally consistent with a compilation of the current data. Despite this, we embrace the possibility that thawing quintessence is dark energy and find constraints on the microphysics of this scenario. We do so in terms of the effective mass $m^2$ and energy scale $V_0$ of the scalar field potential. We are particularly careful to enforce uninformative, flat priors on these parameters so as to minimize their effect on the final posteriors. While the current data favors a large and negative value of $m^2$, when we compare these models to the standard $\Lambda$CDM model we find that there is scant evidence for thawing quintessence., Comment: Updated to match published PRD version
- Published
- 2024
- Full Text
- View/download PDF
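Entry 34 works in the Chevallier-Polarski-Linder ($w_0$, $w_a$) parametrization. A minimal sketch of that standard parametrization and the dark-energy density evolution it implies follows; the numerical values are illustrative, not the paper's posteriors.

```python
# Minimal sketch of the CPL parametrization and its density evolution
# (standard textbook relations, not this paper's fitted values).
import numpy as np

def w_cpl(a, w0, wa):
    """CPL equation of state w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0, wa):
    """rho_DE(a) / rho_DE(1) for a CPL component (standard result)."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

# Illustrative thawing-like values: w -> -1 in the past (w0 + wa = -1),
# drifting upward at late times.
a = np.linspace(0.3, 1.0, 8)
print(np.round(w_cpl(a, w0=-0.9, wa=-0.1), 3))
print(np.round(rho_de_ratio(a, w0=-0.9, wa=-0.1), 3))
```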
35. Automating the Practice of Science -- Opportunities, Challenges, and Implications
- Author
-
Musslick, Sebastian, Bartlett, Laura K., Chandramouli, Suyog H., Dubova, Marina, Gobet, Fernand, Griffiths, Thomas L., Hullman, Jessica, King, Ross D., Kutz, J. Nathan, Lucas, Christopher G., Mahesh, Suhas, Pestilli, Franco, Sloman, Sabina J., and Holmes, William R.
- Subjects
Computer Science - Computers and Society ,Physics - Physics and Society - Abstract
Automation has transformed various aspects of our human civilization, revolutionizing industries and streamlining processes. In the domain of scientific inquiry, automated approaches have emerged as powerful tools, holding promise for accelerating discovery, enhancing reproducibility, and overcoming the traditional impediments to scientific progress. This article evaluates the scope of automation within scientific practice and assesses recent approaches. Furthermore, it discusses different perspectives on the following questions: Where do the greatest opportunities lie for automation in scientific practice? What are the current bottlenecks of automating scientific practice? And what are the significant ethical and practical consequences of automating scientific practice? By discussing the motivations behind automated science, analyzing the hurdles encountered, and examining its implications, this article invites researchers, policymakers, and stakeholders to navigate the rapidly evolving frontier of automated scientific practice.
- Published
- 2024
36. Statistical Patterns in the Equations of Physics and the Emergence of a Meta-Law of Nature
- Author
-
Constantin, Andrei, Bartlett, Deaglan, Desmond, Harry, and Ferreira, Pedro G.
- Subjects
Physics - Physics and Society ,Computer Science - Computation and Language ,High Energy Physics - Theory ,Physics - Data Analysis, Statistics and Probability ,Physics - History and Philosophy of Physics - Abstract
Physics, as a fundamental science, aims to understand the laws of Nature and describe them in mathematical equations. While the physical reality manifests itself in a wide range of phenomena with varying levels of complexity, the equations that describe them display certain statistical regularities and patterns, which we begin to explore here. By drawing inspiration from linguistics, where Zipf's law states that the frequency of any word in a large corpus of text is roughly inversely proportional to its rank in the frequency table, we investigate whether similar patterns for the distribution of operators emerge in the equations of physics. We analyse three corpora of formulae and find, using sophisticated implicit-likelihood methods, that the frequency of operators as a function of their rank in the frequency table is best described by an exponential law with a stable exponent, in contrast with Zipf's inverse power-law. Understanding the underlying reasons behind this statistical pattern may shed light on Nature's modus operandi or reveal recurrent patterns in physicists' attempts to formalise the laws of Nature. It may also provide crucial input for symbolic regression, potentially augmenting language models to generate symbolic models for physical phenomena. By pioneering the study of statistical regularities in the equations of physics, our results open the door for a meta-law of Nature, a (probabilistic) law that all physical laws obey., Comment: 9 pages, 5 figures
- Published
- 2024
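Entry 36's central claim is that operator rank-frequency data prefer an exponential law over Zipf's power law. A toy sketch of that comparison via least squares on log-frequencies follows (the paper itself uses simulation-based, implicit-likelihood inference, not this shortcut):

```python
# Toy comparison of Zipf (power-law) vs exponential rank-frequency fits.
# Synthetic "operator frequencies"; not the paper's corpora or inference.
import numpy as np

rng = np.random.default_rng(1)
rank = np.arange(1, 31)
freq = 1000.0 * np.exp(-0.25 * rank) * rng.lognormal(0.0, 0.05, rank.size)

log_f = np.log(freq)
# Zipf: log f = c - alpha * log r   ;   Exponential: log f = c - beta * r
zipf_coef = np.polyfit(np.log(rank), log_f, 1)
expo_coef = np.polyfit(rank, log_f, 1)

zipf_resid = log_f - np.polyval(zipf_coef, np.log(rank))
expo_resid = log_f - np.polyval(expo_coef, rank)
print("Zipf RSS:       ", float(zipf_resid @ zipf_resid))
print("Exponential RSS:", float(expo_resid @ expo_resid))
```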
37. SKAO Observation Execution Tool: Designing for concurrent, responsive observations
- Author
-
Pursiainen, Viivi, Williams, Stewart J., Kenny, Thaddeus, Bartlett, Elizabeth S., Biggs, Andrew D., McCollam, Brendan, Acosta, Danilo, Ellis, Sean, and Lung, Rupert
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics - Abstract
The SKA Observatory, currently in the construction phase, will have two of the world's largest radio telescopes when completed in 2028. The scale of the project introduces unique challenges for the telescope software design and implementation at all levels, from user-facing software down to the lower-level control of individual telescope elements. The Observation Execution Tool (OET) is part of the Observatory Science Operations (OSO) suite of applications and is responsible for orchestrating the highest level of telescope control through the execution of telescope control scripts. One of the main challenges for the OET is creating a design that can robustly run concurrent observations on multiple subarrays while remaining responsive to the user. The Scaled Agile Framework (SAFe) development process followed by the SKA project also means the software should allow for iterative implementation and easily accommodate new and changing requirements. This paper concentrates on the design decisions and challenges in the development of the OET, how we have solved some of the specific technical problems, and how we remain flexible for future requirements., Comment: 6 pages, 2 figures, submitted to 2024 SPIE Astronomical Telescopes + Instrumentation, Software and Cyberinfrastructure for Astronomy VIII conference
- Published
- 2024
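Entry 37's key design problem is running long observing scripts on multiple subarrays concurrently while staying responsive to users. A generic asyncio sketch of that pattern is given below, purely as an illustration; it is not the OET's actual architecture or API.

```python
# Generic pattern: concurrent long-running scripts plus a responsive
# command channel (an illustration only; not SKAO OET code).
import asyncio

async def run_observation(subarray_id: int, duration_s: float) -> None:
    """Stand-in for a telescope control script bound to one subarray."""
    print(f"subarray {subarray_id}: observation started")
    await asyncio.sleep(duration_s)  # placeholder for scan activity
    print(f"subarray {subarray_id}: observation complete")

async def command_listener(commands: asyncio.Queue) -> None:
    """Handles user commands while observations run."""
    while True:
        cmd = await commands.get()
        if cmd == "shutdown":
            break
        print(f"handled user command: {cmd}")

async def main() -> None:
    commands: asyncio.Queue = asyncio.Queue()
    observations = [
        asyncio.create_task(run_observation(i, 0.2 + 0.1 * i))
        for i in range(3)  # three subarrays observing concurrently
    ]
    listener = asyncio.create_task(command_listener(commands))
    await commands.put("status")        # user stays able to interact
    await asyncio.gather(*observations)
    await commands.put("shutdown")
    await listener

asyncio.run(main())
```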
38. Development of the observatory software for the SKAO
- Author
-
Kenny, Thaddeus, Williams, Stewart J., Pursiainen, Viivi, Bartlett, Elizabeth S., McCollam, Brendan, Biggs, Andrew D., Ellis, Sean, and Lung, Rupert
- Subjects
Astrophysics - Instrumentation and Methods for Astrophysics - Abstract
The Observatory Science Operations (OSO) subsystem of the SKAO consists of a range of complex tools which will be used to propose, design, schedule and execute observations. Bridging the gap between the science and telescope domains is the key responsibility of OSO, requiring considerations of usability, performance, availability and accessibility, amongst others. This paper describes the state of the observatory software as we approach construction milestones, how the applications meet these requirements using a modern technology architecture, and the challenges encountered so far.
- Published
- 2024
39. An 'ultimate' coupled cluster method based entirely on $T_2$
- Author
-
Windom, Zachary W., Perera, Ajith, and Bartlett, Rodney J.
- Subjects
Physics - Chemical Physics - Abstract
Electronic structure methods built around double-electron excitations have a rich history in quantum chemistry. However, it seems to be the case that such methods are only suitable in particular situations and are not naturally equipped to simultaneously handle the variety of electron correlations that might be present in chemical systems. To this end, the current work seeks a computationally efficient, low-rank, "ultimate" coupled cluster method based exclusively on $T_2$ and its products which can effectively emulate more "complete" methods that explicitly consider higher-rank, $T_{2m}$ operators. We introduce a hierarchy of methods designed to systematically account for higher, even-order cluster operators - like $T_4, T_6, \cdots, T_{2m}$ - by invoking tenets of the factorization theorem of perturbation theory and expectation-value coupled cluster theory. It is shown that each member within this methodological hierarchy is defined such that both the wavefunction and energy are correct through some order in many-body perturbation theory (MBPT), and can be extended up to arbitrarily high orders in $T_2$. The efficacy of such approximations is determined by studying the potential energy surfaces of several prototypical systems that are chosen to represent non-dynamic, static, and dynamic correlation regimes. We find that the proposed hierarchy of augmented $T_2$ methods essentially reduces to standard CCD for problems where dynamic electron correlations dominate, but offers improvements in situations where non-dynamic and static correlations become relevant. A notable highlight of this work is that the cheapest methods in this hierarchy - which are correct through fifth order in MBPT - consistently emulate the behavior of the $\mathcal{O}(N^{10})$ CCDQ method, yet only require a $\mathcal{O}(N^{6})$ algorithm by virtue of factorized intermediates.
- Published
- 2024
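Entry 39's premise is that powers of $T_2$ in the exponential ansatz already generate even-rank excitations, which the factorization theorem then relates to the true higher-rank operators. A schematic statement of that standard idea follows; the precise working equations of the proposed hierarchy are more involved.

```latex
% Even-rank excitations generated by powers of T_2 in the exponential ansatz:
e^{T_2}\lvert\Phi\rangle
  = \Bigl(1 + T_2 + \tfrac{1}{2!}\,T_2^{2} + \tfrac{1}{3!}\,T_2^{3} + \cdots\Bigr)\lvert\Phi\rangle ,
% suggesting the leading-order emulation of higher, even-order cluster operators:
T_4 \approx \tfrac{1}{2!}\,T_2^{2}, \qquad
T_6 \approx \tfrac{1}{3!}\,T_2^{3}, \qquad
T_{2m} \approx \tfrac{1}{m!}\,T_2^{m}.
```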
40. Causal Leverage Density: A General Approach to Semantic Information
- Author
-
Bartlett, Stuart J
- Subjects
Condensed Matter - Statistical Mechanics ,Nonlinear Sciences - Adaptation and Self-Organizing Systems - Abstract
I introduce a new approach to semantic information based upon the influence of erasure operations (interventions) upon distributions of a system's future trajectories through its phase space. Semantic (meaningful) information is distinguished from syntactic information by the property of having some intrinsic causal power on the future of a given system. As Shannon famously stated, syntactic information is a simple property of probability distributions (the elementary Shannon expression), or correlations between two subsystems and thus does not tell us anything about the meaning of a given message. Kolchinsky & Wolpert (2018) introduced a powerful framework for computing semantic information, which employs interventions upon the state of a system (either initial or dynamic) to erase syntactic information that might influence the viability of a subsystem (such as an organism in an environment). In this work I adapt this framework such that rather than using the viability of a subsystem, we simply observe the changes in future trajectories through a system's phase space as a result of informational interventions (erasures or scrambling). This allows for a more general formalisation of semantic information that does not assume a primary role for the viability of a subsystem (to use examples from Kolchinsky & Wolpert (2018), a rock, a hurricane, or a cell). Many systems of interest have a semantic component, such as a neural network, but may not have such an intrinsic connection to viability as living organisms or dissipative structures. Hence this simple approach to semantic information could be applied to any living, non-living or technological system in order to quantify whether a given quantity of syntactic information within it also has semantic or causal power.
- Published
- 2024
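Entry 40's measure compares distributions of future trajectories before and after an information-erasing intervention. Below is a toy numerical sketch of that general idea: scramble an internal "memory" variable that carries information about the initial state and measure the divergence between the two resulting future distributions. This illustrates the spirit of the approach, not the paper's exact formalism.

```python
# Toy illustration of intervention-based semantic information: compare
# future-state distributions with and without scrambling an internal
# memory variable. Not the paper's exact measure.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(2)
n, steps = 50000, 30

def evolve(x0, m):
    """Drift dynamics gated by an internal 'memory' variable m."""
    x = x0.copy()
    for _ in range(steps):
        x = x + 0.1 * m + rng.normal(0.0, 0.05, x.shape)
    return x

x0 = rng.normal(0.0, 1.0, n)
m = np.sign(x0)                           # memory carries information about x0
m_scrambled = rng.choice([-1.0, 1.0], n)  # intervention: erase that information

bins = np.linspace(-8.0, 8.0, 81)
eps = 1e-12
p, _ = np.histogram(evolve(x0, m), bins=bins, density=True)
q, _ = np.histogram(evolve(x0, m_scrambled), bins=bins, density=True)
print("KL divergence between future distributions:", entropy(p + eps, q + eps))
```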
41. Factorized Quadruples and a Predictor of Higher-Level Correlation in Thermochemistry
- Author
-
Thorpe, James H., Windom, Zachary W., Bartlett, Rodney J., and Matthews, Devin A.
- Subjects
Physics - Chemical Physics - Abstract
Coupled cluster theory has had a momentous impact on the ab initio prediction of molecular properties, and remains a staple ingredient in high-accuracy thermochemical model chemistries. However, these methods require inclusion of at least some connected quadruple excitations, which generally scale at best as $\mathcal{O}(N^9)$ with the number of basis functions. It is very difficult to predict, a priori, the effect that correlation beyond CCSD(T) has on a given reaction energy. The purpose of this work is to examine cost-effective quadruple corrections based on the factorization theorem of many-body perturbation theory that may address these challenges. We show that the $\mathcal{O}(N^7)$ factorized CCSD(TQ${}_\text{f}$) method introduces minimal error to predicted correlation and reaction energies as compared to the $\mathcal{O}(N^9)$ CCSD(TQ). Further, we examine the performance of Goodson's continued fraction method in the estimation of CCSDT(Q)${}_\Lambda$ contributions to reaction energies, as well as a "new" method related to %TAE[(T)] that we refer to as a scaled perturbation estimator. We find that the scaled perturbation estimator based upon CCSD(TQ${}_\text{f}$)/cc-pVDZ is capable of predicting CCSDT(Q)${}_\Lambda$/cc-pVDZ contributions to reaction energies with an average error of 0.07 kcal mol${}^{-1}$ and a RMST of 0.52 kcal mol${}^{-1}$ when applied to a test suite of nearly 3000 reactions. This offers a means by which to reliably ballpark how important post-CCSD(T) contributions are to reaction energies while incurring no more than CCSD(T) formal cost and a little mental math.
- Published
- 2024
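Entry 41 leans on Goodson's continued-fraction extrapolation. For orientation, one commonly quoted form of that approximant, built from increments of the coupled-cluster energy series, is sketched below in our own notation; this is an assumption of the sketch, and readers should consult Goodson's original paper for the exact prescription used in entry 41.

```latex
% Continued-fraction approximant from the increments of the CC series
% (one common convention; an assumption of this sketch):
%   \delta_1 = E_{\mathrm{SCF}}, \quad
%   \delta_2 = E_{\mathrm{CCSD}} - E_{\mathrm{SCF}}, \quad
%   \delta_3 = E_{\mathrm{CCSD(T)}} - E_{\mathrm{CCSD}}
E_{\mathrm{cf}} \;=\;
  \frac{\delta_1}{\,1 - \dfrac{\delta_2/\delta_1}{1 - \delta_3/\delta_2}\,} .
```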
42. An attractive way to correct for missing singles excitations in unitary coupled cluster doubles theory
- Author
-
Windom, Zachary W., Claudino, Daniel, and Bartlett, Rodney J.
- Subjects
Quantum Physics ,Physics - Chemical Physics - Abstract
Coupled cluster methods based exclusively on double excitations are comparatively "cheap" and interesting model chemistries, as they are typically able to capture the bulk of the dynamical electron correlation effects. The trade-off in such approximations is that the effect of neglected excitations, particularly single excitations, can be considerable. Using standard and electron-pair-restricted $T_2$ operators to define two flavors of unitary coupled cluster doubles (UCCD) methods, we investigate the extent to which missing single excitations can be recovered from low-order corrections in many-body perturbation theory (MBPT) within the unitary coupled cluster (UCC) formalism. Our analysis includes the derivations of finite-order UCC energy functionals which are used as a basis to define perturbative estimates of missed single excitations. This leads to the novel UCCD[4S] and UCCD[6S] methods, which consider energy corrections for missing single excitations through fourth and sixth order in MBPT, respectively. We also apply the same methodology to the electron-pair-restricted ansatz, but the improvements are only marginal. Our findings show that augmenting UCCD with these post hoc perturbative corrections can lead to UCCSD-quality results.
- Published
- 2024
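Entry 42's finite-order energy functionals start from the Baker-Campbell-Hausdorff expansion of the unitary similarity-transformed Hamiltonian, which, unlike in standard coupled cluster, does not terminate. A schematic of that standard starting point (with $T = T_2$ for UCCD):

```latex
% BCH expansion of the UCC energy functional, \sigma = T - T^{\dagger}
% (anti-Hermitian; T = T_2 for UCCD). The commutator series does not
% terminate and is truncated order-by-order in MBPT to define
% finite-order functionals:
E \;=\; \langle\Phi\rvert\, e^{-\sigma} H e^{\sigma} \,\lvert\Phi\rangle
  \;=\; \langle\Phi\rvert\, H + [H,\sigma]
        + \tfrac{1}{2!}\,[[H,\sigma],\sigma] + \cdots \,\lvert\Phi\rangle .
```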
43. Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
- Author
-
Cai, Yuhang, Wu, Jingfeng, Mei, Song, Lindsey, Michael, and Bartlett, Peter L.
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Mathematics - Optimization and Control - Abstract
The typical training of neural networks using large stepsize gradient descent (GD) under the logistic loss often involves two distinct phases, where the empirical risk oscillates in the first phase but decreases monotonically in the second phase. We investigate this phenomenon in two-layer networks that satisfy a near-homogeneity condition. We show that the second phase begins once the empirical risk falls below a certain threshold, dependent on the stepsize. Additionally, we show that the normalized margin grows nearly monotonically in the second phase, demonstrating an implicit bias of GD in training non-homogeneous predictors. If the dataset is linearly separable and the derivative of the activation function is bounded away from zero, we show that the average empirical risk decreases, implying that the first phase must stop in finite steps. Finally, we demonstrate that by choosing a suitably large stepsize, GD that undergoes this phase transition is more efficient than GD that monotonically decreases the risk. Our analysis applies to networks of any width, beyond the well-known neural tangent kernel and mean-field regimes., Comment: Clarify our results on sigmoid neural networks
- Published
- 2024
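Entry 43's two-phase risk curve (oscillation, then monotone descent) can be observed in a toy experiment. Below is a minimal numpy sketch of large-stepsize GD on logistic loss for a small two-layer tanh network; hyperparameters are arbitrary illustrative choices, and whether the oscillatory first phase appears depends on the stepsize.

```python
# Toy reproduction of the two-phase risk curve under large-stepsize GD
# on logistic loss (illustrative only; not the paper's setting or proofs).
import numpy as np

rng = np.random.default_rng(3)
n, d, width = 64, 5, 32
X = rng.normal(0, 1, (n, d))
y = np.sign(X @ rng.normal(0, 1, d))          # linearly separable labels

W = rng.normal(0, 1 / np.sqrt(d), (width, d)) # first layer
a = rng.normal(0, 1 / np.sqrt(width), width)  # second layer
lr = 4.0                                      # deliberately large stepsize

for step in range(300):
    h = np.tanh(X @ W.T)                      # hidden activations
    f = h @ a                                 # network output
    risk = np.mean(np.logaddexp(0.0, -y * f)) # empirical logistic risk
    if step % 30 == 0:
        print(f"step {step:3d}  risk {risk:.4f}")
    g = -y / (1.0 + np.exp(y * f)) / n        # d(risk)/d(f)
    grad_a = h.T @ g
    grad_W = ((g[:, None] * (1.0 - h**2)) * a[None, :]).T @ X
    a -= lr * grad_a
    W -= lr * grad_W
```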
44. Scaling Laws in Linear Regression: Compute, Parameters, and Data
- Author
-
Lin, Licong, Wu, Jingfeng, Kakade, Sham M., Bartlett, Peter L., and Lee, Jason D.
- Subjects
Computer Science - Machine Learning ,Computer Science - Artificial Intelligence ,Mathematics - Statistics Theory ,Statistics - Machine Learning - Abstract
Empirically, large-scale deep learning models often satisfy a neural scaling law: the test error of the trained model improves polynomially as the model size and data size grow. However, conventional wisdom suggests the test error consists of approximation, bias, and variance errors, where the variance error increases with model size. This disagrees with the general form of neural scaling laws, which predict that increasing model size monotonically improves performance. We study the theory of scaling laws in an infinite dimensional linear regression setup. Specifically, we consider a model with $M$ parameters as a linear function of sketched covariates. The model is trained by one-pass stochastic gradient descent (SGD) using $N$ data. Assuming the optimal parameter satisfies a Gaussian prior and the data covariance matrix has a power-law spectrum of degree $a>1$, we show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. The variance error, which increases with $M$, is dominated by the other errors due to the implicit regularization of SGD, thus disappearing from the bound. Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
- Published
- 2024
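Entry 44's headline bound is that the reducible test error scales as $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. A small numeric sketch of how the model-size and data-size terms trade off for an assumed spectral decay $a$ (constants suppressed; arithmetic only, not a simulation of the paper's sketched-SGD model):

```python
# Evaluate the two terms of the claimed reducible-error scaling
# Theta(M^{-(a-1)} + N^{-(a-1)/a}) for an assumed power-law degree a.
def reducible_error_scale(M: int, N: int, a: float) -> float:
    """Order-of-magnitude scaling from entry 44 (constants suppressed)."""
    return M ** (-(a - 1.0)) + N ** (-(a - 1.0) / a)

a = 2.0  # assumed spectral decay; the theory requires a > 1
for M, N in [(10, 10**4), (100, 10**4), (1000, 10**4), (1000, 10**6)]:
    print(f"M={M:5d} N={N:8d}  scale={reducible_error_scale(M, N, a):.5f}")
# With a = 2 the model term decays like 1/M and the data term like
# 1/sqrt(N); compute-optimal choices balance the two terms.
```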
45. Where to place a mosquito trap for West Nile Virus surveillance?
- Author
-
Chakravarti, Anwesha, Li, Bo, Bartlett, Dan, Irwin, Patrick, and Smith, Rebecca
- Subjects
Statistics - Applications - Abstract
The rapid spread of West Nile Virus (WNV) is a growing concern. With no vaccines or specific medications available, prevention through mosquito control is the only solution to curb the spread. Mosquito traps, used to detect viral presence in mosquito populations, are essential tools for WNV surveillance. But how do we decide where to place a mosquito trap? And what makes a good trap location, anyway? We present a robust statistical approach to determine a mosquito trap's ability to predict human WNV cases in the Chicago metropolitan area and its suburbs. We then use this value to detect the landscape, demographic, and socioeconomic factors associated with a mosquito trap's predictive ability. This approach enables resource-limited mosquito control programs to identify better trap locations while reducing trap numbers to increase trap-based surveillance efficiency. The approach can also be applied to a wide range of different environmental surveillance programs., Comment: 22 pages, 9 figures
- Published
- 2024
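Entry 45 scores trap locations by how well their mosquito-pool signals predict human WNV cases. Below is a hedged sketch of one simple version of such a score: logistic regression of weekly human-case occurrence on a single trap's pool positivity, summarized by AUC on synthetic data. The paper's actual statistical approach is more elaborate.

```python
# Toy scoring of one trap's predictive ability: regress weekly human WNV
# case occurrence on the trap's pool positivity and compute AUC.
# Synthetic data; a simplification of the study's approach.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
weeks = 200
trap_positivity = rng.beta(2, 8, weeks)                 # weekly pool-positivity rate
logit = -2.5 + 6.0 * trap_positivity                    # assumed true relationship
human_case = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # any human case that week

model = LogisticRegression().fit(trap_positivity.reshape(-1, 1), human_case)
scores = model.predict_proba(trap_positivity.reshape(-1, 1))[:, 1]
print("trap predictive ability (AUC):", round(roc_auc_score(human_case, scores), 3))
```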
46. Loneliness and social isolation correlate with multiple modifiable risk factors for chronic diseases including dementia
- Author
-
Bartlett, Larissa, Fair, Hannah, Bindoff, Aidan, Kitsos, Alex, Hamrah, Mohammad Shoaib, Roccati, Eddy, and Vickers, James C.
- Published
- 2025
- Full Text
- View/download PDF
47. Molecular precursors for the electrodeposition of 2D-layered metal chalcogenides
- Author
-
Bartlett, Philip N., de Groot, C. H. Kees, Greenacre, Victoria K., Huang, Ruomeng, Noori, Yasir J., Reid, Gillian, and Thomas, Shibin
- Published
- 2025
- Full Text
- View/download PDF
48. Ten-Year Outcome of a Randomized Trial: Cytoreduction and HIPEC with Mitomycin C Versus Oxaliplatin for Appendiceal Neoplasm with Peritoneal Dissemination
- Author
-
Levine, Edward A., Cos, Heidy, Votanopoulos, Konstantinos I., Shen, Perry, Russell, Greg, Mansfield, Paul, Fournier, Keith, Bartlett, David, and Stewart, John H.
- Published
- 2025
- Full Text
- View/download PDF
49. Psychosocial Experiences Associated with Dysphagia and Relevant Clinical Implications Among Adults with Parkinson Disease
- Author
-
Bartlett, Rebecca S., Walters, Andrew S., and Wayment, Heidi A.
- Published
- 2025
- Full Text
- View/download PDF
50. Perceptions of Dysphagia Evaluation and Treatment Among Individuals with Parkinson’s Disease
- Author
-
Bartlett, Rebecca S., Walters, Andrew S., Stewart, Rosa S., and Wayment, Heidi A.
- Published
- 2025
- Full Text
- View/download PDF