605,584 results for "David, M."
Search Results
2. AN AUTOMATED SELF-INSTRUCTIONAL COURSE IN BRAZILIAN PORTUGUESE FOR SPEAKERS OF SPANISH, BASIC PROGRAM--SECOND FOCUS (FRAMES 126-610) DISCRIMINATION AND PRODUCTION OF BRAZILIAN PORTUGUESE SEGMENTAL PHONEMES.
- Author
-
FELDMAN, DAVID M.
- Abstract
A PROGRAMED TEXT WAS PREPARED. FRAMES 126-610 OF A BASIC LANGUAGE PROGRAM (SECOND FOCUS) WERE INCLUDED. (THIS DOCUMENT IS AN APPENDIX TO ED 010 319 AND IS SUPPLEMENTARY TO ED 010 321.) (JK)
- Published
- 2024
3. AN AUTOMATED SELF-INSTRUCTIONAL COURSE IN BRAZILIAN PORTUGUESE FOR SPEAKERS OF SPANISH, BASIC PROGRAM--THIRD FOCUS (FRAMES 611-720) MAJOR CORRELATIONS BETWEEN BRAZILIAN PORTUGUESE PHONOLOGY AND ORTHOGRAPHY.
- Author
-
FELDMAN, DAVID M.
- Abstract
A PROGRAMED TEXT WAS PREPARED FOR A COURSE IN LEARNING THE BRAZILIAN-PORTUGUESE LANGUAGE. FRAMES 611-720 OF A BASIC LANGUAGE PROGRAM (THIRD FOCUS) WERE INCLUDED. (THIS DOCUMENT IS AN APPENDIX TO ED 010 319 AND IS SUPPLEMENTARY TO ED 010 323.) (JK)
- Published
- 2024
4. AN AUTOMATED SELF-INSTRUCTIONAL COURSE IN BRAZILIAN PORTUGUESE FOR SPEAKERS OF SPANISH, BASIC PROGRAM--FIRST FOCUS (FRAMES 1-125) NOTIONS OF ARTICULATORY PHONETICS FOR BRAZILIAN PORTUGUESE.
- Author
-
FELDMAN, DAVID M.
- Abstract
A PROGRAMED TEXT WAS PREPARED FOR A COURSE IN LEARNING THE BRAZILIAN-PORTUGUESE LANGUAGE. FRAMES 1-125 OF THE BASIC LANGUAGE PROGRAM (FIRST FOCUS) WERE INCLUDED. (THIS DOCUMENT IS AN APPENDIX TO ED 010 319.) (JK)
- Published
- 2024
5. The Agile Student Practice Project: Simulating an Agile Project in the Classroom for a Real-World Experience
- Author
-
David M. Woods and Andrea Hulshult
- Abstract
In response to the adoption of Agile practices and processes by businesses, IT/IS educators are working to add Agile content to their courses. Teaching students about Agile involves teaching them about the history, mindset, and values of Agile, along with an introduction to the practices and processes used in an Agile product. Along with this, it is essential that students gain experience using Agile in a project setting. This paper discusses an Agile practice project where students use all aspects of Agile to address a problem and build a solution using Legos. The use of Legos, along with a project that students can easily see themselves using, allows students to focus on developing their Agile skills and mindset. The project serves as a useful transition from traditional classroom instruction about Agile to a project for a real-world client.
- Published
- 2024
6. In vivo 4D x-ray dark-field lung imaging in mice
- Author
-
How, Ying Ying, Reyne, Nicole, Croughan, Michelle K., Cmielewski, Patricia, Batey, Daniel, Costello, Lucy F., Smith, Ronan, Ahlers, Jannis N., Cholewa, Marian, Kolodziej, Magdalena, Duerr, Julia, Mall, Marcus A., Kitchen, Marcus J., Asselin-Labat, Marie-Liesse, Paganin, David M., Donnelley, Martin, and Morgan, Kaye S.
- Subjects
Physics - Medical Physics - Abstract
X-ray dark-field imaging is well-suited to visualizing the health of the lungs because the alveoli create a strong dark-field signal. However, time-resolved and tomographic (i.e., 4D) dark-field imaging is challenging, since most x-ray dark-field techniques require multiple sample exposures, captured while scanning the position of crystals or gratings. Here, we present the first in vivo 4D x-ray dark-field lung imaging in mice. This was achieved by synchronizing the data acquisition process of a single-exposure grid-based imaging approach with the breath cycle. The short data acquisition time per dark-field projection made this approach feasible for 4D x-ray dark-field imaging by minimizing the motion-blurring effect, the total time required and the radiation dose imposed on the sample. Images were captured from a control mouse and from mouse models of muco-obstructive disease and lung cancer, where a change in the size of the alveoli was expected. This work demonstrates that the 4D dark-field signal provides complementary information that is inaccessible from conventional attenuation-based CT images, in particular, how the size of the alveoli from different parts of the lungs changes throughout a breath cycle, with examples shown across the different models. By quantifying the dark-field signal and relating it to other physical properties of the alveoli, this technique could be used to perform functional lung imaging that allows the assessment of both global and regional lung conditions where the size or expansion of the alveoli is affected., Comment: 11 pages, 5 figures, supplementary information available
- Published
- 2024
7. Quantum Policy Gradient in Reproducing Kernel Hilbert Space
- Author
-
Bossens, David M., Bharti, Kishor, and Thompson, Jayne
- Subjects
Quantum Physics ,Computer Science - Machine Learning - Abstract
Parametrised quantum circuits offer expressive and data-efficient representations for machine learning. Due to quantum states residing in a high-dimensional Hilbert space, parametrised quantum circuits have a natural interpretation in terms of kernel methods. The representation of quantum circuits in terms of quantum kernels has been studied widely in quantum supervised learning, but has been overlooked in the context of quantum reinforcement learning. This paper proposes parametric and non-parametric policy gradient and actor-critic algorithms with quantum kernel policies in quantum environments. This approach, implemented with both numerical and analytical quantum policy gradient techniques, allows exploiting the many advantages of kernel methods, including available analytic forms for the gradient of the policy and tunable expressiveness. The proposed approach is suitable for vector-valued action spaces and each of the formulations demonstrates a quadratic reduction in query complexity compared to their classical counterparts. Two actor-critic algorithms, one based on stochastic policy gradient and one based on deterministic policy gradient (comparable to the popular DDPG algorithm), demonstrate additional query complexity reductions compared to quantum policy gradient algorithms under favourable conditions.
- Published
- 2024
8. Deep Learning in Classical X-ray Ghost Imaging for Dose Reduction
- Author
-
Huang, Yiyue, Loesel, Philipp D., Paganin, David M., and Kingston, Andrew M.
- Subjects
Physics - Optics ,Physics - Medical Physics - Abstract
Ghost imaging (GI) is an unconventional technique that combines information from two correlated patterned light fields to compute an image of the object of interest. GI can be performed with visible light as well as penetrating radiation such as x-rays, electrons, etc. Penetrating radiation is usually ionizing and damages biological specimens; therefore, minimising the dose of this radiation in a medical or biological imaging context is important. GI has been proposed as a potential way to achieve this. With prior knowledge of the object of interest, such as sparsity in a specific basis (e.g., Fourier basis) or access to a large dataset for neural network training, it is possible to reconstruct an image of the object with a limited number of measurements. However, low sampling does not inherently equate to low dose. Here, we specifically explore the scenario where reduced sampling corresponds to low-dose conditions. In this simulation-based paper, we examine how deep learning (DL) techniques could reduce dose in classical x-ray GI. Since GI is based on illumination patterns, we start by exploring optimal sets of patterns that allow us to reconstruct the image with the fewest measurements, or lowest sampling rate, possible. We then propose a DL neural network that can directly reconstruct images from GI measurements even when the sampling rate is extremely low. We demonstrate that our deep learning-based GI (DLGI) approach has potential in image reconstruction, with results comparable to direct imaging (DI) at the same dose. However, given the same prior knowledge and detector quantum efficiency, it is very challenging for DLGI to outperform DI under low-dose conditions. We discuss how it may be achievable due to the higher sensitivity of bucket detectors over pixel detectors., Comment: 12 pages, 10 figures
- Published
- 2024
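As background for the deep-learning reconstruction described in the abstract above, the following minimal sketch shows the classical ghost-imaging correlation step it builds on: bucket signals are simulated from random illumination patterns, and the object is recovered by correlating pattern fluctuations with bucket fluctuations. The object, pattern count, and sizes are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 binary test object (an assumption, not from the paper).
n = 32
obj = np.zeros((n, n))
obj[8:24, 12:20] = 1.0

# Random illumination patterns and their "bucket" (total transmitted) signals.
n_patterns = 4000
patterns = rng.random((n_patterns, n, n))
buckets = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Classical GI estimate: correlate bucket fluctuations with pattern fluctuations.
recon = np.tensordot(buckets - buckets.mean(),
                     patterns - patterns.mean(axis=0),
                     axes=([0], [0])) / n_patterns

print("correlation with ground truth:",
      np.corrcoef(recon.ravel(), obj.ravel())[0, 1])
```

At low sampling rates this correlation estimate becomes very noisy, which is the gap the paper's DLGI network is meant to close.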
9. Analyzing The Language of Visual Tokens
- Author
-
Chan, David M., Corona, Rodolfo, Park, Joonyong, Cho, Cheol Jun, Bai, Yutong, and Darrell, Trevor
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Machine Learning - Abstract
With the introduction of transformer-based models for vision and language tasks, such as LLaVA and Chameleon, there has been renewed interest in the discrete tokenized representation of images. These models often treat image patches as discrete tokens, analogous to words in natural language, learning joint alignments between visual and human languages. However, little is known about the statistical behavior of these visual languages - whether they follow similar frequency distributions, grammatical structures, or topologies as natural languages. In this paper, we take a natural-language-centric approach to analyzing discrete visual languages and uncover striking similarities and fundamental differences. We demonstrate that, although visual languages adhere to Zipfian distributions, higher token innovation drives greater entropy and lower compression, with tokens predominantly representing object parts, indicating intermediate granularity. We also show that visual languages lack cohesive grammatical structures, leading to higher perplexity and weaker hierarchical organization compared to natural languages. Finally, we demonstrate that, while vision models align more closely with natural languages than other models, this alignment remains significantly weaker than the cohesion found within natural languages. Through these experiments, we demonstrate how understanding the statistical properties of discrete visual languages can inform the design of more effective computer vision models.
- Published
- 2024
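The rank-frequency (Zipfian) analysis mentioned in the abstract above can be sketched as follows for an arbitrary token stream; the synthetic token distribution here is an assumption used only to illustrate the fitting procedure, not the paper's visual-token data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "visual token" stream drawn from a skewed distribution (assumption).
vocab, length = 1024, 100_000
probs = 1.0 / np.arange(1, vocab + 1)
probs /= probs.sum()
tokens = rng.choice(vocab, size=length, p=probs)

# Rank-frequency curve and a log-log slope estimate (Zipf exponent).
counts = np.sort(np.bincount(tokens, minlength=vocab))[::-1]
counts = counts[counts > 0]
ranks = np.arange(1, counts.size + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")  # close to 1 for Zipf-like data
```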
10. Exact periodic solutions of the generalized Constantin-Lax-Majda equation with dissipation
- Author
-
Silantyev, Denis A., Lushnikov, Pavel M., Siegel, Michael, and Ambrose, David M.
- Subjects
Mathematics - Analysis of PDEs ,Nonlinear Sciences - Pattern Formation and Solitons ,Nonlinear Sciences - Exactly Solvable and Integrable Systems ,35Q35, 35A21, 35C05, 35C06, 34A34 - Abstract
We present exact pole dynamics solutions to the generalized Constantin-Lax-Majda (gCLM) equation in a periodic geometry with dissipation $-\Lambda^\sigma$, where its spatial Fourier transform is $\widehat{\Lambda^\sigma}=|k|^\sigma$. The gCLM equation is a simplified model for singularity formation in the 3D incompressible Euler equations. It includes an advection term with parameter $a$, which allows different relative weights for advection and vortex stretching. There has been intense interest in the gCLM equation, and it has served as a proving ground for the development of methods to study singularity formation in the 3D Euler equations. Several exact solutions for the problem on the real line have been previously found by the method of pole dynamics, but only one such solution has been reported for the periodic geometry. We derive new periodic solutions for $a=0$ and $1/2$ and $\sigma=0$ and $1$, for which a closed collection of (periodically repeated) poles evolve in the complex plane. Self-similar finite-time blow-up of the solutions is analyzed and compared for the different values of $\sigma$, and to a global-in-time well-posedness theory for solutions with small data presented in a previous paper of the authors. Motivated by the exact solutions, the well-posedness theory is extended to include the case $a=0$, $\sigma \geq 0$. Several interesting features of the solutions are discussed., Comment: 46 pages, 13 figures
- Published
- 2024
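For readers unfamiliar with the model in the abstract above, the generalized Constantin-Lax-Majda equation with dissipation is commonly written in the form below; this is the standard formulation implied by the abstract's notation, stated as background rather than quoted from the paper.

```latex
\omega_t + a\,u\,\omega_x = u_x\,\omega - \Lambda^{\sigma}\omega,
\qquad u_x = H[\omega],
\qquad \widehat{\Lambda^{\sigma}\omega}(k) = |k|^{\sigma}\,\widehat{\omega}(k),
```

where $H$ is the (periodic) Hilbert transform and the parameter $a$ weights advection against vortex stretching; $a = 0$ recovers the original Constantin-Lax-Majda model.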
11. Topological States in Finite Graphene Nanoribbons Tuned by Electric Fields
- Author
-
Kuo, David M T
- Subjects
Condensed Matter - Mesoscale and Nanoscale Physics - Abstract
In this comprehensive study, we conduct a theoretical investigation into the Stark shift of topological states (TSs) in finite armchair graphene nanoribbons (AGNRs) and heterostructures under transverse electric fields. Our focus centers on the multiple end zigzag edge states of AGNRs and the interface states of $9-7-9$ AGNR heterostructures. For the former TSs, we observe a distinctive blue Stark shift in energy levels relative to the electric field within a range where the energy levels of TSs do not merge into the energy levels of bulk states. Conversely, for the latter TSs, we identify an oscillatory Stark shift in energy levels around the Fermi level. Simultaneously, we reveal the impact of the Stark effect on the transmission coefficients for both types of TSs. Notably, we uncover intriguing spectra in the multiple end zigzag edge states. In the case of finite $9-7-9$ AGNR heterostructures, the spectra of transmission coefficient reveal that the coupling strength between the topological interface states can be well controlled by the transverse electric fields. The outcomes of this research not only contribute to a deeper understanding of the electronic properties of graphene-based materials but also pave the way for innovations in next-generation electronic devices and quantum technologies., Comment: 7 pages and 10 figures
- Published
- 2024
12. Learning Rules Explaining Interactive Theorem Proving Tactic Prediction
- Author
-
Zhang, Liao, Cerna, David M., and Kaliszyk, Cezary
- Subjects
Computer Science - Logic in Computer Science ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning ,F.4.1, I.2.4 - Abstract
Formally verifying the correctness of mathematical proofs is more accessible than ever; however, the learning curve remains steep for many of the state-of-the-art interactive theorem provers (ITP). Deriving the most appropriate subsequent proof step, and reasoning about it, given the multitude of possibilities, remains a daunting task for novice users. To improve the situation, several investigations have developed machine learning based guidance for tactic selection. Such approaches struggle to learn non-trivial relationships between the chosen tactic and the structure of the proof state and to represent them as symbolic expressions. To address these issues, we (i) represent the problem as an Inductive Logic Programming (ILP) task, (ii) use the ILP representation to enrich the feature space by encoding additional, computationally expensive properties as background knowledge predicates, (iii) use this enriched feature space to learn rules explaining when a tactic is applicable to a given proof state, and (iv) use the learned rules to filter the output of an existing tactic selection approach, empirically showing improvement over the non-filtering approaches., Comment: 15 pages, 5 figures
- Published
- 2024
13. EigenVI: score-based variational inference with orthogonal function expansions
- Author
-
Cai, Diana, Modi, Chirag, Margossian, Charles C., Gower, Robert M., Blei, David M., and Saul, Lawrence K.
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Statistics - Computation - Abstract
We develop EigenVI, an eigenvalue-based approach for black-box variational inference (BBVI). EigenVI constructs its variational approximations from orthogonal function expansions. For distributions over $\mathbb{R}^D$, the lowest order term in these expansions provides a Gaussian variational approximation, while higher-order terms provide a systematic way to model non-Gaussianity. These approximations are flexible enough to model complex distributions (multimodal, asymmetric), but they are simple enough that one can calculate their low-order moments and draw samples from them. EigenVI can also model other types of random variables (e.g., nonnegative, bounded) by constructing variational approximations from different families of orthogonal functions. Within these families, EigenVI computes the variational approximation that best matches the score function of the target distribution by minimizing a stochastic estimate of the Fisher divergence. Notably, this optimization reduces to solving a minimum eigenvalue problem, so that EigenVI effectively sidesteps the iterative gradient-based optimizations that are required for many other BBVI algorithms. (Gradient-based methods can be sensitive to learning rates, termination criteria, and other tunable hyperparameters.) We use EigenVI to approximate a variety of target distributions, including a benchmark suite of Bayesian models from posteriordb. On these distributions, we find that EigenVI is more accurate than existing methods for Gaussian BBVI., Comment: 25 pages, 9 figures. Advances in Neural Information Processing Systems (NeurIPS), 2024
- Published
- 2024
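The computational point emphasized in the abstract above, that the variational fit reduces to a minimum-eigenvalue problem rather than an iterative gradient loop, can be illustrated schematically: once the stochastic Fisher-divergence estimate is expressed as a quadratic form in the expansion coefficients, the optimal unit-norm coefficient vector is the eigenvector with the smallest eigenvalue. The matrix `M` below is a stand-in assumption; how it is assembled from score evaluations is specified in the paper and not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the matrix defining the quadratic Fisher-divergence estimate
# over K orthogonal-expansion coefficients (placeholder, not the paper's formula).
K = 16
A = rng.random((K, K))
M = A @ A.T  # symmetric positive semi-definite placeholder

# Minimizing alpha^T M alpha subject to ||alpha|| = 1 is a minimum-eigenvalue problem.
eigvals, eigvecs = np.linalg.eigh(M)
alpha = eigvecs[:, 0]          # coefficients of the variational approximation
print("optimal objective value:", eigvals[0])
```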
14. High-resolution x-ray scanning with a diffuse, Huffman-patterned probe to minimise radiation damage
- Author
-
Aminzadeh, Alaleh, Kingston, Andrew M., Roberts, Lindon, Paganin, David M., Petersen, Timothy C., and Svalbe, Imants D.
- Subjects
Physics - Optics - Abstract
Scanning objects with a more tightly focused beam (for example of photons or electrons) can provide higher-resolution images. However the stronger localisation of energy deposition can damage tissues in organic samples or may rearrange the chemical structure or physical properties of inorganic materials. Scanning an object with a broad beam can deliver an equivalent probe energy but spreads it over a much wider footprint. Sharp images can be reconstructed from the diffuse implanted signal when a decoding step can recover a delta-like impulse response. Huffman sequences, by design, have the optimal delta-like autocorrelation for aperiodic (non-cyclic) convolution and are well-conditioned. Here we adapt 1D Huffman sequences to design 2D Huffman-like discrete arrays that have spatially broad, relatively thin and uniform intensity profiles that retain excellent aperiodic autocorrelation metrics. Examples of broad shaped diffuse beams were developed for the case of x-ray imaging. A variety of masks were fabricated by the deposition of finely structured layers of tantalum on a silicon oxide wafer. The layers form a pattern of discrete pixels that modify the shape of an incident uniform beam of low energy x-rays as it passes through the mask. The intensity profiles of the x-ray beams after transmission through these masks were validated, first by acquiring direct-detector x-ray images of the masks, and second by raster scanning a pinhole over each mask pattern, pixel-by-pixel, collecting "bucket" signals as applied in traditional ghost imaging. The masks were then used to raster scan the shaped x-ray beam over several simple binary and "gray" test objects, again producing bucket signals, from which sharp reconstructed object images were obtained by deconvolving their bucket images.
- Published
- 2024
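The scan-then-decode idea in the abstract above can be sketched as follows: raster scanning a broad probe over an object produces a bucket image that is approximately the object convolved with the probe, and a sharp image is recovered by deconvolution. The probe here is an arbitrary random array rather than one of the paper's Huffman-like designs, and the Wiener-style regularisation is an assumption; Huffman arrays are attractive precisely because their delta-like autocorrelation makes this decoding step well conditioned.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical object and broad probe (the paper uses Huffman-like arrays instead).
n = 64
obj = np.zeros((n, n))
obj[20:44, 28:36] = 1.0
probe = rng.random((n, n))

# Raster scan with a shaped probe, modelled here as a cyclic convolution.
bucket_image = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(probe)))

# Recover a sharp image by regularised (Wiener-style) deconvolution.
P = np.fft.fft2(probe)
eps = 1e-3 * np.max(np.abs(P)) ** 2
recon = np.real(np.fft.ifft2(np.fft.fft2(bucket_image) * np.conj(P)
                             / (np.abs(P) ** 2 + eps)))

print("peak reconstruction error:", np.max(np.abs(recon - obj)))
```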
15. Separating edges from microstructure in X-ray dark-field imaging: Evolving and devolving perspectives via the X-ray Fokker-Planck equation
- Author
-
Alloo, Samantha J., Paganin, David M., Croughan, Michelle K., Ahlers, Jannis N., Pavlov, Konstantin M., and Morgan, Kaye S.
- Subjects
Physics - Optics ,Physics - Applied Physics ,Physics - Medical Physics - Abstract
A key contribution to X-ray dark-field (XDF) is X-ray diffusion by sample structures smaller than the imaging system's spatial resolution. However, some XDF techniques report that resolvable sample edges also generate XDF. Speckle-based X-ray imaging (SBXI) extracts XDF by analyzing sample-imposed changes to a reference speckle pattern's visibility. We present an algorithm for SBXI (a variant of our Multimodal Intrinsic Speckle-Tracking (MIST) algorithm) capable of separating these two distinct XDF contrast mechanisms. The algorithm uses the 'devolving' X-ray Imaging Fokker-Planck equation as its forward model and then solves the associated multimodal inverse problem, to retrieve sample attenuation, phase, and XDF. Previous MIST variants were based on the evolving Fokker-Planck equation, which considers how a reference-speckle image is modified by introducing a sample. The devolving perspective instead considers how the image collected in the presence of the sample and speckle membrane optically flows in reverse, to generate the reference-speckle image when the sample is removed from the system. We compare single- and multiple-exposure multimodal retrieval algorithms from the two Fokker-Planck perspectives. We demonstrate that the devolving perspective can distinguish between two physically different XDF contrast mechanisms; unresolved microstructure- and sharp-edge-induced XDF. This was verified by applying the retrieval algorithms to two experimental data sets. We anticipate that this work will be useful in: Yielding a pair of complementary XDF images that separate sharp-edge diffuse scatter from unresolved microstructure diffuse scatter. XDF computed tomography, where the strong edge XDF can lead to tainting streaking artefacts. Sample preparation, as samples will not need to be embedded since the strong XDF edge signal seen between the sample and air can be separated out., Comment: 21 pages and 7 figures
- Published
- 2024
16. A Trust-Region Method for Graphical Stein Variational Inference
- Author
-
Pavlovic, Liam and Rosen, David M.
- Subjects
Computer Science - Machine Learning ,Statistics - Machine Learning - Abstract
Stein variational inference (SVI) is a sample-based approximate Bayesian inference technique that generates a sample set by jointly optimizing the samples' locations to minimize an information-theoretic measure of discrepancy with the target probability distribution. SVI thus provides a fast and significantly more sample-efficient approach to Bayesian inference than traditional (random-sampling-based) alternatives. However, the optimization techniques employed in existing SVI methods struggle to address problems in which the target distribution is high-dimensional, poorly-conditioned, or non-convex, which severely limits the range of their practical applicability. In this paper, we propose a novel trust-region optimization approach for SVI that successfully addresses each of these challenges. Our method builds upon prior work in SVI by leveraging conditional independences in the target distribution (to achieve high-dimensional scaling) and second-order information (to address poor conditioning), while additionally providing an effective adaptive step control procedure, which is essential for ensuring convergence on challenging non-convex optimization problems. Experimental results show our method achieves superior numerical performance, both in convergence rate and sample accuracy, and scales better in high-dimensional distributions, than previous SVI techniques.
- Published
- 2024
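As background for the trust-region method in the abstract above, here is a minimal sketch of a plain Stein variational gradient descent (SVGD) update, the sample-based scheme that SVI methods refine, for a 2-D Gaussian target. The target, kernel bandwidth, particle count, and step size are arbitrary assumptions; the paper's contributions (conditional-independence structure, second-order information, adaptive step control) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Target: a 2-D Gaussian with a known precision matrix (illustrative assumption).
prec = np.array([[2.0, 0.6], [0.6, 1.0]])

def grad_log_p(x):
    return -(x @ prec)                            # rows of x are particles

def svgd_direction(x, h=1.0):
    """One SVGD update direction with an RBF kernel of fixed bandwidth h."""
    diff = x[:, None, :] - x[None, :, :]          # diff[i, j] = x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))
    drive = k @ grad_log_p(x)                     # sum_j k(x_j, x_i) grad log p(x_j)
    repulse = (diff * k[..., None]).sum(axis=1) / h ** 2  # sum_j grad_{x_j} k(x_j, x_i)
    return (drive + repulse) / x.shape[0]

x = rng.normal(size=(200, 2)) * 3.0               # initial particles
for _ in range(500):
    x = x + 0.1 * svgd_direction(x)

print("sample covariance:\n", np.cov(x.T))
print("target covariance:\n", np.linalg.inv(prec))
```

The fixed step size and bandwidth used here are exactly the kind of tuning the paper's adaptive step control is designed to avoid.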
17. Estimating the Causal Effects of T Cell Receptors
- Author
-
Weinstein, Eli N., Wood, Elizabeth B., and Blei, David M.
- Subjects
Statistics - Machine Learning ,Computer Science - Machine Learning ,Quantitative Biology - Genomics - Abstract
A central question in human immunology is how a patient's repertoire of T cells impacts disease. Here, we introduce a method to infer the causal effects of T cell receptor (TCR) sequences on patient outcomes using observational TCR repertoire sequencing data and clinical outcomes data. Our approach corrects for unobserved confounders, such as a patient's environment and life history, by using the patient's immature, pre-selection TCR repertoire. The pre-selection repertoire can be estimated from nonproductive TCR data, which is widely available. It is generated by a randomized mutational process, V(D)J recombination, which provides a natural experiment. We show formally how to use the pre-selection repertoire to draw causal inferences, and develop a scalable neural-network estimator for our identification formula. Our method produces an estimate of the effect of interventions that add a specific TCR sequence to patient repertoires. As a demonstration, we use it to analyze the effects of TCRs on COVID-19 severity, uncovering potentially therapeutic TCRs that are (1) observed in patients, (2) bind SARS-CoV-2 antigens in vitro and (3) have strong positive effects on clinical outcomes.
- Published
- 2024
18. Hypothesis Testing the Circuit Hypothesis in LLMs
- Author
-
Shi, Claudia, Beltran-Velez, Nicolas, Nazaret, Achille, Zheng, Carolina, Garriga-Alonso, Adrià, Jesson, Andrew, Makar, Maggie, and Blei, David M.
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Machine Learning ,Statistics - Machine Learning - Abstract
Large language models (LLMs) demonstrate surprising capabilities, but we do not understand how they are implemented. One hypothesis suggests that these capabilities are primarily executed by small subnetworks within the LLM, known as circuits. But how can we evaluate this hypothesis? In this paper, we formalize a set of criteria that a circuit is hypothesized to meet and develop a suite of hypothesis tests to evaluate how well circuits satisfy them. The criteria focus on the extent to which the LLM's behavior is preserved, the degree of localization of this behavior, and whether the circuit is minimal. We apply these tests to six circuits described in the research literature. We find that synthetic circuits -- circuits that are hard-coded in the model -- align with the idealized properties. Circuits discovered in Transformer models satisfy the criteria to varying degrees. To facilitate future empirical studies of circuits, we created the circuitry package, a wrapper around the TransformerLens library, which abstracts away lower-level manipulations of hooks and activations. The software is available at https://github.com/blei-lab/circuitry., Comment: Code available here: https://github.com/blei-lab/circuitry
- Published
- 2024
19. Autonomous Stabilization of Floquet States Using Static Dissipation
- Author
-
Ritter, Martin, Long, David M., Yue, Qianao, Chandran, Anushya, and Kollár, Alicia J.
- Subjects
Quantum Physics ,Condensed Matter - Mesoscale and Nanoscale Physics ,Condensed Matter - Quantum Gases - Abstract
Floquet engineering, in which the properties of a quantum system are modified through the application of strong periodic drives, is an indispensable tool in atomic and condensed matter systems. However, it is inevitably limited by intrinsic heating processes. We describe a simple autonomous scheme, which exploits a static coupling between the driven system and a lossy auxiliary, to cool large classes of Floquet systems into desired states. We present experimental and theoretical evidence for the stabilization of a chosen quasienergy state in a strongly modulated transmon qubit coupled to an auxiliary microwave cavity with fixed frequency and photon loss. The scheme naturally extends to Floquet systems with multiple degrees of freedom. As an example, we demonstrate the stabilization of topological photon pumping in a driven cavity-QED system numerically. The coupling to the auxiliary cavity increases the average photon current and the fidelity of non-classical states, such as high photon number Fock states, that can be prepared in the system cavity.
- Published
- 2024
20. Effect modification and non-collapsibility leads to conflicting treatment decisions: a review of marginal and conditional estimands and recommendations for decision-making
- Author
-
Phillippo, David M., Remiro-Azócar, Antonio, Heath, Anna, Baio, Gianluca, Dias, Sofia, Ades, A. E., and Welton, Nicky J.
- Subjects
Statistics - Methodology - Abstract
Effect modification occurs when a covariate alters the relative effectiveness of treatment compared to control. It is widely understood that, when effect modification is present, treatment recommendations may vary by population and by subgroups within the population. Population-adjustment methods are increasingly used to adjust for differences in effect modifiers between study populations and to produce population-adjusted estimates in a relevant target population for decision-making. It is also widely understood that marginal and conditional estimands for non-collapsible effect measures, such as odds ratios or hazard ratios, do not in general coincide even without effect modification. However, the consequences of both non-collapsibility and effect modification together are little-discussed in the literature. In this paper, we set out the definitions of conditional and marginal estimands, illustrate their properties when effect modification is present, and discuss the implications for decision-making. In particular, we show that effect modification can result in conflicting treatment rankings between conditional and marginal estimates. This is because conditional and marginal estimands correspond to different decision questions that are no longer aligned when effect modification is present. For time-to-event outcomes, the presence of covariates implies that marginal hazard ratios are time-varying, and effect modification can cause marginal hazard curves to cross. We conclude with practical recommendations for decision-making in the presence of effect modification, based on pragmatic comparisons of both conditional and marginal estimates in the decision target population. Currently, multilevel network meta-regression is the only population-adjustment method capable of producing both conditional and marginal estimates, in any decision target population., Comment: 30 pages, 8 figures
- Published
- 2024
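The non-collapsibility point in the abstract above can be made concrete with a small numerical example (a standard illustration in this literature, not taken from the paper): two equally sized strata share the same conditional odds ratio, yet the marginal odds ratio formed from the pooled risks is smaller, even with no effect modification and no confounding.

```python
# Outcome risks under treatment / control in two equally sized strata
# (illustrative numbers; any similar choice shows the same phenomenon).
p_treat = {"stratum A": 0.8, "stratum B": 0.5}
p_ctrl  = {"stratum A": 0.5, "stratum B": 0.2}

odds = lambda p: p / (1 - p)

for s in p_treat:
    print(s, "conditional OR =", odds(p_treat[s]) / odds(p_ctrl[s]))  # 4.0 in both

# Marginal risks: average over the two strata (equal weights), then form the OR.
mt = sum(p_treat.values()) / 2   # 0.65
mc = sum(p_ctrl.values()) / 2    # 0.35
print("marginal OR =", round(odds(mt) / odds(mc), 2))                 # about 3.45, not 4
```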
21. Temperature-Stable Tunneling Current in Serial Double Quantum Dots: Insights from Nonequilibrium Green Functions and Pauli Spin Blockade
- Author
-
Kuo, David M T
- Subjects
Condensed Matter - Mesoscale and Nanoscale Physics - Abstract
We theoretically investigate charge transport through serial double quantum dots (SDQDs) with strong electron correlations using nonequilibrium Green's function techniques. In the linear response regime, we compute the charge stability diagram and analyze the Coulomb oscillatory tunneling current, revealing both thermal and nonthermal broadening effects on the current spectra in relation to two gate voltages. In the nonlinear response regime, we focus on tunneling currents in SDQDs under the Pauli spin blockade (PSB) scenario. We find that current rectification with negative differential conductance is significantly degraded as temperature increases, making it challenging to distinguish between the inter-site spin triplet and singlet states. Notably, we observe a robust reversed tunneling current that remains stable against temperature variations, provided the resonant channel in the PSB scenario is coupled to the states of the right (left) electrode, which is fully occupied by particles (depleted). This characteristic provides valuable insights for designing transistors capable of operating over a wide temperature range., Comment: 10 pages and 5 Figures
- Published
- 2024
22. Inductive Conformal Prediction under Data Scarcity: Exploring the Impacts of Nonconformity Measures
- Author
-
Kato, Yuko, Tax, David M. J., and Loog, Marco
- Subjects
Computer Science - Machine Learning - Abstract
Conformal prediction, which makes no distributional assumptions about the data, has emerged as a powerful and reliable approach to uncertainty quantification in practical applications. The nonconformity measure used in conformal prediction quantifies how a test sample differs from the training data and the effectiveness of a conformal prediction interval may depend heavily on the precise measure employed. The impact of this choice has, however, not been widely explored, especially when dealing with limited amounts of data. The primary objective of this study is to evaluate the performance of various nonconformity measures (absolute error-based, normalized absolute error-based, and quantile-based measures) in terms of validity and efficiency when used in inductive conformal prediction. The focus is on small datasets, which is still a common setting in many real-world applications. Using synthetic and real-world data, we assess how different characteristics -- such as dataset size, noise, and dimensionality -- can affect the efficiency of conformal prediction intervals. Our results show that although there are differences, no single nonconformity measure consistently outperforms the others, as the effectiveness of each nonconformity measure is heavily influenced by the specific nature of the data. Additionally, we found that increasing dataset size does not always improve efficiency, suggesting the importance of fine-tuning models and, again, the need to carefully select the nonconformity measure for different applications.
- Published
- 2024
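A minimal sketch of inductive (split) conformal prediction with the absolute-error nonconformity measure discussed in the abstract above; the data, underlying point predictor, and miscoverage level are arbitrary assumptions, and the normalized and quantile-based variants compared in the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 1-D regression data (assumption), split into training / calibration / test.
n = 300
x = rng.uniform(-3, 3, size=n)
y = np.sin(x) + 0.2 * rng.normal(size=n)
x_tr, y_tr = x[:150], y[:150]
x_cal, y_cal = x[150:250], y[150:250]
x_te, y_te = x[250:], y[250:]

# Underlying point predictor: a simple polynomial fit (assumption).
coef = np.polyfit(x_tr, y_tr, deg=5)
predict = lambda z: np.polyval(coef, z)

# Absolute-error nonconformity scores on the calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile for miscoverage alpha, giving fixed-width prediction intervals.
alpha = 0.1
k = int(np.ceil((1 - alpha) * (len(scores) + 1)))
q = np.sort(scores)[min(k, len(scores)) - 1]

covered = np.abs(y_te - predict(x_te)) <= q
print(f"interval half-width: {q:.3f}, empirical coverage: {covered.mean():.2f}")
```

With a well-calibrated score, empirical coverage on the test split should be close to 1 - alpha regardless of how good the point predictor is; what the choice of nonconformity measure changes is the width (efficiency) of the intervals.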
23. Observation of disorder-free localization and efficient disorder averaging on a quantum processor
- Author
-
Gyawali, Gaurav, Cochran, Tyler, Lensky, Yuri, Rosenberg, Eliott, Karamlou, Amir H., Kechedzhi, Kostyantyn, Berndtsson, Julia, Westerhout, Tom, Asfaw, Abraham, Abanin, Dmitry, Acharya, Rajeev, Beni, Laleh Aghababaie, Andersen, Trond I., Ansmann, Markus, Arute, Frank, Arya, Kunal, Astrakhantsev, Nikita, Atalaya, Juan, Babbush, Ryan, Ballard, Brian, Bardin, Joseph C., Bengtsson, Andreas, Bilmes, Alexander, Bortoli, Gina, Bourassa, Alexandre, Bovaird, Jenna, Brill, Leon, Broughton, Michael, Browne, David A., Buchea, Brett, Buckley, Bob B., Buell, David A., Burger, Tim, Burkett, Brian, Bushnell, Nicholas, Cabrera, Anthony, Campero, Juan, Chang, Hung-Shen, Chen, Zijun, Chiaro, Ben, Claes, Jahan, Cleland, Agnetta Y., Cogan, Josh, Collins, Roberto, Conner, Paul, Courtney, William, Crook, Alexander L., Das, Sayan, Debroy, Dripto M., De Lorenzo, Laura, Barba, Alexander Del Toro, Demura, Sean, Di Paolo, Agustin, Donohoe, Paul, Drozdov, Ilya, Dunsworth, Andrew, Earle, Clint, Eickbusch, Alec, Elbag, Aviv Moshe, Elzouka, Mahmoud, Erickson, Catherine, Faoro, Lara, Fatemi, Reza, Ferreira, Vinicius S., Burgos, Leslie Flores, Forati, Ebrahim, Fowler, Austin G., Foxen, Brooks, Ganjam, Suhas, Gasca, Robert, Giang, William, Gidney, Craig, Gilboa, Dar, Gosula, Raja, Dau, Alejandro Grajales, Graumann, Dietrich, Greene, Alex, Gross, Jonathan A., Habegger, Steve, Hamilton, Michael C., Hansen, Monica, Harrigan, Matthew P., Harrington, Sean D., Heslin, Stephen, Heu, Paula, Hill, Gordon, Hilton, Jeremy, Hoffmann, Markus R., Huang, Hsin-Yuan, Huff, Ashley, Huggins, William J., Ioffe, Lev B., Isakov, Sergei V., Jeffrey, Evan, Jiang, Zhang, Jones, Cody, Jordan, Stephen, Joshi, Chaitali, Juhas, Pavol, Kafri, Dvir, Kang, Hui, Khaire, Trupti, Khattar, Tanuj, Khezri, Mostafa, Kieferová, Mária, Kim, Seon, Klimov, Paul V., Klots, Andrey R., Kobrin, Bryce, Korotkov, Alexander N., Kostritsa, Fedor, Kreikebaum, John Mark, Kurilovich, Vladislav D., Landhuis, David, Lange-Dei, Tiano, Langley, Brandon W., Laptev, Pavel, Lau, Kim-Ming, Guevel, Loïck Le, Ledford, Justin, Lee, Joonho, Lee, Kenny, Lester, Brian J., Li, Wing Yan, Lill, Alexander T., Liu, Wayne, Livingston, William P., Locharla, Aditya, Lundahl, Daniel, Lunt, Aaron, Madhuk, Sid, Maloney, Ashley, Mandrà, Salvatore, Martin, Leigh S., Martin, Steven, Martin, Orion, Maxfield, Cameron, McClean, Jarrod R., McEwen, Matt, Meeks, Seneca, Megrant, Anthony, Mi, Xiao, Miao, Kevin C., Mieszala, Amanda, Molina, Sebastian, Montazeri, Shirin, Morvan, Alexis, Movassagh, Ramis, Neill, Charles, Nersisyan, Ani, Newman, Michael, Nguyen, Anthony, Nguyen, Murray, Ni, Chia-Hung, Niu, Murphy Yuezhen, Oliver, William D., Ottosson, Kristoffer, Pizzuto, Alex, Potter, Rebecca, Pritchard, Orion, Pryadko, Leonid P., Quintana, Chris, Reagor, Matthew J., Rhodes, David M., Roberts, Gabrielle, Rocque, Charles, Rubin, Nicholas C., Saei, Negar, Sankaragomathi, Kannan, Satzinger, Kevin J., Schurkus, Henry F., Schuster, Christopher, Shearn, Michael J., Shorter, Aaron, Shutty, Noah, Shvarts, Vladimir, Sivak, Volodymyr, Skruzny, Jindra, Small, Spencer, Smith, W. Clarke, Springer, Sofia, Sterling, George, Suchard, Jordan, Szalay, Marco, Szasz, Aaron, Sztein, Alex, Thor, Douglas, Torunbalci, M. Mert, Vaishnav, Abeer, Vdovichev, Sergey, Vidal, Guifré, Heidweiller, Catherine Vollgraff, Waltman, Steven, Wang, Shannon X., White, Theodore, Wong, Kristi, Woo, Bryan W. K., Xing, Cheng, Yao, Z. 
Jamie, Yeh, Ping, Ying, Bicheng, Yoo, Juhwan, Yosri, Noureldin, Young, Grayson, Zalcman, Adam, Zhang, Yaxing, Zhu, Ningfeng, Zobrist, Nicholas, Boixo, Sergio, Kelly, Julian, Lucero, Erik, Chen, Yu, Smelyanskiy, Vadim, Neven, Hartmut, Kovrizhin, Dmitry, Knolle, Johannes, Halimeh, Jad C., Aleiner, Igor, Moessner, Roderich, and Roushan, Pedram
- Subjects
Quantum Physics ,Condensed Matter - Disordered Systems and Neural Networks ,Condensed Matter - Strongly Correlated Electrons ,High Energy Physics - Lattice - Abstract
One of the most challenging problems in the computational study of localization in quantum many-body systems is to capture the effects of rare events, which requires sampling over exponentially many disorder realizations. We implement an efficient procedure on a quantum processor, leveraging quantum parallelism, to efficiently sample over all disorder realizations. We observe localization without disorder in quantum many-body dynamics in one and two dimensions: perturbations do not diffuse even though both the generator of evolution and the initial states are fully translationally invariant. The disorder strength as well as its density can be readily tuned using the initial state. Furthermore, we demonstrate the versatility of our platform by measuring Renyi entropies. Our method could also be extended to higher moments of the physical observables and disorder learning.
- Published
- 2024
24. The NuSTAR Local AGN $N_{\rm H}$ Distribution Survey (NuLANDS) I: Towards a Truly Representative Column Density Distribution in the Local Universe
- Author
-
Boorman, Peter G., Gandhi, Poshak, Buchner, Johannes, Stern, Daniel, Ricci, Claudio, Baloković, Mislav, Asmus, Daniel, Harrison, Fiona A., Svoboda, Jiří, Greenwell, Claire, Koss, Michael, Alexander, David M., Annuar, Adlyka, Bauer, Franz, Brandt, William N., Brightman, Murray, Panessa, Francesca, Chen, Chien-Ting J., Farrah, Duncan, Forster, Karl, Grefenstette, Brian, Hönig, Sebastian F., Hill, Adam B., Kammoun, Elias, Lansbury, George, Lanz, Lauranne, LaMassa, Stephanie, Madsen, Kristin, Marchesi, Stefano, Middleton, Matthew, Mingo, Beatriz, Parker, Michael L., Treister, Ezequiel, Ueda, Yoshihiro, Urry, C. Megan, and Zappacosta, Luca
- Subjects
Astrophysics - Astrophysics of Galaxies ,Astrophysics - High Energy Astrophysical Phenomena - Abstract
Hard X-ray-selected samples of Active Galactic Nuclei (AGN) provide one of the cleanest views of supermassive black hole accretion, but are biased against objects obscured by Compton-thick gas column densities of $N_{\rm H}$ $>$ 10$^{24}$ cm$^{-2}$. To tackle this issue, we present the NuSTAR Local AGN $N_{\rm H}$ Distribution Survey (NuLANDS)$-$a legacy sample of 122 nearby ($z$ $<$ 0.044) AGN primarily selected to have warm infrared colors from IRAS between 25$-$60 $\mu$m. We show that optically classified type 1 and 2 AGN in NuLANDS are indistinguishable in terms of optical [OIII] line flux and mid-to-far infrared AGN continuum bolometric indicators, as expected from an isotropically selected AGN sample, while type 2 AGN are deficient in terms of their observed hard X-ray flux. By testing many X-ray spectroscopic models, we show the measured line-of-sight column density varies on average by $\sim$ 1.4 orders of magnitude depending on the obscurer geometry. To circumvent such issues we propagate the uncertainties per source into the parent column density distribution, finding a directly measured Compton-thick fraction of 35 $\pm$ 9%. By construction, our sample will miss sources affected by severe narrow-line reddening, and thus segregates sources dominated by small-scale nuclear obscuration from large-scale host-galaxy obscuration. This bias implies an even higher intrinsic obscured AGN fraction may be possible, although tests for additional biases arising from our infrared selection find no strong effects on the measured column-density distribution. NuLANDS thus holds potential as an optimized sample for future follow-up with current and next-generation instruments aiming to study the local AGN population in an isotropic manner., Comment: Accepted for publication in ApJ. 50 pages (78 including appendix and bibliography), 21 figures
- Published
- 2024
25. A solar rotation signature in cosmic dust: frequency analysis of dust particle impacts on the Wind spacecraft
- Author
-
Baalmann, Lennart R., Hunziker, Silvan, Péronne, Arthur, Kirchner, James W., Glassmeier, Karl-Heinz, Malaspina, David M., Wilson III, Lynn B., Strähl, Christoph, Chadda, Shivank, and Sterken, Veerle J.
- Subjects
Astrophysics - Solar and Stellar Astrophysics ,Astrophysics - Earth and Planetary Astrophysics ,Physics - Space Physics - Abstract
Dust particle impacts on the Wind spacecraft were detected with its plasma wave instrument Wind/WAVES. Frequency analysis on this dust impact time series revealed spectral peaks indicative of a solar rotation signature. We investigated whether this solar rotation signature is embedded in the interplanetary or interstellar dust (ISD) and whether it is caused by co-rotating interaction regions (CIRs), by the sector structure of the interplanetary magnetic field (IMF), or by external effects. We performed frequency analysis on subsets of the data to investigate the origin of these spectral peaks, comparing segments of Wind's orbit when the spacecraft moved against or with the ISD inflow direction and comparing the time periods of the ISD focusing and defocusing phases of the solar magnetic cycle. A superposed epoch analysis of the number of dust impacts during CIRs was used to investigate the systematic effect of CIRs. Case studies of time periods with frequent or infrequent occurrences of CIRs were compared to synthetic data of dust impacts affected by CIRs. We performed similar case studies for time periods with a stable or chaotic IMF sector structure. The superposed epoch analysis was repeated for a time series of the spacecraft floating potential. Spectral peaks were found at the solar rotation period of ~27d and its harmonics at 13.5d and 9d. This solar rotation signature may affect both interplanetary and ISD. The appearance of this signature correlates with the occurrence of CIRs but not with the stability of the IMF sector structure. The CIRs cause, on average, a reduction in the number of dust impact detections. Periodic changes of the spacecraft's floating potential were found to partially counteract this reduction by enhancing the instrument's sensitivity to dust impacts; these changes of the floating potential are thus unlikely to be the cause of the solar rotation signature.
- Published
- 2024
26. An Overview of the Burer-Monteiro Method for Certifiable Robot Perception
- Author
-
Papalia, Alan, Tian, Yulun, Rosen, David M., How, Jonathan P., and Leonard, John J.
- Subjects
Computer Science - Robotics ,Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning ,49, 68 ,I.4.0 ,I.5.0 ,J.2 - Abstract
This paper presents an overview of the Burer-Monteiro method (BM), a technique that has been applied to solve robot perception problems to certifiable optimality in real-time. BM is often used to solve semidefinite programming relaxations, which can be used to perform global optimization for non-convex perception problems. Specifically, BM leverages the low-rank structure of typical semidefinite programs to dramatically reduce the computational cost of performing optimization. This paper discusses BM in certifiable perception, with three main objectives: (i) to consolidate information from the literature into a unified presentation, (ii) to elucidate the role of the linear independence constraint qualification (LICQ), a concept not yet well-covered in certifiable perception literature, and (iii) to share practical considerations that are discussed among practitioners but not thoroughly covered in the literature. Our general aim is to offer a practical primer for applying BM towards certifiable perception., Comment: Accepted to 2024 Robotics: Science and Systems (RSS) Safe Autonomy Workshop
- Published
- 2024
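The low-rank idea described in the abstract above can be sketched for the simplest unit-diagonal SDP, min <C, X> s.t. diag(X) = 1, X PSD: instead of optimizing over the full n x n matrix X, factor X = Y Y^T with a thin Y whose rows are kept on the unit sphere, and run projected gradient descent on Y. The cost matrix, rank, step size, and iteration count below are arbitrary assumptions; certifiable-perception problems use structured costs and more careful (Riemannian) solvers.

```python
import numpy as np

rng = np.random.default_rng(6)

# Symmetric cost matrix for the SDP  min <C, X>  s.t.  diag(X) = 1,  X PSD.
n, r = 50, 4                      # problem size and factorization rank (assumptions)
A = rng.normal(size=(n, n))
C = (A + A.T) / 2

# Burer-Monteiro: parametrize X = Y Y^T with unit-norm rows of Y,
# so diag(X) = 1 holds by construction; the objective becomes tr(Y^T C Y).
Y = rng.normal(size=(n, r))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

step = 1e-2
for _ in range(2000):
    grad = 2 * C @ Y              # Euclidean gradient of tr(Y^T C Y)
    Y = Y - step * grad
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # project rows back to the sphere

X = Y @ Y.T
print("objective <C, X> =", np.sum(C * X))
print("max diagonal violation:", np.max(np.abs(np.diag(X) - 1)))
```

The appeal of the method is that when the rank r is chosen large enough, benign second-order critical points of this small non-convex problem correspond to solutions of the original SDP, which is what makes certification possible.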
27. Melting of atomic hydrogen and deuterium with path-integral Monte Carlo
- Author
-
Ly, Kevin K. and Ceperley, David M.
- Subjects
Condensed Matter - Materials Science - Abstract
We calculate the melting line of atomic hydrogen and deuterium up to 900 GPa with path-integral Monte Carlo using a machine-learned interatomic potential. We improve upon previous simulations of melting by treating the electrons with reptation quantum Monte Carlo, and by performing solid and liquid simulations using isothermal-isobaric path-integral Monte Carlo. The resulting melting line for atomic hydrogen is higher than previous estimates. There is a small but resolvable decrease in the melting temperature as pressure is increased, which can be attributed to quantum effects.
- Published
- 2024
28. Retrospective Comparative Analysis of Prostate Cancer In-Basket Messages: Responses from Closed-Domain LLM vs. Clinical Teams
- Author
-
Hao, Yuexing, Holmes, Jason M., Hobson, Jared, Bennett, Alexandra, Ebner, Daniel K., Routman, David M., Shiraishi, Satomi, Patel, Samir H., Yu, Nathan Y., Hallemeier, Chris L., Ball, Brooke E., Waddle, Mark R., and Liu, Wei
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Computers and Society - Abstract
In-basket message interactions play a crucial role in physician-patient communication, occurring during all phases (pre-, during, and post) of a patient's care journey. However, responding to these patients' inquiries has become a significant burden on healthcare workflows, consuming considerable time for clinical care teams. To address this, we introduce RadOnc-GPT, a specialized Large Language Model (LLM) powered by GPT-4 that has been designed with a focus on radiotherapeutic treatment of prostate cancer with advanced prompt engineering, and specifically designed to assist in generating responses. We integrated RadOnc-GPT with patient electronic health records (EHR) from both the hospital-wide EHR database and an internal, radiation-oncology-specific database. RadOnc-GPT was evaluated on 158 previously recorded in-basket message interactions. Quantitative natural language processing (NLP) analysis and two grading studies with clinicians and nurses were used to assess RadOnc-GPT's responses. Our findings indicate that RadOnc-GPT slightly outperformed the clinical care team in "Clarity" and "Empathy," while achieving comparable scores in "Completeness" and "Correctness." RadOnc-GPT is estimated to save 5.2 minutes per message for nurses and 2.4 minutes for clinicians, from reading the inquiry to sending the response. Employing RadOnc-GPT for in-basket message draft generation has the potential to alleviate the workload of clinical care teams and reduce healthcare costs by producing high-quality, timely responses.
- Published
- 2024
29. Spatially Resolved Plasma Composition Evolution in a Solar Flare -- The Effect of Reconnection Outflow
- Author
-
To, Andy S. H., Brooks, David H., Imada, Shinsuke, French, Ryan J., van Driel-Gesztelyi, Lidia, Baker, Deborah, Long, David M., Ashfield IV, William, and Hayes, Laura A.
- Subjects
Astrophysics - Solar and Stellar Astrophysics - Abstract
Solar flares exhibit complex variations in elemental abundances compared to photospheric values. We examine the spatial and temporal evolution of coronal abundances in the X8.2 flare on 2017 September 10, aiming to interpret the often observed high first ionization potential (FIP) bias at loop tops and provide insights into differences between spatially resolved and Sun-as-a-star flare composition measurements. We analyze 12 Hinode/EIS raster scans spanning 3.5 hours, employing Ca XIV 193.87 A/Ar XIV 194.40 A and Fe XVI 262.98 A/S XIII 256.69 A composition diagnostics to derive FIP bias values. Both diagnostics consistently show that flare loop tops maintain high FIP bias values of >2-6, with peak phase values exceeding 4, over the extended duration, while footpoints exhibit photospheric FIP bias of ~1. We propose that this variation arises from a combination of two distinct processes: high FIP bias plasma downflows from the plasma sheet confined to loop tops, and chromospheric evaporation filling the loop footpoints with low FIP bias plasma. Mixing between these two sources produces the observed gradient. Our observations show that the localized high FIP bias signature at loop tops is likely diluted by the bright footpoint emission in spatially averaged measurements. The spatially resolved spectroscopic observations enabled by EIS prove critical for revealing this complex abundance variation in loops. Furthermore, our observations show clear evidence that the origin of hot flare plasma in flaring loops consists of a combination of both directly heated plasma in the corona and from ablated chromospheric material; and our results provide valuable insights into the formation and composition of loop top brightenings, also known as EUV knots, which are a common feature at the tops of flare loops., Comment: 13 pages, 7 figures, 1 table. Accepted in A&A. Comments and criticisms are welcomed!
- Published
- 2024
30. Visualizing Dynamics of Charges and Strings in (2+1)D Lattice Gauge Theories
- Author
-
Cochran, Tyler A., Jobst, Bernhard, Rosenberg, Eliott, Lensky, Yuri D., Gyawali, Gaurav, Eassa, Norhan, Will, Melissa, Abanin, Dmitry, Acharya, Rajeev, Beni, Laleh Aghababaie, Andersen, Trond I., Ansmann, Markus, Arute, Frank, Arya, Kunal, Asfaw, Abraham, Atalaya, Juan, Babbush, Ryan, Ballard, Brian, Bardin, Joseph C., Bengtsson, Andreas, Bilmes, Alexander, Bourassa, Alexandre, Bovaird, Jenna, Broughton, Michael, Browne, David A., Buchea, Brett, Buckley, Bob B., Burger, Tim, Burkett, Brian, Bushnell, Nicholas, Cabrera, Anthony, Campero, Juan, Chang, Hung-Shen, Chen, Zijun, Chiaro, Ben, Claes, Jahan, Cleland, Agnetta Y., Cogan, Josh, Collins, Roberto, Conner, Paul, Courtney, William, Crook, Alexander L., Curtin, Ben, Das, Sayan, Demura, Sean, De Lorenzo, Laura, Di Paolo, Agustin, Donohoe, Paul, Drozdov, Ilya, Dunsworth, Andrew, Eickbusch, Alec, Elbag, Aviv Moshe, Elzouka, Mahmoud, Erickson, Catherine, Ferreira, Vinicius S., Burgos, Leslie Flores, Forati, Ebrahim, Fowler, Austin G., Foxen, Brooks, Ganjam, Suhas, Gasca, Robert, Genois, Élie, Giang, William, Gilboa, Dar, Gosula, Raja, Dau, Alejandro Grajales, Graumann, Dietrich, Greene, Alex, Gross, Jonathan A., Habegger, Steve, Hansen, Monica, Harrigan, Matthew P., Harrington, Sean D., Heu, Paula, Higgott, Oscar, Hilton, Jeremy, Huang, Hsin-Yuan, Huff, Ashley, Huggins, William J., Jeffrey, Evan, Jiang, Zhang, Jones, Cody, Joshi, Chaitali, Juhas, Pavol, Kafri, Dvir, Kang, Hui, Karamlou, Amir H., Kechedzhi, Kostyantyn, Khaire, Trupti, Khattar, Tanuj, Khezri, Mostafa, Kim, Seon, Klimov, Paul V., Kobrin, Bryce, Korotkov, Alexander N., Kostritsa, Fedor, Kreikebaum, John Mark, Kurilovich, Vladislav D., Landhuis, David, Lange-Dei, Tiano, Langley, Brandon W., Lau, Kim-Ming, Ledford, Justin, Lee, Kenny, Lester, Brian J., Guevel, Loïck Le, Li, Wing Yan, Lill, Alexander T., Livingston, William P., Locharla, Aditya, Lundahl, Daniel, Lunt, Aaron, Madhuk, Sid, Maloney, Ashley, Mandrà, Salvatore, Martin, Leigh S., Martin, Orion, Maxfield, Cameron, McClean, Jarrod R., McEwen, Matt, Meeks, Seneca, Megrant, Anthony, Miao, Kevin C., Molavi, Reza, Molina, Sebastian, Montazeri, Shirin, Movassagh, Ramis, Neill, Charles, Newman, Michael, Nguyen, Anthony, Nguyen, Murray, Ni, Chia-Hung, Niu, Murphy Yuezhen, Oliver, William D., Ottosson, Kristoffer, Pizzuto, Alex, Potter, Rebecca, Pritchard, Orion, Quintana, Chris, Ramachandran, Ganesh, Reagor, Matthew J., Rhodes, David M., Roberts, Gabrielle, Sankaragomathi, Kannan, Satzinger, Kevin J., Schurkus, Henry F., Shearn, Michael J., Shorter, Aaron, Shutty, Noah, Shvarts, Vladimir, Sivak, Volodymyr, Small, Spencer, Smith, W. Clarke, Springer, Sofia, Sterling, George, Suchard, Jordan, Szasz, Aaron, Sztein, Alex, Thor, Douglas, Torunbalci, M. Mert, Vaishnav, Abeer, Vargas, Justin, Vdovichev, Sergey, Vidal, Guifre, Heidweiller, Catherine Vollgraff, Waltman, Steven, Wang, Shannon X., Ware, Brayden, White, Theodore, Wong, Kristi, Woo, Bryan W. K., Xing, Cheng, Yao, Z. Jamie, Yeh, Ping, Ying, Bicheng, Yoo, Juhwan, Yosri, Noureldin, Young, Grayson, Zalcman, Adam, Zhang, Yaxing, Zhu, Ningfeng, Zobris, Nicholas, Boixo, Sergio, Kelly, Julian, Lucero, Erik, Chen, Yu, Smelyanskiy, Vadim, Neven, Hartmut, Gammon-Smith, Adam, Pollmann, Frank, Knap, Michael, and Roushan, Pedram
- Subjects
Quantum Physics ,Condensed Matter - Strongly Correlated Electrons ,High Energy Physics - Lattice - Abstract
Lattice gauge theories (LGTs) can be employed to understand a wide range of phenomena, from elementary particle scattering in high-energy physics to effective descriptions of many-body interactions in materials. Studying dynamical properties of emergent phases can be challenging as it requires solving many-body problems that are generally beyond perturbative limits. We investigate the dynamics of local excitations in a $\mathbb{Z}_2$ LGT using a two-dimensional lattice of superconducting qubits. We first construct a simple variational circuit which prepares low-energy states that have a large overlap with the ground state; then we create particles with local gates and simulate their quantum dynamics via a discretized time evolution. As the effective magnetic field is increased, our measurements show signatures of transitioning from deconfined to confined dynamics. For confined excitations, the magnetic field induces a tension in the string connecting them. Our method allows us to experimentally image string dynamics in a (2+1)D LGT from which we uncover two distinct regimes inside the confining phase: for weak confinement the string fluctuates strongly in the transverse direction, while for strong confinement transverse fluctuations are effectively frozen. In addition, we demonstrate a resonance condition at which dynamical string breaking is facilitated. Our LGT implementation on a quantum processor presents a novel set of techniques for investigating emergent particle and string dynamics.
- Published
- 2024
31. Geometry of the comptonization region of MAXI J1348$-$630 through type-C quasi-periodic oscillations with NICER
- Author
-
Alabarta, Kevin, Méndez, Mariano, García, Federico, Altamirano, Diego, Zhang, Yuexin, Zhang, Liang, Russell, David M., and König, Ole
- Subjects
Astrophysics - High Energy Astrophysical Phenomena - Abstract
We use the rms and lag spectra of the type-C quasi-periodic oscillation (QPO) to study the properties of the Comptonisation region (aka corona) during the low/hard and hard-intermediate states of the main outburst and reflare of MAXI J1348$-$630. We simultaneously fit the time-averaged energy spectrum of the source and the fractional rms and phase-lag spectra of the QPO with the time-dependent Comptonization model vKompth. The data can be explained by two physically connected coronae interacting with the accretion disc via a feedback loop of X-ray photons. The best-fitting model consists of a corona of $\sim$10$^3$ km located at the inner edge of the disc and a second corona of $\sim$10$^4$ km horizontally extended and covering the inner parts of the accretion disc. The properties of both coronae during the reflare are similar to those during the low/hard state of the main outburst, reinforcing the idea that both the outburst and the reflare are driven by the same physical mechanisms. We combine our results for the type-C QPO with those from previous work focused on the study of type-A and type-B QPOs with the same model to study the evolution of the geometry of the corona through the whole outburst, including the reflare of MAXI J1348$-$630. Finally, we show that the sudden increase in the phase-lag frequency spectrum and the sharp drop in the coherence function previously observed in MAXI J1348$-$630 are due to the type-C QPO during the decay of the outburst and can be explained in terms of the geometry of the coronae., Comment: 18 pages, 8 figures, 1 table. Submitted to ApJ
- Published
- 2024
32. Transforming disaster risk reduction with AI and big data: Legal and interdisciplinary perspectives
- Author
-
Chun, Kwok P, Octavianti, Thanti, Dogulu, Nilay, Tyralis, Hristos, Papacharalampous, Georgia, Rowberry, Ryan, Fan, Pingyu, Everard, Mark, Francesch-Huidobro, Maria, Migliari, Wellington, Hannah, David M., Marshall, John Travis, Calasanz, Rafael Tolosana, Staddon, Chad, Ansharyani, Ida, Dieppois, Bastien, Lewis, Todd R, Ponce, Juli, Ibrean, Silvia, Ferreira, Tiago Miguel, Peliño-Golle, Chinkie, Mu, Ye, Delgado, Manuel, Espinoza, Elizabeth Silvestre, Keulertz, Martin, Gopinath, Deepak, and Li, Cheng
- Subjects
Computer Science - Computers and Society ,Computer Science - Machine Learning - Abstract
Managing complex disaster risks requires interdisciplinary effort. Breaking down silos between law, the social sciences, and the natural sciences is critical for all processes of disaster risk reduction, and it enables adaptive systems to keep pace with the rapid evolution of AI technology, which has significantly impacted the intersection of law and natural environments. It is essential to explore how AI influences legal frameworks and environmental management, and, conversely, how legal and environmental considerations can confine AI within the socioeconomic domain. From a co-production review perspective, drawing on insights from lawyers, social scientists, and environmental scientists, principles for responsible data mining are proposed based on safety, transparency, fairness, accountability, and contestability. This discussion offers a blueprint for interdisciplinary collaboration to create adaptive law systems in which AI integrates knowledge from the environmental and social sciences. Differences in how environmental scientists and decision-makers use language, particularly around usefulness and accuracy, hamper the use of AI under legal principles for a safe, trustworthy, and contestable disaster management framework. When social networks are used with AI to mitigate disaster risks, the legal implications of disaster-management outcomes for privacy and liability must be considered. Principles of fairness and accountability emphasise environmental considerations and foster socioeconomic discussion around public engagement. AI also has an important role to play in education, bringing together the next generations of law, social sciences, and natural sciences to work on interdisciplinary solutions in harmony., Comment: 20 pages, 2 figures
- Published
- 2024
33. Unravelling compound risks of hydrological extremes in a changing climate: Typology, methods and futures
- Author
-
Chun, Kwok P, Octavianti, Thanti, Papacharalampous, Georgia, Tyralis, Hristos, Sutanto, Samuel J., Terskii, Pavel, Mazzoglio, Paola, Treppiedi, Dario, Rivera, Juan, Dogulu, Nilay, Olusola, Adeyemi, Dieppois, Bastien, Dembélé, Moctar, Moulds, Simon, Li, Cheng, Morales-Marin, Luis Alejandro, Macdonald, Neil, Amoussou, Toundji Olivier, Yonaba, Roland, Obahoundje, Salomon, Massei, Nicolas, Hannah, David M., Chidepudi, Sivarama Krishna Reddy, and Hamududu, Byman
- Subjects
Physics - Physics and Society ,Physics - Atmospheric and Oceanic Physics - Abstract
We have witnessed and experienced increasing compound extreme events resulting from simultaneous or sequential occurrence of multiple events in a changing climate. In addition to a growing demand for a clearer explanation of compound risks from a hydrological perspective, there has been a lack of attention paid to socioeconomic factors driving and impacted by these risks. Through a critical review and co-production approaches, we identified four types of compound hydrological events based on autocorrelated, multivariate, and spatiotemporal patterns. A framework to quantify compound risks based on conditional probability is offered, including an argument on the potential use of generative Artificial Intelligence (AI) algorithms for identifying emerging trends and patterns for climate change. Insights for practices are discussed, highlighting the implications for disaster risk reduction and knowledge co-production. Our argument centres on the importance of meaningfully considering the socioeconomic contexts in which compound risks may have impacts, and the need for interdisciplinary collaboration to effectively translate climate science to climate actions., Comment: 19 pages, 1 figure
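The conditional-probability framing mentioned above can be illustrated with a minimal, generic identity (this is an illustration, not the paper's specific framework): for two hazards $A$ and $B$,
$$P(A \cap B) = P(A \mid B)\,P(B),$$
so whenever the hazards are positively dependent, $P(A \mid B) > P(A)$ and the compound probability exceeds the independence estimate $P(A)\,P(B)$; treating correlated extremes as independent therefore understates compound risk.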
- Published
- 2024
34. Post-Match Error Mitigation for Deferred Acceptance
- Author
-
Gale, Abraham, Marian, Amélie, and Pennock, David M.
- Subjects
Computer Science - Computer Science and Game Theory - Abstract
Real-life applications of deferred-acceptance (DA) matching algorithms sometimes exhibit errors or changes to the matching inputs that are discovered only after the algorithm has been run and the results are announced to participants. Mitigating the effects of these errors is a different problem than the original match since the decision makers are often constrained by the offers they already sent out. We propose models for this new problem, along with mitigation strategies to go with these models. We explore three different error scenarios: resource reduction, additive errors, and subtractive errors. For each error type, we compute the expected number of students directly harmed, or helped, by the error, the number indirectly harmed or helped, and the number of students with justified envy due to the errors. Error mitigation strategies need to be selected based on the goals of the administrator, which include restoring stability, avoiding direct harm to any participant, and focusing the extra burden on the schools that made the error. We provide empirical simulations of the errors and the mitigation strategies.
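For readers unfamiliar with the underlying mechanism, below is a minimal Python sketch of student-proposing deferred acceptance, the classical Gale-Shapley procedure on which such matches are based; the paper's error models and post-match mitigation strategies are not shown, and the data structures here are illustrative only.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley), illustrative sketch.

    student_prefs: dict student -> list of schools, most preferred first
    school_prefs:  dict school  -> list of students, most preferred first
    capacities:    dict school  -> number of seats
    Returns dict school -> list of tentatively/finally held students.
    """
    # Rank tables for quick preference comparisons at each school.
    rank = {s: {stu: i for i, stu in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # index of next school to propose to
    held = {s: [] for s in school_prefs}              # students tentatively held by each school
    free = [stu for stu in student_prefs if student_prefs[stu]]

    while free:
        stu = free.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                  # student has exhausted their list
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        if stu not in rank[school]:
            free.append(stu)                          # unacceptable to this school; try the next
            continue
        held[school].append(stu)
        # Keep only the school's most-preferred students up to capacity; reject the rest.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())
    return held
```

With one seat per school this reduces to one-to-one stable matching; the error scenarios in the abstract roughly correspond to running this procedure on incorrect inputs and then having to repair the announced outcome under the constraint that offers have already been sent.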
- Published
- 2024
35. CLAIR-A: Leveraging Large Language Models to Judge Audio Captions
- Author
-
Wu, Tsung-Han, Gonzalez, Joseph E., Darrell, Trevor, and Chan, David M.
- Subjects
Computer Science - Computation and Language ,Computer Science - Sound ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
The Automated Audio Captioning (AAC) task asks models to generate natural language descriptions of an audio input. Evaluating these machine-generated audio captions is a complex task that requires considering diverse factors, among them, auditory scene understanding, sound-object inference, temporal coherence, and the environmental context of the scene. While current methods focus on specific aspects, they often fail to provide an overall score that aligns well with human judgment. In this work, we propose CLAIR-A, a simple and flexible method that leverages the zero-shot capabilities of large language models (LLMs) to evaluate candidate audio captions by directly asking LLMs for a semantic distance score. In our evaluations, CLAIR-A better predicts human judgements of quality compared to traditional metrics, with a 5.8% relative accuracy improvement compared to the domain-specific FENSE metric and up to 11% over the best general-purpose measure on the Clotho-Eval dataset. Moreover, CLAIR-A offers more transparency by allowing the language model to explain the reasoning behind its scores, with these explanations rated up to 30% better by human evaluators than those provided by baseline methods. CLAIR-A is made publicly available at https://github.com/DavidMChan/clair-a., Comment: Code is publicly available at https://github.com/DavidMChan/clair-a
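The scoring recipe is simple enough to sketch. The snippet below is a generic illustration of the ask-the-LLM-for-a-score pattern described above, not the released implementation (the exact prompt, scale, and parsing are in the linked repository), and `llm_complete` is a hypothetical stand-in for whatever completion API is used.

```python
import json

def clair_a_style_score(candidate, references, llm_complete):
    """Score a candidate audio caption against reference captions by asking an
    LLM for a 0-100 semantic similarity score plus a short justification.

    `llm_complete` is a hypothetical callable: prompt string -> completion string.
    The actual CLAIR-A prompt and parsing live at github.com/DavidMChan/clair-a.
    """
    prompt = (
        "You are evaluating audio captions.\n"
        f"Candidate caption: {candidate}\n"
        f"Reference captions: {json.dumps(references)}\n"
        "On a scale of 0 to 100, how semantically close is the candidate to the "
        "references? Respond as JSON: {\"score\": <int>, \"reason\": <string>}."
    )
    reply = llm_complete(prompt)
    parsed = json.loads(reply)        # assumes the model returned valid JSON
    return parsed["score"] / 100.0, parsed["reason"]
```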
- Published
- 2024
36. Law-based and standards-oriented approach for privacy impact assessment in medical devices: a topic for lawyers, engineers and healthcare practitioners in MedTech
- Author
-
Ladeia, Yuri R. and Pereira, David M.
- Subjects
Computer Science - Computers and Society - Abstract
Background: The integration of the General Data Protection Regulation (GDPR) and the Medical Device Regulation (MDR) creates complexities in conducting Data Protection Impact Assessments (DPIAs) for medical devices. The adoption of non-binding standards like ISO and IEC can harmonize these processes by enhancing accountability and privacy by design. Methods: This study employs a multidisciplinary literature review, focusing on GDPR and MDR intersection in medical devices that process personal health data. It evaluates key standards, including ISO/IEC 29134 and IEC 62304, to propose a unified approach for DPIAs that aligns with legal and technical frameworks. Results: The analysis reveals the benefits of integrating ISO/IEC standards into DPIAs, which provide detailed guidance on implementing privacy by design, risk assessment, and mitigation strategies specific to medical devices. The proposed framework ensures that DPIAs are living documents, continuously updated to adapt to evolving data protection challenges. Conclusions: A unified approach combining European Union (EU) regulations and international standards offers a robust framework for conducting DPIAs in medical devices. This integration balances security, innovation, and privacy, enhancing compliance and fostering trust in medical technologies. The study advocates for leveraging both hard law and standards to systematically address privacy and safety in the design and operation of medical devices, thereby raising the maturity of the MedTech ecosystem., Comment: 20 pages, 1 table
- Published
- 2024
37. Evolving Distributions Under Local Motion
- Author
-
Acharya, Aditya and Mount, David M.
- Subjects
Computer Science - Computational Geometry ,F.2.2 - Abstract
Geometric data sets arising in modern applications are often very large and change dynamically over time. A popular framework for dealing with such data sets is the evolving data framework, where a discrete structure continuously varies over time due to the unseen actions of an evolver, which makes small changes to the data. An algorithm probes the current state through an oracle, and the objective is to maintain a hypothesis of the data set's current state that is close to its actual state at all times. In this paper, we apply this framework to maintaining a set of $n$ point objects in motion in $d$-dimensional Euclidean space. To model the uncertainty in the object locations, both the ground truth and hypothesis are based on spatial probability distributions, and the distance between them is measured by the Kullback-Leibler divergence (relative entropy). We introduce a simple and intuitive motion model where with each time step, the distance that any object can move is a fraction of the distance to its nearest neighbor. We present an algorithm that, in steady state, guarantees a distance of $O(n)$ between the true and hypothesized placements. We also show that for any algorithm in this model, there is an evolver that can generate a distance of $\Omega(n)$, implying that our algorithm is asymptotically optimal.
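For reference, the distance named above is the standard Kullback-Leibler divergence: for discrete distributions $P$ (ground truth) and $Q$ (hypothesis) over locations $x$,
$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},$$
which is zero exactly when the hypothesis matches the truth and grows as the hypothesis places little probability mass where the true distribution places much.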
- Published
- 2024
38. Spline-based solution transfer for space-time methods in 2D+t
- Author
-
Larose, Logan, Anderson, Jude T., and Williams, David M.
- Subjects
Mathematics - Numerical Analysis - Abstract
This work introduces a new solution-transfer process for slab-based space-time finite element methods. The new transfer process is based on Hsieh-Clough-Tocher (HCT) splines and satisfies the following requirements: (i) it maintains high-order accuracy up to 4th order, (ii) it preserves a discrete maximum principle, (iii) it asymptotically enforces mass conservation, and (iv) it constructs a smooth, continuous surrogate solution in between space-time slabs. While many existing transfer methods meet the first three requirements, the fourth requirement is crucial for enabling visualization and boundary condition enforcement for space-time applications. In this paper, we derive an error bound for our HCT spline-based transfer process. Additionally, we conduct numerical experiments quantifying the conservative nature and order of accuracy of the transfer process. Lastly, we present a qualitative evaluation of the visualization properties of the smooth surrogate solution., Comment: 36 pages, 17 figures, 3 tables
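Requirements (ii) and (iii) above admit simple generic statements (a paraphrase, not the paper's precise formulation): a discrete maximum principle asks that the transferred solution $\tilde{u}$ stay within the bounds of the data it was built from, $\min_j u_j \le \tilde{u}(x) \le \max_j u_j$, while asymptotic mass conservation asks that the mass defect introduced by the transfer, $\left|\int_\Omega \tilde{u}\,d\Omega - \int_\Omega u_h\,d\Omega\right|$, vanish as the space-time mesh is refined.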
- Published
- 2024
39. An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
- Author
-
Tulsiani, Hitesh, Chan, David M., Ghosh, Shalini, Lalwani, Garima, Pandey, Prabhat, Bansal, Ankish, Garimella, Sri, Rastrow, Ariya, and Hoffmeister, Björn
- Subjects
Electrical Engineering and Systems Science - Audio and Speech Processing ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language ,Computer Science - Sound - Abstract
Dialog systems, such as voice assistants, are expected to engage with users in complex, evolving conversations. Unfortunately, traditional automatic speech recognition (ASR) systems deployed in such applications are usually trained to recognize each turn independently and lack the ability to adapt to the conversational context or incorporate user feedback. In this work, we introduce a general framework for ASR in dialog systems that can go beyond learning from single-turn utterances and learn over time how to adapt to both explicit supervision and implicit user feedback present in multi-turn conversations. We accomplish that by leveraging advances in student-teacher learning and context-aware dialog processing, and designing contrastive self-supervision approaches with Ohm, a new online hard-negative mining approach. We show that leveraging our new framework compared to traditional training leads to relative WER reductions of close to 10% in real-world dialog systems, and up to 26% on public synthetic data., Comment: Presented at ICML 2024
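As a point of reference for the contrastive component, the sketch below shows a generic InfoNCE-style loss with online hard-negative mining in PyTorch: only the highest-similarity in-batch negatives are kept for each anchor. This is a textbook pattern, not the Ohm objective itself, and the embedding inputs are assumed.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_hard_negatives(anchors, positives, temperature=0.1, k_hard=8):
    """Generic InfoNCE-style loss keeping only the k hardest in-batch negatives
    per anchor. A sketch of online hard-negative mining in general, not the
    paper's Ohm objective.

    anchors, positives: (batch, dim) embeddings of paired views/turns.
    """
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    sim = a @ p.t() / temperature                       # (batch, batch) similarity matrix
    batch = sim.size(0)
    pos = sim.diag()                                    # matched pairs lie on the diagonal
    eye = torch.eye(batch, dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(eye, float("-inf"))           # mask out the positives
    hard_neg, _ = neg.topk(min(k_hard, batch - 1), dim=1)   # hardest (highest-similarity) negatives
    logits = torch.cat([pos.unsqueeze(1), hard_neg], dim=1)
    labels = torch.zeros(batch, dtype=torch.long, device=sim.device)
    return F.cross_entropy(logits, labels)
```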
- Published
- 2024
40. Rediscovering the Latent Dimensions of Personality with Large Language Models as Trait Descriptors
- Author
-
Suh, Joseph, Moon, Suhong, Kang, Minwoo, and Chan, David M.
- Subjects
Computer Science - Computation and Language - Abstract
Assessing personality traits using large language models (LLMs) has emerged as an interesting and challenging area of research. While previous methods employ explicit questionnaires, often derived from the Big Five model of personality, we hypothesize that LLMs implicitly encode notions of personality when modeling next-token responses. To demonstrate this, we introduce a novel approach that uncovers latent personality dimensions in LLMs by applying singular value decomposition (SVD) to the log-probabilities of trait-descriptive adjectives. Our experiments show that LLMs "rediscover" core personality traits such as extraversion, agreeableness, conscientiousness, neuroticism, and openness without relying on direct questionnaire inputs, with the top-5 factors corresponding to Big Five traits explaining 74.3% of the variance in the latent space. Moreover, we can use the derived principal components to assess personality along the Big Five dimensions, and achieve improvements in average personality prediction accuracy of up to 5% over fine-tuned models, and up to 21% over direct LLM-based scoring techniques.
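The decomposition step is straightforward to sketch in NumPy. The snippet below assumes a matrix of LLM log-probabilities for trait-descriptive adjectives has already been collected (rows indexed by prompts or simulated respondents, columns by adjectives), which is the part of the pipeline not shown; the function and variable names are illustrative.

```python
import numpy as np

def latent_trait_factors(log_probs, n_factors=5):
    """Extract latent factors from a (contexts x adjectives) matrix of LLM
    log-probabilities via SVD. A sketch of the decomposition step only;
    building `log_probs` from an LLM is not shown here.
    """
    X = log_probs - log_probs.mean(axis=0, keepdims=True)   # center each adjective column
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = (S**2) / np.sum(S**2)                        # variance explained per component
    scores = U[:, :n_factors] * S[:n_factors]                # factor scores per context
    loadings = Vt[:n_factors].T                              # adjective loadings per factor
    return scores, loadings, explained[:n_factors]
```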
- Published
- 2024
41. Estimating Wage Disparities Using Foundation Models
- Author
-
Vafa, Keyon, Athey, Susan, and Blei, David M.
- Subjects
Computer Science - Machine Learning ,Economics - Econometrics ,Statistics - Methodology ,Statistics - Machine Learning - Abstract
One thread of empirical work in social science focuses on decomposing group differences in outcomes into unexplained components and components explained by observable factors. In this paper, we study gender wage decompositions, which require estimating the portion of the gender wage gap explained by career histories of workers. Classical methods for decomposing the wage gap employ simple predictive models of wages which condition on a small set of simple summaries of labor history. The problem is that these predictive models cannot take advantage of the full complexity of a worker's history, and the resulting decompositions thus suffer from omitted variable bias (OVB), where covariates that are correlated with both gender and wages are not included in the model. Here we explore an alternative methodology for wage gap decomposition that employs powerful foundation models, such as large language models, as the predictive engine. Foundation models excel at making accurate predictions from complex, high-dimensional inputs. We use a custom-built foundation model, designed to predict wages from full labor histories, to decompose the gender wage gap. We prove that the way such models are usually trained might still lead to OVB, but develop fine-tuning algorithms that empirically mitigate this issue. Our model captures a richer representation of career history than simple models and predicts wages more accurately. In detail, we first provide a novel set of conditions under which an estimator of the wage gap based on a fine-tuned foundation model is $\sqrt{n}$-consistent. Building on the theory, we then propose methods for fine-tuning foundation models that minimize OVB. Using data from the Panel Study of Income Dynamics, we find that history explains more of the gender wage gap than standard econometric models can measure, and we identify elements of history that are important for reducing OVB.
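Schematically, the decomposition described above has the familiar two-part form, shown here in a generic version (the paper's estimand, conditioning, and consistency conditions are more involved): with a wage predictor $\hat{\mu}(X) \approx E[Y \mid X]$, here a foundation model over full labor histories $X$,
$$E[Y \mid m] - E[Y \mid f] = \underbrace{\big(E[\hat{\mu}(X) \mid m] - E[\hat{\mu}(X) \mid f]\big)}_{\text{explained by history}} + \underbrace{\big(E[Y - \hat{\mu}(X) \mid m] - E[Y - \hat{\mu}(X) \mid f]\big)}_{\text{unexplained}},$$
and omitted variable bias arises when $\hat{\mu}$ cannot use parts of the history that are correlated with both gender and wages, which inflates the unexplained term.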
- Published
- 2024
42. Proactive and Reactive Constraint Programming for Stochastic Project Scheduling with Maximal Time-Lags
- Author
-
Houten, Kim van den, Planken, Léon, Freydell, Esteban, Tax, David M. J., and de Weerdt, Mathijs
- Subjects
Computer Science - Artificial Intelligence - Abstract
This study investigates scheduling strategies for the stochastic resource-constrained project scheduling problem with maximal time lags (SRCPSP/max). Recent advances in Constraint Programming (CP) and Temporal Networks have renewed interest in evaluating the advantages and drawbacks of various proactive and reactive scheduling methods. First, we present a new, CP-based fully proactive method. Second, we show how a reactive approach can be constructed using an online rescheduling procedure. A third contribution is based on partial order schedules and uses Simple Temporal Networks with Uncertainty (STNUs). Our statistical analysis shows that the STNU-based algorithm performs best in terms of solution quality, while also showing good relative offline and online computation time.
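To fix ideas, the deterministic core of the problem (fixed durations, one renewable resource, minimal and maximal time lags) can be written as a small constraint program. The sketch below uses Google OR-Tools CP-SAT purely as an illustration (the abstract does not name a solver), and none of the paper's proactive, reactive, or STNU-based machinery is modelled.

```python
from ortools.sat.python import cp_model

def rcpsp_max_lags(durations, demands, capacity, min_lags, max_lags, horizon):
    """Deterministic RCPSP with minimal and maximal time lags (illustrative only).

    durations: {task: duration}; demands: {task: resource demand}
    min_lags:  {(i, j): L} meaning start[j] >= start[i] + L
    max_lags:  {(i, j): L} meaning start[j] <= start[i] + L
    """
    model = cp_model.CpModel()
    start, end, interval = {}, {}, {}
    for t, d in durations.items():
        start[t] = model.NewIntVar(0, horizon, f"start_{t}")
        end[t] = model.NewIntVar(0, horizon, f"end_{t}")
        interval[t] = model.NewIntervalVar(start[t], d, end[t], f"iv_{t}")
    for (i, j), lag in min_lags.items():
        model.Add(start[j] >= start[i] + lag)      # minimal time lag
    for (i, j), lag in max_lags.items():
        model.Add(start[j] <= start[i] + lag)      # maximal time lag
    model.AddCumulative(list(interval.values()),
                        [demands[t] for t in durations], capacity)
    makespan = model.NewIntVar(0, horizon, "makespan")
    model.AddMaxEquality(makespan, list(end.values()))
    model.Minimize(makespan)
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {t: solver.Value(start[t]) for t in durations}
    return None
```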
- Published
- 2024
43. Journalists, Emotions, and the Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the Launch of ChatGPT
- Author
-
Lewis, Seth C., Markowitz, David M., and Bunquin, Jon Benedik
- Subjects
Computer Science - Computational Complexity ,Computer Science - Artificial Intelligence ,Computer Science - Computation and Language - Abstract
As part of a broader look at the impact of generative AI, this study investigated the emotional responses of journalists to the release of ChatGPT at the time of its launch. By analyzing nearly 1 million Tweets from journalists at major U.S. news outlets, we tracked changes in emotional tone and sentiment before and after the introduction of ChatGPT in November 2022. Using various computational and natural language processing techniques to measure emotional shifts in response to ChatGPT's release, we found an increase in positive emotion and a more favorable tone post-launch, suggesting initial optimism toward AI's potential. This research underscores the pivotal role of journalists as interpreters of technological innovation and disruption, highlighting how their emotional reactions may shape public narratives around emerging technologies. The study contributes to understanding the intersection of journalism, emotion, and AI, offering insights into the broader societal impact of generative AI tools.
- Published
- 2024
44. The Weak Form Is Stronger Than You Think
- Author
-
Messenger, Daniel A., Tran, April, Dukic, Vanja, and Bortz, David M.
- Subjects
Computer Science - Machine Learning ,Computer Science - Computational Engineering, Finance, and Science ,Mathematics - Numerical Analysis ,Statistics - Machine Learning ,26A33, 35D30, 62FXX, 62JXX, 65L09, 65M32, 68Q32 - Abstract
The weak form is a ubiquitous, well-studied, and widely-utilized mathematical tool in modern computational and applied mathematics. In this work we provide a survey of both the history and recent developments for several fields in which the weak form can play a critical role. In particular, we highlight several recent advances in weak form versions of equation learning, parameter estimation, and coarse graining, which offer surprising noise robustness, accuracy, and computational efficiency. We note that this manuscript is a companion piece to our October 2024 SIAM News article of the same name. Here we provide more detailed explanations of mathematical developments as well as a more complete list of references. Lastly, we note that the software with which to reproduce the results in this manuscript is also available on our group's GitHub website https://github.com/MathBioCU .
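As a minimal illustration of why the weak form helps with noisy data: for a dynamical system $\dot{u} = f(u;\theta)$ and a smooth test function $\phi$ that vanishes at the ends of the window $[a,b]$, integration by parts gives
$$\int_a^b \phi(t)\,\dot{u}(t)\,dt = -\int_a^b \dot{\phi}(t)\,u(t)\,dt = \int_a^b \phi(t)\,f\big(u(t);\theta\big)\,dt,$$
so the parameters $\theta$ can be estimated by matching integrals of the data against integrals of the model, without ever differentiating noisy measurements of $u$. This is the basic mechanism behind the weak-form equation learning, parameter estimation, and coarse-graining methods surveyed in the article.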
- Published
- 2024
45. First joint X-ray solar microflare observations with NuSTAR and Solar Orbiter/STIX
- Author
-
Bajnoková, Natália, Hannah, Iain G., Cooper, Kristopher, Krucker, Säm, Grefenstette, Brian W., Smith, David M., Jeffrey, Natasha L. S., and Duncan, Jessie
- Subjects
Astrophysics - Solar and Stellar Astrophysics - Abstract
We present the first joint spectral and imaging analysis of hard X-ray (HXR) emission from 3 microflares observed by the Nuclear Spectroscopic Telescope ARray (NuSTAR) and Solar Orbiter/Spectrometer/Telescope for Imaging X-rays (STIX). We studied 5 joint spectra from GOES A7, B1 and B6 class microflares from active region AR12765 on 2020 June 6 and 7. As these events are very bright for NuSTAR, resulting in extremely low (<1%) livetime, we introduce a pile-up correction method. All five joint spectra were fitted with an isothermal model finding temperatures in the 9-11 MK range. Furthermore, three joint spectra required an additional non-thermal thick-target model finding non-thermal powers of $10^{25}$-$10^{26}$ erg s$^{-1}$. All the fit parameters were within the ranges expected for HXR microflares. The fit results give a relative scaling of STIX and NuSTAR mostly between 6-28% (one outlier at 52%) suggesting each instrument are well calibrated. In addition to spectral analysis, we performed joint HXR imaging of the June 6 and one of the June 7 microflares. In NuSTAR's field of view (FOV), we observed two separate non-thermal sources connected by an elongated thermal source during the June 6 microflares. In STIX's FOV (44 degrees W with respect to NuSTAR), we imaged thermal emission from the hot flare loops which when reprojected to an Earth viewpoint matches the thermal sources seen with NuSTAR and in the hotter EUV channels with the Solar Dynamic Observatory's Atmospheric Imaging Assembly.
- Published
- 2024
46. University Engagement with the Community through Physical Activity Opportunities: Lessons Learned from a Community Charter Guaranteeing Access to the University Recreation Complex
- Author
-
David M. Telles-Langdon and Nathan D. Hall
- Abstract
Universities recognize they have a civic responsibility to engage and enrich the community in which they reside. This study looks at a community engagement project undertaken at one university that was intended to address significant recreational needs within the community while also furthering academic initiatives. As part of the appeal to various levels of government for financial support to build a campus recreation complex, senior administration at the University promised to engage marginalized community members through implementing a charter to enshrine open community access. This research was a cross-sectional exploration of one university's engagement with the community through sport and physical activity, with the overall intent of understanding the implementation of the University's Charter and how this has worked to address some of the issues related to marginalized groups living in the inner-city. Interviews were conducted with four university members who, in their various roles, have held some level of responsibility in the implementation of the Charter. Clarke and colleagues' (2017) Situational Analysis was used to deconstruct the interview transcripts, code them, and develop themes. After the researchers listened to the voices of the participants, three major themes, as well as nine sub-themes, emerged from the data.
- Published
- 2024
47. The More, the Merrier? A Phenomenological Investigation of Counselor-in-Training Simultaneous Supervision
- Author
-
William B. Lane, Timothy J. Hakenewerth, Camille D. Frank, Tessa B. Davis-Price, David M. Kleist, and Steven J. Moody
- Abstract
Interpretative phenomenological analysis was used to explore the simultaneous supervision experiences of counselors-in-training. Simultaneous supervision is when a supervisee receives clinical supervision from multiple supervisors. Sometimes this supervision includes a university supervisor and a site supervisor. Other times this supervision occurs when a student has multiple sites in one semester and receives supervision at each site. Counselors-in-training described their experiences with simultaneous supervision during the course of their education. Four superordinate themes emerged: making sense of multiple perspectives, orchestrating the process, supervisory relationship dynamics, and personal dispositions and characteristics. Results indicated that counselors-in-training experienced compounded benefits and challenges. Implications for supervisors, supervisees, and counselor education programs are provided.
- Published
- 2024
48. Documenting and Activating Educational Leadership and Authentic Teaching
- Author
-
Diane Symbaluk, David M. Andrews, Tiffany Potter, and Aleksandra Zecevic
- Abstract
This report describes two integrated projects initiated by the 2020 3M National Teaching Fellowship Award (NTF) cohort on the concepts of educational leadership in Canadian universities and the role of authenticity among exemplary teachers. A thematic analysis of 3M NTF award-winning dossiers identified six prevalent traits characteristic of educational leaders: innovation, persistence, responsiveness, reflectiveness, curiosity, and positive opportunism. The analysis also revealed aspects of educational leadership in practice, including being committed to a cause and being action-oriented, being community-engaged, being multi-disciplinary, building bridges, freely sharing, trailblazing, and using applied methods. Educational leaders' relationships with others tended to foreground elements of collaboration, empowerment, support, and mentorship, and their actions had an impact beyond their own classrooms or institutions. In the second project, qualitative interviews with cohort members articulated ways in which authentic teaching is expressed by educational leaders. The actions of authentic teachers were viewed as influential and inspiring, and authentic teachers tended to be recognized as instruments of change. These results were shared in an interactive workshop at STHLE 2022, which discussed how educational leadership is currently framed in higher education, and guided participants in self-reflection as educators and leaders to formulate calls to action involving educational leadership and authentic teaching.
- Published
- 2024
49. THE AIR-GROUND LITTORAL AND GREAT POWER CONFLICT
- Author
-
Giffen, David M.
- Published
- 2024
50. Symposium: What Does the Microbiome Tell Us about Prevention and Treatment of AD/ADRD?
- Author
-
Capocchi, Joia K, Figueroa-Romero, Claudia, Dunham, Sage JB, Faraci, Gina, Rothman, Jason A, Whiteson, Katrine L, Seo, Dong-oh, Holtzman, David M, Grabrucker, Stefanie, Nolan, Yvonne M, Kaddurah-Daouk, Rima, and Jett, David A
- Subjects
Medical and Health Sciences ,Psychology and Cognitive Sciences ,Neurology & Neurosurgery ,Neurosciences - Abstract
Alzheimer's disease (AD) and Alzheimer's disease-related dementias (ADRDs) are broad-impact multifactorial neurodegenerative diseases. Their complexity presents unique challenges for developing effective therapies. This review highlights research presented at the 2024 Society for Neuroscience meeting which emphasized the gut microbiome's role in AD pathogenesis by influencing brain function and neurodegeneration through the microbiota–gut–brain axis. This emerging evidence underscores the potential for targeting the gut microbiota to treat AD/ADRD.
- Published
- 2024