145 results for "Matthew R. Johnson"
Search Results
2. A methodology for quantifying combustion efficiencies and species emission rates of flares subjected to crosswind
- Author
- Damon C. Burtt, Darcy J. Corbin, Joshua R. Armitage, Brian M. Crosland, A. Melina Jefferson, Gregory A. Kopp, Larry W. Kostiuk, and Matthew R. Johnson
- Published
- 2022
3. Adipose triglyceride lipase promotes prostaglandin-dependent actin remodeling by regulating substrate release from lipid droplets
- Author
- Michelle S. Giedt, Jonathon M. Thomalla, Roger P. White, Matthew R. Johnson, Zon Weng Lai, Tina L. Tootle, and Michael A. Welte
- Subjects
- Molecular Biology, Developmental Biology
- Abstract
Lipid droplets (LDs), crucial regulators of lipid metabolism, accumulate during oocyte development. However, their roles in fertility remain largely unknown. During Drosophila oogenesis, LD accumulation coincides with actin remodeling necessary for follicle development. Loss of the LD-associated Adipose Triglyceride Lipase (ATGL) disrupts both actin bundle formation and cortical actin integrity, an unusual phenotype also seen when the prostaglandin (PG) synthase Pxt is missing. Dominant genetic interactions and PG treatment of follicles indicate ATGL acts upstream of Pxt to regulate actin remodeling. Our data suggest ATGL releases arachidonic acid (AA) from LDs to serve as the substrate for PG synthesis. Lipidomic analysis detects AA-containing triglycerides in ovaries, and these are increased when ATGL is lost. High levels of exogenous AA block follicle development; this is enhanced by impairing LD formation and suppressed by reducing ATGL. Together these data support the model that AA stored in LD triglycerides is released by ATGL to drive the production of PGs, which promote actin remodeling necessary for follicle development. We speculate this pathway is conserved across organisms to regulate oocyte development and promote fertility.
- Published
- 2023
4. Everybody Talks? The Evolution of Political Talk Among Recent College Graduates
- Author
- Matthew R. Johnson, Symantha Dattilo, and Sarah Williams
- Subjects
- Education
- Published
- 2022
5. Creating measurement-based oil and gas sector methane inventories using source-resolved aerial surveys
- Author
- Matthew R. Johnson, Bradley M. Conrad, and David R. Tyner
- Subjects
- General Earth and Planetary Sciences, General Environmental Science
- Abstract
Critical mitigation of methane emissions from the oil and gas (OG) sector is hampered by inaccurate official inventories and limited understanding of contributing sources. Here we present a framework for incorporating aerial measurements into comprehensive OG sector methane inventories that achieves robust, independent quantification of measurement and sample size uncertainties, while providing timely source-level insights. This hybrid inventory combines top-down, source-resolved, multi-pass aerial measurements with bottom-up estimates of unmeasured sources leveraging continuous probability of detection and quantification models for a chosen aerial technology. Notably, the technique explicitly considers skewed source distributions and finite facility populations that have not been previously addressed. The protocol is demonstrated to produce a comprehensive upstream OG sector methane inventory for British Columbia, Canada, which, while approximately 1.7 times higher than the most recent official bottom-up inventory, reveals a lower methane intensity of produced natural gas.
- Published
- 2023
6. Drosophila embryos spatially sort their nutrient stores to facilitate their utilization
- Author
- Marcus D. Kilwein, Matthew R. Johnson, Jonathon M. Thomalla, Anthony P. Mahowald, and Michael A. Welte
- Subjects
- Molecular Biology, Developmental Biology
- Abstract
Animal embryos are provided by their mothers with a diverse nutrient supply that is crucial for development. In Drosophila, the three most abundant nutrients (triglycerides, proteins and glycogen) are sequestered in distinct storage structures: lipid droplets (LDs), yolk vesicles (YVs) and glycogen granules (GGs). Using transmission electron microscopy as well as live and fixed sample fluorescence imaging, we find that all three storage structures are dispersed throughout the egg but are then spatially allocated to distinct tissues by gastrulation: LDs largely to the peripheral epithelium, YVs and GGs to the central yolk cell. To confound the embryo's ability to sort its nutrients, we employ Jabba and mauve mutants to generate LD-GG and LD-YV compound structures. In these mutants, LDs are mis-sorted to the yolk cell and their turnover is delayed. Our observations demonstrate dramatic spatial nutrient sorting in early embryos and provide the first evidence for its functional importance.
- Published
- 2023
7. Trial Averaging for Deep EEG Classification
- Author
- Jacob M. Williams, Ashok Samal, and Matthew R. Johnson
- Abstract
Many signals, particularly of biological origin, suffer from a signal-to-noise ratio sufficiently low that it can be difficult to classify individual examples reliably, even with relatively sophisticated machine-learning techniques such as deep learning. In some cases, the noise can be high enough that it is even difficult to achieve convergence during training. We considered this problem for one data type that often suffers from such difficulties, namely electroencephalography (EEG) data from cognitive neuroscience studies in humans. One solution to increase signal-to-noise is, of course, to perform averaging among trials, which has been employed before in other studies of human neuroscience but not, to our knowledge, investigated rigorously, particularly not in deep learning. Here, we parametrically studied the effects of different amounts of trial averaging during training and/or testing in a human EEG dataset, and compared the results to those of a related algorithm, Mixup. Broadly, we found that even a small amount of averaging could significantly improve classification, particularly when both training and testing data were subjected to averaging. Simple averaging clearly outperformed Mixup, although the benefits of averaging differed across classification categories. Overall, our results confirm the value of averaging during training and testing when single-trial classification is not strictly necessary for the application in question.
Highlights:
- Averaging trials can dramatically improve performance in classification of EEG data
- The benefits can be seen when averaging on both training and test datasets
- Simple trial averaging outperformed a popular related algorithm, Mixup
- However, effects of averaging differed across different stimulus categories
- Published
- 2023
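The trial-averaging technique summarized in entry 7 above is simple enough to sketch directly. The following is a minimal illustrative re-implementation in NumPy, not the authors' code; the function name `average_trials` and the random same-class grouping strategy are assumptions made for the sketch:

```python
import numpy as np

def average_trials(X, y, n_avg, rng=None):
    """Form averaged pseudo-trials by averaging n_avg randomly grouped
    same-class trials. X: (trials, channels, samples); y: class labels.
    Illustrative sketch of the trial-averaging idea, not the paper's code."""
    rng = np.random.default_rng(rng)
    X_out, y_out = [], []
    for label in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == label))
        # Drop the remainder so every pseudo-trial averages exactly n_avg trials
        usable = len(idx) - (len(idx) % n_avg)
        for group in idx[:usable].reshape(-1, n_avg):
            X_out.append(X[group].mean(axis=0))  # average across the group
            y_out.append(label)
    return np.stack(X_out), np.array(y_out)
```

For example, with n_avg = 3, sixty single trials per class would become twenty less-noisy pseudo-trials per class, which can be applied to the training set, the test set, or both, as the abstract describes.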
8. A multifunctional Wnt regulator underlies the evolution of coat pattern in African striped mice
- Author
- Matthew R. Johnson, Sha Li, Christian F. Guerrero-Juarez, Pearson Miller, Benjamin J. Brack, Sarah A. Mereby, Charles Feigin, Jenna Gaska, Qing Nie, Jaime A. Rivera-Perez, Alexander Ploss, Stanislav Y. Shvartsman, and Ricardo Mallarino
- Abstract
Animal pigment patterns are excellent models to elucidate mechanisms of biological organization. Although theoretical simulations, such as Turing reaction-diffusion systems, recapitulate many animal patterns, they are insufficient to account for those showing a high degree of spatial organization and reproducibility. Here, we compare the coats of the African striped mouse (Rhabdomys pumilio) and the laboratory mouse (Mus musculus) to study the molecular mechanisms controlling stripe pattern formation. By combining transcriptomics, mathematical modeling, and mouse transgenics, we show that Sfrp2 regulates the distribution of hair follicles and establishes an embryonic prepattern that foreshadows pigment stripes. Moreover, by developing and employing in vivo gene editing experiments in striped mice, we find that Sfrp2 knockout is sufficient to alter the stripe pattern. Strikingly, mutants also exhibit changes in coat color, revealing an additional function of Sfrp2 in regulating hair color. Thus, a single factor controls coat pattern formation by acting both as an orienting signaling mechanism and a modulator of pigmentation. By uncovering a multifunctional regulator of stripe formation, our work provides insights into the mechanisms by which spatial patterns are established in developing embryos and the molecular basis of phenotypic novelty.
- Published
- 2022
9. Reward impacts visual statistical learning
- Author
- Leeland L. Rogers, Su Hyoun Park, Matthew R. Johnson, and Timothy J. Vickery
- Subjects
- Recall, Statistical learning, Middle temporal gyrus, Cognitive Neuroscience, Hippocampus, Contrast (statistics), Inferior frontal gyrus, Affect (psychology), Task (project management), Anterior cingulate gyrus, Superior temporal gyrus, Behavioral Neuroscience, Orbitofrontal cortex, Psychology, Value (mathematics), Cognitive psychology
- Abstract
Humans automatically and unintentionally detect and remember regularities in the visual environment, a type of learning termed visual statistical learning (VSL). Many aspects of learning from reward resemble statistical learning, yet whether and how reward learning impacts VSL is largely unexamined. In two studies, we investigated the impact of reward on VSL and examined the neural basis of this interaction using fMRI. Subjects completed a risky choice task, in which they learned the values (high or low) of fractal images through a trial-and-error binary-choice task. Unbeknownst to subjects, we paired images so that some images always predicted other images on the following trial. This led to four types of pairings (High-High, High-Low, Low-High, and Low-Low). In a subsequent recognition task and reward memory task, we asked them to choose the more familiar of two pairs (a target and a foil) and to recall the value of images (high or low). We found better recognition when the first image of a pair was a high-value image, with High-High pairs showing the highest recognition rate. To investigate the neural basis of this effect, we measured brain responses to visual images that were associated with both varying levels of reward and sequential contingencies with event-related fMRI. Subjects completed the same risky choice task and then passively viewed a stream of the images with pairwise relationships intact. Brain responses to images during the risky choice task were affected by both value and statistical contingencies. When we compared responses between the first image of a pair that was high-value and the first image of a pair that was low-value, we found greater activation in regions that included inferior frontal gyrus, left anterior cingulate gyrus, middle temporal gyrus, superior temporal gyrus, hippocampus, orbitofrontal cortex, caudate, nucleus accumbens, and lateral occipital cortex. These findings are not driven solely by the value difference, but rather by the interaction between statistically structured information and reward: the same value contrast yielded no regions for either second-image contrasts or singletons. Our results suggest that the first images of pairs associated with high value, in comparison to those associated with low value, were involved in greater attentional engagement, potentially enabling better memory for statistically learned pairs and reward information. Additionally, we found neural evidence that when an image contains both statistical structure and reward information, the reward learning may be predicted by the type of statistical structure it is associated with. We conclude that reward contingencies affect VSL, with high value associated with stronger behavioral and neural signatures of such learning.
- Published
- 2021
10. Drosophila embryos spatially sort their nutrient stores to facilitate their utilization
- Author
- Marcus D. Kilwein, Matthew R. Johnson, Jonathon M. Thomalla, Anthony Mahowald, and Michael A. Welte
- Abstract
Animal embryos are provisioned by their mothers with a diverse nutrient supply critical for development. In Drosophila, the three most abundant nutrients (triglycerides, proteins, and glycogen) are sequestered in distinct storage structures, lipid droplets (LDs), yolk vesicles (YVs) and glycogen granules (GGs). Using transmission electron microscopy as well as live and fixed-sample fluorescence imaging, we find that all three storage structures are dispersed throughout the egg but are then spatially allocated to distinct tissues by gastrulation: LDs largely to the peripheral epithelium, YVs and GGs to the central yolk cell. To confound the embryo’s ability to sort its nutrients, we employ mutants in Jabba and Mauve to generate LD:GG or LD:YV compound structures. In these mutants, LDs are missorted to the yolk cell and their turnover is delayed. Our observations demonstrate dramatic spatial nutrient sorting in early embryos and provide the first evidence for its functional importance.
- Published
- 2022
11. Robust probabilities of detection and quantification uncertainty for aerial methane detection: Examples for three airborne technologies
- Author
- Bradley M. Conrad, David R. Tyner, and Matthew R. Johnson
- Subjects
- Soil Science, Geology, Computers in Earth Sciences
- Published
- 2023
12. A Wavelength Modulation Spectroscopy-Based Methane Flux Sensor for Quantification of Venting Sources at Oil and Gas Sites
- Author
- Simon A. Festa-Bianchet, Scott P. Seymour, David R. Tyner, and Matthew R. Johnson
- Subjects
- methane, emission spectroscopy, mass flow, venting, oil and gas sector, hazardous locations, storage tanks, Doppler shift, Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
- Abstract
An optical sensor employing tunable diode laser absorption spectroscopy with wavelength modulation and 2f harmonic detection was designed, prototyped, and tested for applications in quantifying methane emissions from vent sources in the oil and gas sector. The methane absorption line at 6026.23 cm⁻¹ (1659.41 nm) was used to measure both flow velocity and methane volume fraction, enabling direct measurement of the methane emission rate. Two configurations of the sensor were designed, tested, and compared; the first used a fully fiber-coupled cell with multimode fibers to re-collimate the laser beams, while the second used directly irradiated photodetectors protected by Zener barriers. Importantly, both configurations were designed to enable measurements within regulated Class I / Zone 0 hazardous locations, in which explosive gases are expected during normal operations. Controlled flows with methane volume fractions of 0 to 100% and a velocity range of 0 to 4 m/s were used to characterize sensor performance at a 1 Hz sampling rate. The measurement error in the methane volume fraction was less than 10,000 ppm (1%) across the studied range for both configurations. The short-term velocity measurement error with pure methane was
- Published
- 2022
13. Response Surface Modeling and Setpoint Determination of Steam- and Air-Assisted Flares
- Author
- Helen H. Lou, Anan Wang, Arokiaraj Alphones, Christopher B. Martin, Matthew R. Johnson, Daniel Chen, Xianchang Li, and Vijaya Damodara
- Subjects
- Setpoint, Opacity, Nuclear engineering, Environmental Chemistry, Environmental science, Response surface modeling, Combustion, Pollution, Waste Management and Disposal, Flare
- Abstract
Federal Regulation 40 CFR §63.670 requires flare operators to specify smokeless design capacity for flares with no visible emissions. Alternatively, 96.5% combustion efficiency (CE) or 98% destruct...
- Published
- 2020
14. Outmigration survival of wild Chinook salmon smolts through the Sacramento River during historic drought and high water conditions
- Author
- Cyril J. Michel, Alex S. McHuron, Matthew R. Johnson, Arnold J. Ammann, Mark J. Henderson, Flora Cordoleani, and Jeremy J. Notch
- Subjects
- Chinook wind, Aquatic Science, Spawn (biology), Fishery, Habitat destruction, Habitat, Water temperature, Streamflow, Threatened species, Tributary, Environmental science, Ecology, Evolution, Behavior and Systematics
- Abstract
Populations of wild spring-run Chinook salmon in California’s Central Valley, once numbering in the millions, have dramatically declined to record low numbers. Dam construction, habitat degradation, and altered flow regimes have all contributed to depress populations, which currently persist in only a few tributaries to the Sacramento River. Mill Creek (Tehama County) continues to support these threatened fish, and contains some of the most pristine spawning and rearing habitat available in the Central Valley. Despite this pristine habitat, the number of Chinook salmon returning to spawn has declined to record low numbers, likely due to poor outmigration survival rates. From 2013 to 2017, 334 smolts were captured and acoustic tagged while out-migrating from Mill Creek, allowing for movement and survival rates to be tracked over 250 km through the Sacramento River. During this study California experienced both a historic drought and record rainfall, resulting in dramatic fluctuations in year-to-year river flow and water temperature. Cumulative survival of tagged smolts from Mill Creek through the Sacramento River was 9.5% (±1.6) during the study, with relatively low survival during historic drought conditions in 2015 (4.9% ± 1.6) followed by increased survival during high flows in 2017 (42.3% ± 9.1). Survival in Mill Creek and the Sacramento River was modeled over a range of flow values, which indicated that higher flows in each region result in increased survival rates. Survival estimates gathered in this study can help focus management and restoration actions over a relatively long migration corridor to specific regions of low survival, and provide guidance for management actions in the Sacramento River aimed at restoring populations of threatened Central Valley spring-run Chinook salmon.
- Published
- 2020
15. Development of a Sub-ppb Resolution Methane Sensor Using a GaSb-Based DFB Diode Laser near 3270 nm for Fugitive Emission Measurement
- Author
- Jalal Norooz Oliaee, Nicaulas A. Sabourin, Simon A. Festa-Bianchet, James A. Gupta, Matthew R. Johnson, Kevin A. Thomson, Greg J. Smallwood, and Prem Lobo
- Subjects
- Fluid Flow and Transfer Processes, multipass cell, Process Chemistry and Technology, methane, Bioengineering, mid-infrared, Instrumentation, wavelength modulation spectroscopy, fugitive emissions
- Abstract
A challenge for mobile measurement of fugitive methane emissions is the availability of portable sensors that feature high sensitivity and fast response times, simultaneously. A methane gas sensor to measure fugitive emissions was developed using a continuous-wave, thermoelectrically cooled, GaSb-based distributed feedback diode laser emitting at a wavelength of 3.27 μm to probe methane in its strong ν3 vibrational band. Direct absorption spectra (DAS) as well as wavelength-modulated spectra (WMS) of pressure-broadened R(3) manifold lines of methane were recorded through a custom-developed open-path multipass cell with an effective optical path length of 6.8 m. A novel metrological approach was taken to characterize the sensor response in terms of the linearity of different WMS metrics, namely, the peak-to-peak amplitude of the X2f component and the peak and/or the integrated area of the background-subtracted quadrature signal (i.e., Q(2f – 2f0)) and the background-subtracted 1f-normalized quadrature signal (i.e., Q(2f/1f – 2f0/1f0)). Comparison with calibration gas concentrations spanning 1.5 to 40 ppmv indicated that the latter WMS metric showed the most linear response, while fitting DAS provides a traceable reference. In the WMS mode, a sensitivity better than 1 ppbv was achieved at a 1 s integration time. The sensitivity and response time are well-suited to measure enhancements in ambient methane levels caused by fugitive emissions.
- Published
- 2022
16. Convolutional neural networks can decode eye movement data: A black box approach to predicting task from eye movements
- Author
- Michael D. Dodd, Matthew R. Johnson, Karl M. Kuntzelman, and Zachary J. Cole
- Subjects
- Eye Movements, Computer science, Convolutional neural network, Eye tracking, Task (project management), cognitive state, Black box, Classifier (linguistics), Saccades, endogenous attention, Humans, Deep learning, Eye movement, Pattern recognition, Fixation (psychology), Sensory Systems, Ophthalmology, Artificial intelligence, Neural Networks, Computer
- Abstract
Previous attempts to classify task from eye movement data have relied on model architectures designed to emulate theoretically defined cognitive processes and/or data that have been processed into aggregate (e.g., fixations, saccades) or statistical (e.g., fixation density) features. Black box convolutional neural networks (CNNs) are capable of identifying relevant features in raw and minimally processed data and images, but difficulty interpreting these model architectures has contributed to challenges in generalizing lab-trained CNNs to applied contexts. In the current study, a CNN classifier was used to classify task from two eye movement datasets (Exploratory and Confirmatory) in which participants searched, memorized, or rated indoor and outdoor scene images. The Exploratory dataset was used to tune the hyperparameters of the model, and the resulting model architecture was retrained, validated, and tested on the Confirmatory dataset. The data were formatted into timelines (i.e., x-coordinate, y-coordinate, pupil size) and minimally processed images. To further understand the informational value of each component of the eye movement data, the timeline and image datasets were broken down into subsets with one or more components systematically removed. Classification of the timeline data consistently outperformed the image data. The Memorize condition was most often confused with Search and Rate. Pupil size was the least uniquely informative component when compared with the x- and y-coordinates. The general pattern of results for the Exploratory dataset was replicated in the Confirmatory dataset. Overall, the present study provides a practical and reliable black box solution to classifying task from eye movement data.
- Published
- 2021
17. Reward impacts visual statistical learning
- Author
- Su Hyoun Park, Leeland L. Rogers, Matthew R. Johnson, and Timothy J. Vickery
- Subjects
- Reward, Brain, Humans, Recognition, Psychology, Gyrus Cinguli, Hippocampus, Magnetic Resonance Imaging
- Abstract
Humans automatically detect and remember regularities in the visual environment, a type of learning termed visual statistical learning (VSL). Many aspects of learning from reward resemble VSL in certain respects, yet whether and how reward learning impacts VSL is largely unexamined. In two studies, we found that reward contingencies affect VSL, with high-value images showing stronger behavioral and neural signatures of such learning than low-value images. In Experiment 1, participants learned values (high or low) of images through a trial-and-error risky choice task. Unbeknownst to them, images were paired as four types: High-High, High-Low, Low-High, and Low-Low. In subsequent recognition and reward memory tests, participants chose the more familiar of two pairs (a target and a foil) and recalled the value of images. We found better recognition when the first image of a pair had a high value, with High-High pairs showing the highest recognition rate. In Experiment 2, we provided evidence that both value and statistical contingencies affected brain responses. When we compared responses between the high-value first image and the low-value first image, we found greater activation in regions including the inferior frontal gyrus, anterior cingulate gyrus, and hippocampus, among other regions. These findings were driven by the interaction between statistically structured information and reward: the same value contrast yielded no regions for second-image contrasts or for singletons. Our results suggest that when reward information is embedded in stimulus-stimulus associations, it may alter the learning process; specifically, the higher-value first image potentially enables better memory for statistically learned pairs and reward information.
- Published
- 2021
18. Not-so-working Memory: Drift in Functional Magnetic Resonance Imaging Pattern Representations during Maintenance Predicts Errors in a Visual Working Memory Task
- Author
- Emily J. Ward, Matthew R. Johnson, Timothy J. Vickery, and Phui Cheng Lim
- Subjects
- Adult, Cerebral Cortex, Male, Brain Mapping, Working memory, Cognitive Neuroscience, Speech recognition, Recognition, Psychology, Cognition, Magnetic Resonance Imaging, Young Adult, Memory, Short-Term, Pattern Recognition, Visual, Pattern recognition (psychology), Feature (machine learning), Humans, Female, Noise (video), Functional magnetic resonance imaging, Psychomotor Performance
- Abstract
Working memory (WM) is critical to many aspects of cognition, but it frequently fails. Much WM research has focused on capacity limits, but even for single, simple features, the fidelity of individual representations is limited. Why is this? One possibility is that, because of neural noise and interference, neural representations do not remain stable across a WM delay, nor do they simply decay, but instead, they may “drift” over time to a new, less accurate state. We tested this hypothesis in a functional magnetic resonance imaging study of a match/nonmatch WM recognition task for a single item with a single critical feature: orientation. We developed a novel pattern-based index of “representational drift” to characterize ongoing changes in brain activity patterns throughout the WM maintenance period, and we were successfully able to predict performance on the match/nonmatch recognition task using this representational drift index. Specifically, in trials where the target and probe stimuli matched, participants incorrectly reported more nonmatches when their activity patterns drifted away from the target. In trials where the target and probe did not match, participants incorrectly reported more matches when their activity patterns drifted toward the probe. On the basis of these results, we contend that neural noise does not cause WM errors merely by degrading representations and increasing random guessing; instead, one means by which noise introduces errors is by pushing WM representations away from the target and toward other meaningful (yet incorrect) configurations. Thus, we demonstrate that behaviorally meaningful drift within representation space can be indexed by neuroimaging.
- Published
- 2019
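The "representational drift" index described in entry 18 is a pattern-based measure of how maintenance-period activity moves relative to the target and probe representations. One way such an index could be computed is sketched below; the function name `drift_index` and the specific summary used here (change in relative target-vs-probe correlation from the first to the last maintenance timepoint) are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def drift_index(patterns, target, probe):
    """Illustrative drift sketch (not the authors' method): correlate each
    maintenance-period activity pattern with target and probe templates, and
    summarize drift as the change in relative (target minus probe) similarity
    from the first to the last timepoint. Negative values indicate drift
    away from the target and toward the probe."""
    def corr(a, b):
        # Pearson correlation between two 1-D activity patterns
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    rel = [corr(p, target) - corr(p, probe) for p in patterns]
    return rel[-1] - rel[0]
```

Under this sketch, a trial whose maintenance patterns end up closer to the probe than to the target yields a negative index, mirroring the paper's finding that drift away from the target predicted more errors.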
19. Mass absorption cross-section of flare-generated black carbon: Variability, predictive model, and implications
- Author
- Bradley M. Conrad and Matthew R. Johnson
- Subjects
- Attenuation, General Chemistry, Carbon black, Radiative forcing, Atmospheric sciences, Cross section (physics), Phenomenological model, Radiative transfer, Environmental science, General Materials Science, Carbon, Flare
- Abstract
Global gas flaring is an important source of black carbon (BC) emissions with uncertain climate impacts. The link between atmospheric concentration and direct radiative forcing (DRF) by BC is its mass absorption cross-section (MAC). MAC data for flare-generated BC are lacking in the literature and the only known data conflict with generally-accepted BC MAC values, which are assumed to be source-independent. This paper presents the first measurements of BC MAC for large-scale flares, burning globally-representative, industry-relevant flare gas compositions in a controlled facility. BC MAC was calculated with precisely-quantified uncertainties using photoacoustic and thermal-optical instruments. Flare-generated carbon was found to be primarily elemental in composition (typically >92%), and most probably externally-mixed based on detailed analysis of attenuation vs. evolved carbon data and consideration of flare-specific mechanisms for organic carbon emissions. Flare BC MAC was generally larger than well-cited literature values and had statistically significant variations with fuel and operating conditions. Variability in BC MAC was well-predicted by a novel phenomenological model based on flame radiative characteristics and relative BC production. The derived model consolidates previously-unreconciled disparate data from different sources and suggests that flare BC MAC is likely >1.3–2 times standard values, implying an underestimation of DRF by flare-generated BC.
- Published
- 2019
20. Periodic patterns in Rodentia: Development and evolution
- Author
- Matthew R. Johnson, Ricardo Mallarino, and Gregory S. Barsh
- Subjects
- Biology, Pattern formation, Rodentia, Skin Pigmentation, Model system, Dermatology, Biological Evolution, Biochemistry, Evolutionary biology, Spatial ecology, Evolutionary developmental biology, Animals, Clade, Molecular Biology, Developmental biology, Rhabdomys pumilio
- Abstract
Mammalian periodic pigment patterns, such as spots and stripes, have long interested mathematicians and biologists because they arise from non-random developmental processes that are programmed to be spatially constrained, and can therefore be used as a model to understand how organized morphological structures develop. Despite such interest, the developmental and molecular processes underlying their formation remain poorly understood. Here, we argue that Arvicanthines, a clade of African rodents that naturally evolved a remarkable array of coat patterns, represent a tractable model system in which to dissect the mechanistic basis of pigment pattern formation. Indeed, we review recent insights into the process of stripe formation that were obtained using an Arvicanthine species, the African striped mouse (Rhabdomys pumilio), and discuss how these rodents can be used to probe deeply into our understanding of the factors that specify and implement positional information in the skin. By combining naturally evolved pigment pattern variation in rodents with classic and novel experimental approaches, we can substantially advance our understanding of the processes by which spatial patterns of cell differentiation are established during embryogenesis, a fundamental question in developmental biology.
- Published
- 2019
21. Microstructure and Chemical Composition of Particles from Small-scale Gas Flaring
- Author
- Steven N. Rogak, N. M. Persiantseva, Matthew R. Johnson, Olga Popovicheva, Alberto Baldelli, Melina Jefferson, and M. A. Timofeev
- Subjects
- Materials science, Butane, Microstructure, Combustion, Pollution, Soot, Methane, Diesel fuel, Chemical engineering, Propane, Environmental Chemistry, Chemical composition
- Abstract
Among globally relevant combustion sources, such as diesel emissions and biomass burning, gas flaring remains the most uncertain. In this study, small-scale turbulent gas flaring was used to characterize particulate emissions produced under different operating conditions, such as various burner diameters and exit velocities. The composition of the fuel was also varied by modifying the percentage of methane, ethane, propane, butane, N2, and CO2, which are the predominant constituents in the upstream oil and gas industry. A broad suite of physical, chemical, and microscopic techniques was employed for analysis, and scanning electron microscopy showed the generated soot agglomerates to be composed of primary spherules that were 30 ± 10 nm in diameter. Additionally, high-resolution transmission electron microscopy, used to determine the length, tortuosity, and separation of individual graphene fringes on the primary particles, revealed a fullerenic, multiple-nuclei internal structure. Single-particle analysis revealed the dominance of elemental carbon vs. oxidized and metal-contaminated particles, and infrared spectroscopy showed the presence of alkanes and aromatics with oxygenated compounds. By intercomparing the microstructure and the composition, we also concluded that the vast majority of particles are hydrophobic.
- Published
- 2019
22. Bedrock Geology of the Southern Half of the Knox 30- x 60-Minute Quadrangle, Indiana
- Author
-
Matthew R. Johnson, Patrick I. McLaughlin, and Alyssa M. Bancroft
- Subjects
Quadrangle ,Bedrock geology ,Archaeology ,Geology ,Devonian - Abstract
This is a map. It has an introduction, but no abstract.
- Published
- 2021
23. Bedrock Topography of the Berne, Domestic, Geneva, and Willshire 7.5-Minute Quadrangles, Indiana-Ohio
- Author
-
Thomas A. Nash, Tyler A. Norris, Henry M. Loope, Donald C. Tripp, José Luis Antinao, Robin R. Rupp, and Matthew R. Johnson
- Subjects
geography ,geography.geographical_feature_category ,Bedrock ,Archaeology ,Geology - Abstract
This map provides updated bedrock topography for the eastern extent of the Lafayette Bedrock Valley System in Indiana.
- Published
- 2021
24. Using Life Cycle Models to Identify Monitoring Gaps for Central Valley Spring-Run Chinook Salmon
- Author
-
Matthew R. Johnson, Miles E. Daniels, William H. Satterthwaite, and Flora Cordoleani
- Subjects
education.field_of_study ,Chinook wind ,Data collection ,business.industry ,Environmental resource management ,Population ,Aquatic Science ,Monitoring program ,Life stage ,Building process ,Threatened species ,Environmental science ,business ,education ,Restoration ecology ,Water Science and Technology - Abstract
Life cycle models (LCMs) provide a quantitative framework that allows evaluation of how management actions targeting specific life stages can have population-level impacts on a species. The LCM building process is also a powerful tool that can be used to identify data gaps existing in the knowledge of the target species, and that might strongly influence overall population dynamics. LCMs are particularly useful for species such as salmon that are highly migratory and use multiple aquatic ecosystems throughout their life. However, they are lacking for threatened Central Valley spring-run Chinook (Oncorhynchus tshawytscha; CVSC). Here, we developed a CVSC LCM to describe the dynamics of Mill, Deer, and Butte Creek CVSC populations. We used model construction, calibration, and a global sensitivity analysis to highlight important data gaps in the monitoring of those populations. In particular, we found strong model sensitivity and high uncertainty in various egg, juvenile, and adult ocean life stages' biological processes. We concluded that the current CVSC monitoring network is insufficient to support using an LCM to inform how future management actions (e.g., hydrology and habitat restoration) influence CVSC dynamics. We propose a series of monitoring recommendations, such as the development of an enhanced juvenile tracking monitoring program and the implementation of juvenile trapping efficiency methodology combined with genetic identification tools, to help fill highlighted data gaps. These additional data collection efforts will provide critical quantitative information about the status of this imperiled species at key life stages (e.g., CVSC juvenile abundance estimates), and create a more comprehensive monitoring framework fundamental for working on the recovery of the entire stock.
Furthermore, additional data collection will strengthen the LCM parameterization and calibration process, and ultimately improve the model’s predictive performance.
- Published
- 2020
25. Deep-Learning-Based Multivariate Pattern Analysis (dMVPA): A Tutorial and a Toolbox
- Author
-
Jacob M. Williams, Karl M. Kuntzelman, Ashok Samal, Prahalad K. Rao, Phui Cheng Lim, and Matthew R. Johnson
- Subjects
medicine.diagnostic_test ,Artificial neural network ,business.industry ,Computer science ,Deep learning ,Cognitive neuroscience ,Machine learning ,computer.software_genre ,Toolbox ,Field (computer science) ,Variety (cybernetics) ,Neuroimaging ,medicine ,Artificial intelligence ,business ,Functional magnetic resonance imaging ,computer - Abstract
In recent years, multivariate pattern analysis (MVPA) has been hugely beneficial for cognitive neuroscience by making new experiment designs possible and by increasing the inferential power of functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and other neuroimaging methodologies. In a similar time frame, “deep learning” (a term for the use of artificial neural networks with convolutional, recurrent, or similarly sophisticated architectures) has produced a parallel revolution in the field of machine learning and has been employed across a wide variety of applications. Traditional MVPA also uses a form of machine learning, but most commonly with much simpler techniques based on linear calculations; a number of studies have applied deep learning techniques to neuroimaging data, but we believe that those have barely scratched the surface of the potential deep learning holds for the field. In this paper, we provide a brief introduction to deep learning for those new to the technique, explore the logistical pros and cons of using deep learning to analyze neuroimaging data – which we term “deep MVPA,” or dMVPA – and introduce a new software toolbox (the “Deep Learning In Neuroimaging: Exploration, Analysis, Tools, and Education” package, DeLINEATE for short) intended to facilitate dMVPA for neuroscientists (and indeed, scientists more broadly) everywhere.
- Published
- 2020
26. Editors' Notes
- Author
-
Matthew R. Johnson and Krista M. Soria
- Subjects
Leadership ,Universities ,Humans ,General Medicine ,Curriculum ,Students - Published
- 2020
27. Age-related delay in reduced accessibility of refreshed items
- Author
-
Julie A. Higgins, Marcia K. Johnson, and Matthew R. Johnson
- Subjects
Adult ,Male ,medicine.medical_specialty ,Aging ,Social Psychology ,Adolescent ,media_common.quotation_subject ,Short-term memory ,Audiology ,Attentional bias ,Affect (psychology) ,050105 experimental psychology ,Young Adult ,Perception ,medicine ,Humans ,0501 psychology and cognitive sciences ,Attention ,Young adult ,media_common ,Aged ,Aged, 80 and over ,Working memory ,Long-term memory ,05 social sciences ,Cognition ,Middle Aged ,Female ,Geriatrics and Gerontology ,Psychology - Abstract
Previously, we demonstrated that in young adults, briefly thinking of (i.e., refreshing) a just-seen word impairs immediate (100-ms delay) perceptual processing of the word, relative to words seen but not refreshed. We suggested that such reflective-induced inhibition biases attention toward new information. Here, we investigated whether reduced accessibility of refreshed targets dissipates with a longer delay and whether older adults would show a smaller and/or delayed effect compared with young adults. Young adult and older adult participants saw 2 words, followed by a cue to refresh one of these words. After either a 100-ms or 500-ms delay, participants read a word that was the refreshed word (refreshed probe), the nonrefreshed word (nonrefreshed probe), or a new word (novel probe). Young adults were slower to read refreshed probes than nonrefreshed probes at the 100-ms, but not the 500-ms, delay. Conversely, older adults were slower to read refreshed probes than nonrefreshed probes at the 500-ms, but not the 100-ms, delay. The delayed slowing of responses to refreshed probes was primarily observed in older-old adults (75+ years). A delay in suppressing the target of refreshing may disrupt the fluidity with which attention can be shifted to a new target. Importantly, a long-term memory benefit of refreshing was observed for both ages and delays. These results suggest that a full characterization of age-related memory deficits should consider the time course of effects and how specific component cognitive processes affect both working and long-term memory. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
- Published
- 2020
28. Paired Trial Classification: A Novel Deep Learning Technique for MVPA
- Author
-
Prahalad K. Rao, Jacob M. Williams, Matthew R. Johnson, and Ashok Samal
- Subjects
Computer science ,02 engineering and technology ,Overfitting ,Machine learning ,computer.software_genre ,Field (computer science) ,lcsh:RC321-571 ,cognitive neuroscience ,03 medical and health sciences ,0302 clinical medicine ,MVPA ,0202 electrical engineering, electronic engineering, information engineering ,EEG ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Regularization (linguistics) ,Original Research ,Artificial neural network ,Contextual image classification ,business.industry ,General Neuroscience ,Deep learning ,deep learning ,Class (biology) ,machine learning ,020201 artificial intelligence & image processing ,Artificial intelligence ,Noise (video) ,business ,computer ,030217 neurology & neurosurgery ,Neuroscience - Abstract
Many recent developments in machine learning have come from the field of “deep learning,” or the use of advanced neural network architectures and techniques. While these methods have produced state-of-the-art results and dominated research focus in many fields, such as image classification and natural language processing, they have not gained as much ground over standard multivariate pattern analysis (MVPA) techniques in the classification of electroencephalography (EEG) or other human neuroscience datasets. The high dimensionality and large amounts of noise present in EEG data, coupled with the relatively low number of examples (trials) that can be reasonably obtained from a sample of human subjects, lead to difficulty training deep learning models. Even when a model successfully converges in training, significant overfitting can occur despite the presence of regularization techniques. To help alleviate these problems, we present a new method of “paired trial classification” that involves classifying pairs of EEG recordings as coming from the same class or different classes. This allows us to drastically increase the number of training examples, in a manner akin to but distinct from traditional data augmentation approaches, through the combinatorics of pairing trials. Moreover, paired trial classification still allows us to determine the true class of a novel example (trial) via a “dictionary” approach: compare the novel example to a group of known examples from each class, and determine the final class via summing the same/different decision values within each class. Since individual trials are noisy, this approach can be further improved by comparing a novel individual example with a “dictionary” in which each entry is an average of several examples (trials). Even further improvements can be realized in situations where multiple samples from a single unknown class can be averaged, thus permitting averaged signals to be compared with averaged signals.
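The pairing-and-dictionary scheme described in this abstract can be sketched in a few lines of Python. This is a hypothetical illustration only: the toy data, the `same_score` stand-in (a plain Euclidean distance used in place of a trained same/different network), and all names are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG-like data: 40 trials per class, 64 "channels", two classes
# whose means differ (an assumption for illustration only).
n_per_class, n_feat = 40, 64
class_means = [np.zeros(n_feat), np.full(n_feat, 0.5)]
trials = {c: m + rng.normal(scale=1.0, size=(n_per_class, n_feat))
          for c, m in enumerate(class_means)}

def make_pairs(trials):
    """Expand trials into (pair, same/different) training examples.

    Pairing trials combinatorially is what multiplies the number of
    training examples available to a deep model.
    """
    X, y = [], []
    classes = list(trials)
    for ci in classes:
        for cj in classes:
            for a in trials[ci]:
                for b in trials[cj]:
                    X.append(np.concatenate([a, b]))
                    y.append(int(ci == cj))  # 1 = same class
    return np.array(X), np.array(y)

X, y = make_pairs(trials)  # 80 trials -> 6400 paired examples

def same_score(a, b):
    """Stand-in for a trained same/different network: higher means
    'more likely same class'. Here: negative Euclidean distance."""
    return -np.linalg.norm(a - b)

def classify(novel, dictionary):
    """Dictionary approach: compare the novel trial with entries from
    each class and sum the same/different scores within each class."""
    totals = {c: sum(same_score(novel, entry) for entry in entries)
              for c, entries in dictionary.items()}
    return max(totals, key=totals.get)

# Dictionary entries are averages of several trials, which (as the
# abstract notes) reduces noise in the comparison.
dictionary = {c: [t[:20].mean(axis=0), t[20:].mean(axis=0)]
              for c, t in trials.items()}

novel = class_means[1] + rng.normal(scale=1.0, size=n_feat)
pred = classify(novel, dictionary)
```

The combinatorics are the point: 80 individual trials yield 6,400 paired training examples, while the final class decision is still recovered for a single novel trial via the summed dictionary comparisons.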
- Published
- 2020
29. Experimental Modelling of Black Carbon Emissions from Gas Flares in the Oil and Gas Sector
- Author
-
Parvin Mehr, Bradley M. Conrad, and Matthew R. Johnson
- Abstract
Flares in the upstream oil and gas (UOG) industry are an important and poorly quantified source of black carbon (BC) emissions and may be a dominant source of black carbon deposition in sensitive Arctic regions (Stohl et al. 2013). Accurate estimation of flare BC emissions to support informed policy decisions, accurate climate modeling, and new international reporting regulations under the Gothenburg protocol is a critical challenge. To date, few studies have focused on the primarily buoyancy-dominated turbulent non-premixed flames typical of upstream oil and gas flares, such that existing emission factor models are highly uncertain (see McEwen and Johnson 2012). Although recent progress has been made in measuring black carbon from flares in the field (e.g., Conrad and Johnson 2017; Johnson et al. 2013), data have also shown that emissions of individual flares may vary by more than 4 orders of magnitude. The objective of the current study is to develop a robust data-backed model to predict black carbon emissions from flares considering variations in flare gas composition, flow rates, and stack diameters. Laboratory measurements of black carbon (soot) for a range of turbulent non-premixed jet diffusion flames of up to 3 m in length were performed at the Carleton University Flare Facility in Ottawa, Canada. Two hundred and thirty cases spanning five flare stack diameters (25.4 to 76.2 mm), exit velocities from 0.16 to 15.15 m/s, and a broad range of industrially relevant multicomponent (C1-C7 hydrocarbons, CO2, N2) flare gas compositions were studied. Emissions were captured in a large (~3.1 m diameter) sampling hood and forwarded to gas- and particulate-phase analyzers. Black carbon concentrations were measured via a Sunset Labs thermal-optical instrument using the OCECgo software tool (Conrad and Johnson 2019) to quantify uncertainties via Monte Carlo analysis. BC yields were subsequently calculated using a mass-balance methodology (Corbin and Johnson 2014).
Variability in BC yield was well-predicted by an empirical model incorporating both the aerodynamic and chemistry effects. For this range of conditions, it was observed that primary independent variables (such as exit velocity and higher heating value) act as reasonable surrogates for sooting propensity. Further experiments are underway to test the proposed model over a broader range of conditions. However, results to date represent a significant advance in our ability to predict black carbon emissions from flares.
- Published
- 2020
30. A Techno-Economic Analysis of Methane Mitigation Potential from Reported Venting at Oil Production Sites in Alberta
- Author
-
Matthew R. Johnson and David R. Tyner
- Subjects
Methane emissions ,Air Pollutants ,010504 meteorology & atmospheric sciences ,Environmental engineering ,Techno economic ,General Chemistry ,010501 environmental sciences ,Combustion ,7. Clean energy ,01 natural sciences ,Net present value ,Carbon ,Profit (economics) ,Methane ,Alberta ,Pipeline transport ,chemistry.chemical_compound ,chemistry ,13. Climate action ,Oil production ,Environmental Chemistry ,Environmental science ,0105 earth and related environmental sciences - Abstract
The technical and economic potential for reducing methane emissions from reported venting and flaring volumes in 2015 at 9422 upstream oil production sites in Alberta, Canada was evaluated in a comprehensive site-by-site analysis. For each site, up to six different technologies for mitigation were considered, based on conserving gas into pipelines, combusting gas on site, or using gas for on-site fuel. Economic viability of mitigation was calculated using current economic parameters and gas price projections on a net present cost basis. Monte Carlo simulations suggest that a 45% reduction in methane emissions (consistent with current federal and provincial targets) from reported flaring and venting is technically and economically feasible at overall average costs ranging from $–2.98 CAD/tCO2e (i.e., a profit) to $2.51 CAD/tCO2e with no one site paying more than $11.02 CAD/tCO2e. If the reported baseline emissions are augmented to reflect results of recent airborne measurements, overall economics of mitiga...
- Published
- 2018
31. Hydrodearomatization of Distillates and Heavy Naphtha over a Precious Metal Hydrogenation Catalyst and the Determination of Low Aromatic Content
- Author
-
Jeffrey M. Guevremont, Matthew R. Johnson, Mike Kozminski, Joshua D. Ward, and Yassin Al Obaidi
- Subjects
010405 organic chemistry ,business.industry ,Chemistry ,General Chemical Engineering ,Inorganic chemistry ,Precious metal ,General Chemistry ,010402 general chemistry ,medicine.disease_cause ,01 natural sciences ,High-performance liquid chromatography ,Industrial and Manufacturing Engineering ,0104 chemical sciences ,law.invention ,Catalysis ,Petroleum product ,law ,medicine ,Saturation (chemistry) ,business ,Naphtha ,Distillation ,Ultraviolet - Abstract
Because of new regulations on the allowable aromatic content in petroleum products, the saturation of aromatics in distillates and heavy naphtha is gaining more attention. Catalytic hydrodearomatization (HDA) improves the qualities and properties of distillates for health, food, and pharmaceutical applications. In this research, the aromatic content of the four different middle distillate fractions varies from 10 to 12% volume. The goal is to saturate the aromatics to
- Published
- 2018
32. Total Synthesis of Clavatadine B
- Author
-
Michael T. Davenport, Jordan A. Dickson, Matthew R. Johnson, and Stephen Chamberland
- Subjects
Pharmacology ,Complementary and alternative medicine ,Molecular Structure ,Organic Chemistry ,Drug Discovery ,Pharmaceutical Science ,Molecular Medicine ,Anticoagulants ,Indicators and Reagents ,Guanidines ,Homogentisic Acid ,Factor XIa ,Analytical Chemistry - Abstract
The first total synthesis of clavatadine B (
- Published
- 2019
33. Fugitive emission source characterization using a gradient-based optimization scheme and scalar transport adjoint
- Author
-
Matthew R. Johnson, Carol A. Brereton, Lucy J. Campbell, and Ian M. Joynes
- Subjects
Scheme (programming language) ,Atmospheric Science ,Source characterization ,010504 meteorology & atmospheric sciences ,Series (mathematics) ,Scalar (physics) ,Mechanics ,010501 environmental sciences ,7. Clean energy ,01 natural sciences ,13. Climate action ,Greenhouse gas ,Environmental science ,Transient (oscillation) ,Fugitive emissions ,Reynolds-averaged Navier–Stokes equations ,computer ,0105 earth and related environmental sciences ,General Environmental Science ,computer.programming_language - Abstract
Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
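The gradient-based source reconstruction described above can be illustrated with a toy linear surrogate. This is a sketch under assumed simplifications, not the paper's implementation: a random matrix stands in for the CFD-derived forward model, and its transpose plays the role that the scalar transport adjoint plays in the actual method; all names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: a linear forward model c = A @ q maps emission
# rates q at 10 candidate source locations to concentrations at 200
# downstream receptors. In the paper, the forward model and its adjoint
# come from scalar transport simulations in a wind field; here A is
# simply a random non-negative matrix.
n_src, n_obs = 10, 200
A = rng.random((n_obs, n_src))

# "Unknown" true emissions: two active sources among the candidates.
q_true = np.zeros(n_src)
q_true[2], q_true[7] = 3.0, 1.5
obs = A @ q_true  # synthetic downstream measurements

def misfit(q):
    """Least-squares mismatch between modelled and observed concentrations."""
    r = A @ q - obs
    return float(r @ r)

def grad(q):
    """Gradient of the misfit. For this linear surrogate, the adjoint of
    the forward operator is just A.T applied to the residual; this is
    the role the scalar transport adjoint fills in the CFD setting."""
    return 2.0 * A.T @ (A @ q - obs)

# Projected gradient descent: emission rates cannot be negative.
q = np.zeros(n_src)
step = 5e-4
for _ in range(20000):
    q = np.maximum(q - step * grad(q), 0.0)

# Sources are "located" wherever the recovered rate is non-trivial.
located = [i for i in range(n_src) if q[i] > 0.1]
```

Because the synthetic observations are noise-free and the toy system is overdetermined, the descent recovers both source locations and magnitudes; the real problem adds measurement noise, transient winds, and a far more expensive forward model, which is exactly why the adjoint (one extra simulation per gradient, regardless of the number of candidate sources) matters.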
- Published
- 2018
34. Where the Methane Is—Insights from Novel Airborne LiDAR Measurements Combined with Ground Survey Data
- Author
-
Matthew R. Johnson and David R. Tyner
- Subjects
Methane emissions ,010504 meteorology & atmospheric sciences ,Natural Gas ,010501 environmental sciences ,Atmospheric sciences ,7. Clean energy ,01 natural sciences ,Methane ,chemistry.chemical_compound ,Environmental Chemistry ,0105 earth and related environmental sciences ,Air Pollutants ,British Columbia ,business.industry ,Fossil fuel ,General Chemistry ,Ground survey ,Aerial imagery ,Lidar ,chemistry ,13. Climate action ,Environmental science ,Fugitive emissions ,business ,Count data - Abstract
Airborne LiDAR measurements, parallel controlled releases, and on-site optical gas imaging (OGI) survey and pneumatic device count data from 1 year prior were combined to derive a new measurement-based methane inventory for oil and gas facilities in British Columbia, Canada. Results reveal a surprising distinction in the higher magnitudes, different types, and smaller number of sources seen by the plane versus OGI. Combined data suggest methane emissions are 1.6-2.2 times current federal inventory estimates. More importantly, analysis of high-resolution geo-located aerial imagery, facility schematics, and equipment counts allowed attribution to major source types, revealing key drivers of this difference. More than half of emissions were attributed to three main sources: tanks (24%), reciprocating compressors (15%), and unlit flares (13%). These are the sources driving upstream oil and gas methane emissions, and specifically, where emerging regulations must focus to achieve meaningful reductions. Pneumatics accounted for 20%, but this contribution is lower than recent Canadian and U.S. inventory estimates, possibly reflecting a growing shift toward more low- and zero-emitting devices. The stark difference in the aerial and OGI results indicates key gaps in current inventories and suggests that policy and regulations relying on OGI surveys alone may risk missing a significant portion of emissions.
- Published
- 2021
35. Blinded evaluation of airborne methane source detection using Bridger Photonics LiDAR
- Author
-
David R. Tyner, Alexander J. Szekeres, and Matthew R. Johnson
- Subjects
Data processing ,010504 meteorology & atmospheric sciences ,business.industry ,0208 environmental biotechnology ,Fossil fuel ,Soil Science ,Geology ,02 engineering and technology ,01 natural sciences ,Methane ,Wind speed ,020801 environmental engineering ,chemistry.chemical_compound ,Lidar ,chemistry ,13. Climate action ,Environmental science ,Limit (mathematics) ,Computers in Earth Sciences ,Photonics ,business ,Fugitive emissions ,0105 earth and related environmental sciences ,Remote sensing - Abstract
Controlled, fully-blinded methane releases and ancillary on-site wind measurements were performed during a separate airborne survey of active oil and gas facilities to quantitatively evaluate the capabilities and potential utility of the Bridger Photonics LiDAR-based airborne Gas Mapping LiDAR™ (GML) methane measurement technology under realistic field conditions. Importantly, although Bridger Photonics knew there was a ground team working in the area to deploy wind sensors as part of the broader survey of facilities, they had no knowledge whatsoever that controlled releases were taking place and were not informed of this until all data processing was complete. Thus, the presented data allow a true, fully-blinded assessment of the airborne technology's ability to both detect and locate unknown methane sources within active oil and gas facilities, as well as to quantify their release rates. Results were used to derive a lower sensitivity limit as a function of wind speed, which matches well with the broader field survey results. Comparison of measurement results with and without the benefit of on-site wind data reveals that uncertainty in the GML source quantification is a direct linear function of the uncertainty in the wind speed. Quantification uncertainties (1σ) of ±31–68% can be expected for sources near the sensitivity limit. The derived sensitivity limit function was incorporated into exploratory simulations using the Fugitive Emissions Abatement Simulation Toolkit (FEAST), which suggest that the Bridger GML technology has comparable performance to optical gas imaging (OGI) camera surveys both in terms of fraction of total emissions detected and anticipated net mitigation. The relative performance of the Bridger GML technology would be expected to improve or worsen as the assumed underlying distribution of source magnitudes becomes more or less positively skewed (i.e., more or less dominated by larger sources such as tank vents).
Overall, the Bridger GML technology is shown to be capable of detecting, locating, and quantifying individual sources at or below the magnitudes of recent regulated venting limits. The presented detection sensitivity function will be useful for modelling potential alternate leak detection and repair strategies and interpreting future airborne measurement data.
- Published
- 2021
36. Refreshing and removing items in working memory: Different approaches to equivalent processes?
- Author
-
Matthew R. Johnson and Evan N. Lintz
- Subjects
Cued speech ,Linguistics and Language ,Working memory ,Cognitive Neuroscience ,media_common.quotation_subject ,Perspective (graphical) ,Experimental and Cognitive Psychology ,Motivated forgetting ,Article ,Language and Linguistics ,Test (assessment) ,Surprise ,Memory, Short-Term ,Mental Recall ,Reaction Time ,Developmental and Educational Psychology ,Lexical decision task ,Humans ,Attention ,Cues ,Psychology ,Word (computer architecture) ,Cognitive psychology ,media_common - Abstract
Researchers have investigated “refreshing” of items in working memory (WM) as a means of preserving them, while concurrently, other studies have examined “removal” of items from WM that are irrelevant. However, it is unclear whether refreshing and removal in WM truly represent different processes, or if participants, in an effort to avoid the to-be-removed items, simply refresh alternative items. We conducted two experiments to test whether these putative processes can be distinguished from one another. Participants were presented with sets of three words and then cued to either refresh one item or remove two items from WM, followed by a lexical decision probe containing either one of the just-seen words or a non-word. In Experiment 1, all probes were valid and in Experiment 2, probes were occasionally invalid (the probed word was one of the removed/non-refreshed items). In both experiments, participants also received a subsequent surprise long-term memory test. Results from both experiments suggested the expected advantages for refreshed (or non-removed) items in both short-term response time and long-term recognition, but no differences between refresh and remove instructions that would suggest a fundamental difference in processes. Thus, we argue that a functional distinction between refreshing and removal may not be necessary and propose that both of these putative processes could potentially be subsumed under an overarching conceptual perspective based on the flexible reallocation of mental or reflective attention.
- Published
- 2021
37. Comparisons of Airborne Measurements and Inventory Estimates of Methane Emissions in the Alberta Upstream Oil and Gas Sector
- Author
-
Daniel Zavala-Araiza, David R. Tyner, Matthew R. Johnson, Stefan Schwietzke, and Stephen Conley
- Subjects
Methane emissions ,010504 meteorology & atmospheric sciences ,Meteorology ,Natural Gas ,010501 environmental sciences ,Atmospheric sciences ,7. Clean energy ,01 natural sciences ,Methane ,Alberta ,chemistry.chemical_compound ,Natural gas ,Environmental Chemistry ,Oil and Gas Fields ,0105 earth and related environmental sciences ,Air Pollutants ,Light crude oil ,business.industry ,Fossil fuel ,General Chemistry ,Current (stream) ,chemistry ,13. Climate action ,Cold heavy oil production with sand ,Environmental science ,business - Abstract
Airborne measurements of methane emissions from oil and gas infrastructure were completed over two regions of Alberta, Canada. These top-down measurements were directly compared with region-specific bottom-up inventories that utilized current industry-reported flaring and venting volumes (reported data) and quantitative estimates of unreported venting and fugitive sources. For the 50 × 50 km measurement region near Red Deer, characterized by natural gas and light oil production, measured methane fluxes were more than 17 times greater than that derived from directly reported data but consistent with our region-specific bottom-up inventory-based estimate. For the 60 × 60 km measurement region near Lloydminster, characterized by significant cold heavy oil production with sand (CHOPS), airborne measured methane fluxes were five times greater than directly reported emissions from venting and flaring and four times greater than our region-specific bottom-up inventory-based estimate. Extended across Alberta, our results suggest that reported venting emissions in Alberta should be 2.5 ± 0.5 times higher, and total methane emissions from the upstream oil and gas sector (excluding mined oil sands) are likely at least 25-50% greater than current government estimates. Successful mitigation efforts in the Red Deer region will need to focus on the 90% of methane emissions currently unmeasured or unreported.
- Published
- 2017
38. Field Measurements of Black Carbon Yields from Gas Flaring
- Author
-
Bradley M. Conrad and Matthew R. Johnson
- Subjects
010504 meteorology & atmospheric sciences ,Field (physics) ,Meteorology ,Astrophysics::High Energy Astrophysical Phenomena ,Monte Carlo method ,010501 environmental sciences ,Atmospheric sciences ,7. Clean energy ,01 natural sciences ,law.invention ,Soot ,Orders of magnitude (specific energy) ,law ,Environmental Chemistry ,Mexico ,Astrophysics::Galaxy Astrophysics ,0105 earth and related environmental sciences ,Physics ,Air Pollutants ,General Chemistry ,Carbon black ,Carbon ,Volumetric flow rate ,Plume ,13. Climate action ,Physics::Space Physics ,Ecuador ,Flare ,Field conditions - Abstract
Black carbon (BC) emissions from gas flaring in the oil and gas industry are postulated to have critical impacts on climate and public health, but actual emission rates remain poorly characterized. This paper presents in situ field measurements of BC emission rates and flare gas volume-specific BC yields for a diverse range of flares. Measurements were performed during a series of field campaigns in Mexico and Ecuador using the sky-LOSA optical measurement technique, in concert with comprehensive Monte Carlo-based uncertainty analyses. Parallel on-site measurements of flare gas flow rate and composition were successfully performed at a subset of locations enabling direct measurements of fuel-specific BC yields from flares under field conditions. Quantified BC emission rates from individual flares spanned more than 4 orders of magnitude (up to 53.7 g/s). In addition, emissions during one notable ∼24-h flaring event (during which the plume transmissivity dropped to zero) would have been even larger than this maximum rate, which was measured as this event was ending. This highlights the likely importance of superemitters to global emission inventories. Flare gas volume-specific BC yields were shown to be strongly correlated with flare gas heating value. A newly derived correlation fitting current field data and previous lab data suggests that, in the context of recent studies investigating transport of flare-generated BC in the Arctic and globally, impacts of flaring in the energy industry may in fact be underestimated.
- Published
- 2017
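The Monte Carlo uncertainty analyses described in this abstract can be illustrated with a minimal sketch, assuming a simple emission model (emission rate = volume-specific BC yield × flare gas flow rate). This is not the authors' actual sky-LOSA code; the distributions, values, and function name are invented for illustration:

```python
import random
import statistics

def mc_emission_rate(yield_mean, yield_sd, flow_mean, flow_sd, n=50_000, seed=0):
    """Propagate input uncertainties into a BC emission rate by Monte Carlo
    sampling: emission rate = BC yield (g/m^3) x flare gas flow rate (m^3/s)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        y = rng.gauss(yield_mean, yield_sd)   # volume-specific BC yield, g/m^3
        q = rng.gauss(flow_mean, flow_sd)     # flare gas flow rate, m^3/s
        samples.append(max(y, 0.0) * max(q, 0.0))  # clip non-physical negatives
    samples.sort()
    return {
        "mean": statistics.fmean(samples),
        "p2.5": samples[int(0.025 * n)],
        "p97.5": samples[int(0.975 * n)],
    }

# Illustrative inputs only (not measured values from the paper).
result = mc_emission_rate(yield_mean=0.5, yield_sd=0.1, flow_mean=2.0, flow_sd=0.3)
```

In practice, the published analyses propagate many more (and correlated) input uncertainties, including plume transmissivity and ambient optical properties; the sketch only shows the sampling idea.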
39. Split Point Analysis and Uncertainty Quantification of Thermal-Optical Organic/Elemental Carbon Measurements
- Author
-
Matthew R. Johnson and Bradley M. Conrad
- Subjects
Aerosols ,Protocol (science) ,Air Pollutants ,General Immunology and Microbiology ,business.industry ,General Chemical Engineering ,General Neuroscience ,Monte Carlo method ,Uncertainty ,Repeatability ,Carbon ,General Biochemistry, Genetics and Molecular Biology ,Aerosol ,Software ,Calibration ,Environmental science ,Uncertainty quantification ,business ,Process engineering ,Uncertainty analysis ,Environmental Monitoring - Abstract
Researchers from myriad fields frequently seek to quantify and classify concentrations of carbonaceous aerosols as organic carbon (OC) or elemental carbon (EC). This is commonly accomplished using thermal-optical OC/EC analyzers (TOAs), which enable measurement via controlled thermal pyrolysis and oxidation under specific temperature protocols and within constrained atmospheres. Several commercial TOAs exist, including a semi-continuous instrument that enables on-line analyses in the field. This instrument employs an in-test calibration procedure that requires relatively frequent calibration. This article details a calibration protocol for this semi-continuous TOA and presents an open-source software tool for data analysis and rigorous Monte Carlo quantification of uncertainties. Notably, the software tool includes novel means to correct for instrument drift and identify and quantify the uncertainty in the OC/EC split point. This is a significant improvement on the uncertainty estimation in the manufacturer's software, which ignores split point uncertainty and otherwise uses fixed equations for relative and absolute errors (generally leading to under-estimated uncertainties and often yielding non-physical results as demonstrated in several example data sets). The demonstrated calibration protocol and new software tool enabling accurate quantification of combined uncertainties from calibration, repeatability, and OC/EC split point are shared with the intent of assisting other researchers in achieving better measurements of OC, EC, and total carbon mass in aerosol samples.
- Published
- 2019
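The split-point uncertainty idea in this abstract can be sketched as follows: the OC/EC split is commonly taken where the filter's laser transmittance recovers to its initial value, so noise in the transmittance signal translates into uncertainty in the split and hence in the OC/EC masses. This toy Monte Carlo sketch (thermogram values, noise model, and function names are all invented; the published tool is more sophisticated) propagates that noise:

```python
import random

def find_split_index(transmittance, initial):
    """First index at which filter transmittance recovers to its initial
    value; carbon evolved before this point is OC, after it is EC."""
    for i, t in enumerate(transmittance):
        if t >= initial:
            return i
    return len(transmittance) - 1

def mc_split_uncertainty(transmittance, carbon_evolved, noise_sd, n=20_000, seed=1):
    """Monte Carlo propagation of laser noise into the OC/EC split point.
    carbon_evolved[i] is the cumulative carbon mass evolved up to step i."""
    rng = random.Random(seed)
    initial = transmittance[0]
    oc = []
    for _ in range(n):
        noisy = [t + rng.gauss(0.0, noise_sd) for t in transmittance[1:]]
        idx = find_split_index(noisy, initial) + 1  # +1: slice starts at step 1
        oc.append(carbon_evolved[idx])
    oc.sort()
    return oc[n // 2], oc[int(0.025 * n)], oc[int(0.975 * n)]

# Toy thermogram: transmittance dips during pyrolysis, then recovers.
split_med, split_lo, split_hi = mc_split_uncertainty(
    transmittance=[1.0, 0.8, 0.6, 0.7, 0.9, 1.1, 1.2],
    carbon_evolved=[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    noise_sd=0.05)
```

The returned median and 2.5/97.5 percentiles of evolved OC mass directly express how transmittance noise widens the OC/EC split uncertainty, which is the component the manufacturer's fixed-equation error estimates ignore.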
40. Bedrock Geology of the Logansport 30- x 60-Minute Quadrangle, Indiana
- Author
-
Patrick I. McLaughlin, Alyssa M. Bancroft, and Matthew R. Johnson
- Subjects
Quadrangle ,Bedrock geology ,Geochemistry ,Geology - Abstract
This is a map. There is no abstract.
- Published
- 2019
41. A methane emissions reduction equivalence framework for alternative leak detection and repair programs
- Author
-
Thomas E. Barchyn, Thomas A. Fox, Daniel Zimmerle, Tim Taylor, David Lyon, Matthew R. Johnson, Chris H. Hugenholtz, and Arvind P. Ravikumar
- Subjects
Methane emissions ,Atmospheric Science ,Environmental Engineering ,010504 meteorology & atmospheric sciences ,Computer science ,010501 environmental sciences ,Oceanography ,01 natural sciences ,Fugitive emissions ,Leak detection and repair ,Emissions equivalence ,Upstream oil and gas ,Natural gas ,Leak detection ,Equivalence (measure theory) ,lcsh:Environmental sciences ,0105 earth and related environmental sciences ,lcsh:GE1-350 ,Ecology ,business.industry ,Fossil fuel ,Geology ,Geotechnical Engineering and Engineering Geology ,Systems engineering ,business ,Mobile device - Abstract
Fugitive methane emissions from the oil and gas sector are typically addressed through periodic leak detection and repair surveys. These surveys, conducted manually using handheld leak detection technologies, are time-consuming. To improve the speed and cost-effectiveness of leak detection, technology developers are introducing innovative solutions using mobile platforms, close-range portable systems, and permanent installations. Many of these new approaches promise faster, cheaper, or more effective leak detection than conventional methods. However, ensuring mitigation targets are achieved requires demonstrating that alternative approaches are at least as effective in reducing emissions as current approaches – a concept known as emissions reduction equivalence. Here, we propose a five-stage framework for demonstrating equivalence that combines controlled testing, simulation modeling, and field trials. The framework was developed in consultation with operators, regulators, academics, solution providers, consultants, and non-profit groups from Canada and the U.S. We present the equivalence framework and discuss challenges to implementation.
- Published
- 2019
42. RF design of APEX2 two-cell continuous-wave normal conducting photoelectron gun cavity based on multi-objective genetic algorithm
- Author
-
Chad Mitchell, Tianhuan Luo, Matthew R. Johnson, John Staples, Daniele Filippetto, Hua Feng, A. Lambert, Russell Wells, Steve Virostek, Derun Li, and F. Sannibale
- Subjects
Accelerator Physics (physics.acc-ph) ,Nuclear and High Energy Physics ,Brightness ,Photoelectron RF gun ,FOS: Physical sciences ,Electron ,RF cavity design ,Atomic ,Optics ,Particle and Plasma Physics ,Thermal emittance ,Nuclear ,Instrumentation ,Electron gun ,physics.acc-ph ,Physics ,business.industry ,Free-electron laser ,Molecular ,Nuclear & Particles Physics ,Other Physical Sciences ,Multi-objective genetic algorithm ,Cathode ray ,Continuous wave ,Physics - Accelerator Physics ,business ,Beam (structure) ,Astronomical and Space Sciences - Abstract
High brightness, high repetition rate electron beams are key components for optimizing the performance of next generation scientific instruments, such as MHz-class X-ray Free Electron Lasers (XFELs) and Ultra-fast Electron Diffraction/Microscopy (UED/UEM). In the Advanced Photo-injector EXperiment (APEX) at Berkeley Lab, a photoelectron gun based on a 185.7 MHz normal conducting re-entrant RF cavity has been proven to be a feasible solution for providing high brightness, high repetition rate electron beams for both XFEL and UED/UEM applications. Building on the success of APEX, a new electron gun system, named APEX2, has been under development to further improve electron beam brightness. For APEX2, we have designed a new 162.5 MHz two-cell photoelectron gun and achieved a significant increase in the cathode launching field and the beam exit energy. For a fixed charge per bunch, these improvements allow for reduced emittance and hence increased beam brightness. The design of the APEX2 gun cavity is a complex problem with multiple design goals and restrictions, some even competing with each other. To enable a systematic and comprehensive search for the optimal cavity geometry, we developed and implemented a novel optimization method based on a Multi-Objective Genetic Algorithm (MOGA).
- Published
- 2019
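At the core of any multi-objective genetic algorithm is a Pareto-dominance test used to rank candidate designs when objectives compete. A minimal sketch of that building block follows; the two toy objectives and their values are hypothetical and unrelated to the actual APEX2 cavity figures of merit:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical two-objective cavity trade-off (illustrative values only),
# e.g. (normalized peak surface field, normalized wall RF power), both minimized.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)
```

A MOGA evolves a population toward this front via selection, crossover, and mutation; the dominance relation above is what replaces a single scalar fitness when design goals compete.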
43. Critical Dialogue to Open Doors for Leadership
- Author
-
Andrew Blom and Matthew R. Johnson
- Subjects
Adult ,Leadership ,Young Adult ,Universities ,Humans ,Cultural Diversity ,Cooperative Behavior ,Group Processes - Abstract
This chapter explores how the high-impact practice of critical, intergroup dialogue connects leadership with the broader learning objectives in the liberal arts. Based on experiences with building partnerships for leadership learning at their own university, the authors offer lessons for fostering cross-unit collaboration.
- Published
- 2018
44. A new tool for equating lexical stimuli across experimental conditions
- Author
-
Phui Cheng Lim, Matthew R. Johnson, and Evan N. Lintz
- Subjects
Computer science ,Science ,Clinical Biochemistry ,Lexical ,Psycholinguistics ,Lexical item ,Cognitive psychology ,Resampling ,Wordlist ,Equating ,Lexical Item Balancing & Resampling Algorithm (LIBRA) ,English Lexicon Project ,Usability ,Method Article ,Toolbox ,Medical Laboratory Technology ,Genetic algorithm ,Scripting language ,Artificial intelligence ,Natural language processing - Abstract
In cognitive psychology and psycholinguistics, lexical characteristics can drive large effects, which can create confounds when word stimuli are intended to be unrelated to the effect of interest. Thus, it is critical to control for these potential confounds. As an alternative to randomly assigning word bank items to stimulus lists, we present LIBRA (Lexical Item Balancing & Resampling Algorithm), a MATLAB-based toolbox for quickly generating stimulus lists of user-determined length and number that can be closely equated on any number of lexical properties. The toolbox comprises two scripts: a genetic algorithm that performs the inter-list balancing, and a tool for filtering/trimming long omnibus word lists based on simple criteria prior to balancing. Relying on randomized procedures often results in substantially unbalanced experimental conditions, but our method guarantees that the lists used for each experimental condition contain no meaningful differences. Thus, the lexical characteristics of the specific words used will add an absolute minimum of bias/noise to the experiment in which they are applied. Highlights: (1) the toolbox balances word lists for arbitrary lexical properties to control confounds in cognitive psychology research; (2) it performs more efficiently than pure randomization or manual balancing; (3) a graphical user interface is provided for ease of use.
- Published
- 2021
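LIBRA itself is a MATLAB toolbox; to illustrate the balancing idea it implements, here is a simplified swap-based resampling sketch in Python (greedy acceptance rather than a full genetic algorithm; the property names and word data are invented):

```python
import random
import statistics

def imbalance(lists, props):
    """Sum over properties of the spread in per-list means (0 = perfectly balanced)."""
    total = 0.0
    for p in props:
        means = [statistics.fmean(w[p] for w in lst) for lst in lists]
        total += max(means) - min(means)
    return total

def balance_lists(words, n_lists, props, iters=5000, seed=0):
    """Start from a random split of the word bank, then greedily accept swaps
    between lists whenever they do not worsen the imbalance score."""
    rng = random.Random(seed)
    pool = words[:]
    rng.shuffle(pool)
    size = len(pool) // n_lists
    lists = [pool[i * size:(i + 1) * size] for i in range(n_lists)]
    score = imbalance(lists, props)
    for _ in range(iters):
        a, b = rng.sample(range(n_lists), 2)
        i, j = rng.randrange(size), rng.randrange(size)
        lists[a][i], lists[b][j] = lists[b][j], lists[a][i]
        new = imbalance(lists, props)
        if new <= score:
            score = new                                           # keep the swap
        else:
            lists[a][i], lists[b][j] = lists[b][j], lists[a][i]   # revert
    return lists, score
```

A genetic algorithm like LIBRA's additionally maintains a population of candidate splits with crossover and mutation, which helps escape local minima that a single greedy trajectory like this one can get stuck in.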
45. Morphology and size of soot from gas flares as a function of fuel and water addition
- Author
-
Joel C. Corbin, Una Trivanovic, Bradley M. Conrad, Timothy A. Sipkens, Steven N. Rogak, Alberto Baldelli, Mohsen Kazemimanesh, Matthew R. Johnson, A. Melina Jefferson, and Jason S. Olfert
- Subjects
water addition ,Materials science ,effective density ,020209 energy ,General Chemical Engineering ,Analytical chemistry ,Energy Engineering and Power Technology ,02 engineering and technology ,soot ,020401 chemical engineering ,Scanning mobility particle sizer ,transmission electron microscopy ,0202 electrical engineering, electronic engineering, information engineering ,0204 chemical engineering ,gas flaring ,Turbulent diffusion ,Organic Chemistry ,primary particle size ,Soot ,Fuel Technology ,Combustor ,Particle ,Heat of combustion ,Particle size ,Raman spectroscopy - Abstract
A large-scale, laboratory turbulent diffusion flame was used to study the effects of fuel composition on soot size and morphology. The burner and fuels are typical of those used in the upstream oil and gas industry for gas flaring, a practice commonly used to dispose of excess gaseous hydrocarbons. Fuels were characterized by their carbon-to-hydrogen ratio (from 0.264 to 0.369) and their volumetric higher heating value (HHVv) (from 35.8 to 75.2 MJ/m³). Transmission electron microscopy (TEM) was used to assess primary particle and aggregate size, showing that the scaling of primary particle size to aggregate size was roughly the same for all of the considered fuels (d_p = 16.3 (d_a,100/100 nm)^0.35). However, fuels with higher HHVv produced substantially larger soot aggregates. A scanning mobility particle sizer (SMPS) was also used (i) to measure mobility diameter distributions and (ii) in tandem with a centrifugal particle mass analyzer (CPMA) to determine the two-dimensional mass-mobility and effective density-mobility distributions using a new inversion approach. The new approach was shown to improve internal consistency of inferred morphological parameters, though with a shift relative to median-based analysis of the tandem data. Raman spectroscopy was used to quantify the degree of graphitization in the soot nanostructure. The addition of water to the fuel consistently reduced the soot yields but did not affect other morphological parameters. Larger aggregates also tended to have larger primary particles and higher Raman D/G ratios suggesting larger graphitic domains.
- Published
- 2020
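The tandem CPMA-SMPS measurements described above pair a particle mass with a mobility diameter, from which effective density and the mass-mobility exponent follow directly. A minimal sketch, with unit choices and example values assumed for illustration:

```python
import math

def effective_density(mass_fg, mobility_nm):
    """Effective density in kg/m^3 from particle mass (fg) and mobility
    diameter (nm): rho_eff = 6 m / (pi * d_m^3)."""
    m = mass_fg * 1e-18        # fg -> kg
    d = mobility_nm * 1e-9     # nm -> m
    return 6.0 * m / (math.pi * d ** 3)

def mass_mobility_exponent(d1, m1, d2, m2):
    """Mass-mobility exponent D_m assuming a power law m ∝ d^D_m,
    estimated from two (mobility diameter, mass) points."""
    return math.log(m2 / m1) / math.log(d2 / d1)
```

For compact spheres D_m = 3, while fractal-like soot aggregates typically show D_m well below 3 and an effective density that falls with increasing mobility diameter; the paper's inversion approach recovers these parameters from the full two-dimensional distribution rather than from point pairs as sketched here.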
46. Effects of stratification on locally lean, near-stoichiometric, and rich iso-octane/air turbulent V-flames
- Author
-
Patrizio C. Vena, Béatrice Deschamps, Matthew R. Johnson, and Hongsheng Guo
- Subjects
Partially premixed ,Turbulent combustion ,Chemistry(all) ,020209 energy ,General Chemical Engineering ,Analytical chemistry ,General Physics and Astronomy ,Energy Engineering and Power Technology ,Stratification (water) ,02 engineering and technology ,Physics and Astronomy(all) ,Curvature ,01 natural sciences ,010309 optics ,TRACER ,0103 physical sciences ,V-flame ,0202 electrical engineering, electronic engineering, information engineering ,Equivalence ratio gradient ,Back-support ,Stratified combustion ,Octane ,Turbulence ,Heat release rate ,General Chemistry ,Lewis number ,Fuel Technology ,Planar laser-induced fluorescence ,Chemical Engineering(all) ,Stoichiometry - Abstract
The effects of partial premixing on locally rich, near-stoichiometric, and lean flame regions were investigated in stratified, iso-octane/air turbulent V-flames by varying the mean equivalence ratio gradient along the exit plane of a rectangular slot burner. Instantaneous heat release rate (HRR) images were obtained from the product of spatially registered, near-simultaneously acquired OH and CH2O planar laser-induced fluorescence (PLIF) images. HRR data were analyzed within a region of interest (ROI) that was determined from separate 3-pentanone tracer PLIF measurements. The ROI was unique to each gradient flame setting, and was configured to ensure the mean range of equivalence ratios being analyzed was constant among gradient conditions. This allowed distinction of the effects of mean equivalence ratio gradient at the flame front from effects associated with having different ranges of equivalence ratios within the flame zone. Individual flame realizations were studied for differences in the local peak HRR and instantaneous flame thickness δt as they varied with curvature among gradient conditions. While general trends for the fully-premixed cases were consistent with Lewis number theory, subtle changes in the normalized distribution of local peak HRR vs. curvature were observed for locally rich and locally lean flames propagating in different mean ϕ gradients. Negligible changes were observed for near-stoichiometric flames, suggesting that gradient effects may influence the local thermodiffusive stability of off-stoichiometric mixtures more significantly. Ensemble averages of individual peak HRR and δt values within each ROI were separately evaluated, and differences among gradient conditions were greater than those observed for the normalized distributions with curvature. For all flame settings considered, an increase in either the peak HRR or δt led to a decrease in the other.
Gradient effects were observed when comparing back- and front-supported locally rich flames, which experienced opposite changes in peak HRR of +10.1% and −5.2% for gradient settings ∂ϕ/∂y = −0.014 mm⁻¹ and ∂ϕ/∂y = 0.012 mm⁻¹ respectively, coupled with a thinning and thickening of δt of −7.2% and +2.4%. Similar but weaker trends were observed for near-stoichiometric flame regions, with a decrease in peak HRR of up to −3.5% and a thickening of up to +2.8% for the steepest gradient ∂ϕ/∂y = 0.029 mm⁻¹. Locally lean flames showed small increases in peak HRR of up to +3.8% and decreases in δt of up to −2.1% for the back-supported gradient case ∂ϕ/∂y = 0.024 mm⁻¹; however, variations were not as significant as those observed for back-supported rich flame regions of equivalent gradients. The presented results show that mean gradients of equivalence ratio can alter the local characteristics of partially premixed flames. Through subtle differences in the local distribution of peak HRR with curvature, more pronounced variations in the magnitude of the mean local peak HRR and δt, and opposing effects in both peak HRR and δt for back- and front-supported rich flames, the data reveal the specific influence of equivalence ratio gradients in experiments where the mean range of equivalence ratios in the analysis region is fixed.
- Published
- 2015
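The pixel-wise OH × CH2O product used above as an HRR proxy can be sketched as follows; plain Python lists stand in for the registered PLIF images, and real processing would additionally involve image registration, background subtraction, and intensity normalization:

```python
def hrr_proxy(oh_img, ch2o_img):
    """Pixel-wise product of spatially registered OH and CH2O PLIF images,
    a common proxy for the local heat release rate."""
    return [[oh * ch2o for oh, ch2o in zip(row_oh, row_ch2o)]
            for row_oh, row_ch2o in zip(oh_img, ch2o_img)]

# Tiny synthetic "images": HRR signal appears only where both species overlap,
# which localizes the reaction zone between the CH2O (preheat) and OH (product) layers.
oh = [[0.0, 1.0], [2.0, 3.0]]
ch2o = [[4.0, 0.0], [1.0, 2.0]]
hrr = hrr_proxy(oh, ch2o)
```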
47. Attentional competition in perceptual and reflective attention: An fMRI study
- Author
-
Matthew R. Johnson, Evan N. Lintz, and Zachary J. Cole
- Subjects
Competition (economics) ,Ophthalmology ,Perception ,Psychology ,Sensory Systems ,Cognitive psychology - Published
- 2020
48. The effect of multiple scattering on optical measurement of soot emissions in atmospheric plumes
- Author
-
Matthew R. Johnson, Bradley M. Conrad, and Jeremy N. Thornock
- Subjects
Brightness ,Radiation ,010504 meteorology & atmospheric sciences ,Scattering ,Turbulence ,Monte Carlo method ,01 natural sciences ,Atomic and Molecular Physics, and Optics ,Soot ,Plume ,Computational physics ,13. Climate action ,Transmittance ,Environmental science ,Ray tracing (graphics) ,Spectroscopy ,0105 earth and related environmental sciences
Measurements of soot/black carbon emissions via optical observations of atmospheric plume transmittance require a correction to account for bias in perceived plume brightness due to inscatter of ambient light. The ability to accurately correct for inscattering is hampered, however, by the potential for multiple scattering (MS) within the plume, which cannot be directly considered without detailed knowledge of the turbulent plume's structure. MS is thus often ignored within such measurement techniques, resulting in an inherent upward bias in calculated emissions. In this work, Monte Carlo “ray tracing” (MCRT) analyses for realistic lines-of-sight through large-eddy simulated, soot-laden atmospheric plumes of gas flares were used as case study data for analysis of MS effects. Through a reverse MCRT procedure, a remarkably simple yet accurate model was derived that relates the quantity of inscattered light under MS conditions to an estimate assuming single scattering. Case study data from previous field measurements of gas flares using the sky-LOSA technique demonstrate that neglecting MS effects can bias reported soot emission rates by as much as one-quarter of typical measurement uncertainties, or more. Coupling this model with an additional procedure to correct for minor model biases allows the complex influence of multiple scattering to be directly and accurately considered in optical measurements of soot emissions.
- Published
- 2020
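The role of the inscatter correction discussed above can be illustrated with a toy observation model. This is emphatically not the paper's derived MS model; the functional form (Beer-Lambert attenuation plus an additive inscatter term) and all values are assumptions chosen only to show why an unaccounted inscatter term biases the inferred optical depth:

```python
import math

def apparent_transmittance(tau, inscatter):
    """Perceived plume transmittance: attenuated background light
    (Beer-Lambert, exp(-tau)) plus ambient light scattered into the
    line of sight (illustrative additive model)."""
    return math.exp(-tau) + inscatter

def inferred_tau(t_apparent, inscatter_estimate=0.0):
    """Optical depth recovered from an observed transmittance; any
    unaccounted inscatter biases the result."""
    return -math.log(t_apparent - inscatter_estimate)

# Plume with true optical depth 0.5, plus a 5% inscatter contribution.
t_obs = apparent_transmittance(tau=0.5, inscatter=0.05)
```

Inverting `t_obs` without the inscatter estimate recovers too small an optical depth; correcting with the right estimate recovers the true value, which is why accurately modelling inscatter (including multiple scattering) matters for sky-LOSA-type retrievals.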
49. Beam steering effects on remote optical measurements of pollutant emissions in heated plumes and flares
- Author
-
Matthew R. Johnson, Jeremy N. Thornock, and Bradley M. Conrad
- Subjects
Radiation ,010504 meteorology & atmospheric sciences ,Monte Carlo method ,Beam steering ,Detector ,01 natural sciences ,Atomic and Molecular Physics, and Optics ,Plume ,Wavelength ,Optics ,Radiative transfer ,Environmental science ,Refractive index ,Spectroscopy ,0105 earth and related environmental sciences ,Flare
Remote optical measurement techniques are valuable tools for the quantification of combustion-generated, climate-forcing emissions. Leveraging radiometric observations along a detector's line-of-sight, these techniques resolve column density information from which pollutant loading and emission rates can be deduced for an in situ atmospheric plume of a targeted source. One commonly neglected source of uncertainty in such measurements is beam steering – the deflection of light as it traverses the plume due to composition- and temperature-driven gradients in the real refractive index field of the plume. In this work, three correction parameters were derived from the radiative transfer equation to enable consideration of beam steering effects on these measurement techniques. A Monte Carlo procedure was performed to derive realistic optical axes through plumes of large-eddy-simulated gas flares, considered to be an extreme case of beam steering due to elevated temperature and composition gradients near the flame. Deflections of light due to beam steering were quantified at wavelengths in the visible spectrum and within three diagnostic-relevant infrared absorption bands for methane and carbon dioxide. A conservative, empirical model for the degree of beam steering was derived. Moreover, from these data, correction parameters required to account for the impact of beam steering on perceived incident intensity, optical depth, and source intensity were found to be negligible at all studied wavelengths relative to typical instrument noise. Thus, this work demonstrates that even for the extreme case of a turbulent heated flare plume, beam steering has negligible impact on the ability to quantify pollutant loading and emissions.
- Published
- 2020
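The magnitude of beam steering can be estimated by accumulating the paraxial deflection dθ ≈ (1/n)(∂n/∂y) ds along a discretized path through the plume. A minimal sketch, with the gradient magnitudes and step sizes assumed purely for illustration (not values from the paper's large-eddy simulations):

```python
def deflection_angle(n_gradients, ds, n0=1.000293):
    """Accumulated small-angle ray deflection (radians) along a discretized
    path: d(theta) ≈ (1/n) (dn/dy) ds per step (paraxial approximation).
    n0 is the ambient refractive index of air at visible wavelengths."""
    return sum(g * ds / n0 for g in n_gradients)

# Assumed magnitudes: a transverse refractive-index gradient of ~1e-4 m^-1
# sustained over ten 0.1 m steps (~1 m of path through a hot plume).
theta = deflection_angle([1e-4] * 10, ds=0.1)
```

Even for deliberately aggressive assumed gradients, deflections on this order are tiny compared with typical per-pixel angular resolution, consistent with the paper's conclusion that beam steering has negligible impact on these measurements.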
50. Monitoring Changes in Gait Adaptation to Identify Construction Workers’ Risk Preparedness after Multiple Exposures to a Hazard
- Author
-
Junseo Bae, Cenfei Sun, Changbum R. Ahn, and Matthew R. Johnson
- Subjects
medicine.medical_specialty ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Hazard ,010104 statistics & probability ,Gait (human) ,Physical medicine and rehabilitation ,Preparedness ,021105 building & construction ,medicine ,0101 mathematics ,Psychology ,Adaptation (computer science) - Published
- 2018