23 results for "Kary Myers"
Search Results
2. How to Host An Effective Data Competition: Statistical Advice for Competition Design and Analysis
- Author
-
Christine M. Anderson-Cook, Lu Lu, Michael L. Fugate, Kevin R. Quinlan, Kary Myers, and Norma H. Pawley
- Subjects
Generalized linear model, Competition (economics), Exploratory data analysis, Identification (information), Computer science, Econometrics, Logistic regression, Host (network), Advice (complexity), Analysis, Computer Science Applications, Information Systems
- Published
- 2019
- Full Text
- View/download PDF
3. Structured discrepancy in Bayesian model calibration for ChemCam on the Mars Curiosity rover
- Author
-
Kary Myers, James Colgan, Elizabeth J. Judge, K. Sham Bhat, and Earl Lawrence
- Subjects
Statistics and Probability, Spectroscopy, Computer science, Calibration (statistics), Bayesian probability, Mars, Mars Exploration Program, Curiosity rover, Bayesian inference, Mars rover, Discrepancy modeling, Modeling and Simulation, Simulations, Statistics, Probability and Uncertainty, National laboratory, Remote sensing, Bayesian model calibration
- Abstract
The Mars rover Curiosity carries an instrument called ChemCam to determine the composition of the soil and rocks via laser-induced breakdown spectroscopy (LIBS). Los Alamos National Laboratory has developed a simulation capability that can predict spectra from ChemCam, but there are major scale differences between the predictions and the observations. This presents a challenge when using Bayesian model calibration to determine the unknown physical parameters that describe the LIBS observations. We present an analysis of LIBS data to support ChemCam based on including a structured discrepancy model in a Bayesian model-calibration scheme. This is both a novel application and an illustration of the importance of setting scientifically informed and constrained discrepancy models within Bayesian model calibration.
- Published
- 2020
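The abstract above sets up Bayesian model calibration with a structured discrepancy term. The sketch below is a minimal toy version of that idea, assuming a made-up one-parameter simulator, a linear discrepancy basis, and Gaussian priors; it is not the ChemCam analysis.

```python
# Toy Bayesian model calibration with a structured (linear-in-x) discrepancy term.
# Simulator, data, and priors are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulator(x, theta):
    """Stand-in for a physics code: predicted response at inputs x for parameter theta."""
    return np.exp(-theta * x)

# Synthetic "observations": truth uses theta = 1.3 plus a smooth trend the simulator misses.
x = np.linspace(0.0, 2.0, 25)
y_obs = simulator(x, 1.3) + 0.15 * x + rng.normal(0.0, 0.02, size=x.size)

# Structured discrepancy: delta(x) = b0 + b1 * x, with a Gaussian prior on (b0, b1).
B = np.column_stack([np.ones_like(x), x])       # discrepancy basis
prior_prec_b = np.eye(2) / 0.5**2               # prior precision for discrepancy coefficients
sigma = 0.02                                    # assumed observation noise standard deviation

def log_posterior(theta):
    if not (0.1 < theta < 5.0):                 # flat prior on theta over (0.1, 5)
        return -np.inf
    r = y_obs - simulator(x, theta)             # residual to be explained by delta(x) + noise
    # Marginal likelihood of r with the Gaussian discrepancy integrated out.
    cov = sigma**2 * np.eye(x.size) + B @ np.linalg.inv(prior_prec_b) @ B.T
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + r @ np.linalg.solve(cov, r))

# Random-walk Metropolis over the calibration parameter theta.
theta, lp = 1.0, log_posterior(1.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean of theta:", np.mean(samples[1000:]))
```

Because the discrepancy is constrained to a low-order, scientifically plausible form, it absorbs the systematic trend without competing with the calibration parameter for the same signal, which is the point the abstract emphasizes.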
4. Data for training and testing radiation detection algorithms in an urban environment
- Author
-
Andrew D. Nicholson, Douglas E. Peplow, Daniel E. Archer, Michael J. Willis, Brian J. Quiter, James Ghawaly, Christine M. Anderson-Cook, and Kary Myers
- Subjects
Statistics and Probability, Data Descriptor, Computer science, Special nuclear material, Process (computing), Training (meteorology), Experimental data, Scientific data, Nuclear material, Library and Information Sciences, Characterization and analytical techniques, Particle detector, Computer Science Applications, Education, Identification (information), Experimental nuclear physics, Statistics, Probability and Uncertainty, Science, Algorithm, Urban environment, Information Systems
- Abstract
The detection, identification, and localization of illicit nuclear materials in urban environments is of utmost importance for national security. Most often, the process of performing these operations consists of a team of trained individuals equipped with radiation detection devices that have built-in algorithms to alert the user to the presence of nuclear material and, if possible, to identify the type of nuclear material present. To encourage the development of new detection, radioisotope identification, and source localization algorithms, a dataset consisting of realistic Monte Carlo–simulated radiation detection data from a 2 in. × 4 in. × 16 in. NaI(Tl) scintillation detector moving through a simulated urban environment based on Knoxville, Tennessee, was developed and made public in the form of a Topcoder competition. The methodology used to create this dataset has been verified using experimental data collected at the Fort Indiantown Gap National Guard facility. Realistic signals from special nuclear material and industrial and medical sources are included in the data for developing and testing algorithms in a dynamic real-world background.
Measurement(s): gamma ray photon detection events; radiation detection data. Technology Type(s): Monte Carlo particle transport model; computational modeling technique. Sample Characteristic - Environment: city. Sample Characteristic - Location: State of Tennessee. Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.12654065
- Published
- 2020
5. An Initial Exploration of Bayesian Model Calibration for Estimating the Composition of Rocks and Soils on Mars
- Author
-
Kary Myers, James Colgan, Earl Lawrence, Claire-Alice Hébert, and Elizabeth J. Judge
- Subjects
FOS: Computer and information sciences, Calibration (statistics), Computer science, Mars Exploration Program, Inverse problem, Bayesian inference, Statistics - Applications (stat.AP), Computer Science Applications, Latin hypercube sampling, Laser-induced breakdown spectroscopy, Gaussian process, Algorithm, Gaussian process emulator, Analysis, Information Systems
- Abstract
The Mars Curiosity rover carries an instrument, ChemCam, designed to measure the composition of surface rocks and soil using laser-induced breakdown spectroscopy (LIBS). The measured spectra from this instrument must be analyzed to identify the component elements in the target sample, as well as their relative proportions. This process, which we call disaggregation, is complicated by so-called matrix effects, which describe nonlinear changes in the relative heights of emission lines as an unknown function of composition due to atomic interactions within the LIBS plasma. In this work we explore the use of the plasma physics code ATOMIC, developed at Los Alamos National Laboratory, for the disaggregation task. ATOMIC has recently been used to model LIBS spectra and can robustly reproduce matrix effects from first principles. The ability of ATOMIC to predict LIBS spectra presents an exciting opportunity to perform disaggregation in a manner not yet tried in the LIBS community, namely via Bayesian model calibration. However, using it directly to solve our inverse problem is computationally intractable due to the large parameter space and the computation time required to produce a single output. Therefore we also explore the use of emulators as a fast solution for this analysis. We discuss a proof of concept Gaussian process emulator for disaggregating two-element compounds of sodium and copper. The training and test datasets were simulated with ATOMIC using a Latin hypercube design. After testing the performance of the emulator, we successfully recover the composition of 25 test spectra with Bayesian model calibration., Comment: 10 pages, 5 figures, special issue
- Published
- 2020
- Full Text
- View/download PDF
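The abstract above combines a Latin hypercube design, a Gaussian process emulator, and Bayesian calibration to recover composition. Below is a minimal sketch of that workflow using SciPy's LatinHypercube sampler and scikit-learn's GaussianProcessRegressor; the "expensive simulator", kernel, and noise levels are invented stand-ins, not ATOMIC or the authors' setup.

```python
# Toy GP emulator trained on a Latin hypercube design, then used in a simple
# Bayesian inversion for a two-component mixture fraction. Everything is illustrative.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
energies = np.linspace(0.0, 1.0, 50)

def expensive_simulator(frac):
    """Pretend LIBS spectrum: two 'element' peaks whose heights vary nonlinearly with composition."""
    peak_a = frac**1.5 * np.exp(-0.5 * ((energies - 0.3) / 0.03)**2)
    peak_b = (1 - frac)**0.7 * np.exp(-0.5 * ((energies - 0.7) / 0.03)**2)
    return peak_a + peak_b

# Latin hypercube design over the 1-D composition space (fraction of element A).
design = qmc.LatinHypercube(d=1, seed=1).random(n=30).ravel()
train_Y = np.array([expensive_simulator(f) for f in design])

# Multi-output GP: one prediction per spectral channel, cheap to evaluate.
gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-6), normalize_y=True)
gp.fit(design.reshape(-1, 1), train_Y)

# "Observed" spectrum from a held-out truth plus noise; recover the fraction on a grid.
truth = 0.42
y_obs = expensive_simulator(truth) + rng.normal(0, 0.01, size=energies.size)

grid = np.linspace(0, 1, 201)
pred = gp.predict(grid.reshape(-1, 1))                        # emulated spectra, shape (201, 50)
loglik = -0.5 * np.sum((y_obs - pred)**2, axis=1) / 0.01**2   # Gaussian likelihood on the grid
post = np.exp(loglik - loglik.max()); post /= post.sum()
print("posterior mean fraction:", float(np.sum(grid * post)), "truth:", truth)
```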
6. The weighted priors approach for combining expert opinions in logistic regression experiments
- Author
-
Christine M. Anderson-Cook, Kevin R. Quinlan, and Kary Myers
- Subjects
Generalized linear model, Operations research, Computer science, Design of experiments, Context (language use), Logistic regression, Machine learning, Industrial and Manufacturing Engineering, Statistics & probability, Range (mathematics), Variable (computer science), Prior probability, Econometrics, Artificial intelligence, Mathematics, Safety, Risk, Reliability and Quality, Reliability (statistics)
- Abstract
When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. We illustrate the method through multiple scenarios and a motivating example. Additional figures for this ar...
- Published
- 2017
- Full Text
- View/download PDF
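A hedged sketch of the core idea above: scoring candidate logistic-regression test designs by a D-type criterion averaged over a weighted mixture of expert priors. The two priors, their weights, and the candidate designs are invented for illustration and do not reproduce the paper's algorithm.

```python
# Score candidate designs for a logistic-regression experiment under a weighted
# mixture of two experts' priors (pseudo-Bayesian D-optimality). Illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def fisher_logdet(doses, beta):
    """log det of the logistic-regression Fisher information X'WX for a given design."""
    X = np.column_stack([np.ones_like(doses), doses])
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    info = X.T @ (W[:, None] * X)
    return np.linalg.slogdet(info)[1]

# Two experts with very different beliefs about (intercept, slope), plus mixture weights.
expert_priors = [
    {"mean": np.array([-4.0, 2.0]), "sd": np.array([0.5, 0.3]), "weight": 0.5},
    {"mean": np.array([-1.0, 0.5]), "sd": np.array([0.5, 0.2]), "weight": 0.5},
]

def weighted_prior_score(doses, n_draws=400):
    """Average log-det information over draws from the weighted mixture of expert priors."""
    score = 0.0
    for prior in expert_priors:
        betas = rng.normal(prior["mean"], prior["sd"], size=(n_draws, 2))
        vals = [fisher_logdet(doses, b) for b in betas]
        score += prior["weight"] * np.mean(vals)
    return score

# Compare a few candidate designs (which stress levels to test units at).
candidates = {
    "low-only":  np.repeat([0.5, 1.0], 6),
    "spread":    np.repeat([0.5, 1.5, 2.5, 3.5], 3),
    "high-only": np.repeat([3.0, 3.5], 6),
}
for name, doses in candidates.items():
    print(f"{name:10s} weighted-prior score = {weighted_prior_score(doses):.2f}")
```

A design that scores well under the weighted mixture tends to remain informative even when the experts' priors barely overlap, which is the robustness property the abstract highlights.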
7. An Adaptive Modeling Framework for Bivariate Data Streams with Applications to Change Detection in Cyber-Physical Systems
- Author
-
Jordan Noble, Kary Myers, and Joshua Plasse
- Subjects
Data stream mining, Computer science, Distributed computing, Cyber-physical system, STREAMS, Data modeling, Statistics & probability, Data acquisition, Bivariate data, Information systems, Adaptive system, Mathematics, Change detection
- Abstract
Cyber-physical systems - systems that incorporate physical devices with cyber components - are appearing in diverse applications, and due to advances in data acquisition, are accompanied with large amounts of data. The interplay between the cyber and the physical components leaves such systems vulnerable to faults and intrusions, motivating the development of a general model that can efficiently and continuously monitor a cyber-physical system. To be of practical value, the model should be adaptive and equipped with the ability to detect changes in the system. This paper makes three contributions: (1) a new adaptive modeling framework for monitoring an arbitrary cyber-physical system in real-time using a flexible statistical distribution called the normal-gamma; (2) a novel streaming validation procedure, demonstrated on data streams from a cyber-physical system at Los Alamos National Laboratory, to justify the use of the normal-gamma and our new adaptive modeling approach; and (3) a new online change detection algorithm demonstrated on synthetic normal-gamma data streams.
- Published
- 2017
- Full Text
- View/download PDF
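The abstract above describes adaptive modeling of a stream with a normal-gamma distribution and online change detection. The sketch below shows one minimal way to do this, assuming conjugate Normal-Gamma updates, a forgetting factor, and a log-predictive-density alarm threshold; the stream and constants are synthetic, not the Los Alamos data.

```python
# Adaptive Normal-Gamma modeling of a univariate stream with forgetting, flagging a
# change when the Student-t predictive density of a new point is very low.
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(0.0, 1.0, 500),      # nominal behavior
                         rng.normal(3.0, 1.0, 200)])     # shifted regime (a "change")

# Normal-Gamma hyperparameters (mu, kappa, alpha, beta) and a forgetting factor.
mu, kappa, alpha, beta = 0.0, 1.0, 1.0, 1.0
lam = 0.995
threshold = -8.0                                          # illustrative alarm level

for i, x in enumerate(stream):
    # Student-t predictive distribution for the next observation.
    scale = np.sqrt(beta * (kappa + 1.0) / (alpha * kappa))
    logpred = student_t.logpdf(x, df=2.0 * alpha, loc=mu, scale=scale)
    if logpred < threshold:
        print(f"change flagged at index {i} (log predictive = {logpred:.1f})")

    # Forgetting: shrink the effective sample size so the model stays adaptive.
    kappa, alpha, beta = lam * kappa, lam * alpha, lam * beta

    # Conjugate Normal-Gamma update with the new observation.
    kappa_new = kappa + 1.0
    mu_new = (kappa * mu + x) / kappa_new
    alpha += 0.5
    beta += kappa * (x - mu)**2 / (2.0 * kappa_new)
    mu, kappa = mu_new, kappa_new
```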
8. Effective and efficient data sampling using bitmap indices
- Author
-
Yu Su, Kary Myers, Jonathan Woodring, Gagan Agrawal, Joanne Wendelberger, and James Ahrens
- Subjects
Computer Networks and Communications, Computer science, Search engine indexing, Big data, Sampling (statistics), Sample (statistics), Metadata, Parallel processing (DSP implementation), Bitmap, Data mining, Dimension (data warehouse), Software
- Abstract
With growing computational capabilities of parallel machines, scientific simulations are being performed at finer spatial and temporal scales, leading to a data explosion. The growing sizes are making it extremely hard to store, manage, disseminate, analyze, and visualize these datasets, especially as neither the memory capacity of parallel machines, memory access speeds, nor disk bandwidths are increasing at the same rate as the computing power. Sampling can be an effective technique to address the above challenges, but it is extremely important to ensure that dataset characteristics are preserved, and the loss of accuracy is within acceptable levels. In this paper, we address the data explosion problems by developing a novel sampling approach, and implementing it in a flexible system that supports server-side sampling and data subsetting. We observe that to allow subsetting over scientific datasets, data repositories are likely to use an indexing technique. Among these techniques, we see that bitmap indexing can not only effectively support subsetting over scientific datasets, but can also help create samples that preserve both value and spatial distributions over scientific datasets. We have developed algorithms for using bitmap indices to sample datasets. We have also shown how only a small amount of additional metadata stored with bitvectors can help assess loss of accuracy with a particular subsampling level. Some of the other properties of this novel approach include: (1) sampling can be flexibly applied to a subset of the original dataset, which may be specified using a value-based and/or a dimension-based subsetting predicate, and (2) no data reorganization is needed, once bitmap indices have been generated. We have extensively evaluated our method with different types of datasets and applications, and demonstrated the effectiveness of our approach.
- Published
- 2014
- Full Text
- View/download PDF
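A toy illustration of the bitmap-index sampling idea described above: one bitvector per value bin, with a stratified draw from each bitvector so the sample preserves the value distribution. This is a simplified sketch, not the paper's server-side system.

```python
# Build simple bitmap indices (one boolean bitvector per value bin) over a dataset,
# then draw a value-stratified sample from each bin. Data and bin counts are invented.
import numpy as np

rng = np.random.default_rng(4)
data = rng.gamma(shape=2.0, scale=1.5, size=100_000)        # stand-in scientific variable

# Bitmap index: one bitvector per value bin.
n_bins = 16
edges = np.quantile(data, np.linspace(0.0, 1.0, n_bins + 1))
bin_id = np.clip(np.searchsorted(edges, data, side="right") - 1, 0, n_bins - 1)
bitmaps = [(bin_id == b) for b in range(n_bins)]             # list of bitvectors

def bitmap_sample(fraction):
    """Draw roughly `fraction` of the indices from each bin's bitvector."""
    chosen = []
    for bits in bitmaps:
        idx = np.flatnonzero(bits)
        k = max(1, int(round(fraction * idx.size)))
        chosen.append(rng.choice(idx, size=k, replace=False))
    return np.concatenate(chosen)

sample_idx = bitmap_sample(0.01)                             # 1% sample
print("full-data mean:", data.mean(), "sample mean:", data[sample_idx].mean())
print("full-data 95th pct:", np.quantile(data, 0.95),
      "sample 95th pct:", np.quantile(data[sample_idx], 0.95))
```

Because each bin contributes proportionally, the sample reproduces the value distribution of the full dataset; a spatial variant would subdivide the bitvectors by region as well.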
9. Signatures for several types of naturally occurring radioactive materials
- Author
-
Kary Myers and Tom Burr
- Subjects
Radiation, Computer science, Norm (mathematics), Pattern recognition (psychology), Radioactive waste, Feature selection, Pattern recognition, Artificial intelligence, Nuclear material, Particle detector, Energy (signal processing)
- Abstract
Detectors to scan for illicit nuclear material began to be installed at various screening locations in 2002. On the sites considered, each vehicle drives slowly by radiation detectors that scan for neutron and gamma radiation, resulting in a time series profile. One performance limitation is that naturally occurring radioactive materials (NORM), such as cat litter, are routinely shipped across borders, leading to nuisance alarms. One strategy for reducing nuisance alarms is to define and recognize "signatures" of certain types of NORM so that many nuisance alarms can be quickly resolved as being innocent. Here, we consider candidate profile features, such as the peak width and the maximum energy ratio, and use pattern recognition methods to illustrate the extent to which several common types of NORM can be distinguished.
- Published
- 2008
- Full Text
- View/download PDF
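A toy sketch of the signature idea above: compute simple profile features (width at half maximum, a ratio between energy windows) from simulated drive-by profiles and separate two NORM-like classes with a nearest-centroid rule. The profiles, classes, and classifier are invented stand-ins for the paper's data and pattern recognition methods.

```python
# Extract simple profile features from simulated drive-by count profiles and
# classify with a nearest-centroid rule. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
tbins = np.arange(40)

def profile(width, ratio):
    """Simulated low- and high-energy window count profiles for one vehicle pass."""
    shape = np.exp(-0.5 * ((tbins - 20) / width)**2)
    low = rng.poisson(200 * shape + 50)           # low-energy window counts
    high = rng.poisson(200 * ratio * shape + 20)  # high-energy window counts
    return low, high

def features(low, high):
    total = low + high
    half = total.max() / 2.0
    peak_width = np.sum(total > half)             # width at half maximum (time bins)
    max_ratio = high.max() / max(low.max(), 1)    # crude maximum energy ratio feature
    return np.array([peak_width, max_ratio])

def make_class(width, ratio, n):
    return np.array([features(*profile(width, ratio)) for _ in range(n)])

train_a, train_b = make_class(3.0, 0.2, 50), make_class(6.0, 0.6, 50)
centroids = np.stack([train_a.mean(axis=0), train_b.mean(axis=0)])

test = features(*profile(6.0, 0.6))               # a new profile from class "b"
label = int(np.argmin(np.linalg.norm(centroids - test, axis=1)))
print("predicted class:", "a" if label == 0 else "b")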
10. Alarm criteria in radiation portal monitoring
- Author
-
Tom Burr, George Tompkins, James R. Gattiker, and Kary Myers
- Subjects
Safety Management, Radiation, Computer science, Protective Devices, Real-time computing, Detector, Gamma ray, Nuclear material, Alarm, Gamma Rays, Radiation Monitoring, Sequential analysis, Background suppression, Equipment Failure
- Abstract
Gamma detectors at border crossings are intended to detect illicit nuclear material. These detectors collect counts that are used to determine whether to trigger an alarm. Several candidate alarm rules are evaluated, with attention to background suppression caused by the vehicle. Because the count criterion leads to many nuisance alarms and because background suppression by the vehicle is smaller for ratios of counts, analysis of a ratio criterion is included. Detection probability results that consider the effects of 5 factors are given for 2 signal-injection studies, 1 for counts, and 1 for count ratios.
- Published
- 2007
- Full Text
- View/download PDF
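A toy signal-injection comparison of the two alarm criteria discussed above: a gross-count threshold and a ratio-of-windows criterion, under vehicle background suppression. Count rates, the suppression factor, and both thresholds are invented for illustration.

```python
# Compare a gross-count alarm rule with a ratio-based rule when the vehicle
# suppresses background. All rates and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_trials, mean_bkg = 20_000, 1000.0
suppression = 0.90        # vehicle suppresses gross background to 90%
signal = 100.0            # injected source counts (concentrated in one energy window)

def simulate(inject):
    gross = rng.poisson(mean_bkg * suppression, size=n_trials)         # gross counts
    win = rng.poisson(0.2 * mean_bkg * suppression, size=n_trials)     # one energy window
    if inject:
        gross = gross + rng.poisson(signal, size=n_trials)
        win = win + rng.poisson(0.8 * signal, size=n_trials)
    return gross, win

# Thresholds set from an unsuppressed background model (as a portal might).
count_thresh = mean_bkg + 3.0 * np.sqrt(mean_bkg)
ratio_thresh = 0.2 + 3.0 * np.sqrt(0.2 * (1 - 0.2) / mean_bkg)         # rough ratio limit

for label, inject in [("background only", False), ("source injected", True)]:
    gross, win = simulate(inject)
    count_alarm = np.mean(gross > count_thresh)
    ratio_alarm = np.mean(win / gross > ratio_thresh)
    print(f"{label:16s} count-rule alarms: {count_alarm:.3f}  ratio-rule alarms: {ratio_alarm:.3f}")
```

In this toy setting the suppression pulls the gross counts below the fixed count threshold, so the count rule rarely alarms on the injected source, while the ratio is much less affected by suppression; that asymmetry is the motivation for the ratio criterion in the abstract.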
11. ADR visualization: A generalized framework for ranking large-scale scientific data using Analysis-Driven Refinement
- Author
-
Jonathan Woodring, Kary Myers, Boonthanome Nouanesengsy, James Ahrens, and John Patchett
- Subjects
Prioritization, Adaptive mesh refinement, Computer science, Visualization, Data modeling, Data visualization, Entropy (information theory), Data mining, Sparse data sets, Sparse matrix
- Abstract
Prioritization of data is necessary for managing large-scale scientific data, as the scale of the data implies that there are only enough resources available to process a limited subset of the data. For example, data prioritization is used during in situ triage to scale with bandwidth bottlenecks, and used during focus+context visualization to save time during analysis by guiding the user to important information. In this paper, we present ADR visualization, a generalized analysis framework for ranking large-scale data using Analysis-Driven Refinement (ADR), which is inspired by Adaptive Mesh Refinement (AMR). A large-scale data set is partitioned in space, time, and variable, using user-defined importance measurements for prioritization. This process creates a prioritization tree over the data set. Using this tree, selection methods can generate sparse data products for analysis, such as focus+context visualizations or sparse data sets.
- Published
- 2014
- Full Text
- View/download PDF
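A toy version of Analysis-Driven Refinement as described above: recursively partition a 2-D field, score each block with a user-defined importance measure (variance here), and keep only the top-ranked blocks as a sparse data product. Block sizes and the importance measure are illustrative choices, not the paper's implementation.

```python
# Recursively split a 2-D field into blocks, rank blocks by an importance measure,
# and keep the highest-priority blocks as a sparse product. Illustrative only.
import numpy as np

rng = np.random.default_rng(7)
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
field = np.exp(-((xx - 180)**2 + (yy - 70)**2) / 400.0) + 0.02 * rng.normal(size=(ny, nx))

def refine(y0, y1, x0, x1, depth, leaves):
    """Build the prioritization tree: split until the minimum block size, record leaf scores."""
    block = field[y0:y1, x0:x1]
    if depth == 0 or min(y1 - y0, x1 - x0) <= 16:
        leaves.append((float(block.var()), (y0, y1, x0, x1)))   # importance = variance
        return
    ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
    for ys, xs in [(y0, x0), (y0, xm), (ym, x0), (ym, xm)]:
        refine(ys, ys + (y1 - y0) // 2, xs, xs + (x1 - x0) // 2, depth - 1, leaves)

leaves = []
refine(0, ny, 0, nx, depth=4, leaves=leaves)
leaves.sort(reverse=True)                      # highest-importance blocks first

# Sparse product: keep only the top 5% of blocks (e.g., for focus+context rendering).
keep = leaves[: max(1, len(leaves) // 20)]
sparse = np.full_like(field, np.nan)
for _, (y0, y1, x0, x1) in keep:
    sparse[y0:y1, x0:x1] = field[y0:y1, x0:x1]
print(f"kept {len(keep)} of {len(leaves)} blocks;",
      f"{np.isfinite(sparse).mean():.1%} of cells retained")
```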
12. LANL CSSE L2: Case Study of In Situ Data Analysis in ASC Integrated Codes
- Author
-
Marcus G. Daniels, John Patchett, Gabriel M. Rockefeller, Boonthanome Nouanesengsy, Hilary M. Abhold, Kary Myers, Patrick O'Leary, James Ahrens, Curtis Vincent Canada, Patricia Fasel, Christopher Sewell, Jonathan Woodring, Christopher Mitchell, Joanne Wendelberger, and Li-Ta Lo
- Subjects
In situ, Computer science, Embedded system
- Published
- 2013
- Full Text
- View/download PDF
13. Taming massive distributed datasets
- Author
-
Gagan Agrawal, Kary Myers, Joanne Wendelberger, Yu Su, Jonathan Woodring, and James Ahrens
- Subjects
Computer science, Big data, Search engine indexing, Sampling (statistics), Sample (statistics), Metadata, Bitmap, Data mining, Dimension (data warehouse), Dissemination
- Abstract
With growing computational capabilities of parallel machines, scientific simulations are being performed at finer spatial and temporal scales, leading to a data explosion. The growing sizes are making it extremely hard to store, manage, disseminate, analyze, and visualize these datasets, especially as neither the memory capacity of parallel machines, memory access speeds, nor disk bandwidths are increasing at the same rate as the computing power. Sampling can be an effective technique to address the above challenges, but it is extremely important to ensure that dataset characteristics are preserved, and the loss of accuracy is within acceptable levels. In this paper, we address the data explosion problems by developing a novel sampling approach, and implementing it in a flexible system that supports server-side sampling and data subsetting. We observe that to allow subsetting over scientific datasets, data repositories are likely to use an indexing technique. Among these techniques, we see that bitmap indexing can not only effectively support subsetting over scientific datasets, but can also help create samples that preserve both value and spatial distributions over scientific datasets. We have developed algorithms for using bitmap indices to sample datasets. We have also shown how only a small amount of additional metadata stored with bitvectors can help assess loss of accuracy with a particular subsampling level. Some of the other properties of this novel approach include: 1) sampling can be flexibly applied to a subset of the original dataset, which may be specified using a value-based and/or a dimension-based subsetting predicate, and 2) no data reorganization is needed, once bitmap indices have been generated. We have extensively evaluated our method with different types of datasets and applications, and demonstrated the effectiveness of our approach.
- Published
- 2013
- Full Text
- View/download PDF
14. A comparison of methods for estimating broadband noise in the frequency domain
- Author
-
Norma H. Pawley, Don Hush, Kary Myers, and Bob Nemzek
- Subjects
Signal processing, Noise measurement, Welch's method, Computer science, Frequency domain, Discrete frequency domain, Phase noise, Electronic engineering, Periodogram, Spectral density estimation, Time domain, Cross-spectrum, Algorithm
- Abstract
Estimating the noise component of a signal that consists of sinusoids plus broadband noise is a ubiquitous problem. Most methods work in the time-domain, but frequency-domain methods can be computationally more efficient and invariant to the time-domain noise distribution. We compare two prominent frequency-domain approaches, one that computes statistics over periodograms of multiple time segments, and another that computes statistics over frequency segments from a single periodogram. We explore the accuracy-resolution tradeoff for both approaches and provide comparisons of accuracy, sample and segment size dependence, frequency resolution, and computational complexity for each method.
- Published
- 2011
- Full Text
- View/download PDF
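A small numerical comparison of the two approaches contrasted above: a median over periodograms of multiple time segments (Welch-style) versus medians over frequency segments of a single periodogram. The test signal, segment sizes, and the ln 2 median-bias correction are illustrative choices, not the paper's experiments.

```python
# Estimate the broadband noise floor of sinusoids-plus-noise two ways:
# (a) median across periodograms of multiple time segments, and
# (b) medians over frequency segments of one long periodogram.
import numpy as np
from scipy import signal

rng = np.random.default_rng(8)
fs, n = 1000.0, 2**16
t = np.arange(n) / fs
noise_sd = 0.5
x = (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 123 * t)
     + rng.normal(0.0, noise_sd, size=n))
true_psd = noise_sd**2 / (fs / 2)                  # flat one-sided noise PSD

# (a) Statistics over multiple time segments: median-averaged Welch estimate.
f_a, psd_a = signal.welch(x, fs=fs, nperseg=1024, average="median")

# (b) Statistics over frequency segments of a single periodogram.
f_b, per = signal.periodogram(x, fs=fs)
seg = 256                                          # frequency bins per segment
n_seg = per.size // seg
psd_b = np.array([np.median(per[i * seg:(i + 1) * seg]) for i in range(n_seg)])
# The median of an exponentially distributed periodogram bin sits ln 2 below its mean.
psd_b = psd_b / np.log(2.0)

print("true noise PSD      :", true_psd)
print("time-segment median :", np.median(psd_a))
print("freq-segment median :", np.median(psd_b))
```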
15. Sparse classification of rf transients using chirplets and learned dictionaries
- Author
-
Kary Myers, Norma H. Pawley, Steven P. Brumby, and Daniela I. Moody
- Subjects
Computer science, Feature extraction, Image processing, Pattern recognition, Sparse approximation, Machine learning, Fourier transform, Discriminative model, Robustness (computer science), Artificial intelligence, Test data
- Abstract
We assess the performance of a sparse classification approach for radiofrequency (RF) transient signals using dictionaries adapted to the data. We explore two approaches: pursuit-type decompositions over analytical, over-complete dictionaries, and dictionaries learned directly from data. Pursuit-type decompositions over analytical, over-complete dictionaries yield sparse representations by design and can work well for target signals in the same function class as the dictionary atoms. Discriminative dictionaries learned directly from data do not rely on analytical constraints or additional knowledge about the signal characteristics, and provide sparse representations that can perform well when used with a statistical classifier. We present classification results for learned dictionaries on simulated test data, and discuss robustness compared to conventional Fourier methods. We draw from techniques of adaptive feature extraction, statistical machine learning, and image processing.
- Published
- 2011
- Full Text
- View/download PDF
16. Radio frequency (RF) transient classification using sparse representations over learned dictionaries
- Author
-
Daniela I. Moody, Kary Myers, Steven P. Brumby, and Norma H. Pawley
- Subjects
Hebbian theory, K-SVD, Discriminative model, Computer science, Speech recognition, Feature extraction, Clutter, Pattern recognition, Noise (video), Sparse approximation, Artificial intelligence
- Abstract
Automatic classification of transitory or pulsed radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such transients are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. We compare two dictionary learning methods from the image analysis literature, the K-SVD algorithm and Hebbian learning, and extend them for use with RF data. Both methods allow us to learn discriminative RF dictionaries directly from data without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. In this paper we compare the two dictionary learning methods and discuss how their performance changes as a function of dictionary training parameters. We demonstrate that learned dictionary techniques are suitable for pulsed RF analysis and present results with varying background clutter and noise levels.
- Published
- 2011
- Full Text
- View/download PDF
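A hedged sketch of sparse-representation classification over learned dictionaries, in the spirit of the dictionary-learning papers above. It uses scikit-learn's MiniBatchDictionaryLearning with OMP coding as a stand-in for K-SVD or Hebbian learning, and the simulated "pulse" and "clutter" windows are invented.

```python
# Learn one dictionary per class from simulated signal windows, then classify a new
# window by which dictionary reconstructs it with smaller sparse-coding error.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(9)
n_len = 128
t = np.arange(n_len)

def pulse_window():
    """Target-like transient: a damped oscillation with random phase plus noise."""
    f = rng.uniform(0.05, 0.08)
    return np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) * np.exp(-t / 40.0) \
           + 0.2 * rng.normal(size=n_len)

def clutter_window():
    """Background-like window: smoothed (colored) noise only."""
    return np.convolve(rng.normal(size=n_len + 7), np.ones(8) / 8.0, mode="valid") \
           + 0.2 * rng.normal(size=n_len)

def learn_dictionary(windows):
    dl = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                     transform_n_nonzero_coefs=5, random_state=0)
    return dl.fit(np.array(windows))

dict_pulse = learn_dictionary([pulse_window() for _ in range(300)])
dict_clutter = learn_dictionary([clutter_window() for _ in range(300)])

def classify(window):
    """Assign the class whose dictionary gives the smaller reconstruction error."""
    w = window.reshape(1, -1)
    errs = []
    for d in (dict_pulse, dict_clutter):
        code = d.transform(w)
        errs.append(np.linalg.norm(w - code @ d.components_))
    return "pulse" if errs[0] < errs[1] else "clutter"

tests = [pulse_window() for _ in range(20)] + [clutter_window() for _ in range(20)]
labels = ["pulse"] * 20 + ["clutter"] * 20
acc = np.mean([classify(w) == y for w, y in zip(tests, labels)])
print("classification accuracy on simulated windows:", acc)
```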
17. Classification of transient signals using sparse representations over adaptive dictionaries
- Author
-
Norma H. Pawley, Daniela I. Moody, Kary Myers, and Steven P. Brumby
- Subjects
K-SVD, Discriminative model, Computer science, Feature extraction, Wavelet transform, Feature selection, Pattern recognition, Noise (video), Sparse approximation, Artificial intelligence
- Abstract
Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same function class as the dictionary atoms. The pursuit search however has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in background content and noise levels. The target classification decision is obtained in almost real-time via a parallel, vectorized implementation.
- Published
- 2011
- Full Text
- View/download PDF
18. Capturing dynamics on multiple time scales: a multilevel fusion approach for cluttered electromagnetic data
- Author
-
Norma H. Pawley, Kary Myers, and Steven P. Brumby
- Subjects
Signal processing, Computer science, Real-time computing, Feature extraction, Clutter, Radio frequency, Noise (video), Radar, Hybrid algorithm, Simulation, Electric beacon
- Abstract
Many problems in electromagnetic signal analysis exhibit dynamics on a wide range of time scales. Further, these dynamics may involve both continuous source generation processes and discrete source mode dynamics. These rich temporal characteristics can present challenges for standard modeling approaches, particularly in the presence of nonstationary noise and clutter sources. Here we demonstrate a hybrid algorithm designed to capture the dynamic behavior at all relevant time scales while remaining robust to clutter and noise at each time scale. We draw from techniques of adaptive feature extraction, statistical machine learning, and discrete process modeling to construct our hybrid algorithm. We describe our approach and present results applying our hybrid algorithm to a simulated dataset based on an example radio beacon identification problem: civilian air traffic control. This application illustrates the multi-scale complexity of the problems we wish to address. We consider a multi-mode air traffic control radar emitter operating against a cluttered background of competing radars and continuous-wave communications signals (radios, TV broadcasts). Our goals are to find a compact representation of the radio frequency measurements, identify which pulses were emitted by the target source, and determine the mode of the source.
- Published
- 2010
- Full Text
- View/download PDF
19. Capturing dynamics on multiple time scales: A hybrid approach for cluttered electromagnetic data
- Author
-
Kary Myers, John Galbraith, Steven P. Brumby, and Norma H. Pawley
- Subjects
Computer science, Noise (signal processing), Feature extraction, Signal, Particle detector, Orders of magnitude (time), Chirp, Clutter, Computer vision, Artificial intelligence, Algorithm, Electromagnetic pulse
- Abstract
Many problems in electromagnetic signal analysis exhibit dynamics on a wide range of time scales against nonstationary clutter and noise. We consider a problem in which the relevant time scales can range from nanoseconds to hours or days (12 or 13 orders of magnitude). We present a hybrid algorithm currently designed to capture the dynamic behavior at scales from nanoseconds to milliseconds (6 orders of magnitude) while remaining robust to clutter and noise. We draw from techniques of adaptive feature extraction, statistical machine learning, and discrete process modeling and present results on a simulated multimode problem. Our goals are to find a representation of the signal that allows us to identify which pulses were produced by a target emitter and to determine the operational mode of the target.
- Published
- 2009
- Full Text
- View/download PDF
20. Effects of background suppression of gamma counts on signal estimation
- Author
-
Kary Myers and Tom Burr
- Subjects
Radiation, Mean squared error, Computer science, Detector, Signal width, Pattern recognition, Context (language use), Signal, Particle detector, Statistical power, Alarm, Statistics, Artificial intelligence
- Abstract
Gamma detectors at border crossings are intended to detect illicit nuclear material. One of their performance challenges is the fact that vehicles suppress the natural background and thus potentially reduce the probability of detecting threat items. Here we test several methods to adjust for background suppression in the context of signal estimation. We show that, for small-to-moderate suppression magnitudes, the suppression adjustment leads to higher detection probability. However, for signals that trigger an alarm without an adjustment, the adjustment does not improve estimation of the signal location, only moderately improves estimation of the signal magnitude, and does not improve estimation of the signal width.
- Published
- 2008
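A toy illustration of the adjustment question studied above: estimating an injected signal from gross counts with and without correcting for the vehicle's background suppression. The suppression profile, count rates, and the auxiliary signal-free window are all invented for illustration.

```python
# Estimate an injected signal from a drive-by count profile, naively and with a
# background-suppression adjustment inferred from an auxiliary window.
import numpy as np

rng = np.random.default_rng(10)
n_bins = 60
bins = np.arange(n_bins)

bkg_rate = 500.0                                                    # counts per bin, open road
suppression = 1.0 - 0.15 * np.exp(-0.5 * ((bins - 30) / 8.0)**2)    # vehicle-induced dip
true_signal = 120.0 * np.exp(-0.5 * ((bins - 30) / 3.0)**2)         # injected source profile

gross = rng.poisson(bkg_rate * suppression + true_signal)
# Auxiliary high-energy window: assumed to see the same fractional suppression
# but negligible source counts.
aux_rate = 200.0
aux = rng.poisson(aux_rate * suppression)

naive = gross - bkg_rate                                            # ignore suppression
suppression_hat = aux / aux_rate                                    # per-bin dip estimate
adjusted = gross - bkg_rate * suppression_hat                       # subtract suppressed background

print("true signal total    :", round(true_signal.sum(), 1))
print("naive estimate total :", round(naive.sum(), 1))
print("adjusted estimate    :", round(adjusted.sum(), 1))
```

The naive estimate is biased low (even negative) because the vehicle's dip is mistaken for missing signal, while the adjusted estimate is roughly unbiased at the cost of extra variance, mirroring the magnitude-versus-width trade-offs discussed in the abstract.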
21. CoDA 2014 special issue: Exploring data‐focused research across the Department of Energy
- Author
-
Kary Myers
- Subjects
Computer science, Data science, Analysis, Energy (signal processing), Computer Science Applications, Information Systems, Coda
- Published
- 2015
- Full Text
- View/download PDF
22. State-space models for optical imaging
- Author
-
Anthony Brockwell, Kary Myers, and William F. Eddy
- Subjects
Statistics and Probability, Heartbeat, Epidemiology, Computer science, Context (language use), Signal, FOS: Mathematics, Animals, State space, Computer vision, Statistical hypothesis testing, Probability, Brain Mapping, Models, Statistical, Data collection, Statistics, Neurosciences, Brain, Statistical model, Kalman filter, United States, Data Interpretation, Statistical, Cats, Artificial intelligence, Photic Stimulation
- Abstract
Measurement of stimulus-induced changes in activity in the brain is critical to the advancement of neuroscience. Scientists use a range of methods, including electrode implantation, surface (scalp) electrode placement, and optical imaging of intrinsic signals, to gather data capturing underlying signals of interest in the brain. These data are usually corrupted by artifacts, complicating interpretation of the signal; in the context of optical imaging, two primary sources of corruption are the heartbeat and respiration cycles. We introduce a new linear state-space framework that uses the Kalman filter to remove these artifacts from optical imaging data. The method relies on a likelihood-based analysis under the specification of a formal statistical model, and allows for corrections to the signal based on auxiliary measurements of quantities closely related to the sources of contamination, such as physiological processes. Furthermore, the likelihood-based modeling framework allows us to perform both goodness-of-fit testing and formal hypothesis testing on parameters of interest. Working with data collected by our collaborators, we demonstrate the method in an optical imaging study of a cat's brain.
- Published
- 2004
- Full Text
- View/download PDF
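A minimal state-space sketch in the spirit of the abstract above: a slowly varying signal of interest plus a heartbeat-like harmonic artifact, separated with a Kalman filter. The state-space model, frequencies, and noise levels are invented; the paper's actual framework also incorporates auxiliary physiological measurements and likelihood-based testing.

```python
# Separate a slow signal from a periodic artifact with a Kalman filter over a
# three-dimensional state: [signal, artifact_cos, artifact_sin]. Illustrative only.
import numpy as np

rng = np.random.default_rng(11)
fs, n = 100.0, 2000
t = np.arange(n) / fs
f_heart = 2.0                                      # artifact frequency (Hz), assumed known

slow_signal = 0.5 * np.sin(2 * np.pi * 0.1 * t)    # stimulus-related component
artifact = 0.8 * np.sin(2 * np.pi * f_heart * t)
y = slow_signal + artifact + 0.1 * rng.normal(size=n)

# The signal is a random walk; the artifact pair rotates at the heartbeat frequency.
w = 2 * np.pi * f_heart / fs
F = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(w), -np.sin(w)],
              [0.0, np.sin(w),  np.cos(w)]])
H = np.array([[1.0, 1.0, 0.0]])                    # observation = signal + artifact_cos
Q = np.diag([1e-4, 1e-6, 1e-6])                    # process noise covariance
R = np.array([[0.01]])                             # measurement noise variance

x = np.zeros(3)
P = np.eye(3)
signal_hat = np.empty(n)
for k in range(n):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (y[k] - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    signal_hat[k] = x[0]

resid = signal_hat - slow_signal
print("RMS error of filtered signal:", np.sqrt(np.mean(resid**2)))
print("RMS error of raw data vs truth:", np.sqrt(np.mean((y - slow_signal)**2)))
```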
23. Special Issue: Conference on Data Analysis (CoDA)—Exploring Data-Focused Research across the Department of Energy
- Author
-
Earl Lawrence, Kary Myers, and Hugh A. Chipman
- Subjects
Statistics and Probability, Work (electrical), Computer science, Applied Mathematics, Modeling and Simulation, Data science, Energy (signal processing), Coda
- Abstract
The article introduces this special issue of Technometrics featuring some of the challenging, application-driven work presented at the inaugural Conference on Data Analysis (CoDA) in 2012 in Santa Fe, New Mexico.
- Published
- 2013
- Full Text
- View/download PDF