30 results for "Goodman, Nathan"
Search Results
2. A Survey of Advances in Botnet Technologies
- Author
-
Goodman, Nathan
- Abstract
Botnets have come a long way since their inception a few decades ago. Originally toy programs written by network hobbyists, modern-day botnets can be used by cyber criminals to steal billions of dollars from users, corporations, and governments. This paper will look at cutting-edge botnet features and detection strategies from over a dozen research papers, supplemented by a few additional sources. With this data, I will then hypothesize what the future of botnets might hold.
- Published
- 2017
3. Analysis of tumor template from multiple compartments in a blood sample provides complementary access to peripheral tumor biomarkers.
- Author
-
Strauss, William M, Carter, Chris, Simmons, Jill, Klem, Erich, Goodman, Nathan, Vahidi, Behrad, Romero, Juan, Masterman-Smith, Michael, O'Regan, Ruth, Gogineni, Keerthi, Schwartzberg, Lee, Austin, Laura, Dempsey, Paul W, and Cristofanilli, Massimo
- Abstract
Targeted cancer therapeutics promise to have a major impact on cancer treatment and survival. Successful application of these novel treatments requires a molecular definition of a patient's disease, typically achieved through the use of tissue biopsies. Alternatively, biomarkers derived from blood, isolated either from circulating tumor cell derived DNA (ctcDNA) or circulating cell-free tumor DNA (ccfDNA), may be evaluated, allowing longitudinal monitoring. In order to use blood-derived templates for mutational profiling in clinical decisions, it is essential to understand the different template qualities and how they compare to biopsy-derived template DNA, as both blood-based templates are rare and distinct from the gold standard. Using a next generation re-sequencing strategy, concordance of the mutational spectrum was evaluated in 32 patient-matched ctcDNA and ccfDNA templates with comparison to tissue biopsy derived DNA template. Different CTC antibody capture systems for DNA isolation from patient blood samples were also compared. Significant overlap was observed between ctcDNA, ccfDNA and tissue derived templates. Interestingly, if the results of ctcDNA and ccfDNA template sequencing were combined, productive samples showed similar detection frequency (56% vs 58%), were temporally flexible, and were complementary both to each other and to the gold standard. These observations justify the use of a multiple-template approach to the liquid biopsy, where germline, ctcDNA, and ccfDNA templates are employed for clinical diagnostic purposes, and open a path to comprehensive blood-derived biomarker access.
- Published
- 2016
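The abstract above reports that combining ctcDNA and ccfDNA calls gives similar productive-sample detection frequencies (56% vs 58%) while remaining complementary. Below is a minimal sketch of that union-style combination; the sample IDs and variant calls are hypothetical placeholders, not data from the study.

# Hypothetical per-sample variant calls from the two blood-derived templates.
ctc_calls = {"s01": {"PIK3CA_H1047R"}, "s02": set(), "s03": {"TP53_R175H"}}
ccf_calls = {"s01": set(), "s02": {"ESR1_D538G"}, "s03": {"TP53_R175H"}}

def detection_frequency(calls):
    """Fraction of samples with at least one detected tumor variant."""
    return sum(bool(v) for v in calls.values()) / len(calls)

combined = {s: ctc_calls[s] | ccf_calls[s] for s in ctc_calls}  # union of the two templates
print("ctcDNA alone:", detection_frequency(ctc_calls))
print("ccfDNA alone:", detection_frequency(ccf_calls))
print("combined    :", detection_frequency(combined))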
4. Sequence-Level Analysis of the Major European Huntington Disease Haplotype.
- Author
-
Lee, Jong-Min, Kim, Kyung-Hee, Shin, Aram, Chao, Michael J, Abu Elneel, Kawther, Gillis, Tammy, Mysore, Jayalakshmi Srinidhi, Kaye, Julia A, Zahed, Hengameh, Kratter, Ian H, Daub, Aaron C, Finkbeiner, Steven, Li, Hong, Roach, Jared C, Goodman, Nathan, Hood, Leroy, Myers, Richard H, MacDonald, Marcy E, and Gusella, James F
- Abstract
Huntington disease (HD) reflects the dominant consequences of a CAG-repeat expansion in HTT. Analysis of common SNP-based haplotypes has revealed that most European HD subjects have distinguishable HTT haplotypes on their normal and disease chromosomes and that ∼50% of the latter share the same major HD haplotype. We reasoned that sequence-level investigation of this founder haplotype could provide significant insights into the history of HD and valuable information for gene-targeting approaches. Consequently, we performed whole-genome sequencing of HD and control subjects from four independent families in whom the major European HD haplotype segregates with the disease. Analysis of the full-sequence-based HTT haplotype indicated that these four families share a common ancestor sufficiently distant to have permitted the accumulation of family-specific variants. Confirmation of new CAG-expansion mutations on this haplotype suggests that unlike most founders of human disease, the common ancestor of HD-affected families with the major haplotype most likely did not have HD. Further, availability of the full sequence data validated the use of SNP imputation to predict the optimal variants for capturing heterozygosity in personalized allele-specific gene-silencing approaches. As few as ten SNPs are capable of revealing heterozygosity in more than 97% of European HD subjects. Extension of allele-specific silencing strategies to the few remaining homozygous individuals is likely to be achievable through additional known SNPs and discovery of private variants by complete sequencing of HTT. These data suggest that the current development of gene-based targeting for HD could be extended to personalized allele-specific approaches in essentially all HD individuals of European ancestry.
- Published
- 2015
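The abstract above notes that as few as ten SNPs can reveal heterozygosity in more than 97% of European HD subjects. One way such a minimal panel could be assembled is a greedy, set-cover-style selection; the sketch below illustrates that idea on synthetic genotypes and is not the study's actual procedure or data.

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_snps = 1000, 50
# Synthetic heterozygosity matrix: het[i, j] is True if subject i is heterozygous at SNP j.
het = rng.random((n_subjects, n_snps)) < rng.uniform(0.1, 0.5, size=n_snps)

def greedy_panel(het, target=0.97):
    """Repeatedly add the SNP that covers the most still-uncovered subjects."""
    covered = np.zeros(het.shape[0], dtype=bool)
    panel = []
    while covered.mean() < target and len(panel) < het.shape[1]:
        gains = het[~covered].sum(axis=0)      # newly covered subjects per candidate SNP
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break
        panel.append(best)
        covered |= het[:, best]
    return panel, covered.mean()

panel, coverage = greedy_panel(het)
print(f"{len(panel)} SNPs cover {coverage:.1%} of subjects")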
6. Temporal profiling of cytokine-induced genes in pancreatic beta-cells by meta-analysis and network inference.
- Author
-
Lopes, Miguel, Kutlu, Burak, Miani, Michela, Bang-Berthelsen, Claus H, Størling, Joachim, Pociot, Flemming, Goodman, Nathan, Hood, Lee, Welsh, Nils, Bontempi, Gianluca, and Eizirik, Decio L.
- Abstract
Type 1 Diabetes (T1D) is an autoimmune disease where local release of cytokines such as IL-1β and IFN-γ contributes to β-cell apoptosis. To identify relevant genes regulating this process we performed a meta-analysis of 8 datasets of β-cell gene expression after exposure to IL-1β and IFN-γ. Two of these datasets are novel and contain time-series expressions in human islet cells and rat INS-1E cells. Genes were ranked according to their differential expression within and after 24h from exposure, and characterized by function and prior knowledge in the literature. A regulatory network was then inferred from the human time expression datasets, using a time-series extension of a network inference method. The two most differentially expressed genes previously unknown in T1D literature (RIPK2 and ELF3) were found to modulate cytokine-induced apoptosis. The inferred regulatory network is thus supported by the experimental validation, providing a proof-of-concept for the proposed statistical inference approach.
- Published
- 2014
7. Temporal profiling of cytokine-induced genes in pancreatic beta-cells by meta-analysis and network inference
- Author
-
Lopes, Miguel, Kutlu, Burak, Miani, Michela, Bang-Berthelsen, Claus H., Storling, Joachim, Pociot, Flemming, Goodman, Nathan, Hood, Lee, Welsh, Nils, Bontempi, Gianluca, and Eizirik, Decio L.
- Abstract
Type I Diabetes (T1D) is an autoimmune disease where local release of cytokines such as IL-1 beta and IFN-gamma contributes to beta-cell apoptosis. To identify relevant genes regulating this process we performed a meta-analysis of 8 datasets of beta-cell gene expression after exposure to IL-1 beta and IFN-gamma. Two of these datasets are novel and contain time-series expressions in human islet cells and rat INS-1E cells. Genes were ranked according to their differential expression within and after 24 h from exposure, and characterized by function and prior knowledge in the literature. A regulatory network was then inferred from the human time expression datasets, using a time-series extension of a network inference method. The two most differentially expressed genes previously unknown in T1D literature (RIPK2 and ELF3) were found to modulate cytokine-induced apoptosis. The inferred regulatory network is thus supported by the experimental validation, providing a proof-of-concept for the proposed statistical inference approach.
- Published
- 2014
- Full Text
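Both records above describe ranking genes by differential expression after cytokine exposure before inferring a regulatory network. A minimal ranking sketch on synthetic expression values follows; the gene labels and the simple |log2 fold change| score are illustrative assumptions, not the paper's exact statistic.

import numpy as np

rng = np.random.default_rng(1)
genes = [f"gene_{i}" for i in range(200)]                        # placeholder identifiers
control  = rng.lognormal(mean=5.0, sigma=0.3, size=(200, 3))     # 3 untreated replicates
cytokine = control * rng.lognormal(mean=0.0, sigma=0.6, size=(200, 3))  # cytokine-exposed replicates

log2fc = np.log2(cytokine.mean(axis=1) / control.mean(axis=1))
ranking = sorted(zip(genes, log2fc), key=lambda g: abs(g[1]), reverse=True)

for name, fc in ranking[:5]:                                     # most differentially expressed
    print(f"{name}: log2 fold change = {fc:+.2f}")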
8. Adaptive Waveforms for Automatic Target Recognition and Range-Doppler Ambiguity Mitigation in Cognitive Sensor
- Author
-
Goodman, Nathan A., Gehm, Michael, Djordjevic, Ivan B., Bilgin, Ali, and Bae, Junhyeong
- Abstract
This dissertation shows the performance of adaptive waveforms when applied to two radar applications: automatic target recognition (ATR) and range-Doppler ambiguity mitigation. The adaptive waveforms are implemented via a feedback loop from receiver to transmitter, such that previous radar measurements affect how the adaptive waveforms proceed. For the ATR application, the adaptive transmitter can change the waveform's temporal structure to improve target recognition performance. For the range-Doppler ambiguity mitigation application, the adaptive transmitter can change the pulse repetition frequency (PRF) to mitigate range and Doppler ambiguity. In the ATR application, commercial electromagnetic software is used to create high-fidelity aircraft target signatures. Realistic waveform constraints are applied to show radar performance. The radar equation is incorporated into the waveform design technique and template-based classification is performed. A translation-invariant feature is used for the inaccurately-known-range scenario. The performance of adaptive waveforms is evaluated not only with a monostatic radar, but also with widely separated MIMO radar. In MIMO radar, multiple transmit waveforms are used, but spectral leakage caused by the constant-modulus constraint shows minimal interference effect. In the range-Doppler ambiguity mitigation application, particle-filter-based track-before-detect for a single target is extended to track and detect multiple low signal-to-noise ratio (SNR) targets simultaneously. To mitigate ambiguity, multiple PRFs are used and an improved PRF selection technique is implemented via predicted-entropy computation when both blind and clutter zones are considered.
- Published
- 2013
9. The Ingenious Dr. Franklin: Selected Scientific Letters of Benjamin Franklin
- Author
-
Goodman, Nathan G.
- Published
- 2012
10. Cognitive Radar Network: Cooperative Adaptive Beamsteering for Integrated Search-and-Track Application
- Author
-
Naval Postgraduate School (U.S.), Electrical and Computer Engineering, Romero, Ric A., and Goodman, Nathan A.
- Abstract
Cognitive radar (CR) is a paradigm shift from a traditional radar system in that previous knowledge and current measurements obtained from the radar channel are used to form a probabilistic understanding of its environment. Moreover, CR incorporates this probabilistic knowledge into its task priorities to form illumination and probing strategies, thereby rendering it a closed-loop system. Depending on the hardware’s capabilities and limitations, there are various degrees of freedom that a CR may utilize. Here we concentrate on spatial illumination as a resource, where adaptive beamsteering is used for search-and-track functions. We propose a multiplatform cognitive radar network (CRN) for integrated search-and-track application. Specifically, two radars cooperate in forming a dynamic spatial illumination strategy, where beamsteering is matched to the channel uncertainty to perform the search function. Once a target is detected and a track is initiated, track information is integrated into the beamsteering strategy as part of CR’s task prioritization.
- Published
- 2012
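The record above matches the beamsteering strategy to the radar channel uncertainty. A minimal sketch of that idea on a discretized angle grid, steering the next search beam to the cell whose target-presence probability is most uncertain; the grid, probabilities, and binary-entropy criterion are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def cell_entropy(p):
    """Binary entropy (bits) of target presence in each angular cell."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_beam(presence_prob):
    """Steer toward the cell whose occupancy is currently most uncertain."""
    return int(np.argmax(cell_entropy(presence_prob)))

# Example: 8 angular cells; cell 3 already holds a confident track, so the
# search beam goes to a cell whose occupancy probability is still close to 50/50.
p = np.array([0.02, 0.05, 0.10, 0.90, 0.45, 0.05, 0.30, 0.02])
print("steer to cell", next_beam(p))

In the paper's integrated search-and-track setting the track information also feeds the prioritization; the sketch shows only the search-side uncertainty criterion.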
11. Theory and application of SNR and mutual information matched illumination waveforms
- Author
-
Naval Postgraduate School (U.S.), Electrical and Computer Engineering, Romero, Ric A., Bae, Junhyeong, and Goodman, Nathan A.
- Abstract
A comprehensive theory of matched illumination waveforms for both deterministic and stochastic extended targets is presented. Design of matched waveforms based on maximization of both signal-to-noise ratio (SNR) and mutual information (MI) is considered. In addition the problem of matched waveform design in signal-dependent interference is extensively addressed. New results include SNR-based waveform design for stochastic targets, SNR-based design for a known target in signal-dependent interference, and MI-based design in signal-dependent interference. Finally we relate MI and SNR in the context of waveform design for stochastic targets.
- Published
- 2011
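For the MI-based designs surveyed above, the classical stochastic-target result in noise alone allocates transmit energy across frequency by water-filling against the ratio of noise level to target spectral variance; the report's signal-dependent-interference designs go beyond this case. A numerical sketch with made-up spectra and energy budget:

import numpy as np

def waterfill(target_var, noise_psd, energy, iters=60):
    """Return e_f = max(0, mu - n_f / s_f), with mu chosen to spend the full budget."""
    floor = noise_psd / target_var
    lo, hi = 0.0, floor.max() + energy            # bracket for the water level mu
    for _ in range(iters):                        # bisection on the water level
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - floor).sum() < energy:
            lo = mu
        else:
            hi = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - floor)

target_var = np.array([0.1, 0.5, 1.0, 2.0, 2.0, 1.0, 0.5, 0.1])  # placeholder target PSD
noise_psd = np.full(8, 0.2)                                       # flat noise PSD
e = waterfill(target_var, noise_psd, energy=4.0)
print(np.round(e, 2), "total =", round(float(e.sum()), 2))

The allocation concentrates energy in bins where the target spectrum dominates the noise, which is the intuition behind MI-matched illumination.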
12. Adaptive Beamsteering for Search-and-Track Application with Cognitive Radar Network
- Author
-
Naval Postgraduate School (U.S.), Electrical and Computer Engineering, Romero, Ric A., and Goodman, Nathan A.
- Abstract
In this paper, we introduce the concept of a Cognitive Radar Network (CRN). The goal of the radar platforms in a CRN is to cooperate in illuminating the radar channel in an efficient manner in an effort to search for moving targets. Moreover, when a detection is declared, the CRN should incorporate the tracking requirement into the illumination strategy. That is, the beamsteering strategy must exploit the radar channel uncertainty, which is a function of the probabilistic representation of the channel. The radar channel uncertainty is dynamic, and as such the CRN's beamsteering strategy should be dynamic as well. Here, we demonstrate a CRN by utilizing two static radar platforms that form a dynamic integrated search-and-track beamsteering strategy matched to the radar channel uncertainty.
- Published
- 2011
13. Cognitive Radar
- Author
-
ARIZONA UNIV TUCSON and Goodman, Nathan A.
- Abstract
Several advances were made toward a foundation for cognitive radar. Several extensions to optimum or matched waveform theory were completed, including formalization of a random-target variance function used in the design methods, extensions to MIMO radar for target identification, information-based waveforms in the presence of ground clutter, incorporation of constant-modulus design techniques, and an adaptive PRF selection technique. These techniques were also applied to spatial waveform design (i.e., beamshaping) in order to develop the fundamentals for a cooperative multiplatform air-to-ground surveillance capability. Two techniques based on the covariance of target track states were developed for integrating detection and tracking into the same Bayesian framework, as well as probability-updating techniques in target parameter space for multi-platform detection and tracking. This allowed for beamsteering toward areas in a scene where target presence and/or parameters were most uncertain.
- Published
- 2010
14. MATCHED WAVEFORM DESIGN AND ADAPTIVE BEAMSTEERING IN COGNITIVE RADAR APPLICATIONS
- Author
-
Goodman, Nathan, Gehm, Michael, Ryan, William, and Romero, Ric
- Abstract
Cognitive Radar (CR) is a paradigm shift from a traditional radar system in that previous knowledge and current measurements obtained from the radar channel are used to form a probabilistic understanding of its environment. Moreover, CR incorporates this probabilistic knowledge into its task priorities to form illumination and probing strategies, thereby rendering it a closed-loop system. Depending on the hardware's capabilities and limitations, there are various degrees of freedom that a CR may utilize. Here we concentrate on two: temporal, manifested in adaptive waveform design; and spatial, where adaptive beamsteering is used for search-and-track functions. This work is divided into three parts. First, a comprehensive theory of SNR and mutual information (MI) matched waveform design in signal-dependent interference is presented. Second, these waveforms are used in a closed-loop radar platform performing target discrimination and target class identification, where the extended targets are either deterministic or stochastic. The CR's probabilistic understanding is updated via the Bayesian framework. Lastly, we propose a multiplatform CR network for integrated search-and-track application. The two radar platforms cooperate in developing a four-dimensional probabilistic understanding of the channel. The two radars also cooperate in forming a dynamic spatial illumination strategy, where beamsteering is matched to the channel uncertainty to perform the search function. Once a target is detected and a track is initiated, track information is integrated into the beamsteering strategy as part of CR's task prioritization.
- Published
- 2010
15. COMPRESSIVE IMAGING FOR DIFFERENCE IMAGE FORMATION AND WIDE-FIELD-OF-VIEW TARGET TRACKING
- Author
-
Goodman, Nathan A., Gehm, Michael, Vasic, Bane, and Shikhar
- Abstract
Use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking compressive measurements. In s
- Published
- 2010
16. INFORMATION-THEORETIC OPTIMIZATION OF WIRELESS SENSOR NETWORKS AND RADAR SYSTEMS
- Author
-
Goodman, Nathan A., Bilgin, Ali, Djordjevic, Ivan, and Kim, Hyoung-soo
- Abstract
Three information measures are discussed and used as objective functions for optimization of wireless sensor networks (WSNs) and radar systems. In addition, a long-term system performance measure is developed for evaluating the performance of slow-fading WSNs. Three system applications are considered: a distributed detection system, a distributed multiple hypothesis system, and a radar target recognition system. First, we consider sensor power optimization for distributed binary detection systems. The system communicates over slow-fading orthogonal multiple access channels. In earlier work, it was demonstrated that system performance could be improved by adjusting transmit power to maximize the J-divergence measure of a binary detection system. We define outage probability for a slow-fading system as a long-term performance measure, and analytically develop the detection outage with the given system model. Based on the analytical result of the outage probability, diversity gain is derived and shown to be proportional to the number of the sensor nodes. Then, we extend the optimized power control strategy to a distributed multiple hypothesis system, and enhance the power optimization by exploiting a priori probabilities and local sensor statistics. We also extend outage probability to the distributed multiple-hypotheses problem. The third application is radar waveform design with a new performance measure: Task-Specific Information (TSI). TSI is an information-theoretic measure formulated for one or more specific sensor tasks by encoding the task(s) directly into the signal model via source variables. For example, we consider the problem of correctly classifying a linear system from a set of known alternatives, and the source variable takes the form of an indicator vector that selects the transfer function of the true hypothesis. We then compare the performance of TSI with conventional waveforms and other information-theoretic waveform designs via simulation. We apply rad
- Published
- 2010
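The first application above adjusts sensor transmit power to maximize the J-divergence of the fused binary test. A small numerical sketch under an assumed amplify-and-forward Gaussian model; the model, channel gains, and crude random-search optimizer are placeholders for the dissertation's analytical treatment.

import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.5, 1.0, 0.3, 0.1])         # one draw of per-sensor channel gains
m, sig_s2, sig_n2, P = 1.0, 0.5, 1.0, 4.0  # mean shift, sensing noise, channel noise, power budget

def j_divergence(p):
    """J-divergence of the fused test; equal-variance Gaussian components add across sensors."""
    return ((h**2 * p * m**2) / (h**2 * p * sig_s2 + sig_n2)).sum()

uniform = np.full(4, P / 4)
best, best_j = uniform, j_divergence(uniform)
for _ in range(20000):                     # crude random search on the total-power simplex
    cand = rng.dirichlet(np.ones(4)) * P
    if j_divergence(cand) > best_j:
        best, best_j = cand, j_divergence(cand)

print("uniform power:  J =", round(j_divergence(uniform), 3))
print("searched power: J =", round(best_j, 3), "allocation:", np.round(best, 2))

The search unsurprisingly shifts power toward sensors with stronger channels, which is the qualitative effect that J-divergence-based power control exploits.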
17. IMPROVING THE PERFORMANCE OF ANTENNAS WITH METAMATERIAL CONSTRUCTS
- Author
-
Ziolkowski, Richard W., Xin, Hao, Goodman, Nathan, and Jin, Peng
- Abstract
Metamaterials (MTMs) are artificial materials that can be designed to have exotic properties. Because their unit cells are much smaller than a wavelength, homogenization leads to effective, macroscopic permittivity and permeability values that can be used to determine the MTM behavior for applications. There are four possible combinations of the signs of permittivity and permeability values. The desired choice of sign depends on the particular application. Inspired by these MTM concepts, several MTM-inspired structures are adopted in this dissertation to improve various performance characteristics of several different classes of antennas. Three different metamaterial-inspired engineering approaches are introduced to achieve enhanced antenna designs. First, the transmission-line (TL) type of MTM is used to modify the dispersion characteristics of a log-periodic dipole array (LPDA) antenna. When LPDA antennas are used for wideband pulse applications, they suffer from severe frequency dispersion because the phase center location of each element is frequency dependent. By incorporating MTM-based phase shifters, the LPDA frequency dispersion properties are improved significantly. Both eight and ten element MTM-modified LPDA antennas are designed to enhance the fidelity of the resulting output pulses. Second, epsilon-negative unit cells are used to design several types of electrically small, resonant parasitic elements which, when placed in the very near field of a driven element, lead to nearly complete matching (i.e., reactance and resistance) of the resulting electrically small antenna system to the source and to an enhanced radiation efficiency. However, despite these MTM-inspired electrically small antennas being very efficient radiators, their bandwidth remains very narrow, being constrained by physical limitations. Third, we introduce an active parasitic element to enhance the bandwidth performance of the MTM-inspired antennas. The required active parasitic element is
- Published
- 2010
18. Detailed transcriptome atlas of the pancreatic beta cell
- Author
-
Kutlu, Burak, Burdick, David, Baxter, David, Rasschaert, Joanne, Flamez, Daisy, Eizirik, Decio L, Welsh, Nils, Goodman, Nathan, and Hood, Leroy
- Abstract
BACKGROUND: Gene expression patterns provide a detailed view of cellular functions. Comparison of profiles in disease vs normal conditions provides insights into the processes underlying disease progression. However, availability and integration of public gene expression datasets remains a major challenge. The aim of the present study was to explore the transcriptome of pancreatic islets and, based on this information, to prepare a comprehensive and open access inventory of insulin-producing beta cell gene expression, the Beta Cell Gene Atlas (BCGA). METHODS: We performed Massively Parallel Signature Sequencing (MPSS) analysis of human pancreatic islet samples and microarray analyses of purified rat beta cells, alpha cells and INS-1 cells, and compared the information with available array data in the literature. RESULTS: MPSS analysis detected around 7600 mRNA transcripts, of which around a third were of low abundance. We identified 2000 and 1400 transcripts that are enriched/depleted in beta cells compared to alpha cells and INS-1 cells, respectively. Microarray analysis identified around 200 transcription factors that are differentially expressed in either beta or alpha cells. We reanalyzed publicly available gene expression data and integrated these results with the new data from this study to build the BCGA. The BCGA contains basal (untreated conditions) gene expression level estimates in beta cells as well as in different cell types in human, rat and mouse pancreas. Hierarchical clustering of expression profile estimates classified cell types by species, while beta cells clustered together. CONCLUSION: Our gene atlas is a valuable source of detailed information on the gene expression distribution in beta cells and pancreatic islets, along with insulin-producing cell lines. The BCGA tool, as well as the data and code used to generate the Atlas, are available at the T1Dbase website (T1DBase.org).
- Published
- 2009
- Full Text
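The RESULTS above count transcripts enriched or depleted in beta cells compared to alpha cells. A minimal fold-change sketch on synthetic microarray-style values; the probe labels, sample counts, and 2-fold threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
probes = [f"probe_{i}" for i in range(500)]             # placeholder identifiers
beta  = rng.lognormal(6.0, 1.0, size=(500, 4))          # 4 beta-cell arrays
alpha = rng.lognormal(6.0, 1.0, size=(500, 4))          # 4 alpha-cell arrays

log2fc = np.log2(beta.mean(axis=1)) - np.log2(alpha.mean(axis=1))
enriched = [p for p, fc in zip(probes, log2fc) if fc >= 1.0]   # at least 2-fold higher in beta
depleted = [p for p, fc in zip(probes, log2fc) if fc <= -1.0]
print(len(enriched), "beta-enriched and", len(depleted), "beta-depleted probes (synthetic data)")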
19. Generalized Adaptive Radar Signal Processing
- Author
-
TECHNOLOGY SERVICE CORP TRUMBULL CT, Mountcastle, Paul D., Goodman, Nathan A., and Morgan, Charles J.
- Abstract
In this paper, the use of adaptive weights over the radar measurement dimensions of pulse time, antenna receive element, and wideband frequency is extended to cover a much broader range of radar detection problems than was supposed in the original formulation of space-time adaptive signal processing (STAP). These problems include, among others: (1) adaptive beamforming in three dimensions; (2) detection and 3D ISAR imaging of targets in general torque-free motion immersed in a field of tumbling clutter dipoles; (3) detection of moving targets in stationary clutter using all three components of velocity and all three components of position; (4) discrimination of accelerating targets from uniformly moving targets and stationary clutter; and (5) adaptive signal processing with distributed and moving array elements. The unifying thread among these applications is the use of adaptive weights over the measured radar data to enhance scatterers that follow one class of paths while suppressing scatterers that follow paths not in that class. Applications (1) and (2) above are treated as examples. Presented at the Army Science Conference (26th), Orlando, Florida, 1-4 December 2008, and published in its Proceedings.
- Published
- 2008
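The unifying thread named above, adaptive weights over the measured radar data that enhance one class of scatterer paths while suppressing others, takes the classical STAP form w = R^{-1} s (up to scale) for an interference covariance R and a steering vector s describing the path class of interest. A small numerical sketch with illustrative dimensions and covariance model:

import numpy as np

rng = np.random.default_rng(4)
n = 8                                               # stacked pulse/element dimension
steer = np.exp(1j * np.pi * 0.3 * np.arange(n))     # steering vector for the path of interest

# Interference covariance: one strong clutter-like component plus white noise.
clutter = np.exp(-1j * np.pi * 0.2 * np.arange(n))
R = 100.0 * np.outer(clutter, clutter.conj()) + np.eye(n)

w = np.linalg.solve(R, steer)                       # adaptive weights, proportional to R^{-1} s

def output_sinr(w, s, R):
    """Output SINR (up to signal power) for weight vector w."""
    return float(np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w))

print("adaptive weights SINR:", round(output_sinr(w, steer, R), 2))
print("non-adaptive (matched filter) SINR:", round(output_sinr(steer, steer, R), 2))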
20. Enhanced Detection of Ground Targets by Airborne Radar
- Author
-
Melde, Kathleen, Goodman, Nathan, Marcellin, Michael W., and Bruyere, Donald Patrick
- Abstract
This dissertation deals with techniques that enhance the detection of ground targets by airborne radar. The methods employed address the air-to-ground detection problem by breaking it into two broad categories. The first category deals with improving detection of moving targets by using space-time adaptive processing (STAP) in a multistatic configuration. Multistatic STAP provides increased detection performance by observing targets from multiple perspectives. Multiple viewing perspectives afford the combined system more opportunities to observe the radial velocity of the target directly, thus increasing the Doppler shift that helps distinguish the target from background clutter. Detection performance also improves through an increased number of independent observations of a target, which reduces the likelihood of the target fading for the combined system. Increasing detection performance by increasing the number of independent observations is referred to in communications theory as channel diversity. The second part of this dissertation deals with the problem of distinguishing stationary targets from background clutter within a Synthetic Aperture Radar image. Stationary target discrimination is accomplished by exploiting the statistical nature of multifaceted metallic objects within a scene. The performance improvement for both the moving- and stationary-target methods is characterized and compared to other systems that attempt to accomplish the same end using different means.
- Published
- 2008
21. Multiframe Superresolution Techniques For Distributed Imaging Systems
- Author
-
Neifeld, Mark A., Marcellin, Michael W., Kostuk, Raymond, Goodman, Nathan, and Shankar, Premchandra M.
- Abstract
Multiframe image superresolution has been an active research area for many years. In this approach image processing techniques are used to combine multiple low-resolution (LR) images capturing different views of an object. These multiple images are generally under-sampled, degraded by optical and pixel blurs, and corrupted by measurement noise. We exploit diversities in the imaging channels, namely, the number of cameras, magnification, position, and rotation, to undo degradations. Using an iterative back-projection (IBP) algorithm we quantify the improvements in image fidelity gained by using multiple frames compared to a single frame, and discuss effects of system parameters on the reconstruction fidelity. As an example, for a system in which the pixel size is matched to optical blur size at a moderate detector noise, we can reduce the reconstruction root-mean-square-error by 570% by using 16 cameras and a large amount of diversity in deployment. We develop a new technique for superresolving binary imagery by incorporating finite-alphabet prior knowledge. We employ a message-passing based algorithm called two-dimensional distributed data detection (2D4) to estimate the object pixel likelihoods. We present a novel complexity-reduction technique that makes the algorithm suitable even for channels with support size as large as 5x5 object pixels. We compare the performance and complexity of 2D4 with that of IBP. In an imaging system with an optical blur spot matched to pixel size, and four 2x2 undersampled LR images, the reconstruction error for 2D4 is 300 times smaller than that for IBP at a signal-to-noise ratio of 38 dB. We also present a transform-domain superresolution algorithm to efficiently incorporate sparsity as a form of prior knowledge. The prior knowledge that the object is sparse in some domain is incorporated in two ways: first we use the popular L1 norm as the regularization operator. Secondly we model wavelet coefficients of natural objects using generaliz
- Published
- 2008
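The iterative back-projection (IBP) algorithm mentioned above refines a high-resolution estimate by simulating each low-resolution frame from the current estimate and back-projecting the residuals. Below is a 1-D toy version with integer shifts, a 3-tap blur, and 2x downsampling as the assumed forward model; all parameters are illustrative, not the dissertation's imaging model.

import numpy as np

KERNEL = np.array([0.25, 0.5, 0.25])                 # symmetric blur

def forward(x, shift):
    """Toy LR model: shift, blur, then 2x downsample."""
    return np.convolve(np.roll(x, shift), KERNEL, mode="same")[::2]

def back_project(residual, shift, n):
    """Adjoint-like operator: upsample the residual, blur, unshift."""
    up = np.zeros(n)
    up[::2] = residual
    return np.roll(np.convolve(up, KERNEL, mode="same"), -shift)

rng = np.random.default_rng(5)
truth = rng.random(64)
shifts = [0, 1, 2, 3]                                # four diversely shifted cameras
frames = [forward(truth, s) + 0.01 * rng.standard_normal(32) for s in shifts]

x = np.repeat(frames[0], 2)                          # crude initial HR estimate
for _ in range(50):                                  # IBP iterations
    update = sum(back_project(y - forward(x, s), s, truth.size)
                 for y, s in zip(frames, shifts))
    x = x + 0.5 * update / len(shifts)

print("RMSE vs truth:", round(float(np.sqrt(np.mean((x - truth) ** 2))), 4))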
22. Feature-Specific Imaging: Extensions to Adaptive Object Recognition and Active Illumination Based Scene Reconstruction
- Author
-
Marcellin, Michael W., Goodman, Nathan, Kostuk, Raymond, and Baheti, Pawan Kumar
- Abstract
Computational imaging (CI) systems are hybrid imagers in which the optical and post-processing sub-systems are jointly optimized to maximize the task-specific performance. In this dissertation we consider a form of CI system that measures the linear projections (i.e., features) of the scene optically; it is commonly referred to as feature-specific imaging (FSI). Most of the previous work on FSI has been concerned with image reconstruction. Previous FSI techniques have also been non-adaptive and restricted to the use of ambient illumination. We consider two novel extensions of the FSI system in this work. We first present an adaptive feature-specific imaging (AFSI) system and consider its application to a face-recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. We present both statistical and information-theoretic adaptation mechanisms for the AFSI system. The sequential hypothesis testing framework is used to determine the number of measurements required for achieving a specified misclassification probability. We demonstrate that the AFSI system requires significantly fewer measurements than static FSI (SFSI) and conventional imaging at low signal-to-noise ratio (SNR). We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage. Experimental results validating the AFSI system are presented. Next we present a FSI system based on the use of structured light. Feature measurements are obtained by projecting spatially structured illumination onto an object and collecting all of the reflected light onto a single photodetector. We refer to this system as feature-specific structured imaging (FSSI). Principal component features are used to define the illumination patterns. The optimal LMMSE operator is used to generate object estimates from the measurements. We demonstrate that this new imaging approach reduces imager complexity and provides improved image quality
- Published
- 2008
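The FSSI reconstruction step above applies the linear minimum-mean-square-error (LMMSE) operator to feature measurements y = F x + n. A compact numerical sketch follows; the prior covariance, feature matrix, and sizes are placeholders rather than the dissertation's principal-component setup.

import numpy as np

rng = np.random.default_rng(6)
n_pix, n_feat = 64, 16

idx = np.arange(n_pix)
C_x = 0.9 ** np.abs(idx[:, None] - idx[None, :])     # smooth-object prior covariance
C_n = 0.01 * np.eye(n_feat)                          # measurement noise covariance
F = rng.standard_normal((n_feat, n_pix)) / np.sqrt(n_pix)   # feature projections

x = rng.multivariate_normal(np.zeros(n_pix), C_x)            # one object realization
y = F @ x + rng.multivariate_normal(np.zeros(n_feat), C_n)   # feature measurements

# LMMSE estimate for the zero-mean model: x_hat = C_x F^T (F C_x F^T + C_n)^{-1} y
W = C_x @ F.T @ np.linalg.inv(F @ C_x @ F.T + C_n)
x_hat = W @ y
print("relative reconstruction error:",
      round(float(np.linalg.norm(x_hat - x) / np.linalg.norm(x)), 3))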
23. Multilevel Methodology For Simulation Of Spatio-Temporal Systems With Heterogeneous Activity: Application To Spread Of Valley Fever Fungus
- Author
-
Zeigler, Bernard P., Tharp, Hal S., Goodman, Nathan A., and Jammalamadaka, Rajanikanth
- Abstract
Spatio-temporal systems with heterogeneity in their structure and behavior have two major problems. The first is that such systems extend over very large spatial and temporal domains and consume so many resources to simulate that they are infeasible to study with current platforms. The second is that the data available for understanding such systems is limited. This also makes it difficult to get the data for validation of their constituent processes while simultaneously considering their global behavior. For example, the valley fever fungus considered in this dissertation is spread over a large spatial grid in the arid Southwest and typically needs to be simulated over several decades of time to obtain useful information. It is also hard to get the temperature and moisture data at every grid point of the spatial domain over the region of study. In order to address the first problem, we develop a method based on the discrete event system specification which exploits the heterogeneity in the activity of the spatio-temporal system and which has been shown to be effective in solving relatively simple partial differential equation systems. The benefit of addressing the first problem is that it then becomes feasible to address the second problem. We address the second problem by making use of a multilevel methodology based on modeling and simulation and systems theory. This methodology helps us in the construction of models with different resolutions (base and lumped models). This allows us to refine an initially constructed lumped model with detailed physics-based process models and assess whether they improve on the original lumped models. For that assessment, we use the concept of experimental frame to delimit where the improvement is needed. This allows us to work with the available data, improve the component models in their own experimental frame, and then move them to the overall frame. In this dissertation, we develop a multilevel methodology and apply it
- Published
- 2008
24. Design of Low-Floor Quasi-Cyclic IRA Codes and Their FPGA Decoders
- Author
-
Vasic, Bane, Goodman, Nathan, and Zhang, Yifei
- Abstract
Low-density parity-check (LDPC) codes have been intensively studied in the past decade for their capacity-approaching performance. LDPC code implementation complexity and the error-rate floor are still two significant unsolved issues which prevent their application in some important communication systems. In this dissertation, we make efforts toward solving these two problems by introducing the design of a class of LDPC codes called structured irregular repeat-accumulate (S-IRA) codes. These S-IRA codes combine several advantages of other types of LDPC codes, including low encoder and decoder complexities, flexibility in design, and good performance on different channels. It is also demonstrated in this dissertation that the S-IRA codes are suitable for rate-compatible code family design, and a multi-rate code family has been designed which may be implemented with a single encoder/decoder. The study of the error floor problem of LDPC codes is very difficult because simulating LDPC codes on a computer at very low error rates takes an unacceptably long time. To circumvent this difficulty, we implemented a universal quasi-cyclic LDPC decoder on a field programmable gate array (FPGA) platform. This hardware platform accelerates the simulations by more than 100 times as compared to software simulations. We implemented two types of decoders with partially parallel architectures on the FPGA: a circulant-based decoder and a protograph-based decoder. Focusing on the protograph-based decoder, different soft iterative decoding algorithms were implemented. This provides us with a platform for quickly evaluating and analyzing different quasi-cyclic LDPC codes, including the S-IRA codes. A universal decoder architecture is also proposed which is capable of decoding an arbitrary LDPC code, quasi-cyclic or not. Finally, we studied the low-floor problem by focusing on one example S-IRA code. We identified the weaknesses of the code and proposed several techniques to lower the error floor
- Published
- 2007
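The decoders above implement soft iterative decoding of quasi-cyclic LDPC codes on an FPGA. As a much simpler stand-in for the idea of iterative parity-check decoding, here is a hard-decision, Gallager-style bit-flipping decoder on a toy parity-check matrix; the (7,4) Hamming matrix below is only an illustration, not an S-IRA code, and the dissertation's decoders use soft algorithms.

import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],      # toy parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    """Flip every bit for which a strict majority of its checks is unsatisfied."""
    r = r.copy()
    deg = H.sum(axis=0)                   # number of checks touching each bit
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():            # all parity checks satisfied
            break
        votes = H.T @ syndrome            # unsatisfied checks touching each bit
        r = (r + (2 * votes > deg)) % 2
    return r

received = np.zeros(7, dtype=int)         # all-zero codeword ...
received[4] ^= 1                          # ... hit by a single bit error
print("decoded:", bit_flip_decode(received, H))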
25. Nonlinear Bounded-Error Target State Estimation Using Redundant States
- Author
-
Strickland, Robin N., Goodman, Nathan A., Tharp, Hal S., and Covello, James Anthony
- Abstract
When the primary measurement sensor is passive in nature--by which we mean that it does not directly measure range or range rate--there are well-documented challenges for target state estimation. Most estimation schemes rely on variations of the Extended Kalman Filter (EKF), which, in certain situations, suffer from divergence and/or covariance collapse. For this and other reasons, we believe that the Kalman filter is fundamentally ill-suited to the problems that are inherent in target state estimation using passive sensors. As an alternative, we propose a bounded-error (or set-membership) approach to the target state estimation problem. Such estimators are nearly as old as the Kalman filter, but have enjoyed much less attention. In this study we develop a practical estimator that bounds the target states, and apply it to the two-dimensional case of a submarine tracking a surface vessel, which is commonly referred to as Target Motion Analysis (TMA). The estimator is robust in the sense that the true target state does not escape the determined bounds; and the estimator is not unduly pessimistic in the sense that the bounds are not wider than the situation dictates. The estimator is--as is the problem itself--nonlinear and geometric in nature. In part, the simplicity of the estimator is maintained by using redundant states to parameterize the target's velocity. These redundant states also simplify the incorporation of other measurements that are frequently available to the system. The estimator's performance is assessed in a series of simulations and the results are analyzed. Extensions of the algorithm are considered.
- Published
- 2006
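The record above argues for a bounded-error (set-membership) estimator whose bounds always contain the true state. Its estimator is two-dimensional, nonlinear, and uses redundant velocity states; the sketch below shows only the core interval-intersection idea on a scalar state with a bounded measurement error, and the numbers are hypothetical.

def set_membership_update(bounds, measurement, error_bound):
    """Intersect the feasible interval with the set consistent with y = x + v, |v| <= e."""
    lo, hi = bounds
    new_lo = max(lo, measurement - error_bound)
    new_hi = min(hi, measurement + error_bound)
    if new_lo > new_hi:
        raise ValueError("empty feasible set: the model or error bound was violated")
    return (new_lo, new_hi)

bounds = (0.0, 50.0)                       # prior feasible range (e.g., km)
for y in (22.0, 19.5, 20.4):               # bounded-error measurements, |v| <= 2.5
    bounds = set_membership_update(bounds, y, error_bound=2.5)
    print("feasible interval:", bounds)

Unlike a Kalman-style covariance, the feasible interval can only shrink or stay the same, and the true state never leaves it as long as the error bound holds, which is the robustness property the abstract emphasizes.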
26. THE CODING-SPREADING TRADEOFF PROBLEM IN FINITE-SIZED SYNCHRONOUS DS-CDMA SYSTEMS
- Author
-
Ryan, William E., Marcellin, Michael W., Goodman, Nathan, Leonard, John L., and Tang, Zuqiang
- Abstract
This dissertation provides a comprehensive analysis of the coding-spreading tradeoff problem in finite-sized synchronous DS-CDMA systems. In contrast to the large system which has a large number of users, the finite-sized system refers to a system with a small number of users. Much work has been performed in the past on the analysis of the spectral efficiency of synchronous DS-CDMA systems and the associated coding-spreading tradeoff problem. However, most of the analysis is based on the large-system assumptions. In this dissertation, we focused on finite-sized systems with the help of numerical methods and Monte-Carlo simulations. Binary-input achievable information rates for finite-sized synchronous DS-CDMA systems with different detection/decoding schemes on the AWGN channel are numerically calculated for various coding/spreading apportionments. We use these results to determine the existence and value of an optimal code rate for a number of different multiuser receivers, where optimality is in the sense of minimizing the SNR required for reliable multiuser communication. Our results are consistent with the well-known fact that all coding (no spreading) is optimal for the maximum a posteriori receiver. Simulations of the LDPC-coded synchronous DS-CDMA systems with iterative multiuser detection/decoding and MMSE multiuser detection/single-user decoding are also presented to show that the binary-input capacities can be closely approached with practical schemes. The coding-spreading tradeoff is examined using these LDPC code simulation results, where agreement with the information-theoretic results is demonstrated. We extend our work to DS-CDMA systems on two idealized Rayleigh flat-fading channels: the chip-level flat-fading (CLFF) and the (code) symbol-level flat-fading (SLFF). These models represent ideal fast fading and slow fading channels, respectively. Both information-theoretic results and LDPC code simulation results are presented to show the effects of channe
- Published
- 2005
27. An Approach to Updating in a Redundant Distributed Data Base Environment.
- Author
-
COMPUTER CORP OF AMERICA CAMBRIDGE MA, Rothnie, James B, and Goodman, Nathan
- Abstract
This paper addresses the following problem: how to perform updates in a redundant distributed database system (RDDBS) in a manner that preserves the consistency of the database, yet does not introduce intolerable inter-computer synchronization delays. We have developed a technique for this problem which permits most update transactions to exhibit the same highly responsive behavior which an RDDBS can offer on retrieval. Other approaches to the problem of updating in a redundant distributed database environment suffer from substantially slower response times (in many cases involving clearly intolerable delays) as well as from problems in system scaling.
- Published
- 1977
28. Query Processing in SDD-1: A System for Distributed Databases
- Author
-
COMPUTER CORP OF AMERICA CAMBRIDGE MA, Goodman, Nathan, Bernstein, Philip A, Wong, Eugene, Reeve, Christopher L, and Rothnie, James B
- Abstract
This paper describes the techniques used to optimize relational queries in the SDD-1 distributed database system. Queries are submitted to SDD-1 in a high-level procedural language called Datalanguage. Optimization begins by translating each Datalanguage query into a relational calculus form called an envelope, which is essentially an aggregate-free QUEL query. This paper is primarily concerned with the optimization of envelopes. Envelopes are processed in two phases. The first phase executes relational operations at various sites of the distributed database in order to delimit a subset of the database that contains all data relevant to the envelope. This subset is called a reduction of the database. The second phase transmits the reduction to one designated site, and the query is executed locally at that site. The critical optimization problem is to perform the reduction phase efficiently. Success depends on designing a good repertoire of operators to use during this phase, and an effective algorithm for deciding which of these operators to use in processing a given envelope against a given database. The principal reduction operator that we employ is called semi-join. In this paper we define the semi-join operator, explain why semi-join is an effective reduction operator, and present an algorithm that constructs a cost-effective program of semi-joins given an envelope and a database.
- Published
- 1979
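The reduction phase above is built around the semi-join operator: ship only the join-column values of one relation, filter the other relation with them, and only then move the reduced relation for the final join. Below is a minimal sketch of that idea in Python; the table contents, column names, and two-site split are illustrative, not SDD-1's Datalanguage data model.

# Site A stores employees; site B stores departments already restricted to one location.
employees = [
    {"emp_id": 1, "name": "Ada",  "dept_id": 10},
    {"emp_id": 2, "name": "Bob",  "dept_id": 20},
    {"emp_id": 3, "name": "Cleo", "dept_id": 30},
]
paris_depts = [
    {"dept_id": 10, "dept": "R&D"},
    {"dept_id": 30, "dept": "Sales"},
]

# Reduction: ship only the join-column values from site B to site A ...
dept_keys = {row["dept_id"] for row in paris_depts}

# ... and apply the semi-join at site A, keeping rows that will participate in the join.
employees_reduced = [row for row in employees if row["dept_id"] in dept_keys]

# Assembly: only the reduced relation travels to the designated site for the actual join.
result = [{**e, **d} for e in employees_reduced for d in paris_depts
          if e["dept_id"] == d["dept_id"]]
print(result)

Whether a given semi-join pays off depends on how much it shrinks the relation versus the cost of shipping the join-column values, which is the trade-off the paper's program-construction algorithm evaluates.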
29. Fundamental Algorithms for Concurrency Control in Distributed Database Systems.
- Author
-
COMPUTER CORP OF AMERICA CAMBRIDGE MA, Bernstein, Philip A, and Goodman, Nathan
- Published
- 1980
30. The Concurrency Control Mechanism of SDD-1: A System for Distributed Databases (The General Case)
- Author
-
COMPUTER CORP OF AMERICA CAMBRIDGE MA, Bernstein, Philip A, Shipman, David W, Rothnie, James B, and Goodman, Nathan
- Abstract
SDD-1, a System for Distributed Databases, is a distributed database system being developed by CCA. SDD-1 permits data to be stored redundantly at several database sites in order to enhance the reliability and responsiveness of the system and to facilitate upwards scaling of system capacity. This paper describes the algorithm used by SDD-1 for updating data that is stored redundantly.
- Published
- 1977