26 results for "Neil A. Bomberger"
Search Results
2. COALESCE: A probabilistic ontology-based scene understanding approach.
- Author
- Majid Zandipour, Bradley J. Rhodes, and Neil A. Bomberger
- Published
- 2008
3. Probabilistic prediction of vessel motion at multiple spatial scales for maritime situation awareness.
- Author
- Majid Zandipour, Bradley J. Rhodes, and Neil A. Bomberger
- Published
- 2008
4. Probabilistic associative learning of vessel motion patterns at multiple spatial scales for maritime situation awareness.
- Author
- Bradley J. Rhodes, Neil A. Bomberger, and Majid Zandipour
- Published
- 2007
- Full Text
- View/download PDF
5. Associative Learning of Vessel Motion Patterns for Maritime Situation Awareness.
- Author
- Neil A. Bomberger, Bradley J. Rhodes, Michael Seibert, and Allen M. Waxman
- Published
- 2006
- Full Text
- View/download PDF
6. Multisensor & Spectral Image Fusion & Mining: From Neural Systems to Applications.
- Author
- David A. Fay, Richard T. Ivey, Neil A. Bomberger, and Allen M. Waxman
- Published
- 2003
- Full Text
- View/download PDF
7. Automated activity pattern learning and monitoring provide decision support to supervisors of busy environments.
- Author
- Bradley J. Rhodes, Neil A. Bomberger, Majid Zandipour, Denis Garagic, Lauren H. Stolzar, James R. Dankert, Allen M. Waxman, and Michael Seibert
- Published
- 2009
- Full Text
- View/download PDF
8. A new approach to higher-level information fusion using associative learning in semantic networks of spiking neurons.
- Author
- Neil A. Bomberger, Allen M. Waxman, Bradley J. Rhodes, and Nathan A. Sheldon
- Published
- 2007
- Full Text
- View/download PDF
9. RF Waveform Synthesis Guided by Deep Reinforcement Learning
- Author
- Jessee McClelland, Scott Kuzdeba, Andrew Radlbeck, T. Scott Brandes, and Neil A. Bomberger
- Subjects
- Identification (information), Computer engineering, Transmission (telecommunications), Steganography, Computer science, Transmitter, Population, Waveform, Reinforcement learning, Fingerprint recognition
- Abstract
In this work, we demonstrate a system that enhances radio frequency (RF) fingerprints of individual transmitters via waveform modification to uniquely identify them amidst an ensemble of identical transmitters. This has the potential to enable secure identification, even in the presence of stolen and retransmitted unique device identifiers that are present in the transmitted waveforms, and ensures robust communications. This approach also lends itself to steganography as the waveform modifications can themselves encode information. Our system uses Bayesian program learning to learn specific characteristics of a set of emitters, and integrates the learned programs into a reinforcement learning architecture to build a policy for actions applied to the digital waveform before transmission. This allows the system to learn how to modify waveforms that leverage and emphasize inherent differences within RF front-ends to enhance their distinct characteristics while maintaining robust communications. In this ongoing research, we demonstrate our system in a small population, and provide a road map to expand it to larger populations that are expected in today’s interconnected spaces.
- Published
- 2020
- Full Text
- View/download PDF
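The abstract of result 9 couples Bayesian program learning with reinforcement learning to pick waveform modifications that exaggerate a transmitter's RF fingerprint. As a rough illustration of the reinforcement-learning piece only, the sketch below has a single-state (bandit-style) Q-learner choose among a handful of invented waveform perturbations, rewarded by a stubbed classifier margin. The action set, `apply_action`, and `classifier_margin` are assumptions for illustration, not the authors' interfaces or reward function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete actions: small perturbations applied to the baseband
# waveform before transmission (gain tweak, phase offset, IQ imbalance nudge).
ACTIONS = ["gain+1%", "gain-1%", "phase+2deg", "phase-2deg", "iq_skew", "none"]

def apply_action(waveform, action):
    """Stub: return the modified complex baseband waveform (assumption)."""
    if action == "gain+1%":
        return waveform * 1.01
    if action == "gain-1%":
        return waveform * 0.99
    if action == "phase+2deg":
        return waveform * np.exp(1j * np.deg2rad(2))
    if action == "phase-2deg":
        return waveform * np.exp(-1j * np.deg2rad(2))
    if action == "iq_skew":
        return waveform.real * 1.01 + 1j * waveform.imag * 0.99
    return waveform

def classifier_margin(waveform, emitter_id):
    """Toy stand-in for a fingerprint classifier's confidence margin: here it
    simply rewards waveforms whose I/Q imbalance is exaggerated, plus noise."""
    imbalance = abs(np.std(waveform.real) - np.std(waveform.imag))
    return imbalance + 0.005 * rng.normal()

# Single-state tabular Q-learning over the action set (effectively a bandit).
q = np.zeros(len(ACTIONS))
alpha, epsilon = 0.1, 0.2

waveform = np.exp(1j * 2 * np.pi * 0.05 * np.arange(1024))   # toy carrier
for step in range(500):
    a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(q))
    reward = classifier_margin(apply_action(waveform, ACTIONS[a]), emitter_id=0)
    q[a] += alpha * (reward - q[a])            # incremental value update

print("preferred modification:", ACTIONS[int(np.argmax(q))])  # typically 'iq_skew' here
```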
10. Bayesian Program Learning for Modeling and Classification of RF Emitters
- Author
- Neil A. Bomberger, T. Scott Brandes, Denis Garagic, Scott Kuzdeba, and Andrew Radlbeck
- Subjects
- Computer science, Bayesian probability, Population, SIGNAL (programming language), Transmitter, Probabilistic logic, Machine learning, Transmission (telecommunications), Path (graph theory), Radio frequency, Artificial intelligence
- Abstract
In this work, we demonstrate an initial application of Bayesian program learning (BPL) to learn models for individual radio frequency (RF) transmitters based on a single training signal for each transmitter. Once learned, these models are used to classify individual RF transmitters based on one signal observation. BPL improves upon other machine learning techniques by learning and classifying effectively from small amounts of training data. BPL programs represent concepts as probabilistic generative models expressed as structured procedures in an abstract description language. These models explicitly account for both concept-specific and context-dependent mechanisms, allowing them to perform well under dynamic environmental conditions. In this ongoing research, we demonstrate our system using signals from a small population of software-defined radios (SDRs) with known signal encodings in a laboratory environment, and provide a path forward for expanding it to larger populations, more signal types, and challenging transmission environments.
- Published
- 2020
- Full Text
- View/download PDF
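Result 10 classifies an emitter from a single observation using per-emitter generative models learned from one training signal each. Real BPL expresses concepts as structured probabilistic programs; the sketch below substitutes a much simpler stand-in: a Gaussian over a few hand-picked signal features with an assumed fixed covariance, classified by maximum log-likelihood. The feature choices and toy captures are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def features(signal):
    """Toy RF features of a complex baseband capture (illustrative only):
    mean amplitude, amplitude variance, mean instantaneous-frequency estimate."""
    amp = np.abs(signal)
    inst_freq = np.diff(np.unwrap(np.angle(signal)))
    return np.array([amp.mean(), amp.var(), inst_freq.mean()])

class OneShotEmitterModel:
    """One training capture per emitter; a fixed, assumed feature covariance
    stands in for the richer concept/context structure a BPL model would learn."""
    def __init__(self, prior_cov=np.diag([1e-3, 1e-3, 1e-4])):
        self.means, self.prior_cov = {}, prior_cov

    def fit_one(self, emitter_id, signal):
        self.means[emitter_id] = features(signal)

    def classify(self, signal):
        x = features(signal)
        scores = {eid: multivariate_normal.logpdf(x, mean=mu, cov=self.prior_cov)
                  for eid, mu in self.means.items()}
        return max(scores, key=scores.get)

# Toy usage: two "emitters" distinguished only by a small gain difference.
rng = np.random.default_rng(1)
t = np.arange(4096)
def capture(gain):
    return gain * np.exp(1j * 2 * np.pi * 0.01 * t) + 0.05 * (
        rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

model = OneShotEmitterModel()
model.fit_one("emitter_A", capture(1.00))
model.fit_one("emitter_B", capture(1.05))
print(model.classify(capture(1.05)))   # expected: emitter_B
```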
11. Automated activity pattern learning and monitoring provide decision support to supervisors of busy environments
- Author
- Neil A. Bomberger, Lauren H. Stolzar, Michael Seibert, Majid Zandipour, Bradley J. Rhodes, Allen M. Waxman, James R. Dankert, and Denis Garagic
- Subjects
- Engineering, Decision support system, Situation awareness, Artificial neural network, Machine learning, Shift operator, Variety (cybernetics), Human-Computer Interaction, Operator (computer programming), Artificial Intelligence, Anomaly detection, Computer Vision and Pattern Recognition, Set (psychology), Software
- Abstract
Neurobiologically inspired algorithms for exploiting track data to learn normal patterns of motion behavior, detect deviations from normalcy, and predict future behavior are presented. These capabilities contribute to higher-level fusion situational awareness and assessment objectives. They also provide essential elements for automated scene understanding to shift operator focus from sensor monitoring and activity detection to behavior assessment and response decision-making. Our learning algorithms construct models of normal activity patterns at a variety of conceptual, spatial, and temporal levels to reduce a massive amount of track data to a rich set of information regarding the current status of active entities within an operator's field of regard. Continuous incremental learning enables the models of normal behavior to adapt well to evolving situations while maintaining high levels of performance. Deviations from normalcy result in notification reports that can be published directly to operator displays. Deviation tolerance levels are user settable during system operation to tune alerting sensitivity. Operator responses to anomaly alerts can be fed back into the algorithms to further enhance and refine learned models. These algorithms have been successfully demonstrated to learn vessel behaviors across the maritime domain and to learn vehicle and dismount behavior in land-based settings.
- Published
- 2009
- Full Text
- View/download PDF
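The abstract of result 11 describes incrementally learned normalcy models with a user-settable deviation tolerance for alerting. A minimal sketch of that idea, assuming a simple grid of running speed statistics rather than the paper's neural models: each track point updates its cell's statistics, and an alert fires when a point deviates from the cell mean by more than the chosen number of standard deviations. The cell size, the 10-observation warm-up, and the speed-only feature are assumptions.

```python
import math
from collections import defaultdict

class CellStats:
    """Running mean/variance (Welford) of speed observed in one spatial cell."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    @property
    def std(self):
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else float("inf")

class NormalcyMonitor:
    """Incremental normalcy model over a lat/lon grid with a user-settable
    deviation tolerance (in standard deviations). Illustrative only."""
    def __init__(self, cell_deg=0.01, tolerance=3.0):
        self.cells = defaultdict(CellStats)
        self.cell_deg, self.tolerance = cell_deg, tolerance

    def _key(self, lat, lon):
        return (round(lat / self.cell_deg), round(lon / self.cell_deg))

    def observe(self, lat, lon, speed):
        cell = self.cells[self._key(lat, lon)]
        alert = cell.n > 10 and abs(speed - cell.mean) > self.tolerance * cell.std
        cell.update(speed)        # learn continuously, even from deviant points
        return alert

monitor = NormalcyMonitor(tolerance=3.0)
for _ in range(200):
    monitor.observe(36.95, -76.33, speed=8.0)      # routine traffic at ~8 kn
print(monitor.observe(36.95, -76.33, speed=25.0))  # deviant: True
```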
12. Predicting failure of secondary batteries
- Author
- Mirna Urquidi-Macdonald and Neil A. Bomberger
- Subjects
- Battery (electricity), Engineering, Artificial neural network, Renewable Energy, Sustainability and the Environment, Energy Engineering and Power Technology, Space (commercial competition), Determinism, Reliability engineering, Task (project management), Quality (business), State (computer science), Electrical and Electronic Engineering, Physical and Theoretical Chemistry, Simulation
- Abstract
The ability to predict the failure of secondary batteries is important. However, when determinism is not used to make the predictions because of the complexity of the problem, difficult questions arise. Data analysts must always determine how much information is available in a given database and how much information can be squeezed from the database. A philosophical question is frequently posed: How long into the future can we predict based on past information? For the prediction of battery cycling life this question can be formulated as: How long must a battery (or cell) be tested to predict when it will fail? The answer to this type of question depends on how many variables define the problem, how much we know of the problem, how effective we are at squeezing information from the database, and how much knowledge and reliable data we have available to build a predictive model. The quality of the model will be measured by its ability to predict the future behavior of the system. The prediction of cycling life of batteries has until now been an impossible task. We are convinced that this is in part because the problem is very difficult, and in part because the information available in databases has not been manipulated enough to produce a reliable predictive model. Models based on similar techniques are expected to have similar predictive capabilities. The methodology used in this project is now being used on an extensive database (thousands of hours) for NiCd batteries (NASA Goddard Space Flight Center data), and on a complete database (many variables are being controlled and measured) for Li/polymer batteries generated at the Battery Laboratory at Penn State.
- Published
- 1998
- Full Text
- View/download PDF
13. Adaptive Mixture-Based Neural Network Approach for Higher-Level Fusion and Automated Behavior Monitoring
- Author
- Neil A. Bomberger, Denis Garagic, Bradley J. Rhodes, and Majid Zandipour
- Subjects
- Artificial neural network, Covariance matrix, Computer science, Approximation algorithm, Machine learning, Stochastic approximation, Data modeling, Adaptive system, Expectation–maximization algorithm, Metric (mathematics), Incremental learning, Anomaly detection, Data mining, Artificial intelligence
- Abstract
A novel adaptive mixture-based neural network is presented for exploiting track data to learn normal patterns of motion behavior and detect deviations from normalcy. We have extended our prior approach by introducing multidimensional probability density components to represent class density using an adaptive mixture of such components. The number of components in the adaptive mixture algorithm, as well as the values of the parameters of the density components, is estimated from the data. The network utilizes a recursive version of the Expectation Maximization (EM) algorithm to minimize the Kullback-Leibler information metric by means of stochastic approximation combined with a rule for creation of new components. Learning occurs incrementally in order to allow the system to take advantage of increasing amounts of data without having to take the system offline periodically to update models. Continuous incremental learning enables the models of normal behavior to adapt well to evolving situations while maintaining high levels of performance. In addition, the adaptive mixtures neural network classifies streaming track data as normal or deviant. These capabilities contribute to higher-level fusion situational awareness and assessment objectives by enabling a shift of operator focus from sensor monitoring and activity detection to assessment and response. Our overall motion pattern learning approach learns behavioral patterns at a variety of conceptual, spatial, and temporal levels to reduce massive amounts of track data to a rich set of information regarding operator field of regard that supports rapid decision-making and timely response initiation.
- Published
- 2009
- Full Text
- View/download PDF
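Result 13 estimates both the number of mixture components and their parameters online, using recursive EM-style responsibilities, a stochastic-approximation update, and a rule for creating new components. The sketch below is a simplified 2-D version of that pattern, not the paper's exact Kullback-Leibler formulation: responsibilities weight a small learning-rate update, a new component is spawned when no existing one explains a point, and low mixture log-density marks a track point as deviant. The thresholds and learning rate are assumed values.

```python
import numpy as np
from scipy.stats import multivariate_normal

class AdaptiveMixture:
    """Online mixture of Gaussians: recursive EM-style updates plus a creation
    rule for new components (an illustrative stand-in for the paper's
    stochastic-approximation formulation)."""
    def __init__(self, dim=2, create_thresh=-12.0, lr=0.05):
        self.means, self.covs, self.weights = [], [], []
        self.dim, self.create_thresh, self.lr = dim, create_thresh, lr

    def _component_logpdfs(self, x):
        return np.array([multivariate_normal.logpdf(x, m, c)
                         for m, c in zip(self.means, self.covs)])

    def logdensity(self, x):
        if not self.means:
            return -np.inf
        lp = self._component_logpdfs(x) + np.log(self.weights)
        return float(np.logaddexp.reduce(lp))

    def update(self, x):
        x = np.asarray(x, float)
        if not self.means or self._component_logpdfs(x).max() < self.create_thresh:
            # creation rule: no existing component explains this point well enough
            self.means.append(x.copy())
            self.covs.append(np.eye(self.dim) * 0.5)
            self.weights.append(1.0)
        else:
            lp = self._component_logpdfs(x) + np.log(self.weights)
            r = np.exp(lp - np.logaddexp.reduce(lp))       # responsibilities
            for k, rk in enumerate(r):
                step = self.lr * rk
                d = x - self.means[k]
                self.means[k] += step * d
                self.covs[k] += step * (np.outer(d, d) - self.covs[k])
                self.weights[k] += self.lr * (rk - self.weights[k])
        s = sum(self.weights)
        self.weights = [w / s for w in self.weights]

mix = AdaptiveMixture()
rng = np.random.default_rng(2)
for p in rng.normal([0.0, 0.0], 0.3, size=(500, 2)):       # a normal traffic lane
    mix.update(p)
print(mix.logdensity([0.1, -0.2]) > -6, mix.logdensity([4.0, 4.0]) > -6)   # True False
```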
14. Adaptive spatial scale for cognitively-inspired motion pattern learning & analysis algorithms for higher-level fusion and automated scene understanding
- Author
- Lauren H. Stolzar, James R. Dankert, Denis Garagic, Neil A. Bomberger, G.D. Castanon, Bradley J. Rhodes, Majid Zandipour, and Michael Seibert
- Subjects
- Computer science, Machine learning, Object detection, Data modeling, Motion estimation, Incremental learning, Anomaly detection, Artificial intelligence, Set (psychology), Hidden Markov model, Algorithm
- Abstract
To date, our neurobiologically inspired algorithms for exploiting track data to learn normal patterns of motion behavior, detect deviations from normalcy, and predict future behavior have operated at fixed spatial scales. Although these models continuously adapted to incoming track data through incremental learning in order to adjust to evolving situations, the fundamental spatial scale of the learned models did not change over time. This constraint necessitates a trade-off between model maturation rate and deviation detection or behavior prediction performance. This paper describes updates to our approach that enable data-driven model scale adaptation. Anomaly detection is based on coarse resolution models during early learning stages and progressively switches to finer resolution models as sufficient data are received. This approach increases speed of model maturation with small amounts of data, while improving model fidelity and anomaly detection sensitivity as increasing amounts of data are received. These capabilities contribute to higher-level fusion situational awareness and assessment objectives. They also provide essential elements for automated scene understanding to shift operator focus from sensor monitoring and activity detection to assessment and response. Our learning algorithms learn behavioral patterns at a variety of conceptual, spatial, and temporal levels to reduce a massive amount of track data to a rich set of information regarding their field of regard that supports decision-making and timely response initiation.
- Published
- 2008
- Full Text
- View/download PDF
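Result 14 switches from coarse- to fine-resolution models as data accumulate. A minimal sketch of that data-driven scale selection, assuming simple per-cell observation counts and an arbitrary sufficiency threshold: counts are kept at every resolution, and evaluation uses the finest resolution whose local cell is mature.

```python
from collections import defaultdict

# Grid resolutions from coarse to fine, in degrees (illustrative values).
SCALES = [0.5, 0.1, 0.02]
MIN_COUNT = 50            # assumed data-sufficiency threshold per cell

counts = {s: defaultdict(int) for s in SCALES}

def cell(lat, lon, scale):
    return (int(lat // scale), int(lon // scale))

def observe(lat, lon):
    """Update counts at every scale for each incoming track point."""
    for s in SCALES:
        counts[s][cell(lat, lon, s)] += 1

def evaluation_scale(lat, lon):
    """Finest scale whose local cell is mature enough; fall back to coarser
    models early in learning, as the abstract describes."""
    chosen = SCALES[0]
    for s in SCALES:
        if counts[s][cell(lat, lon, s)] >= MIN_COUNT:
            chosen = s
    return chosen

for _ in range(60):
    observe(36.95, -76.33)
print(evaluation_scale(36.95, -76.33))   # 0.02: enough data at the finest scale
print(evaluation_scale(10.00, 10.00))    # 0.5: no data yet, use the coarsest model
```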
15. Cognitively-Inspired Motion Pattern Learning & Analysis Algorithms for Higher-Level Fusion and Automated Scene Understanding
- Author
- Allen M. Waxman, Neil A. Bomberger, Bradley J. Rhodes, Majid Zandipour, and Michael Seibert
- Subjects
- Motion analysis, Situation awareness, Computer science, Behavioral pattern, Motion detection, Machine learning, Motion (physics), Component (UML), Incremental learning, Artificial intelligence, Set (psychology), Algorithm
- Abstract
We have developed a suite of neurobiologically inspired algorithms for exploiting track data to learn normal patterns of motion behavior, detect deviations from normalcy, and predict future behavior. These capabilities contribute to higher-level fusion situational awareness and assessment objectives. They also provide essential elements for automated scene understanding to shift operator focus from sensor monitoring and activity detection to assessment and response. Our learning algorithms learn behavioral patterns at a variety of conceptual, spatial, and temporal levels to reduce a massive amount of track data to a rich set of information regarding their field of regard that supports decision-making and timely response initiation. Continuous incremental learning enables the models of normal behavior to adapt well to evolving situations while maintaining high levels of performance. Deviations from normalcy result in reports being published directly to operator displays or to other reasoning components within a larger system. Deviation tolerance levels are user settable during system operation to tune alerting sensitivity. Operator (or other system component) responses to anomaly alerts can be fed back into the algorithms to further enhance and refine learned models. These algorithms have been successfully demonstrated to learn vessel behaviors across the maritime domain and to learn vehicle and dismount behavior in land-based settings.
- Published
- 2007
- Full Text
- View/download PDF
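Result 15 notes that operator (or other system component) responses to anomaly alerts can be fed back to refine the learned models. The sketch below reduces that loop to its simplest possible form: a per-region alerting tolerance nudged up by false-alarm reports and down by confirmed anomalies. The papers feed feedback into the learned behavior models themselves, so treat this as an assumption-laden illustration of the control loop only.

```python
from collections import defaultdict

class FeedbackTuner:
    """Per-region alerting tolerance (in standard deviations) adjusted by
    operator feedback: false alarms raise the tolerance, confirmed anomalies
    lower it. Illustrative only."""
    def __init__(self, base_tolerance=3.0, step=0.25, min_tol=1.5, max_tol=6.0):
        self.tol = defaultdict(lambda: base_tolerance)
        self.step, self.min_tol, self.max_tol = step, min_tol, max_tol

    def feedback(self, region, confirmed):
        t = self.tol[region]
        t += -self.step if confirmed else self.step
        self.tol[region] = min(self.max_tol, max(self.min_tol, t))

    def should_alert(self, region, deviation_sigma):
        return deviation_sigma > self.tol[region]

tuner = FeedbackTuner()
print(tuner.should_alert("harbor_entrance", 3.5))   # True at the default tolerance
for _ in range(4):                                  # operator flags false alarms
    tuner.feedback("harbor_entrance", confirmed=False)
print(tuner.should_alert("harbor_entrance", 3.5))   # False after desensitizing
```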
16. Probabilistic associative learning of vessel motion patterns at multiple spatial scales for maritime situation awareness
- Author
- Neil A. Bomberger, Bradley J. Rhodes, and Majid Zandipour
- Subjects
- Training set, Situation awareness, Artificial neural network, Computer science, Probabilistic logic, Conditional probability, Machine learning, Motion (physics), Associative learning, Domain (software engineering), Incremental learning, Artificial intelligence, Data mining
- Abstract
An improved neurobiologically inspired algorithm for situation awareness in the maritime domain is presented, which takes real-time tracking information and learns motion pattern models on-the-fly, enabling the models to adapt well to evolving situations while maintaining high levels of performance. The constantly refined models, resulting from concurrent incremental learning, are used to evaluate the behavior patterns of vessels based on their present motion states. Improvement to the associative learning law for learning temporal associations between vessel events enables conditional probabilities between events to be learned incrementally and locally. This allows weights in the learned model to be interpreted more readily, enabling better location prediction performance. Improvement in prediction performance is achieved by using multiple spatial scales to represent position, enabling the most relevant spatial scale to be used for local vessel behavior. Features and performance of these updates to the learning system using recorded data are described.
- Published
- 2007
- Full Text
- View/download PDF
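Result 16 learns conditional probabilities between vessel events incrementally and locally, which keeps the learned weights interpretable and supports location prediction. A count-based sketch of that idea at one spatial scale (the paper uses several), with grid-cell transitions standing in for the paper's event representation:

```python
from collections import defaultdict

class TransitionModel:
    """Incrementally learned P(next cell | current cell) from vessel tracks.
    Each weight is a local count, so the normalized value is directly a
    conditional probability, which is what makes it easy to interpret."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, current_cell, next_cell):
        self.counts[current_cell][next_cell] += 1

    def predict(self, current_cell):
        nxt = self.counts[current_cell]
        total = sum(nxt.values())
        if total == 0:
            return None, 0.0
        best = max(nxt, key=nxt.get)
        return best, nxt[best] / total

model = TransitionModel()
# A shipping lane that mostly continues east, occasionally turns north.
for _ in range(90):
    model.observe((10, 4), (11, 4))
for _ in range(10):
    model.observe((10, 4), (10, 5))
print(model.predict((10, 4)))   # ((11, 4), 0.9)
```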
17. SeeCoast: persistent surveillance and automated scene understanding for ports and coastal areas
- Author
- William Kreamer, Adam C. L’Italien, Bradley J. Rhodes, Chris Stauffer, Neil A. Bomberger, Allen M. Waxman, Linda Kirschner, Michael Seibert, Lauren H. Stolzar, Wendy Mungovan, and Todd M. Freyman
- Subjects
- Automatic control, Machine vision, Motion detection, Expert system, Visualization, Geography, Video tracking, Anomaly detection, Computer vision, Artificial intelligence, Port security
- Abstract
SeeCoast is a prototype US Coast Guard port and coastal area surveillance system that aims to reduce operator workload while maintaining optimal domain awareness by shifting their focus from having to detect events to being able to analyze and act upon the knowledge derived from automatically detected anomalous activities. The automated scene understanding capability provided by the baseline SeeCoast system (as currently installed at the Joint Harbor Operations Center at Hampton Roads, VA) results from the integration of several components. Machine vision technology processes the real-time video streams provided by USCG cameras to generate vessel track and classification (based on vessel length) information. A multi-INT fusion component generates a single, coherent track picture by combining information available from the video processor with that from surface surveillance radars and AIS reports. Based on this track picture, vessel activity is analyzed by SeeCoast to detect user-defined unsafe, illegal, and threatening vessel activities using a rule-based pattern recognizer and to detect anomalous vessel activities on the basis of automatically learned behavior normalcy models. Operators can optionally guide the learning system in the form of examples and counter-examples of activities of interest, and refine the performance of the learning system by confirming alerts or indicating examples of false alarms. The fused track picture also provides a basis for automated control and tasking of cameras to detect vessels in motion. Real-time visualization combining the products of all SeeCoast components in a common operating picture is provided by a thin web-based client.
- Published
- 2007
- Full Text
- View/download PDF
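Result 17 combines a rule-based recognizer for analyst-defined activities with learned normalcy models for anomaly alerts. The toy sketch below shows how the two alert streams might be merged for an operator display; the rules, thresholds, and track fields are invented and are not SeeCoast's actual rule set or data model.

```python
from dataclasses import dataclass

@dataclass
class Track:
    vessel_id: str
    lat: float
    lon: float
    speed_kn: float
    in_security_zone: bool

def rule_alerts(track):
    """Hand-written rules for analyst-defined unsafe/illegal activity (toy)."""
    alerts = []
    if track.in_security_zone:
        alerts.append("entered security zone")
    if track.speed_kn > 30.0:
        alerts.append("excessive speed")
    return alerts

def normalcy_alert(track, normalcy_score, threshold=0.05):
    """Learned-model side: a low probability under the normalcy model
    (computed elsewhere) yields an anomaly alert."""
    return ["anomalous behavior"] if normalcy_score < threshold else []

track = Track("MV Example", 36.95, -76.33, speed_kn=34.0, in_security_zone=False)
print(rule_alerts(track) + normalcy_alert(track, normalcy_score=0.02))
# ['excessive speed', 'anomalous behavior']
```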
18. SeeCoast: Automated Port Scene Understanding Facilitated by Normalcy Learning
- Author
- Allen M. Waxman, Neil A. Bomberger, Michael Seibert, and Bradley J. Rhodes
- Subjects
- Focus (computing), Engineering, Radar tracker, Machine vision, Video tracking, Computer vision, Anomaly detection, Artificial intelligence, Video processing, Object detection, Visualization
- Abstract
SeeCoast is a prototype US Coast Guard (USCG) port surveillance system that provides automated scene understanding support for watchstanders. A major SeeCoast objective is to reduce operator workload while maintaining optimal domain awareness by shifting operators' focus from having to detect events to being able to analyze and act upon the knowledge derived from automatically detected anomalous activities. Analyst-defined vessel activities are recognized from pre-scripted patterns and anomalous vessel activities are detected using machine learning techniques. The baseline SeeCoast system interfaces to the USCG Hawkeye prototype and uses (a) machine vision technology to produce target tracks from streaming video data; (b) multi-INT fusion technology to correlate radar, Automatic Identification System (AIS), and/or video track data into a single coherent track picture; (c) vessel activity analysis and learning technology to provide alerts for events of interest according to user-defined criteria; and (d) visualization of those alerts embedded within the common operating picture. The video processing component tasks and controls Hawkeye cameras to detect vessels in motion and generates vessel track and classification (based on vessel length) information. SeeCoast detects unsafe, illegal, and threatening vessel activities using a rule-based pattern recognizer and detects anomalous vessel activities on the basis of automatically learned behavior normalcy models. Operators can optionally guide the learning system in the form of examples and counter-examples of activities of interest, and refine the performance of the learning system by confirming alerts or indicating examples of false alarms. This paper focuses on the learning-based activity analysis capabilities of SeeCoast.
- Published
- 2006
- Full Text
- View/download PDF
19. SeeCoast port surveillance
- Author
- Neil A. Bomberger, Robert Tillson, Linda Kirschner, Jason J. Sroka, Michael Seibert, Wendy Kogel, Michael Bosse, Bradley J. Rhodes, Patricia O Beane, William Kreamer, Edmond Chalom, and Chris Stauffer
- Subjects
- Feature extraction, Real-time computing, Sensor fusion, Computer security, Port (computer networking), Geography, Pattern recognition (psychology), Anomaly detection, Radar, Port security, Transponder
- Abstract
SeeCoast extends the US Coast Guard Port Security and Monitoring system by adding capabilities to detect, classify, and track vessels using electro-optic and infrared cameras, and also uses learned normalcy models of vessel activities in order to generate alert cues for the watch-standers when anomalous behaviors occur. SeeCoast fuses the video data with radar detections and Automatic Identification System (AIS) transponder data in order to generate composite fused tracks for vessels approaching the port, as well as for vessels already in the port. Then, SeeCoast applies rule-based and learning-based pattern recognition algorithms to alert the watch-standers to unsafe, illegal, threatening, and other anomalous vessel activities. The prototype SeeCoast system has been deployed to Coast Guard sites in Virginia. This paper provides an overview of the system and outlines the lessons learned to date in applying data fusion and automated pattern recognition technology to the port security domain.
- Published
- 2006
- Full Text
- View/download PDF
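Result 19 fuses video tracks with radar and AIS reports into composite tracks. The sketch below is a crude, greedy nearest-neighbour association inside a fixed distance gate, offered only to make the idea concrete; SeeCoast's multi-INT fusion is more sophisticated, and the identifiers and gate size here are assumptions.

```python
import math

def dist_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def associate(video_tracks, other_tracks, gate_m=200.0):
    """Greedy nearest-neighbour association of video tracks to radar/AIS
    tracks within a distance gate; unmatched tracks stay single-source."""
    fused, used = [], set()
    for vid, vpos in video_tracks.items():
        best, best_d = None, gate_m
        for oid, opos in other_tracks.items():
            d = dist_m(vpos, opos)
            if oid not in used and d < best_d:
                best, best_d = oid, d
        if best is not None:
            used.add(best)
        fused.append((vid, best))
    return fused

video = {"cam3_t17": (36.9501, -76.3302)}
radar_ais = {"ais_366999001": (36.9503, -76.3300), "radar_0042": (36.9700, -76.3100)}
print(associate(video, radar_ais))   # [('cam3_t17', 'ais_366999001')]
```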
20. Maritime Situation Monitoring and Awareness Using Learning Mechanisms
- Author
- Michael Seibert, Neil A. Bomberger, Bradley J. Rhodes, and Allen M. Waxman
- Subjects
- Situation awareness, Computer science, Behavioral pattern, Machine learning, Motion (physics), Variety (cybernetics), Operator (computer programming), Pattern recognition (psychology), Unsupervised learning, Artificial intelligence
- Abstract
This paper addresses maritime situation awareness by using cognitively inspired algorithms to learn behavioral patterns at a variety of conceptual, spatial, and temporal levels. The algorithms form the basis for a system that takes real-time tracking information and uses continuous on-the-fly learning that enables concurrent recognition of patterns of current motion states of single vessels in their local vicinity. Learned patterns include routine behaviors as well as illegal, unsafe, threatening, and anomalous behaviors. Continuous learning enables the models to adapt well to evolving situations while maintaining high levels of performance. The learning combines two components: an unsupervised clustering algorithm, and a supervised mapping and labeling algorithm. Operator input can guide system learning. Event-level features of our learning system using simulated and recorded data are described.
- Published
- 2006
- Full Text
- View/download PDF
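Result 20 describes learning that combines an unsupervised clustering component with a supervised mapping-and-labeling component guided by operator input. A minimal sketch under that split, using online nearest-centroid clustering of motion-state features and a dictionary from cluster index to operator label as stand-ins for the paper's neural components:

```python
import numpy as np

class ClusterLabeler:
    """Online nearest-centroid clustering of motion-state features, plus a
    supervised mapping from cluster index to an operator-supplied label."""
    def __init__(self, new_cluster_dist=2.0, lr=0.1):
        self.centroids, self.labels = [], {}
        self.new_cluster_dist, self.lr = new_cluster_dist, lr

    def cluster(self, x, learn=True):
        x = np.asarray(x, float)
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(d))
            if d[k] < self.new_cluster_dist:
                if learn:
                    self.centroids[k] += self.lr * (x - self.centroids[k])
                return k
        if not learn:
            return -1                   # unfamiliar motion state, no cluster
        self.centroids.append(x.copy())
        return len(self.centroids) - 1

    def label_cluster(self, k, label):  # supervised step (operator input)
        self.labels[k] = label

    def classify(self, x):
        return self.labels.get(self.cluster(x, learn=False), "unlabeled")

cl = ClusterLabeler()
# Features: (speed in knots, heading in radians); illustrative choice.
for s in np.random.default_rng(3).normal([8.0, 1.5], 0.2, size=(50, 2)):
    cl.cluster(s)
cl.label_cluster(cl.cluster([8.0, 1.5]), "routine transit")
print(cl.classify([7.9, 1.6]), cl.classify([0.5, 0.0]))   # routine transit unlabeled
```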
21. Multisensor image fusion and mining: learning targets across extended operating conditions
- Author
- Richard T. Ivey, David A. Fay, Allen M. Waxman, Neil A. Bomberger, and Marianne Chiarella
- Subjects
- Image fusion, Artificial neural network, Color image, Multispectral image, Image processing, Sensor fusion, Panchromatic film, Geography, Night vision, Computer vision, Artificial intelligence, Remote sensing
- Abstract
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light Visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused Visible/MWIR/LWIR imagery.
- Published
- 2004
- Full Text
- View/download PDF
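Result 21 builds image fusion on neural models of color-vision processing (center-surround and opponent-color interactions). The sketch below is a generic illustration of that style of processing, not the authors' shunting-network architecture: two co-registered bands are normalized, given difference-of-Gaussians contrast, and combined into a shared-brightness channel and an opponent channel that highlights where the bands disagree.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(band, sigma_c=1.0, sigma_s=4.0):
    """Difference-of-Gaussians contrast, a crude stand-in for the
    center-surround stages in the fusion architecture described above."""
    return gaussian_filter(band, sigma_c) - gaussian_filter(band, sigma_s)

def fuse(visible, thermal):
    """Return (brightness, opponent) channels from two co-registered bands."""
    v = (visible - visible.mean()) / (visible.std() + 1e-6)
    t = (thermal - thermal.mean()) / (thermal.std() + 1e-6)
    vc, tc = center_surround(v), center_surround(t)
    brightness = 0.5 * (v + t)        # structure shared by both bands
    opponent = vc - tc                # where the two bands disagree
    return brightness, opponent

rng = np.random.default_rng(4)
visible = rng.random((64, 64))
thermal = rng.random((64, 64))
thermal[20:30, 20:30] += 2.0          # a warm target invisible in the visible band
brightness, opponent = fuse(visible, thermal)
print(opponent[20:30, 20:30].mean() < opponent.mean())   # target stands out: True
```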
22. Spiking neural networks for higher-level information fusion
- Author
- Felipe M. Pait, Neil A. Bomberger, and Allen M. Waxman
- Subjects
- Spiking neural network, Artificial neural network, Neural ensemble, Knowledge representation and reasoning, Computer science, Artificial intelligence, Semantic network, Random neural network, Synchronization, Associative learning
- Abstract
This paper presents a novel approach to higher-level (2+) information fusion and knowledge representation using semantic networks composed of coupled spiking neuron nodes. Networks of spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in feature binding. The approach in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated with a simulated scenario involving the tracking of suspected criminal vehicles between meeting places in an urban environment.
- Published
- 2004
- Full Text
- View/download PDF
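Result 22 represents hypotheses as synchronized sub-assemblies in a semantic network of coupled spiking neurons. The sketch below illustrates only the synchrony idea, using Kuramoto-style phase oscillators on a small hand-coded graph rather than the paper's spiking neuron models: nodes within a connected cluster phase-lock into one assembly, while the other cluster, driven here at a slightly different rate, does not lock to it.

```python
import numpy as np

# Hand-coded semantic graph: two clusters of related concepts (two hypotheses).
NODES = ["vehicle_A", "meeting_site_1", "route_north",
         "vehicle_B", "meeting_site_2", "route_south"]
EDGES = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]   # no edges across clusters

rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, len(NODES))        # oscillator phases
omega = np.array([1.0, 1.0, 1.0, 1.2, 1.2, 1.2])     # cluster 2 driven slightly faster
K, dt = 2.0, 0.05

def step(theta):
    """One Euler step of Kuramoto phase dynamics over the graph edges."""
    dtheta = omega.copy()
    for i, j in EDGES:
        dtheta[i] += K * np.sin(theta[j] - theta[i])
        dtheta[j] += K * np.sin(theta[i] - theta[j])
    return (theta + dt * dtheta) % (2 * np.pi)

for _ in range(2000):
    theta = step(theta)

def phase_gap(i, j):
    return abs(np.angle(np.exp(1j * (theta[i] - theta[j]))))

# Nodes within each hypothesis lock to a common phase; the two hypotheses do not.
print("within hypothesis 1:", round(phase_gap(0, 1), 3), round(phase_gap(1, 2), 3))
print("within hypothesis 2:", round(phase_gap(3, 4), 3), round(phase_gap(4, 5), 3))
print("across hypotheses:  ", round(phase_gap(0, 3), 3))
```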
23. Multisensor image fusion and mining in a COTS exploitation environment
- Author
- Richard T. Ivey, Neil A. Bomberger, David A. Fay, and Allen M. Waxman
- Subjects
- Image fusion, Artificial neural network, Multispectral image, Sensor fusion, Panchromatic film, Geography, Lidar, Night vision, Pattern recognition (psychology), Computer vision, Artificial intelligence, Remote sensing
- Abstract
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine . In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused visible/MWIR/LWIR imagery.
- Published
- 2003
- Full Text
- View/download PDF
24. Image fusion & mining tools for a COTS environment
- Author
- Allen M. Waxman, Neil A. Bomberger, D.A. Fay, and R.T. Ivey
- Subjects
- Image fusion, Color vision, Computer science, Multispectral image, Sensor fusion, Panchromatic film, Lidar, Night vision, Pattern recognition (psychology), Computer vision, Artificial intelligence, Image sensor
- Abstract
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we will summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we will demonstrate how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This will be illustrated for the detection of small boats in coastal waters using fused visible/MWIR/LWIR imagery.
- Published
- 2003
- Full Text
- View/download PDF
25. Anomaly Detection & Behavior Prediction: Higher-Level Fusion Based on Computational Neuroscientific Principles
- Author
- Bradley J. Rhodes, Neil A. Bomberger, Majid Zandipour, Lauren H. Stolzar, Denis Garagic, James R. Dankert, and Michael Seibert
- Published
- 2009
- Full Text
- View/download PDF
26. The structure of cortical hypercolumns: Receptive field scatter may enhance rather than degrade boundary contour representation in V1
- Author
- Eric L. Schwartz and Neil A. Bomberger
- Subjects
- Communication, Computational neuroscience, Offset (computer science), Artificial neural network, Computer science, Sensory Systems, Pinwheel, Ophthalmology, Receptive field, Optical recording, Gravitational singularity, Slowness, Algorithm
- Abstract
The spatial relationship of orientation mapping, ocularity, and receptive field (RF) position provides an operational definition of the term “hypercolumn” in V1. Optical recording suggests that pinwheel centers and blobs are spatially uncorrelated. However, error analysis indicates a 100–150 micron systematic pinwheel center positional offset. This analysis suggests that pinwheel singularities and cytochrome oxidase blobs in primate V1 may in fact be coterminous. The only model to date that accounts for this detailed spatial relationship of ocularity, orientation mapping, and RF position is the columnar shear model (Wood and Schwartz, Neural Networks, 12:205–210, 1999). Here, we generalize this model to include RF scatter, which is observed to be in the range of one third to one half of the local RF size. This model provides a computational basis to address the following question: How is the existence of RF scatter consistent with accurate edge localization? We show that scatter of about one half the average RF size can provide an accurate representation of region and edge structure in an image based on a simple form of local inhibition between the blob (spatially lowpass) and interblob (spatially band-pass) neurons resulting in a process equivalent to nonlinear diffusion. The advantages afforded by this mechanism for edge preservation and noise suppression are that it avoids the slowness of diffusion (where time is proportional to distance squared) and is fully consistent with a correct understanding of the structure of the cortical hypercolumn. We demonstrate the effectiveness of this algorithm, known in the computer vision literature as the offset filter (Fischl and Schwartz, IEEE PAMI 22:42–48, 1999), by providing results on natural images corrupted with noise. This work emphasizes the importance of an un-normalized, low-pass response to accurate edge-representation—a function usually attributed to the intensity normalized, band-pass response of extra-blob neurons. Presented at unknown. Abstract number 894. Support Contributed By: NIH/NIBIB EB001550 Contact info: Neil A. Bomberger, Computer Vision and Computational Neuroscience Lab, 677 Beacon St., Boston, MA, 02215. URL: http://eslab.bu.edu, Email: nbomberg@cns.bu.edu
- Published
- 2005
- Full Text
- View/download PDF
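Result 26 argues that receptive-field scatter, combined with local inhibition between low-pass blob and band-pass interblob responses, yields edge-preserving smoothing equivalent to the offset filter of Fischl and Schwartz. The sketch below is a loose reading of that filter, under the assumption that each pixel is replaced by a blurred value sampled a short distance away from the nearest edge (down the slope of the gradient-magnitude field); the step size and blur width are illustrative, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def offset_filter(image, sigma=1.5, scatter=2.0):
    """Edge-preserving smoothing sketch: sample a blurred image at a position
    shifted away from nearby edges, i.e. into the interior of each region."""
    smoothed = gaussian_filter(image, sigma)
    gy, gx = np.gradient(smoothed)
    edge = np.hypot(gx, gy)                          # edge-strength field
    ey, ex = np.gradient(gaussian_filter(edge, sigma))
    norm = np.hypot(ex, ey) + 1e-8
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    oy = yy - scatter * ey / norm                    # step of `scatter` pixels
    ox = xx - scatter * ex / norm                    # away from the nearest edge
    return map_coordinates(smoothed, [oy, ox], order=1, mode="nearest")

# Toy image: a step edge corrupted with noise; regions smooth out while the
# step is blurred less than it would be by plain Gaussian filtering.
rng = np.random.default_rng(6)
img = np.zeros((64, 64))
img[:, 32:] = 1.0
noisy = img + 0.2 * rng.normal(size=img.shape)
out = offset_filter(noisy)
print(round(noisy[:, :24].std(), 3), "->", round(out[:, :24].std(), 3))   # noise reduced
```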