33 results for "Sari-Sarraf, Hamed"
Search Results
2. An explainable deep vision system for animal classification and detection in trail-camera images with automatic post-deployment retraining
- Author
-
Moallem, Golnaz, Pathirage, Don D., Reznick, Joel, Gallagher, James, and Sari-Sarraf, Hamed
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
This paper introduces an automated vision system for animal detection in trail-camera images taken from a field under the administration of the Texas Parks and Wildlife Department. As traditional wildlife counting techniques are intrusive and labor-intensive to conduct, trail-camera imaging is a comparatively non-intrusive method for capturing wildlife activity. However, given the large volume of images produced from trail-cameras, manual analysis of the images remains time-consuming and inefficient. We implemented a two-stage deep convolutional neural network pipeline to find animal-containing images in the first stage and then process these images to detect birds in the second stage. The animal classification system classifies animal images with overall 93% sensitivity and 96% specificity. The bird detection system achieves better than 93% sensitivity, 92% specificity, and 68% average Intersection-over-Union rate. The entire pipeline processes an image in less than 0.5 seconds, as opposed to an average of 30 seconds for a human labeler. We also addressed post-deployment issues related to data drift for the animal classification system, as image features vary with seasonal changes. This system utilizes an automatic retraining algorithm to detect data drift and update the system. We introduce a novel technique for detecting drifted images and triggering the retraining procedure. Two statistical experiments are also presented to explain the prediction behavior of the animal classification system. These experiments investigate the cues that steer the system towards a particular decision. Statistical hypothesis testing demonstrates that the presence of an animal in the input image significantly contributes to the system's decisions.
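The abstract does not specify how drift is detected. As a hypothetical illustration only (the scalar feature, the seasonal brightness values, and the 0.2 threshold are all invented for this sketch), a two-sample Kolmogorov-Smirnov statistic on a per-image feature could serve as a retraining trigger:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: largest gap between the
    empirical CDFs of the two samples."""
    a, b = np.sort(a), np.sort(b)
    values = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, values, side="right") / len(a)
    cdf_b = np.searchsorted(b, values, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def drift_detected(reference, incoming, threshold=0.2):
    """Flag drift (i.e., a retraining trigger) when the incoming feature
    distribution departs from the reference set. Threshold is illustrative."""
    return bool(ks_statistic(np.asarray(reference), np.asarray(incoming)) > threshold)

# Invented example: mean image brightness shifts between seasons.
rng = np.random.default_rng(0)
ref = rng.normal(0.6, 0.05, 500)      # brightness of images at deployment time
winter = rng.normal(0.4, 0.05, 500)   # darker winter batch
print(drift_detected(ref, winter))    # True -- large shift triggers retraining
```

This is a generic drift test, not the novel technique the paper introduces; it only shows the shape of a distribution-shift trigger feeding a retraining loop.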
- Published
- 2020
- Full Text
- View/download PDF
3. Deep learning assisted holography microscopy for in-flow enumeration of tumor cells in blood
- Author
-
Gangadhar, Anirudh, primary, Sari-Sarraf, Hamed, additional, and Vanapalli, Siva A., additional
- Published
- 2023
- Full Text
- View/download PDF
4. Detection of live breast cancer cells in bright-field microscopy images containing white blood cells by image analysis and deep learning
- Author
-
Moallem, Golnaz, primary, Pore, Adity A., additional, Gangadhar, Anirudh, additional, Sari-Sarraf, Hamed, additional, and Vanapalli, Siva A., additional
- Published
- 2022
- Full Text
- View/download PDF
5. On the creation of a segmentation library for digitized cervical and lumbar spine radiographs
- Author
-
Gururajan, Arunkumar, Kamalakannan, Sridharan, Sari-Sarraf, Hamed, Shahriar, Muneem, Long, Rodney, and Antani, Sameer
- Published
- 2011
- Full Text
- View/download PDF
6. Double-edge detection of radiographic lumbar vertebrae images using pressurized open DGVF snakes
- Author
-
Kamalakannan, Sridharan, Gururajan, Arunkumar, Sari-Sarraf, Hamed, and Long, Rodney
- Subjects
Vertebrae, Lumbar -- Diagnosis ,Morphometrics (Biology) -- Analysis ,Biological sciences ,Business ,Computers ,Health care industry - Published
- 2010
7. Recognition of cotton contaminants via X-Ray microtomographic image analysis
- Author
-
Pai, Ajay, Sari-Sarraf, Hamed, and Hequet, Eric F.
- Subjects
Radiography -- Image quality ,Radiography -- Research ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Technologies currently used for cotton contaminant assessment suffer from some fundamental limitations. These limitations result in the misassessment of the cotton quality, and have a serious impact on its economic value. Through our research, we have shown that X-ray microtomographic image analysis may be applied with a high degree of success to noninvasive evaluation of cotton for the recognition of contaminants. We believe that this procedure, when realized in real time, will have a serious impact on the cotton cleaning process, and indeed on the economic value of cotton. Index Terms--Cotton trash, fuzzy classifier, image analysis, pattern classification, volumetric data, X-ray tomography.
- Published
- 2004
8. A shift-invariant discrete wavelet transform
- Author
-
Sari-Sarraf, Hamed and Brzakovic, Dragana
- Subjects
Signal processing -- Research ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
A unifying approach to the derivation and implementation of a shift-invariant wavelet transform of one- and two-dimensional discrete signals is presented. An analytical process generates a shift-invariant, orthogonal discrete wavelet transform called the multiscale wavelet representation (MSWAR). The derived MSWAR is invertible, which makes it particularly useful in image registration, image fusion, and object recognition. Its implementation is equivalent to a filter-upsampling technique, and its computational complexity is quantified in terms of the required convolutions.
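The filter-upsampling implementation mentioned above is commonly realized as the à trous scheme: instead of decimating the signal at each level, the filters are upsampled by inserting zeros, so every subband keeps the input length. A generic sketch (the Haar filter pair is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def atrous_dwt(signal, levels=3):
    """Undecimated (shift-invariant) wavelet decomposition: at each level the
    filters are upsampled by a factor of 2**j ('holes' of zeros) rather than
    decimating the signal, so every scale keeps the original length."""
    h = np.array([1.0, 1.0]) / 2.0   # Haar lowpass (illustrative choice)
    g = np.array([1.0, -1.0]) / 2.0  # Haar highpass
    approx, details = np.asarray(signal, float), []
    for j in range(levels):
        # upsample filters: h_j[k * 2**j] = h[k], zeros in between
        hj = np.zeros(2 ** j * (len(h) - 1) + 1); hj[:: 2 ** j] = h
        gj = np.zeros(2 ** j * (len(g) - 1) + 1); gj[:: 2 ** j] = g
        details.append(np.convolve(approx, gj, mode="same"))
        approx = np.convolve(approx, hj, mode="same")
    return approx, details

x = np.sin(np.linspace(0, 4 * np.pi, 64))
approx, details = atrous_dwt(x)
# every subband has the same length as the input -- no decimation
```

Because no decimation occurs, shifting the input shifts every subband by the same amount (away from the borders), which is the shift-invariance property the paper formalizes.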
- Published
- 1997
9. Customized algorithm automates textile shrinkage measurement; a recently developed machine-vision system provides accurate and reliable shrinkage or growth measurements, independent of changes in texture or color of fabric samples, changes in benchmark colors, and slight sample rotations during the scanning process. (Machine Vision)
- Author
-
Sari-Sarraf, Hamed
- Subjects
Cotton Inc. ,Textile industry ,Business ,Electronics and electrical industries - Abstract
In the textile industry, the degree to which a fabric can be expected to shrink or expand during laundering is an important factor in determining its value. To evaluate a [...]
- Published
- 2002
10. Computational approaches to drug sensitivity prediction and personalized cancer therapy
- Author
-
Sari-Sarraf, Hamed, Nutter, Brian, Pal, Ranadip, and Berlow, Noah Everett
- Abstract
This dissertation represents the accumulated research in the field of Personalized Cancer Therapy performed as a Graduate Student at Texas Tech University. This research has focused on the design, implementation, and validation of computational, data-driven models of drug sensitivity and their application to personalized cancer therapy. This has resulted in projects of varying depth in the following subjects. Probabilistic Computational Modeling of Tumor Sensitivity to Targeted Therapeutic Compounds: A key open problem in the field of Systems Medicine is the drug sensitivity prediction problem: given a new cancer patient and a list of drugs or drug combinations, what will the patient response (sensitivity) to the different compounds be. Robust solutions to this problem translate to viable approaches to Personalized Therapy, where therapy assignment to cancer patients is based on the underlying patient and cancer biology, instead of a one-size-fits-all approach. The transition to personalized therapy is a primary need for the clinical oncologist community, who are often faced with a dearth of viable treatment options for relapsed, unresponsive, or high-risk patients. An integrative model of drug sensitivity, focusing on functional drug screen data and informed by available genetic data, was developed to address this issue; in silico modeling results are presented in this section. Regression modeling of drug sensitivity from the CCLE Database: As another form of in silico validation, the change in drug sensitivity prediction following integration of drug-target inhibition data to an existing dataset was tested. The Cancer Cell Line Encyclopedia (CCLE) database consists of 24 anticancer drugs profiled across 479 human-origin cancer cell lines. These cell lines underwent thorough genetic characterization, with exome sequencing, copy number variation, and gene expression sequencing data available. A few of these anti-cancer agents also have known drug-target inhibition profiles
- Published
- 2018
11. Training a New Instrument to Measure Cotton Fiber Maturity Using Transfer Learning
- Author
-
Turner, Chris, primary, Sari-Sarraf, Hamed, additional, and Hequet, Eric, additional
- Published
- 2017
- Full Text
- View/download PDF
12. Vision System for On-Loom Fabric Inspection
- Author
-
Sari-Sarraf, Hamed and Goddard, James S. Jr.
- Subjects
Textile fabrics -- Quality management ,Textile industry -- Equipment and supplies ,Quality control equipment -- Evaluation ,Software -- Evaluation ,Algorithms -- Evaluation ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
This paper describes a vision-based fabric inspection system that accomplishes on-loom inspection of the fabric under construction with 100% coverage. The inspection system, which offers a scalable open architecture, can be manufactured at relatively low cost using off-the-shelf components. While synchronized to the motion of the loom, the developed system first acquires very high-quality vibration-free images of the fabric using either front or backlighting. Then, the acquired images are subjected to a novel defect segmentation algorithm, which is based on the concepts of wavelet transform, image fusion, and the correlation dimension. The essence of this segmentation algorithm is the localization of those events (i.e., defects) in the input images that disrupt the global homogeneity of the background texture. The efficacy of this algorithm, as well as the overall inspection system, has been tested thoroughly under realistic conditions. The system was used to acquire and to analyze more than 3700 images of fabrics that were constructed with two different types of yarn. In each case, the performance of the system was evaluated as an operator introduced defects from 26 categories into the weaving process. The overall detection rate of the presented approach was found to be 89% with a localization accuracy of 0.2 in (i.e., the minimum defect size) and a false alarm rate of 2.5%. Index Terms--Computer vision, defect detection, fabric inspection, wavelet transform.
- Published
- 1999
13. A Relevancy, Hierarchical and Contextual Maximum Entropy Framework for a Data-Driven 3D Scene Generation
- Author
-
Dema, Mesfin, primary and Sari-Sarraf, Hamed, additional
- Published
- 2014
- Full Text
- View/download PDF
14. A MAXIMUM ENTROPY BASED DATA-DRIVEN 3D SCENE GENERATION
- Author
-
DEMA, MESFIN A., primary and SARI-SARRAF, HAMED, additional
- Published
- 2013
- Full Text
- View/download PDF
15. Method for non-referential defect characterization using fractal encoding and active contours
- Author
-
Sari-Sarraf, Hamed [Lubbock, TX]
- Published
- 2007
16. Context-based automated defect classification system using multiple morphological masks
- Author
-
Sari-Sarraf, Hamed [Lubbock, TX]
- Published
- 2002
17. Automated defect spatial signature analysis for semiconductor manufacturing process
- Author
-
Sari-Sarraf, Hamed [Knoxville, TN]
- Published
- 1999
18. High-throughput in Situ 3-D phenotyping of cotton height, leaf area index, and boll distribution
- Author
-
Dube, Nothabo, Deb, Sanjit K., Kelly, Brendan Robert, Mahan, James R., Sari-Sarraf, Hamed, and Ritchie, Glen L.
- Subjects
Phenotyping ,Height ,High-throughput ,Boll Distribution ,LAI - Abstract
Hands-on measurements are often used in agronomic studies of cotton (Gossypium hirsutum L.) to determine crop growth, architecture, and maturity, which, in turn, may indicate the effects of genetics, environment, and production practices on crop productivity. Many of these growth characteristics can be tied directly to yield and cotton fiber quality. Traditionally, these methods have been done by hand, requiring extensive labor and often involving destructive sampling methods. The ability of in-field high-throughput phenotyping systems to measure 3-D canopy phenology would benefit plant research and breeding programs by providing a rapid, non-destructive method of determining in-season crop growth and development. In this study, a field-based high-throughput plant phenotyping robotic system mounted with Intel® RealSense™ red-green-blue-distance (RGB-D) cameras was used to generate high-density point clouds in cotton in 2016 and 2017. The three RGB-D sensors use structured near-infrared (NIR) laser projectors. Each sensor projected and analyzed a binary search tree to create a depth map while also capturing color images. Two sensors were mounted facing into either side of the canopy, and the third faced downward. Information extracted from the point clouds was used to generate measurements of cotton plant height, leaf area, and cotton boll distribution. Plant height is an important phenotyping trait used for in-season measurements. It is used for the analysis of overall plant growth, is closely related to leaf area index, and has been compared to yield, boll distribution, and maturity in cotton cultivars. Although high-throughput methods of measuring plant height exist, the addition of plant height measurements to 3-D canopy structure measurements would have two added benefits: a decrease in the number of instruments required and the ability to standardize 3-D measurements to a common plane.
Height histograms for each plot were extracted from the generated 3-D point clouds using CloudCompare, from which maximum plot height and the 90th percentile value of the plot heights were obtained. The digital measurements were correlated with manual measurements. Pooled seasonal coefficients of determination for each cultivar ranged from .30 to .87, though individual plot coefficients of determination were as high as .99. Measurements of leaf area were generated both for the entire plant canopy and for individual slices based on 20-mm height segments within the plant canopy. Regression models were obtained from the manual and digital leaf area measurements. These models were then used for the computation of LAI from sensor data. High coefficients of determination between LAI values from sensor and manual measurements for all three cameras were obtained, with r2 values ranging from .90 to .97. Coefficients of determination between LAI and dry leaf weight ranged from .89 to .94 for all cameras, while those between canopy height and LAI ranged from .88 to .97 for all cameras. Node-by-node boll mapping has been used to determine the effects of irrigation amount and cultivar on cotton boll distribution. Node-specific boll distribution and an overall estimate of boll accumulation, visualized with line plots and vertical box-and-whisker plots, were used to determine the effects of irrigation and cultivar, using a 3-D sensor system to rapidly detect open cotton bolls over a two-year study. Results obtained indicated that both irrigation and cultivar responses were identifiable using the sensor system during both years of this research study. Differences could be observed between the low irrigation and high irrigation plots in terms of boll distribution and boll accumulation for each cultivar. The low irrigation tended to produce bolls more towards the bottom of the plant, while the high irrigation produced bolls towards the top of the plant.
Cultivar specific growth characteristics were also observed. The two cultivars, FM2322 and ST4747, showed differences in terms of boll distribution and boll accumulation. ST4747 tended to produce bolls more towards the bottom of the plant, while FM2322 produced bolls towards the top of the plant. Digital and manual measurements were highly correlated with r2 values ranging from .62 to .97, suggesting that the 3-D sensor system can be effectively used for the rapid detection of open cotton bolls. The high-throughput phenotyping system developed in this study measured cotton plant height, leaf area, and cotton boll distribution at the plot level under field conditions, with high accuracy.
- Published
- 2018
19. Detection and segmentation of overlapping red blood cells in microscopic medical images of stained peripheral blood smears
- Author
-
Moallem, Golnaz, Nutter, Brian, Gale, Richard O., and Sari-Sarraf, Hamed
- Subjects
Meanshift clustering algorithm ,Cell detection ,Overlapping cells ,Overlapping red blood cells ,Cell segmentation ,Medical image processing ,Red blood cells ,Snakes active contour models ,GVF snakes ,Thin blood smears - Abstract
Automated image analysis of slides of stained peripheral blood smears assists with early diagnosis of blood disorders. Automated detection and segmentation of the cells is a prerequisite for any subsequent quantitative analysis. Overlapping cell regions introduce considerable challenges to detection and segmentation techniques. In this thesis, we propose a novel algorithm that can successfully detect and segment overlapping cells in microscopic images of stained peripheral blood smears. The algorithm is composed of three steps. In the first step, the input image is binarized to obtain the binary mask of the image. The second step accomplishes a reliable cell center localization approach that employs adaptive mean-shift clustering. The third step fulfills the cell segmentation purpose by obtaining the boundary of each cell utilizing a Gradient Vector Flow (GVF) driven snake algorithm. We compare the experimental results of our methodology with those reported in the most current literature. Additionally, performance of the proposed method is evaluated by comparing both cell detection and cell segmentation results with those produced manually. The method is systematically tested on 100 image patches comprising overlapping cell regions and containing more than 3800 cells. We evaluate the performance of the proposed cell detection step using precision, TP, FP, and FN rates. Moreover, the cell segmentation step is assessed employing sensitivity, specificity, and the Jaccard index.
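The center-localization step can be illustrated with a minimal flat-kernel mean-shift over 2-D point coordinates. This is a generic sketch with synthetic blobs and a fixed bandwidth; the thesis itself uses an *adaptive* mean-shift on binarized image data, and the binarization and GVF-snake stages are omitted here:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=5.0, iters=50, tol=1e-3):
    """Shift every point to the mean of the data points within `bandwidth`
    until convergence; converged points that land close together collapse
    into one mode (a candidate cell center)."""
    pts = np.asarray(points, float)
    shifted = pts.copy()
    for _ in range(iters):
        moved = 0.0
        for i, p in enumerate(shifted):
            nbrs = pts[np.linalg.norm(pts - p, axis=1) < bandwidth]
            new = nbrs.mean(axis=0)
            moved = max(moved, np.linalg.norm(new - shifted[i]))
            shifted[i] = new
        if moved < tol:
            break
    # merge converged points closer than the bandwidth into single modes
    modes = []
    for p in shifted:
        if all(np.linalg.norm(p - m) >= bandwidth for m in modes):
            modes.append(p)
    return np.array(modes)

# Two synthetic "cells" as point blobs; expect one mode near each center.
rng = np.random.default_rng(1)
cell_a = rng.normal((10, 10), 1.0, (80, 2))
cell_b = rng.normal((30, 12), 1.0, (80, 2))
centers = mean_shift_modes(np.vstack([cell_a, cell_b]))
```

In the actual pipeline the input points would be foreground pixel coordinates from the binary mask of step one, and the recovered modes would seed the snakes of step three.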
- Published
- 2017
20. Training a new instrument to measure cotton fiber maturity using Transfer Learning
- Author
-
Turner, Christopher, Hequet, Eric F., Pal, Ranadip, and Sari-Sarraf, Hamed
- Subjects
Machine learning ,Feature selection ,Cotton fiber maturity ,Regression ,Transfer learning - Abstract
This dissertation presents novel transfer learning feature selection and regression methods that utilize data from an older instrument to train a new instrument to assess the same measurement. The method assumes that the instruments measure the same property but by different methodologies, and that samples presented to one apparatus are not available to the other. The algorithm makes use of a single feature common to both instruments to create a link with which to transfer information regarding the distribution of the resulting measurements, or labels. The goal is to generate a model in the domain of the new instrument that maps data from analyzed samples to an output measurement. This modeling process is accomplished through an iterative algorithm that supports many types of regression schemes. Results are shown using both synthetic and real-world data sets, which demonstrate the effectiveness of the proposed method. Finally, we present how this technique is used to train a new instrument designed to measure cotton fiber maturity.
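The idea of linking two instruments through one shared feature can be caricatured by quantile matching: each new-instrument sample takes the old-instrument label at the same empirical quantile of the shared feature. This toy sketch (invented data, and a blunt monotonicity assumption the dissertation's iterative algorithm does not need) only illustrates the distribution-transfer step:

```python
import numpy as np

def transfer_labels(old_labels, new_feature):
    """Quantile matching: each new sample's rank within the shared feature
    selects the old-instrument label at the same empirical quantile.
    Assumes the shared feature relates monotonically to the measurement."""
    ranks = np.argsort(np.argsort(new_feature))      # 0..n-1 rank per sample
    q = ranks / max(len(new_feature) - 1, 1)         # empirical quantiles
    sorted_old = np.sort(np.asarray(old_labels, float))
    idx = np.round(q * (len(sorted_old) - 1)).astype(int)
    return sorted_old[idx]

# Old instrument measured maturity on 201 samples; the new instrument reports
# the shared feature on an entirely different scale for 3 new samples.
old = np.linspace(0.5, 2.5, 201)
new_vals = transfer_labels(old, np.array([10.0, 30.0, 20.0]))
# ranks 0, 2, 1 -> quantiles 0, 1, 0.5 -> labels 0.5, 2.5, 1.5
```

Because only ranks of the shared feature are used, the two instruments' measurement scales never need to be reconciled, mirroring the abstract's premise that no sample is seen by both devices.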
- Published
- 2016
21. Computational approaches to drug sensitivity prediction and personalized cancer therapy
- Author
-
Berlow, Noah Everett, Sari-Sarraf, Hamed, Nutter, Brian, and Pal, Ranadip
- Subjects
Targeted therapy ,Computational modeling ,Probabilistic models ,Personalized therapy ,Personalized medicine ,Cancer - Abstract
This dissertation represents the accumulated research in the field of Personalized Cancer Therapy performed as a Graduate Student at Texas Tech University. This research has focused on the design, implementation, and validation of computational, data-driven models of drug sensitivity and their application to personalized cancer therapy. This has resulted in projects of varying depth in the following subjects. Probabilistic Computational Modeling of Tumor Sensitivity to Targeted Therapeutic Compounds: A key open problem in the field of Systems Medicine is the drug sensitivity prediction problem: given a new cancer patient and a list of drugs or drug combinations, what will the patient response (sensitivity) to the different compounds be. Robust solutions to this problem translate to viable approaches to Personalized Therapy, where therapy assignment to cancer patients is based on the underlying patient and cancer biology, instead of a one-size-fits-all approach. The transition to personalized therapy is a primary need for the clinical oncologist community, who are often faced with a dearth of viable treatment options for relapsed, unresponsive, or high-risk patients. An integrative model of drug sensitivity, focusing on functional drug screen data and informed by available genetic data, was developed to address this issue; in silico modeling results are presented in this section. Regression modeling of drug sensitivity from the CCLE Database: As another form of in silico validation, the change in drug sensitivity prediction following integration of drug-target inhibition data to an existing dataset was tested. The Cancer Cell Line Encyclopedia (CCLE) database consists of 24 anticancer drugs profiled across 479 human-origin cancer cell lines. These cell lines underwent thorough genetic characterization, with exome sequencing, copy number variation, and gene expression sequencing data available.
A few of these anti-cancer agents also have known drug-target inhibition profiles; these commonalities are utilized to show that integration of the datasets improves sensitivity prediction; in silico modeling results are presented in this section. Model-driven combination therapy design: in vitro and in vivo validation: The in silico validation of the tumor sensitivity modeling constituted the first step in development of this computational approach. The next key step was translation from in silico validation to in vitro (in glass) and in vivo (in life) validation. Biological experimentation was required to move the computational approach closer to clinical viability. As part of this research, a year was spent in the laboratory of key collaborator, Dr. Charles Keller. The biological validations performed showed that functional data-based modeling was capable of translating to successful biological outcomes. Design of Dynamic Network Models from Static Models and Expression experiments: The computational approach presented here is based on data that acts as a single timepoint snapshot of a biological system. However, tumor cells are never at rest; they are constantly undergoing a myriad of necessary biological processes. The cellular processes exist on numerous biological pathways and have a vast number of potential ways to interact. Because of this, there are upstream and downstream biological processes; by intervening in upstream processes, the downstream processes may respond without need for intervention. The static computational model was informed with a small set of gene expression experiments to construct dynamic, upstream-downstream and parallel process models of tumor sensitivity and performed in silico validations of the approach.
Analysis of Drug Screen Information Gain and Drug Screen Design : The monetary and time cost of producing functional drug screens for high-throughput screening of new patient cancer samples, as well as the limited population of patient cancer cells available for testing, are key practical constraints in preclinical testing scenarios. As such, maximizing the usable information gained from a functional drug screen is extremely important when the data is used to inform clinical decisions for patients. This work establishes a metric for comparing expected information gain from a drug screen of an arbitrary size, and establishes a framework for drug selection for new drug screens.
- Published
- 2015
22. A Maximum Entropy based data-driven synthetic image and 3D scene generation
- Author
-
Dema, Mesfin, Pal, Ranadip, Sridharan, Mohan, and Sari-Sarraf, Hamed
- Subjects
Maximum entropy ,And-or graphs ,Scene generation - Abstract
Due to the overwhelming use of 3D models in video games and virtual environments, there is a growing interest in 3D scene generation, scene understanding, and 3D model retrieval. In this paper, we introduce a data-driven 3D scene generation approach from a Maximum Entropy (MaxEnt) model selection perspective. Using this model selection criterion, new scenes can be sampled by matching a set of contextual constraints that are extracted from training and synthesized scenes. Starting from a set of randomly synthesized configurations of objects in 3D, the MaxEnt distribution is iteratively sampled and updated until the constraints between training and synthesized scenes match, indicating the generation of plausible synthesized 3D scenes. To illustrate the proposed methodology, we use 3D training desk scenes that are composed of seven predefined objects with different position, scale, and orientation arrangements. After applying the MaxEnt framework, the synthesized scenes show that the proposed strategy can generate reasonably similar scenes to the training examples without any human supervision during sampling.
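The iterative sample-and-update loop described in the abstract can be reduced to a one-constraint toy: reweight a discrete distribution until its expected feature matches the training expectation. Everything here (the support, the single "mean object spacing" feature, and the 0.3 target) is invented to show the update rule, not the paper's actual scene constraints:

```python
import numpy as np

# MaxEnt fit with one constraint on a discrete support: the distribution
# p(x) ∝ exp(lam * f(x)) is updated until E_p[f] matches the training value,
# mirroring the iterative sample/compare/update loop of the abstract.
support = np.linspace(0.0, 1.0, 101)   # stand-in for a scene statistic
feature = support                      # one constraint: mean "object spacing"
target = 0.3                           # expectation observed in training scenes

lam, lr = 0.0, 5.0
for _ in range(2000):
    w = np.exp(lam * feature)
    p = w / w.sum()                    # current MaxEnt distribution
    expected = (p * feature).sum()
    lam += lr * (target - expected)    # gradient step toward matching constraint
```

The real framework replaces this scalar expectation with contextual constraints over object positions, scales, and orientations, and matches them between training and synthesized 3D scenes.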
- Published
- 2014
23. Algorithm development and implementation of activity recognition system utilizing wearable MEMS sensors
- Author
-
Gupta, Piyush, Sari-Sarraf, Hamed, Pal, Ranadip, Gale, Richard O., and Dallas, Timothy E. J.
- Subjects
Patient monitoring ,Sensors ,Wearable technology ,Leave one out ,Scalability analysis ,Unsupervised learning ,Bootstrap methodology ,Accelerometer ,Algorithm ,Elderly ,Cross-fold ,Online learning ,Activity recognition ,Resubstitution ,Feature selection ,Fall ,Microelectromechanical systems ,Activities of daily living (ADL) ,Gait ,Real-time ,Geriatric ,Supervised learning - Abstract
Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Medical and gerontology literature often associates lack of physical activity with falls. Therefore, an autonomous activity recognition system can help elderly people and their caregivers to track the level of activities performed in day-to-day life. Moreover, activity recognition is also required in other applications such as medical monitoring and post-fall/injury rehabilitation. Though many researchers have shown the utility of several different sensors or sensor networks for achieving activity recognition, MEMS-based sensors are leading the race because of the advantages they have in terms of cost, form factor, and being easily made into mobile units. Previously developed activity recognition systems utilizing MEMS-based tri-axial accelerometers have provided mixed results, with subject-to-subject variability. This work presents an accurate activity recognition system utilizing a body-worn wireless accelerometer, to be used in the real-life application of patient monitoring. The system was developed in a fashion such that user comfort and accuracy are maximized, while reducing the level of user training. The goal is not only to help the system attain high accuracy, but also to achieve high user acceptance such that the system is practically implementable. Different test methodologies were also investigated and implemented so as to estimate errors effectively in a relatively small set of samples. The algorithm presented in this work utilizes data from a single, waist-mounted tri-axial accelerometer to classify gait events into six daily living activities and transitional events. The accelerometer can be worn at any location around the circumference of the waist, thereby reducing user training. Activity recognition results on seven subjects with leave-one-person-out error estimates showed overall accuracy of about 98%.
Accuracy for each individual activity was also more than 95%. Error estimates calculated using the bootstrapping methodology also confirmed the system's high accuracy.
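As a generic illustration of the front end such a system needs, fixed windows of the tri-axial signal can be reduced to simple per-window statistics before classification. The sampling rate, window length, and feature set below are assumptions for the sketch, not the dissertation's actual choices:

```python
import numpy as np

def window_features(ax, ay, az, fs=50, win_s=2.0):
    """Slice a tri-axial accelerometer stream into fixed, non-overlapping
    windows and compute simple features per window: per-axis mean and
    standard deviation plus the signal magnitude area (SMA)."""
    n = int(fs * win_s)
    feats = []
    for s in range(0, len(ax) - n + 1, n):
        w = np.stack([ax[s:s + n], ay[s:s + n], az[s:s + n]])
        sma = np.abs(w).sum() / n                      # total magnitude per sample
        feats.append(np.concatenate([w.mean(1), w.std(1), [sma]]))
    return np.array(feats)

# 4 s of a subject standing still: gravity on the z-axis only.
quiet = np.zeros(200)
fmat = window_features(quiet, quiet, 9.8 * np.ones(200))
# two 2-s windows, each with [3 means, 3 stds, SMA] = 7 features
```

A classifier (the dissertation evaluates several, with leave-one-person-out testing) would then map each feature vector to one of the six activity classes.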
- Published
- 2014
24. Integrating answer set programming and POMDPs for knowledge representation and reasoning in robotics
- Author
-
Zhang, Shiqi, Gelfond, Michael, Sari-Sarraf, Hamed, Wyatt, Jeremy, and Sridharan, Mohan
- Subjects
Answer set programming ,POMDPs ,Robotics ,Knowledge representation and reasoning - Abstract
Mobile robots equipped with multiple sensors and deployed in real-world domains frequently find it difficult to process all sensor inputs efficiently or to operate without any human input. At the same time, robots cannot be equipped with all relevant domain knowledge in advance, and humans are unlikely to have the time and expertise to provide elaborate and accurate feedback. This dissertation presents a novel architecture for knowledge representation and reasoning in robotics. These challenges are addressed by integrating high-level logical inference with low-level probabilistic sequential decision-making. Answer Set Programming (ASP), a non-monotonic logic programming paradigm, is used to represent, reason with, and revise domain knowledge obtained from sensor inputs and high-level human feedback. In parallel, a novel hierarchical decomposition of partially observable Markov decision processes (POMDPs) uses adaptive observation functions, constrained convolutional policies, and automatic belief propagation to automatically adapt visual sensing and information processing to the task at hand. This POMDP hierarchy serves as the first key contribution of this dissertation. The second key contribution is the merging strategy of ASP-based logical inference with POMDP-based probabilistic belief. This dissertation presents a principled generation of prior beliefs from the knowledge base represented by ASP; the prior beliefs are then merged with POMDP beliefs using Bayesian updates to adapt sensing and acting to the tasks at hand. In addition, the entropy of belief states is used to determine the need for human feedback, and hence robots ask questions only when needed. Finally, robots are enabled to learn from positive and negative observations to identify the situations where the current task should no longer be pursued.
As a result, mobile robots are able to represent and reason with domain knowledge, retain capabilities for many different tasks, direct sensing to relevant locations and determine the sequence of sensing and processing algorithms best suited to any given task, using human feedback based on need and availability. Furthermore, the architecture is augmented with a communication layer to enable belief sharing and collaboration in a team of robots. All algorithms are evaluated in simulation and on physical robots localizing target objects in indoor domains.
- Published
- 2013
25. Autonomous learning of object models on mobile robots using visual cues
- Author
-
Li, Xiang, Rushton, J. Nelson, Sari-Sarraf, Hamed, Stone, Peter, and Sridharan, Mohan
- Subjects
Wheeled robots ,Visual learning ,Object recognition - Abstract
Mobile robots are increasingly being used in real-world application domains such as disaster rescue, surveillance, health care and navigation. These application domains are typically characterized by partial observability, non-deterministic action outcomes and unforeseen changes. A major challenge to the widespread deployment of robots in such domains is the ability to learn models of domain objects automatically and efficiently, and to adapt the learned models in response to changes. Although sophisticated algorithms have been developed for modeling and recognizing objects using different visual cues, existing algorithms are predominantly computationally expensive, and require considerable prior knowledge or many labeled training samples of desired objects to learn object models. Enabling robots to learn object models and recognize objects with minimal human supervision thus continues to be an open problem. The above-mentioned challenges are offset by some observations. First, many objects have distinctive characteristics, locations, and motion patterns, although these parameters may not be known in advance and may change over time. Second, images encode information about objects in the form of many different visual cues. Third, any specific task performed by robots typically requires accurate models of only a small number of domain objects. This dissertation describes an algorithm that exploits these observations to achieve the following objectives: 1. Investigate learning of object models from a small (3 - 8) number of images. Robots consider objects that move to be interesting, efficiently identifying corresponding image regions using motion cues. 2. Exploit complementary strengths of appearance-based and contextual visual cues to efficiently learn representative models of these objects from relevant image regions. 3. 
Use learned object models in generative models of information fusion and energy minimization algorithms for reliable and efficient recognition of stationary and moving objects in novel scenes with minimal human supervision. These objectives promote incremental learning, enabling robots to acquire and use sensor inputs and human feedback based on need and availability. The object models consist of: spatial arrangements of gradient features, graph-based models of neighborhoods of gradient features, parts-based models of image segments, color distributions, and local context models. Although the visual cues underlying individual components of the object model have been used in other algorithms, our representation of these cues fully exploits their complementary strengths, resulting in reliable and efficient learning and recognition in indoor and outdoor domains. All algorithms are evaluated on wheeled robots in indoor and outdoor domains and on images drawn from benchmark datasets.
- Published
- 2013
26. IT-SNAPS multi-user implementation utilizing NVIDIA's compute-unified device architecture and Java XML Web Services
- Author
-
Bryant, Benjamin, Saed, Mohammed, and Sari-Sarraf, Hamed
- Subjects
Cross-platform software development ,Shared virtual environments ,Java - Abstract
This thesis describes a multi-user implementation of the Interactive Texture-Snapping System (IT-SNAPS) as a Java XML web service. The system uses a Java applet as the client interface, while the server side is composed of Java, C, and compiled Matlab code. NVIDIA’s Compute Unified Device Architecture (CUDA) is also used on the server and is capable of managing one or more graphics processing units (GPUs). The server creates and stores data within the GPU(s), pushing most of the workload onto them to reduce processing times and increase the number of potential users per server. This thesis discusses the hardware, software, and concepts used in its creation.
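The server-side pattern described here (one server fanning user requests out across one or more GPUs to raise the number of users it can support) can be sketched in a minimal, hypothetical form. The actual system used Java, C, Matlab, and CUDA; the Python worker pool below only illustrates the dispatch pattern, with threads standing in for GPU contexts and a doubling step standing in for the kernel launch:

```python
import queue
import threading

def gpu_worker(gpu_id, jobs, results):
    """Hypothetical worker: each thread stands in for one GPU context."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut this worker down
            jobs.task_done()
            break
        user_id, payload = job
        # Stand-in for a CUDA kernel launch on this GPU.
        results.append((user_id, gpu_id, payload * 2))
        jobs.task_done()

def serve(requests, n_gpus=2):
    """Fan user requests out across n_gpus workers via a shared queue."""
    jobs, results = queue.Queue(), []
    workers = [threading.Thread(target=gpu_worker, args=(g, jobs, results))
               for g in range(n_gpus)]
    for w in workers:
        w.start()
    for req in requests:
        jobs.put(req)
    for _ in workers:            # one sentinel per worker
        jobs.put(None)
    for w in workers:
        w.join()
    return results

# Five hypothetical user requests served by two simulated GPUs.
results = serve([(i, i) for i in range(5)], n_gpus=2)
```

A shared queue lets whichever GPU frees up first take the next request, which is the property that lets one server absorb many concurrent users.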
- Published
- 2012
27. Energy minimization schemes customized to industrial applications and conceptualized for interactive segmentation
- Author
-
Kamalakannan, Sridharan, Hequet, Eric F., Pal, Ranadip, and Sari-Sarraf, Hamed
- Subjects
Energy minimization ,Image segmentation ,Intelligent scissors ,Active contours ,Interactive segmentation ,Level sets ,Stain segmentation ,Firearm identification - Abstract
This dissertation focuses on energy minimizing techniques customized to two industrial applications and conceptualized in order to develop a hybrid interactive segmentation toolbox. The first industrial application concerns the development of a machine vision system for simultaneous and objective evaluation of an important functional attribute of a fabric; namely, soil/stain release. Soil release corresponds to the efficacy of the fabric in releasing stains after laundering. Within the framework of the proposed machine vision scheme, the samples are prepared using a prescribed procedure and subsequently digitized using a commercially available off-the-shelf scanner. A customized adaptive statistical snake, which evolves based on region statistics, is employed in order to segment the stain. Once the stain is localized, appropriate measurements can be extracted from the stain and the background image that can help in objectively quantifying stain release. A sizeable data set is employed to test the efficacy of the proposed approach. The second application comprises a machine vision system for automatic identification of the class of firearms by extracting and analyzing two significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area (PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI. Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set model in order to segment the FPAO.
Once the shapes of the FPI and FPAO are extracted, a shape-based level set method is used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A total of 74 cartridge case images non-uniformly distributed over five different firearms are processed using the aforementioned scheme, and the promising nature of the results (95% classification accuracy) demonstrates the efficacy of the proposed approach. For the interactive segmentation, two intrinsic properties, namely shape and symmetry, are integrated into the intelligent scissors framework. Currently, interactive segmentation methods are driven by lower-level image features, whereas the higher-level knowledge of the application is imparted by the user. The burden on the user increases significantly during instances of occlusions, broken edges, noise and spurious boundaries. As a first step towards incorporating the shape feature, an offline training procedure is performed in which a mean shape and the corresponding shape variance are computed by registering training shapes up to a rigid transform in a level-set framework. The user starts the interactive segmentation procedure by providing a training segment, which is a part of the target boundary. A partial shape matching scheme based on a scale-invariant curvature signature is employed in order to extract shape correspondences and subsequently predict the shape of the unsegmented target boundary. A ‘zone of confidence’ is generated for the predicted boundary to accommodate shape variations. The method is evaluated on segmentation of digital chest x-ray images for lung annotation, which is a crucial step in developing algorithms for tuberculosis screening. The symmetry feature is incorporated into the intelligent scissors framework in order to predict one symmetric half of a bilaterally symmetric object distorted by a projective transform, from the other symmetric half.
Accordingly, the proposed work utilizes the fundamental relationship between a distorted symmetrical object and its symmetrical counterpart in order to establish a mathematical relationship between the two symmetric halves. The user starts the segmentation procedure by providing training segments from the symmetric halves of the target object. Based on the provided training segments, a relationship is established between the two symmetric halves by incorporating a collinearity based parameterization method. Subsequently, based on this established relationship and the user generated segment, the other symmetric half is predicted. This predicted segment is used to generate a ‘zone of confidence’ which mitigates the influence of spurious edges during the segmentation procedure, thus reducing the burden on the user. In addition, the predicted segment is also used to fill up the occluded parts of the target object, thus mitigating the burden and the subjectivity involved. Synthetic examples are employed to prove the efficacy of the proposed work.
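The first algorithmic step of the firearm application, detecting the PSA with a circular Hough transform, can be illustrated with a minimal pure-Python sketch (not the dissertation's code): each edge point votes for every candidate center lying one radius away from it, and the true center accumulates the most coincident votes.

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=360):
    """Vote for circle centers: every edge point votes for all centers
    lying at `radius` from it; the true center collects the most votes."""
    acc = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    return acc.most_common(1)[0][0]

# Synthetic circular boundary of radius 20 centered at (50, 40),
# standing in for the detected primer edge.
pts = [(50 + 20 * math.cos(2 * math.pi * k / 90),
        40 + 20 * math.sin(2 * math.pi * k / 90)) for k in range(90)]
center = hough_circle_center(pts, radius=20)
```

In practice the radius is unknown, so the vote accumulator gains a third dimension (one vote plane per candidate radius); the fixed-radius form above keeps the idea visible.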
- Published
- 2012
28. Machine vision system for simultaneous measurement of dimensional changes and soil release in printed fabric
- Author
-
Hill, Matthew, Sari-Sarraf, Hamed, and Hequet, Eric F.
- Subjects
Printed fabric ,Soil release ,Dimensional changes ,Neighborhood-based subtraction ,Shrinkage ,Image registration - Abstract
This thesis presents a protocol for creating digital images of printed fabric swatches and an algorithm that automatically measures dimensional changes and segments stains so that soil release can be evaluated. The dimensional changes measured here are shrinkage and skew. Current methods for evaluating dimensional changes on printed fabrics are manual. There are no current methods for evaluating soil release on printed fabrics, and the segmentation that the proposed algorithm provides is a vital first step toward such a system. This thesis proposes a system that could become a standard for making both measurements simultaneously. To make these measurements, printed fabric swatches are scanned before and after washing using an off-the-shelf scanner. Reference points (called shrinkage dots) are placed on the fabric swatches and then located in the scanned images. This is done using image registration and subtraction to remove the influence of the pattern, followed by cross-correlation to locate the shrinkage dots. The locations of the shrinkage dots are used both to calculate the dimensional changes and to locate the stains. Before a snake-based method segments the stain, the influence of the background pattern is removed using the same registration and subtraction method used for shrinkage dot detection. In an experiment involving 240 images and 10 different printed patterns, the algorithm correctly identified 98.8% of the shrinkage dots and detected 93.3% of the stains that technicians determined to exist. The segmentation accuracy is quantified by an average Dice coefficient of 0.87 on a set of 50 potential stains when compared with manual segmentations.
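The dot-localization step, cross-correlation once the pattern has been removed by registration and subtraction, can be sketched as normalized cross-correlation (NCC) template matching. This is a minimal illustration, not the thesis's implementation; the image, template, and dot position below are synthetic:

```python
import math

def ncc(image, template, top, left):
    """Normalized cross-correlation of `template` with the image window at (top, left)."""
    th, tw = len(template), len(template[0])
    win = [image[top + i][left + j] for i in range(th) for j in range(tw)]
    tpl = [template[i][j] for i in range(th) for j in range(tw)]
    mw, mt = sum(win) / len(win), sum(tpl) / len(tpl)
    num = sum((w - mw) * (t - mt) for w, t in zip(win, tpl))
    den = math.sqrt(sum((w - mw) ** 2 for w in win) *
                    sum((t - mt) ** 2 for t in tpl))
    return num / den if den else 0.0

def locate_dot(image, template):
    """Slide the dot template over the pattern-subtracted image;
    return the top-left position of the best NCC peak."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for top in range(len(image) - th + 1):
        for left in range(len(image[0]) - tw + 1):
            s = ncc(image, template, top, left)
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos

# Synthetic 10x10 residual image with one dot-like blob at row 4, column 5.
tpl = [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
img = [[0.0] * 10 for _ in range(10)]
for i in range(3):
    for j in range(3):
        img[4 + i][5 + j] = tpl[i][j]
dot_pos = locate_dot(img, tpl)
```

Because the pattern has already been subtracted out, the NCC surface is flat except at the dots, which is what makes the peak detection reliable.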
- Published
- 2010
29. Machine vision system for quantification of cotton fiber length and maturity
- Author
-
Shahriar, Muneem, Saed, Mohammed, Sari-Sarraf, Hamed, and Hequet, Eric F.
- Subjects
Fiber length ,Fiber maturity ,Cotton fiber - Abstract
Cotton is an important cash crop in the United States (third-largest producer, first exporter). There is a constant demand for high-quality cotton fibers in the export market, especially with respect to fiber length and maturity. This drives ongoing research into methodologies that can measure these qualities accurately and quickly. In a previous study, a fiber length algorithm was developed to measure the length of cotton fibers with good accuracy (+/- 1% of true length) and was validated on 20 cotton samples totaling 10,000 fibers. The objective of this thesis is to develop a machine vision system for the quantification of both fiber length and maturity. To achieve this, an improved image acquisition system is proposed that acquires high-resolution (25,400 dpi) longitudinal scans of complete cotton fibers without breaking the fibers into individual segments or applying any physical stress to straighten them. Software algorithms are implemented on these scans to extract features related to fiber length and maturity. For length measurement, Wang’s length algorithm is employed because it is invariant to fiber shapes, intra-fiber crimps and inter-fiber intersections. However, modifications have been made primarily to enhance the computational speed of the algorithm so that length measurements are close to real-time. The modified algorithm has also been validated on the original 20 cotton samples. An indirect method of estimating fiber maturity based on the evaluation of cotton fiber characteristics is also proposed. The maturity algorithm measures changes in fiber width, fiber convolutions, and fiber translucency along the length of a fiber and creates features that are pertinent to studying these characteristics in more detail. The proposed algorithm has been applied to a sample of 50 mature and 50 immature cotton fibers. The results indicate that all three fiber characteristics show statistically significant differences between the two samples.
Further analysis has also shown that some features, such as thin places per unit length and intensity differences, are excellent predictors of maturity. The least discriminative characteristic found is the change in fiber width. To conclude, the findings imply that the system is fully capable of measuring fiber length and of quantifying maturity differences between cotton samples.
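One of the named maturity features, thin places per unit length, can be illustrated with a small sketch over a fiber-width profile. The profile, threshold, and pixel size below are hypothetical, and the feature is simply the count of below-threshold runs normalized by fiber length:

```python
def thin_places_per_unit_length(width_profile, threshold, pixel_size=1.0):
    """Count maximal runs where fiber width drops below `threshold`,
    normalized by the fiber length covered by the profile."""
    runs, in_run = 0, False
    for w in width_profile:
        if w < threshold and not in_run:
            runs, in_run = runs + 1, True
        elif w >= threshold:
            in_run = False
    length = len(width_profile) * pixel_size
    return runs / length if length else 0.0

# Hypothetical width samples along one fiber: two thin stretches.
rate = thin_places_per_unit_length([5, 5, 2, 2, 5, 5, 1, 5, 5, 5], threshold=3)
```

Immature fibers tend to show more such thin places per unit length, which is why a run count (rather than a raw sample count) makes the feature comparable across fibers of different lengths.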
- Published
- 2008
30. Energy-based deformable contours in computer vision: Recent advances and customization for two applications
- Author
-
Kamalakannan, Sridharan, Hequet, Eric F., Mansouri, Hossein, and Sari-Sarraf, Hamed
- Abstract
This thesis explains, in detail, the various kinds of active contour models that have attracted the attention of many in the computer vision community in recent years. It gives a detailed description of the energy formulations and the derivation of the force equations using the calculus of variations. These snake models are combined and customized for two applications: (1) detection of double edges in x-ray images of lumbar vertebrae using pressurized open DGVF snakes, and (2) fabric stain detection using statistical balloons. The detection of double edges in x-ray images of lumbar vertebrae is of prime importance in the assessment of injury or vertebral collapse, possibly due to osteoporosis or other spine pathology. Manual segmentation is prone to errors due to subjective judgment; hence, computer vision methods, such as snakes, are an attractive alternative, providing an automatic means of segmenting the double edges. The proposed algorithm uses a pressurized open model of DGVF snakes, customized to this application. This algorithm has been applied to a set of over 30 lumbar images thus far, and the double-edge detection results have been deemed promising enough to set up a quantitative measurement for the assessment of injury or vertebral collapse. The goal in the second application is the automatic quantification of stain release in fabrics, which is an important property impacting the fabrics’ pricing in the marketplace. Of course, to quantify stain release, one must first detect and segment the stains. This thesis proposes a balloon model with embedded statistical information in order to detect and segment the stains. A set of 15 stain images has been used thus far to test the algorithm, with near perfect detection and segmentation results.
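The statistical balloon idea, a contour inflated outward for as long as newly admitted pixels remain statistically consistent with the region it already encloses, can be caricatured in one dimension. This is a simplified sketch under assumed parameters, not the thesis's 2-D model:

```python
import math

def grow_region(signal, seed, k=2.0):
    """1-D 'statistical balloon': inflate an interval around `seed` while the
    next sample stays within k standard deviations of the region's mean."""
    lo = hi = seed
    region = [signal[seed]]
    grew = True
    while grew:
        grew = False
        mean = sum(region) / len(region)
        var = sum((v - mean) ** 2 for v in region) / len(region)
        std = math.sqrt(var) or 1.0          # floor a zero band at startup
        for idx in (lo - 1, hi + 1):
            if 0 <= idx < len(signal) and abs(signal[idx] - mean) <= k * std:
                region.append(signal[idx])
                lo, hi = min(lo, idx), max(lo, hi, idx)
                grew = True
    return lo, hi

# Hypothetical scan line: bright fabric (9s) with a dark stain (1s and 2s).
signal = [9, 9, 1, 2, 1, 2, 1, 9, 9]
stain_span = grow_region(signal, seed=4)
```

The inflation pressure stops at the stain boundary not because of an edge detector but because background samples fail the region-statistics test, which is the distinguishing property of a statistical balloon.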
- Published
- 2007
31. Fiber property characterization by image processing
- Author
-
Wang, Huapeng, Hequet, Eric F., and Sari-Sarraf, Hamed
- Subjects
Medial axis ,Curve fitting with adaptive control - Abstract
In this thesis, we intended to design and realize an imaging system for the accurate measurement of cotton fiber length. A secondary objective was to incorporate fiber maturity estimation into the system. Commercial systems measure fiber length in a reasonable time span; however, their accuracy is questionable. Image processing might be seen as an alternative to the conventional systems, but the processing time is an issue. Our prototype system is composed of an off-the-shelf scanner that generates a grayscale image of multiple fibers, followed by customized image processing algorithms that compute the length of each fiber in the image. Although the system requires some degree of separation between the individual fibers, it is shown to produce highly accurate length measurements that are invariant to fiber orientation, shape, inter-fiber intersections, and intra-fiber crimps and crossovers. Hence, in its present state, the proposed system serves as an excellent reference method for assessing the efficacy of commercially available length measurement systems.
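The core length measurement, tracing the fiber's medial axis and summing Euclidean steps along it, can be sketched as follows. This is a simplified illustration that assumes an already-skeletonized, non-branching fiber; the actual algorithm also handles crimps, crossovers, and intersections:

```python
import math

def order_chain(pixels):
    """Order an unordered set of 8-connected skeleton pixels into a chain,
    starting from an endpoint (a pixel with exactly one neighbor)."""
    pts = set(pixels)
    def neighbors(p):
        x, y = p
        return [q for q in pts
                if q != p and abs(q[0] - x) <= 1 and abs(q[1] - y) <= 1]
    start = next(p for p in pts if len(neighbors(p)) == 1)
    chain, prev, cur = [start], None, start
    while True:
        nxt = [q for q in neighbors(cur) if q != prev]
        if not nxt:
            break
        prev, cur = cur, nxt[0]
        chain.append(cur)
    return chain

def fiber_length(pixels, pixel_size=1.0):
    """Fiber length = sum of Euclidean steps along the ordered medial axis:
    axial neighbors contribute 1 unit, diagonal neighbors sqrt(2)."""
    chain = order_chain(pixels)
    return pixel_size * sum(math.hypot(b[0] - a[0], b[1] - a[1])
                            for a, b in zip(chain, chain[1:]))

# Hypothetical 4-pixel skeleton with one diagonal step.
length = fiber_length([(0, 0), (1, 0), (2, 1), (3, 1)])
```

Summing steps along the medial axis, rather than measuring the bounding box, is what makes the length invariant to fiber orientation and crimp.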
- Published
- 2007
32. Segmentation of radiographs of cervical spine using level sets
- Author
-
Raju, Rama Krishna, Karp, Tanja, Sari-Sarraf, Hamed, and Hequet, Eric F.
- Subjects
Image segmentation ,Level sets - Abstract
This thesis proposes a novel level set segmentation technique to segment medical radiographs of the cervical spine. In the past, the level set technique has been used alongside shape information of the target object. To the best of our knowledge, in most methods the curve is evolved towards the boundary using shape information complemented by region-based or edge information. However, in many applications like ours, the region-based information is non-existent, and level set evolution starting from an arbitrary initial curve is difficult. Thus, we propose a method that uses the shape estimate as the initial curve and then evolves it towards the nearest edges in the image. We also present a performance analysis of our approach.
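The central idea, starting from the shape estimate and pulling it toward nearby edges, can be caricatured by snapping each point of the shape estimate to its nearest edge point within a search radius. This is a crude discrete stand-in for the level-set evolution, with hypothetical data:

```python
def snap_to_edges(contour, edge_points, max_dist=5.0):
    """Move each point of the shape estimate to the nearest edge point within
    `max_dist`; points with no nearby edge keep their prior (shape) position."""
    snapped = []
    for x, y in contour:
        best, best_d2 = (x, y), max_dist ** 2
        for ex, ey in edge_points:
            d2 = (ex - x) ** 2 + (ey - y) ** 2
            if d2 <= best_d2:
                best, best_d2 = (ex, ey), d2
        snapped.append(best)
    return snapped

# Two shape-estimate points: one near a true edge, one near only a far,
# spurious edge that lies outside the search radius.
snapped = snap_to_edges([(0, 0), (10, 10)], [(1, 0), (30, 30)])
```

The search radius plays the role of the shape prior: edges beyond it cannot capture the curve, so the segmentation degrades gracefully to the shape estimate where edge evidence is missing.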
- Published
- 2006
33. Sequence matching on holographically stored genetic strings
- Author
-
Merkling, Joseph Legarde, Sari-Sarraf, Hamed, Watson, Richard, and Gale, Richard O.
- Subjects
NP complete ,One dimensional fourier transform - Abstract
In the field of computational biology, one of the basic tasks is string matching. This provides the basis not only for the sequencing and assembly of chromosomes but also for the alignment of homologous strings in order to determine evolutionary and functional relationships. Over the last fifteen years the amount of data available has grown exponentially, but the ability to analyze this data has not kept pace. This paper examines the feasibility of leveraging the power of holographic storage and optical data processing to handle this data flood. The focus of the discussion is on algorithmic and data formatting issues.
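String matching maps naturally onto correlation, the operation that holographic and optical processing perform natively: encode each symbol of the alphabet as a binary indicator sequence, correlate pattern against text per symbol channel, and sum the channels to get a match count at every alignment. A minimal sketch of that formulation (direct correlation here; an FFT or optical correlator would compute the same profile):

```python
def match_profile(text, pattern, alphabet="ACGT"):
    """For each alignment shift, count symbol agreements between pattern
    and text by summing per-symbol indicator correlations."""
    n, m = len(text), len(pattern)
    counts = [0] * (n - m + 1)
    for sym in alphabet:
        t = [1 if c == sym else 0 for c in text]
        p = [1 if c == sym else 0 for c in pattern]
        for shift in range(n - m + 1):
            counts[shift] += sum(t[shift + j] * p[j] for j in range(m))
    return counts

def best_alignment(text, pattern):
    """Shift with the highest match count (exact match scores len(pattern))."""
    counts = match_profile(text, pattern)
    return max(range(len(counts)), key=counts.__getitem__)

# Small synthetic example: "TACG" occurs exactly at shift 3 of "ACGTACGA".
counts = match_profile("ACGTACGA", "TACG")
shift = best_alignment("ACGTACGA", "TACG")
```

Because the profile scores every shift at once, near matches (useful for alignment of homologous strings) come for free as sub-maximal peaks rather than requiring a separate approximate-matching pass.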
- Published
- 2005