154 results for "Ishwar K. Sethi"
Search Results
2. A Fine-Tuned Convolution Neural Network Based Approach For Phenotype Classification Of Zebrafish Embryo
- Author
-
Nilesh Patel, Ishwar K. Sethi, and Gaurav Tyagi
- Subjects
Computer science, Machine learning, Convolutional neural network, Medical imaging, Model organism, Zebrafish, Artificial neural network, Contextual image classification, Drug discovery, Embryo, General Earth and Planetary Sciences, General Environmental Science, Artificial intelligence - Abstract
In the area of medical imaging, the automation of medical image classification and recognition is an active field of research. Over the past decade, owing to the popularity and usability of artificial neural networks, it has become the norm to achieve this automation with deep neural networks. In toxicology research, the zebrafish has become a key model organism for phenotypic imaging and drug discovery. Detecting complex patterns and phenotypes in the zebrafish embryo during preclinical or clinical trials is a standard part of the drug discovery cycle, yet this classification of phenotypes is currently performed mostly by experts. In this work, we propose a fine-tuned convolutional neural network (CNN) model for the automated classification of the different phenotypical changes caused by toxic substances in the zebrafish embryo. We demonstrate the ability of both a CNN model and a fine-tuned CNN model to classify different deformations in an embryo with high accuracy. Such automated medical imaging models can be used extensively by experts in toxicology and drug discovery.
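A minimal sketch of the fine-tuning workflow the abstract describes, written with PyTorch/torchvision. The backbone choice (ResNet-18), class count, folder layout, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: fine-tuning a pretrained CNN for phenotype classification.
# Class count, folder layout, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # hypothetical number of phenotype categories

# Standard ImageNet preprocessing for a pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: embryo_images/<class_name>/*.png
train_set = datasets.ImageFolder("embryo_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load a pretrained backbone (torchvision >= 0.13) and replace the classifier head
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():          # freeze the convolutional base
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                # short schedule, for illustration only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```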
- Published
- 2018
- Full Text
- View/download PDF
3. Evolutionary Optimization Based on Biological Evolution in Plants
- Author
-
Neeraj Gupta, Ishwar K. Sethi, Nilesh Patel, and Mahdi Khosravy
- Subjects
Computer science, Probabilistic logic, Inheritance (genetic algorithm), Particle swarm optimization, Mutation (genetic algorithm), Genetic algorithm, Benchmark (computing), Cuckoo search, General Earth and Planetary Sciences, General Environmental Science, Artificial intelligence - Abstract
This paper presents a binary-coded evolutionary computational method inspired by evolution in plant genetics. It introduces the concept of artificial DNA, an abstract idea inspired by the inheritance of characteristics in plant genetics through the transmission of dominant and recessive genes and by epimutation. It also involves a rehabilitation process that, as in plant biology, provides a further evolving mechanism against environmental mutation. The effectiveness, consistency, and efficiency of the proposed optimizer are demonstrated on a variety of complex benchmark test functions. Simulation results and the associated analysis, in comparison with self-learning particle swarm optimization (SLPSO), the shuffled frog leaping algorithm (SFLA), the multi-species hybrid genetic algorithm (MSGA), the gravitational search algorithm (GSA), group search optimization (GSO), cuckoo search (CS), the probabilistic bee algorithm (PBA), and hybrid differential PSO (HDSO), confirm its applicability to solving complex problems. We also report effective results on thirty-variable benchmark test problems of different classes.
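For readers unfamiliar with binary-coded evolutionary computation, the sketch below shows the generic encode/select/crossover/mutate loop on a standard benchmark. It is only a baseline illustration; the paper's plant-genetics operators (artificial DNA, dominant/recessive genes, epimutation, rehabilitation) are not reproduced here.

```python
# Hedged sketch: a generic binary-coded evolutionary optimizer on the sphere
# benchmark. NOT the plant-genetics operator set described in the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS, POP, GENS = 10, 16, 60, 200
LO, HI = -5.0, 5.0

def decode(bits):
    """Map each 16-bit gene to a real value in [LO, HI]."""
    ints = bits.reshape(-1, DIM, BITS).dot(1 << np.arange(BITS)[::-1])
    return LO + (HI - LO) * ints / (2**BITS - 1)

def fitness(bits):
    x = decode(bits)
    return (x**2).sum(axis=1)          # sphere function, to be minimized

pop = rng.integers(0, 2, size=(POP, DIM * BITS))
for _ in range(GENS):
    f = fitness(pop)
    # tournament selection
    a, b = rng.integers(0, POP, (2, POP))
    parents = pop[np.where(f[a] < f[b], a, b)]
    # one-point crossover between consecutive parent pairs
    cut = rng.integers(1, DIM * BITS, POP // 2)
    children = parents.copy()
    for i, c in enumerate(cut):
        children[2*i, c:], children[2*i+1, c:] = (parents[2*i+1, c:].copy(),
                                                  parents[2*i, c:].copy())
    # bit-flip mutation
    flip = rng.random(children.shape) < 0.01
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmin(fitness(pop))]
print("best value:", fitness(best[None, :])[0])
```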
- Published
- 2018
- Full Text
- View/download PDF
4. Value assessment method for expansion planning of generators and transmission networks: a non-iterative approach
- Author
-
Neeraj Gupta, Ninoslav Marina, Kumar Saurav, Mahdi Khosravy, and Ishwar K. Sethi
- Subjects
Power transmission, Engineering, Mathematical optimization, Applied Mathematics, Reliability, Wheeling, Electric power system, Transmission, Electrical and Electronic Engineering, Simulation - Abstract
In the context of efficient generation expansion planning (GEP) and transmission expansion planning (TEP), the value assessment method (VAM) is a critical topic. Presently, two well-known VAMs, min-cut max-flow (MCMF) and the load curtailment strategy (LCS), are used for GEP and TEP. MCMF does not follow electrical laws and is unable to calculate congestion cost (CC) and re-dispatch cost (RDC). LCS calculates both, but iteratively, and thus takes a long time to provide a solution. In a constrained network, multiple quantities, such as demand/energy not served (D/ENS), generation not served (GNS), wheeling loss (WL), CC, and RDC, exist together and therefore have to be calculated together to capture the loss in all aspects. Existing methods are limited in this regard and do not calculate all of the above quantities simultaneously. Thus, in this paper, a non-iterative VAM (NVAM) based on electrical laws is presented, which calculates the value of the present and the planned systems by incorporating D/ENS, GNS, WL, CC, and RDC together. Owing to its non-iterative batch approach, it is considerably faster than the traditional VAMs, i.e., MCMF and LCS. Furthermore, comparative results on the IEEE 5-bus and IEEE 24-bus power systems show its higher efficiency. The MATLAB code of the introduced NVAM is provided in the Appendix for further development by researchers.
- Published
- 2017
- Full Text
- View/download PDF
5. Perceptual Adaptation of Image Based on Chevreul–Mach Bands Visual Phenomenon
- Author
-
Ishwar K. Sethi, Mohammad Reza Asharif, Mahdi Khosravy, Neeraj Gupta, and Ninoslav Marina
- Subjects
Morphological gradient, Image quality, Applied Mathematics, Binary image, Image processing, Image texture, Signal Processing, Median filter, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Image restoration, Feature detection (computer vision), Mathematics - Abstract
Perceptual adaptation of the image (PAI) is introduced, inspired by the Chevreul–Mach bands (CMB) visual phenomenon. By boosting the illusory effect of CMB on region boundaries, PAI adapts the image to the perception of the human visual system and thereby increases image quality. PAI can be applied to standard images or to the output of any image processing technique. For implementation on an image, an algorithm based on morphological filters (MFs) is presented, which geometrically adds a model of the CMB effect. Numerical evaluation using the improvement ratios of four no-reference image quality assessment (NR-IQA) indexes confirms the performance of PAI, which can also be noticeably observed in visual comparisons. Furthermore, PAI is applied as a post-processing block for classical morphological filtering, weighted morphological filtering, and median morphological filtering in the cancellation of salt-and-pepper, Gaussian, and speckle noise from MRI images, where the same NR-IQA indexes validate it. The effect of PAI on image enhancement is benchmarked against morphological image sharpening and high-boost filtering.
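A hedged sketch of morphological edge boosting in the spirit of the Mach-band idea, using SciPy's grayscale dilation and erosion. The exact CMB model and parameters of PAI are not reproduced; the structuring-element size and gain below are assumptions.

```python
# Hedged sketch: boosting contrast near region boundaries with grayscale
# morphological filters. Parameters are illustrative, not the paper's.
import numpy as np
from scipy import ndimage

def morphological_edge_boost(img, size=3, alpha=0.6):
    """Add a scaled morphological edge signal to the image."""
    img = img.astype(float)
    dil = ndimage.grey_dilation(img, size=(size, size))
    ero = ndimage.grey_erosion(img, size=(size, size))
    # positive on the bright side of an edge, negative on the dark side
    edge = img - 0.5 * (dil + ero)
    out = img + alpha * edge              # overshoot/undershoot at boundaries
    return np.clip(out, 0, 255).astype(np.uint8)

# usage on a synthetic image of three solid gray bars
step = np.tile(np.repeat(np.array([60, 120, 180], dtype=np.uint8), 40), (64, 1))
enhanced = morphological_edge_boost(step)
```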
- Published
- 2017
- Full Text
- View/download PDF
6. Image Quality Assessment: A Review to Full Reference Indexes
- Author
-
Neeraj Gupta, Nilesh Patel, Mahdi Khosravy, and Ishwar K. Sethi
- Subjects
Similarity (geometry), Image quality, Computer science, Gaussian, Pattern recognition, Image processing, Benchmarking, Speckle pattern, Artificial intelligence, Linear filter - Abstract
An image quality index plays an increasingly vital role in image processing applications for dynamic monitoring and quality adjustment, for optimization and parameter setting of imaging systems, and for benchmarking image processing techniques. All of these goals require a sustainable quantitative measure of image quality. This manuscript analytically reviews the popular reference-based metrics of image quality that have been employed for the evaluation of image enhancement techniques. The efficiency and sustainability of eleven indexes are evaluated and compared in assessing image enhancement after the cancellation of speckle, salt-and-pepper, and Gaussian noise from MRI images, separately, by a linear filter and three varieties of morphological filters. The results indicate greater clarity and sustainability for the similarity-based indexes. Designing a universal similarity-based index built on the information content of the image is suggested as a direction for future research.
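Two of the common full-reference measures discussed in reviews of this kind, PSNR and the Universal Quality Index of Wang and Bovik, can be computed directly with NumPy. The sketch below is illustrative and does not reproduce the paper's full set of eleven indexes.

```python
# Hedged sketch: two common full-reference quality measures computed with NumPy.
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def uqi(ref, test):
    """Global Universal Quality Index: combines luminance, contrast, structure."""
    x, y = ref.astype(float).ravel(), test.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

# usage: compare a noisy image against its clean reference
rng = np.random.default_rng(1)
clean = rng.integers(0, 256, (128, 128)).astype(np.uint8)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)
print(psnr(clean, noisy), uqi(clean, noisy))
```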
- Published
- 2018
- Full Text
- View/download PDF
7. A Zoned Image Patch Permutation Descriptor
- Author
-
Nilesh Patel, Tian Tian, Delie Ming, and Ishwar K. Sethi
- Subjects
Brightness, Applied Mathematics, GLOH, Detector, Monotonic function, Pattern recognition, Image representation, Robustness (computer science), Signal Processing, Signal processing algorithms, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Mathematics - Abstract
Image representation through local descriptors is a research hotspot in computer vision. In this letter, we propose a novel local image descriptor based on intensity permutation and zone division. The oFAST detector is first employed to detect keypoints with orientations, and steered patterns are then applied to sample rotation-invariant points within the local keypoint patch. In the local patch description step, intensity permutation and zone division are used to construct the descriptor, which has the advantages of inherent robustness and invariance to monotonic brightness changes. The proposed algorithm performs well in experiments on a benchmark dataset for descriptor evaluation.
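An illustrative sketch of the core idea of an intensity-permutation descriptor with zone division. The sampling pattern, patch size, and zone count are assumptions, and the oFAST/steered sampling of the paper is not reproduced.

```python
# Hedged sketch: rank sampled intensities, then coarsen ranks into zone numbers.
import numpy as np

def permutation_zone_descriptor(patch, n_samples=32, n_zones=8, seed=0):
    rng = np.random.default_rng(seed)            # fixed sampling pattern per seed
    ys = rng.integers(0, patch.shape[0], n_samples)
    xs = rng.integers(0, patch.shape[1], n_samples)
    values = patch[ys, xs].astype(float)
    order = np.argsort(values, kind="stable")
    ranks = np.empty(n_samples, dtype=int)
    ranks[order] = np.arange(n_samples)          # intensity permutation 0..n-1
    return (ranks // (n_samples // n_zones)).astype(np.uint8)  # zone per sample

# the descriptor is unchanged under a monotonic brightness change
rng = np.random.default_rng(2)
patch = rng.integers(0, 200, (31, 31)).astype(float)
brighter = 1.2 * patch + 10.0                    # strictly increasing transform
assert np.array_equal(permutation_zone_descriptor(patch),
                      permutation_zone_descriptor(brighter))
```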
- Published
- 2015
- Full Text
- View/download PDF
8. Evaluating Throughput and Delay in 3G and 4G Mobile Architectures
- Author
-
Ye Zhu, Ishwar K. Sethi, Ivan Ivanov, Huirong Fu, and Eralda Caushaj
- Subjects
Computer science, Network delay, Cellular network, Maximum throughput scheduling, Throughput, Processing delay, Computer network - Abstract
Mobile carriers now provide 3G and 4G services to their customers, and the evolution of cellular networks from 3G to 4G has improved several performance metrics of data communication. The focus of this paper is to evaluate two of the most important performance metrics: throughput and delay. We consider the cellular network as an integrated infrastructure that includes mobile and fixed nodes, and based on this model we calculate and analyze the throughput and delay. Our results illustrate that throughput increases while delay decreases in the 4G data network compared to the previous 3G architecture. In addition, we evaluate how the delay affects the security of the network.
- Published
- 2014
- Full Text
- View/download PDF
9. Theoretical Analysis and Experimental Study
- Author
-
Ishwar K. Sethi, Eralda Caushaj, Haissam Badih, Huirong Fu, Ye Zhu, Supeng Leng, and Dion Watson
- Subjects
Information privacy, Network packet, Computer science, Internet privacy, General Medicine, Computer security, Cellular communication, Countermeasure, Wireless, The Internet, Android (operating system) - Abstract
The importance of wireless cellular communication in our daily lives has grown considerably in the last decade. Smartphones are widely used nowadays: besides voice communication, users routinely employ them to access the internet, conduct monetary transactions, send text messages, and query useful information about the location of specific places of interest. This day-to-day use of smartphones raises many unresolved security and privacy issues. In this paper the authors identify relevant security attacks in wireless cellular networks. They conduct experiments on four different platforms: iPhone, Android, Windows, and BlackBerry. Packets were captured through Wireshark for approximately 24 minutes, yielding a large amount of information about the security and privacy issues affecting users. Many popular apps installed and used by end users have serious problems in terms of privacy and the information they expose. Which platform is better among the four, and what exactly does each expose of the user's information? What are the threats and countermeasures that users should be aware of? The aim of this paper is to answer these questions based on data captured in real-life scenarios.
- Published
- 2013
- Full Text
- View/download PDF
10. Morphological Filters: An Inspiration from Natural Geometrical Erosion and Dilation
- Author
-
Mohammad Reza Asharif, Ishwar K. Sethi, Ninoslav Marina, Neeraj Gupta, and Mahdi Khosravy
- Subjects
Computer science, Structuring element, Binary number, Grayscale, Dilation (morphology), Computer vision, Artificial intelligence, Morphological operators, Algorithm, Intuition - Abstract
Morphological filters (MFs) are composed of two basic operators, dilation and erosion, inspired by natural geometrical dilation and erosion. MFs locally modify geometrical features of a signal or image using a probe, called a structuring element, that resembles a segment of a function or image. This chapter analytically explains MFs and the features they draw from natural geometry. The basic theory of MFs in the binary domain is illustrated, and it is then shown how the theory extends to the domain of multivalued functions. Each morphological operator is clarified by intuitive geometrical interpretations, and creative nature-inspired analogies are deployed to give readers a clear intuition of each process. In this regard, binary and grayscale morphological operators and their properties are well defined and depicted through many examples.
- Published
- 2017
- Full Text
- View/download PDF
11. Brain Action Inspired Morphological Image Enhancement
- Author
-
Mohammad Reza Asharif, Mahdi Khosravy, Ishwar K. Sethi, Ninoslav Marina, and Neeraj Gupta
- Subjects
Visual perception, Computer science, Image quality, Optical illusion, Illusion, Image enhancement, Sight, Human visual system model, Computer vision, Artificial intelligence - Abstract
The image perceived by the human brain through the eyes is not exactly what the eyes receive. In order to obtain an enhanced view of the received image and more clarity in detail, the brain naturally modifies the color tones in adjacent neighborhoods of colors. A famous example of this natural modification by human sight is the Chevreul–Mach bands. In this phenomenon, every bar is filled with one solid level of gray, but the human brain perceives narrow bands of increased contrast at the edges, which does not reflect the physical reality of solid gray bars. This illusory action of the human visual system, which highlights edges, is the inspiration for visual illusory image enhancement (VIIE). An algorithm for the newly introduced VIIE using morphological filters is presented as morphological VIIE (MVIIE). It deploys morphological filters to boost the same effect on image edges, aiding human sight by increasing the contrast of the scene. The MVIIE algorithm is explained in this chapter. Significant image enhancement by MVIIE is confirmed through experiments in terms of image quality metrics and visual perception.
- Published
- 2017
- Full Text
- View/download PDF
12. Multiview Centroid Based Fuzzy Classification of Large Data
- Author
-
Ishwar K. Sethi, Nilesh V. Patel, and Gaurav Tyagi
- Subjects
Similarity (geometry), Fuzzy classification, Computer science, Feature extraction, Centroid, Object (computer science), Data set, Hyperplane, Data mining - Abstract
Modern data is increasingly complex. High dimensionality, heterogeneity, and independent multiple representations are basic properties of today's data. With increasing sources of data collection, a single object can have multiple representations, which we call views. In this paper we propose a multiview classification technique that uses fuzzy mapping to obtain maximum similarity between an object and the nearest multiview centroids. Our fuzzy-mapping-based approach obtains a unit L1 hyperplane as a common space for each view. To establish the efficacy of the proposed method, we present experimental comparisons with a number of baselines on two synthetic and two real-world data sets.
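A hedged sketch of a multiview nearest-centroid classifier with a fuzzy, L1-normalized membership mapping per view; the paper's exact fuzzy mapping and aggregation are not reproduced, only the idea of mapping every view onto a common membership space before combining.

```python
# Hedged sketch: multiview nearest-centroid classification via per-view
# fuzzy memberships (rows sum to 1). Illustrative, not the paper's method.
import numpy as np

def view_memberships(X, centroids, eps=1e-9):
    """Per-sample memberships over class centroids (rows sum to 1)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    inv = 1.0 / (d + eps)
    return inv / inv.sum(axis=1, keepdims=True)

def fit_centroids(X, y, n_classes):
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_multiview(views_train, y_train, views_test, n_classes):
    scores = np.zeros((views_test[0].shape[0], n_classes))
    for Xtr, Xte in zip(views_train, views_test):
        centroids = fit_centroids(Xtr, y_train, n_classes)
        scores += view_memberships(Xte, centroids)   # combine views additively
    return scores.argmax(axis=1)

# usage with two synthetic views of the same objects
rng = np.random.default_rng(3)
y = rng.integers(0, 3, 300)
view1 = y[:, None] + rng.normal(0, 0.4, (300, 5))
view2 = 2 * y[:, None] + rng.normal(0, 0.6, (300, 8))
pred = predict_multiview([view1[:200], view2[:200]], y[:200],
                         [view1[200:], view2[200:]], n_classes=3)
print("accuracy:", (pred == y[200:]).mean())
```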
- Published
- 2016
- Full Text
- View/download PDF
13. From centralized to distributed decision tree induction using CHAID and Fisher's linear discriminant function algorithms
- Author
-
Jie Ouyang, Nilesh Patel, and Ishwar K. Sethi
- Subjects
Incremental decision tree, Binary tree, Computer science, Decision tree learning, ID3 algorithm, Decision tree, Machine learning, Linear discriminant analysis, CHAID, Human-Computer Interaction, Tree (data structure), Artificial Intelligence, Computer Vision and Pattern Recognition, Data mining, Software - Abstract
Decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume that the training data are present at one central location. Given the growth of distributed databases at geographically dispersed locations, methods for decision tree induction in distributed settings are gaining importance. This paper extends two well-known decision tree methods for centralized data to distributed data settings. The first method is an extension of the CHAID algorithm and generates single-feature, multi-way split decision trees. The second method is based on Fisher's linear discriminant (FLD) function and generates multifeature binary trees. Both methods aim to generate compact trees and are able to handle multiple classes. The suggested extensions for the distributed environment are compared with their centralized counterparts and with each other. Theoretical analysis and experimental tests demonstrate the effectiveness of the extensions. In addition, the side-by-side comparison highlights the advantages and deficiencies of these methods under different settings of the distributed environments.
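The Fisher-discriminant building block that the multifeature splits rely on can be sketched directly; the distributed aggregation of statistics is omitted, and only the two-class case is shown for brevity.

```python
# Hedged sketch: a two-class Fisher linear discriminant used as a
# multifeature split. The distributed/multiclass extensions are not shown.
import numpy as np

def fisher_split(X0, X1):
    """Return (w, threshold) so that  w.x > threshold  separates the classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + \
         np.cov(X1, rowvar=False) * (len(X1) - 1)       # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, threshold

rng = np.random.default_rng(4)
X0 = rng.normal(0, 1, (200, 4))
X1 = rng.normal(1.5, 1, (200, 4))
w, t = fisher_split(X0, X1)
acc = ((np.vstack([X0, X1]) @ w > t) == np.r_[np.zeros(200), np.ones(200)]).mean()
print("training accuracy:", acc)
```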
- Published
- 2011
- Full Text
- View/download PDF
14. Multi-Label Classification Method for Multimedia Tagging
- Author
-
Nilesh Patel, Ishwar K. Sethi, and Aiyesha Ma
- Subjects
Multi-label classification, Information retrieval, Multimedia, Computer science, Decision tree, New media, Scalability, Data mining, Tag cloud, Classifier (UML) - Abstract
Community tagging offers valuable information for media search and retrieval, but new media items are at a disadvantage. Automated tagging can populate new media items with a few tags, thus enabling their inclusion in search results. In this paper, a multi-label decision tree is proposed and applied to the problem of automated tagging of media data. In addition to binary labels, the proposed Iterative Split Multi-label Decision Tree (IS-MLT) is easily extended to the problem of weighted labels (such as those depicted by tag clouds). Several datasets of differing media types show the effectiveness of the proposed method relative to other multi-label and single-label classifier methods and demonstrate its scalability relative to single-label approaches.
Keywords: Automated Multimedia Tagging; Community Tagging; Multi-label Classification; Multi-label Decision Tree; Pattern Classification
- Published
- 2010
- Full Text
- View/download PDF
15. Induction of multiclass multifeature split decision trees from distributed data
- Author
-
Jie Ouyang, Ishwar K. Sethi, and Nilesh Patel
- Subjects
Incremental decision tree, Training set, Distributed database, Computer science, Decision tree learning, Decision tree, Pattern recognition, Linear discriminant analysis, Tree (data structure), Discriminant function analysis, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Data mining, Software - Abstract
Decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume that the training data are present at one central location. Given the growth of distributed databases at geographically dispersed locations, methods for decision tree induction in distributed settings are gaining importance. This paper describes one such method, which generates compact trees using multifeature splits in place of the single-feature split trees generated by most existing methods for distributed data. Our method is based on Fisher's linear discriminant function and is capable of dealing with multiple classes in the data. For homogeneously distributed data, the decision trees produced by our method are identical to the decision trees generated using Fisher's linear discriminant function with centrally stored data. For heterogeneously distributed data, a certain approximation is involved, with a small change in performance with respect to the tree generated with centrally stored data. Experimental results for several well-known datasets are presented and compared with decision trees generated using Fisher's linear discriminant function with centrally stored data.
- Published
- 2009
- Full Text
- View/download PDF
16. Blind components processing a novel approach to array signal processing: A research orientation
- Author
-
Faramarz Asharif, Mohammad Reza Asharif, Ishwar K. Sethi, Mahdi Khosravy, Neeraj Gupta, and Ninoslav Marina
- Subjects
Harmonic analysis, Engineering, Signal processing, Electronic engineering, Blind signal separation, Active noise control, Blind equalization - Abstract
Blind Components Processing (BCP), a novel approach to processing the components of data (signals, images, etc.), is introduced, and some applications to information and communications technology (ICT) are proposed. The newly introduced BCP can be oriented toward deployment in a wide range of applications. BCP is founded on blind source separation (BSS), a methodology that searches for the unknown sources of mixtures without prior knowledge of either the sources or the mixing process. Most observed natural, biomedical, and industrial signals are mixtures of different components, while the components and the way they are mixed are unknown. If we decompose a signal into its components by BSS, we can process each component separately without interfering with the other components. Such internal access to signal components allows the extraction of far more information, as well as more efficient processing than conventional signal processing, in which the entire structure of the signal undergoes processing and modification. Besides introducing BCP, this manuscript proposes practical applications of BCP with technical merit for harmonic noise cancellation and for a stock pricing model.
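A minimal blind source separation sketch using scikit-learn's FastICA, illustrating the decompose, process-one-component, reconstruct pattern that BCP builds on. The signals, mixing matrix, and per-component operation are assumptions.

```python
# Hedged sketch: BSS with FastICA, then processing one recovered component
# and reconstructing the mixtures. Signals and mixing are illustrative.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)                 # tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # square wave ("harmonic noise")
s3 = rng.laplace(size=t.size)                  # noise-like source
S = np.c_[s1, s2, s3]
A = rng.normal(size=(3, 3))                    # unknown mixing matrix
X = S @ A.T                                    # observed mixtures

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X)              # estimated sources

# process one component in isolation (here: suppress the square-wave-like one)
kurt = ((components - components.mean(0)) ** 4).mean(0) / components.var(0) ** 2
components[:, np.argmin(kurt)] = 0.0           # crude pick of the sub-Gaussian source
X_clean = ica.inverse_transform(components)    # back to the observation space
```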
- Published
- 2015
- Full Text
- View/download PDF
17. Soft-Hard Clustering for Multiview Data
- Author
-
Gaurav Tyagi, Ishwar K. Sethi, and Nilesh Patel
- Subjects
Clustering high-dimensional data, Fuzzy clustering, Computer science, Correlation clustering, Conceptual clustering, Machine learning, Data stream clustering, Artificial intelligence, Data mining, Cluster analysis - Abstract
With rapid advances in technology and connectivity, the capability to capture data from multiple sources has given rise to multiview learning, wherein each object has multiple representations and a learned model, whether supervised or unsupervised, needs to integrate these different representations. Multiview learning has been shown to yield better predictive and clustering models, and it also provides better insight into the relationships between different views for making better decisions. In this paper, we consider the problem of multiview clustering and present a soft-hard clustering approach. In our approach, all object views are first independently mapped into a unit hypercube via soft clustering. The mapped views are then integrated via a hard clustering approach to yield the final results. Both the soft and hard clustering stages utilize k-means or its variant c-means, which makes our method suitable for large-scale data problems. Furthermore, the view mapping stage can be parallelized, making the method even more attractive for large-scale data applications. The performance of the method is demonstrated using three benchmark data sets, and a comparison with other published results shows that our method mostly yields slightly better performance.
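A hedged sketch of the soft-then-hard pipeline: soft memberships per view, concatenation into a common space, then a final hard k-means. Scikit-learn's KMeans stands in for the paper's c-means variant, and the membership rule is an assumption.

```python
# Hedged sketch: soft-then-hard multiview clustering. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def soft_memberships(X, k, seed=0):
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    d = km.transform(X)                         # distances to the k centers
    inv = 1.0 / (d + 1e-9)
    return inv / inv.sum(axis=1, keepdims=True) # soft memberships in [0, 1]

def soft_hard_cluster(views, k, seed=0):
    mapped = np.hstack([soft_memberships(X, k, seed) for X in views])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(mapped)

# usage with two noisy views of the same three groups
rng = np.random.default_rng(6)
labels = rng.integers(0, 3, 450)
views = [labels[:, None] * 2.0 + rng.normal(0, 0.5, (450, 6)),
         labels[:, None] * -1.5 + rng.normal(0, 0.7, (450, 4))]
assignments = soft_hard_cluster(views, k=3)
```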
- Published
- 2015
- Full Text
- View/download PDF
18. Confidence-based active learning
- Author
-
Ishwar K. Sethi and Mingkun Li
- Subjects
Models, Statistical, Computational complexity theory, Computer science, Applied Mathematics, Information Storage and Retrieval, Machine learning, Pattern Recognition, Automated, Support vector machine, Computational Theory and Mathematics, Artificial Intelligence, Robustness (computer science), Active learning, Confidence Intervals, Computer Simulation, Computer Vision and Pattern Recognition, Data mining, Classifier (UML), Algorithms, Software - Abstract
This paper proposes a new active learning approach, confidence-based active learning, for training a wide range of classifiers. This approach is based on identifying and annotating uncertain samples. The uncertainty value of each sample is measured by its conditional error. The approach takes advantage of current classifiers' probability preserving and ordering properties. It calibrates the output scores of classifiers to conditional error. Thus, it can estimate the uncertainty value for each input sample according to its output score from a classifier and select only samples with uncertainty value above a user-defined threshold. Even though we cannot guarantee the optimality of the proposed approach, we find it to provide good performance. Compared with existing methods, this approach is robust without additional computational effort. A new active learning method for support vector machines (SVMs) is implemented following this approach. A dynamic bin width allocation method is proposed to accurately estimate sample conditional error and this method adapts to the underlying probabilities. The effectiveness of the proposed approach is demonstrated using synthetic and real data sets and its performance is compared with the widely used least certain active learning method.
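A minimal sketch of confidence-based sample selection: estimate each pool sample's conditional error from class probabilities and query the samples above a user-defined threshold. A logistic regression stands in for the paper's calibrated SVM scores and dynamic bin-width estimation.

```python
# Hedged sketch: select uncertain samples whose estimated conditional error
# exceeds a threshold. The paper's SVM score calibration is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X_labeled = rng.normal(0, 1, (100, 2))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)
X_pool = rng.normal(0, 1, (1000, 2))            # unlabeled pool

clf = LogisticRegression().fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_pool)
conditional_error = 1.0 - proba.max(axis=1)     # estimated P(error | x)

threshold = 0.3                                  # user-defined uncertainty threshold
query_idx = np.where(conditional_error > threshold)[0]
print(f"{len(query_idx)} pool samples selected for annotation")
```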
- Published
- 2006
- Full Text
- View/download PDF
19. Mining HIV dynamics using independent component analysis
- Author
-
Frank M. Graziano, George Towfic, Sorin Draghici, Samira Y. Kettoola, and Ishwar K. Sethi
- Subjects
CD4-Positive T-Lymphocytes, Statistics and Probability, Self-organizing map, Computer science, Information Storage and Retrieval, HIV Infections, CD8-Positive T-Lymphocytes, Models, Biological, Biochemistry, Humans, Computer Simulation, Molecular Biology, Principal Component Analysis, Models, Statistical, HIV, Viral Load, Independent component analysis, Computer Science Applications, Data set, Computational Mathematics, Treatment Outcome, Nonlinear Dynamics, Computational Theory and Mathematics, Database Management Systems, Data mining, Algorithms - Abstract
Motivation: We implement a data mining technique based on the method of Independent Component Analysis (ICA) to generate reliable independent data sets for different HIV therapies. We show that this technique takes advantage of the ICA power to eliminate the noise generated by artificial interaction of HIV system dynamics. Moreover, the incorporation of the actual laboratory data sets into the analysis phase offers a powerful advantage when compared with other mathematical procedures that consider the general behavior of HIV dynamics. Results: The ICA algorithm has been used to generate different patterns of the HIV dynamics under different therapy conditions. The Kohonen Map has been used to eliminate redundant noise in each pattern to produce a reliable data set for the simulation phase. We show that under potent antiretroviral drugs, the value of the CD4+ cells in infected persons decreases gradually by about 11% every 100 days and the levels of the CD8+ cells increase gradually by about 2% every 100 days. Availability: Executable code and data libraries are available by contacting the corresponding author. Implementation: Mathematica 4 has been used to simulate the suggested model. A Pentium III or higher platform is recommended. Contact: gtowfic@clarke.edu. * To whom correspondence should be addressed.
- Published
- 2003
- Full Text
- View/download PDF
20. eID: a system for exploration of image databases
- Author
-
Ishwar K. Sethi and Daniela Stan
- Subjects
Information retrieval, Database, Computer science, Feature extraction, k-means clustering, Library and Information Sciences, Database design, Visualization, Automatic image annotation, Image retrieval, Information Systems - Abstract
The goal of this paper is to describe an exploration system for large image databases that helps the user understand the database as a whole, discover hidden relationships, and formulate insights with minimum effort. While the proposed system works with any type of low-level feature representation of images, we describe our system using color information. The system is built in three stages. The first is the feature extraction stage, in which images are represented in a way that allows efficient storage and retrieval results closer to human perception. The second stage builds a hierarchy of clusters in which the cluster prototype, as the electronic identification (eID) of that cluster, is designed to summarize the cluster in a manner suited for quick human comprehension of its components. Formally, an eID is the image most similar to the other images of the corresponding cluster, that is, the image that maximizes the sum of the squares of its similarity values to the other images of that cluster. Besides summarizing the image database at a certain level of detail, an eID image provides access either to another set of eID images on a lower level of the hierarchy or to a group of images perceptually similar to itself. In the third stage, the multidimensional scaling technique provides a tool for visualizing the database at different levels of detail. Moreover, it enables semi-automatic annotation in the sense that the image database is organized so that perceptually similar images are grouped together to form perceptual contexts. As a result, instead of trying to give all possible meanings to an image, the user interprets and annotates an image in the context in which it appears, dramatically reducing the time taken to annotate large collections of images.
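The eID selection rule stated in the abstract, picking the member that maximizes the sum of squared similarities to the others, can be sketched directly; cosine similarity over stand-in color histograms is an assumption.

```python
# Hedged sketch: choosing a cluster's eID prototype. Features are stand-ins.
import numpy as np

def cluster_eid(features):
    """Return the index of the cluster member acting as its eID."""
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = F @ F.T                                 # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)                    # exclude self-similarity
    return int(np.argmax((sim ** 2).sum(axis=1)))

# usage with random stand-ins for per-image color histograms
rng = np.random.default_rng(8)
histograms = rng.random((25, 64))
print("eID image index:", cluster_eid(histograms))
```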
- Published
- 2003
- Full Text
- View/download PDF
21. Computational Vision and Robotics
- Author
-
Ishwar K. Sethi
- Subjects
Computer science, Human–computer interaction, Robotics, Artificial intelligence, Computational vision - Published
- 2015
- Full Text
- View/download PDF
22. Multi-label Classification with a Constrained Minimum Cut Model
- Author
-
Craig T. Hartrick, Ishwar K. Sethi, Guangzhi Qu, and Hui Zhang
- Subjects
Multi-label classification, Minimum cut, Computer science, Graph (abstract data type), Pattern recognition, Artificial intelligence, Classifier (UML) - Abstract
Multi-label classification has recently received more attention in the fields of data mining and machine learning. Though many approaches have been proposed, the critical issue of how to combine single labels to form a multi-label remains challenging. In this work, we propose a novel multi-label classification approach in which each label is represented by two exclusive events: the label is selected or not selected. A weighted graph is then used to represent all the events and their correlations, and multi-label learning is transformed into finding a constrained minimum cut of the weighted graph. In the experiments, we compare the proposed approach with the state-of-the-art multi-label classifier ML-KNN, and the results show that the new approach is efficient in terms of all the popular metrics used to evaluate multi-label classification performance.
- Published
- 2014
- Full Text
- View/download PDF
23. Traffic sign recognition using a novel permutation-based local image feature
- Author
-
Ishwar K. Sethi, Nilesh Patel, and Tian Tian
- Subjects
Pixel, Feature extraction, Pattern recognition, Interval (mathematics), Permutation, Feature (computer vision), Benchmark (computing), Traffic sign recognition, Artificial intelligence, Feature detection (computer vision), Mathematics - Abstract
Traffic sign recognition (TSR) is an essential research issue in the design of driving support systems and smart vehicles. In this paper, we propose a permutation-based image feature to describe traffic signs, which has the inherent advantages of illumination invariance and fast implementation. Our proposed feature, LIPID (local image permutation interval descriptor), employs interval division and zone number assignment on the order permutation of pixel intensities, and takes the zone numbers as the descriptor. A comprehensive performance evaluation on the German Traffic Sign Recognition Benchmark (GTSRB) dataset is carried out, demonstrating the strong performance of the proposed method. Experimental results show that our feature outperforms several state-of-the-art descriptors, indicating promise for TSR applications.
- Published
- 2014
- Full Text
- View/download PDF
24. Classification of general audio data for content-based retrieval
- Author
-
Dongge Li, Nevenka Dimitrova, Thomas Mcgee, and Ishwar K. Sethi
- Subjects
Audio mining, Computer science, Speech recognition, Speech coding, Linear prediction, Pattern recognition, Silence, Noise, Artificial Intelligence, Perception, Signal Processing, Computer Vision and Pattern Recognition, Mel-frequency cepstrum, Environmental noise, Software - Abstract
In this paper, we address the problem of classifying continuous general audio data (GAD) for content-based retrieval and describe a scheme that is able to classify audio segments into seven categories: silence, single-speaker speech, music, environmental noise, multiple speakers' speech, simultaneous speech and music, and speech and noise. We studied a total of 143 classification features for their discrimination capability. Our study shows that cepstral-based features such as the Mel-frequency cepstral coefficients (MFCC) and linear prediction coefficients (LPC) provide better classification accuracy compared to temporal and spectral features. To minimize classification errors near the boundaries between audio segments of different types in general audio data, a segmentation-pooling scheme is also proposed in this work. This scheme yields classification results that are consistent with human perception. Our classification system provides over 90% accuracy at a processing speed dozens of times faster than the playing rate.
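A hedged sketch of the cepstral-feature pipeline the study found most discriminative: MFCC statistics per segment feeding a standard classifier. File names, labels, and the classifier choice are placeholders; the seven-class taxonomy and segmentation-pooling step are not reproduced.

```python
# Hedged sketch: MFCC-based features per audio segment plus a simple classifier.
# Paths and labels below are placeholders, not real data.
import numpy as np
import librosa
from sklearn.svm import SVC

def segment_features(path, sr=16000, n_mfcc=13):
    """MFCC means and standard deviations summarizing one audio segment."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# hypothetical (path, label) pairs, e.g. 0 = speech, 1 = music, 2 = noise, ...
train_items = [("segment_000.wav", 0), ("segment_001.wav", 1)]  # ...many more
X = np.stack([segment_features(path) for path, _ in train_items])
y = np.array([label for _, label in train_items])
classifier = SVC().fit(X, y)          # any standard classifier could be used
```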
- Published
- 2001
- Full Text
- View/download PDF
25. Face detection for image annotation
- Author
-
Ishwar K. Sethi and Gang Wei
- Subjects
Pixel, Computer science, Pattern recognition, Object-class detection, Automatic image annotation, Artificial Intelligence, Feature (computer vision), Face (geometry), Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Face detection, Software - Abstract
The problem of face detection in images – locating image areas corresponding to human faces – has received a considerable amount of attention in recent years due to numerous possible applications. In this paper, we describe a face detection system for an image annotation task that requires detection of faces at multiple scales against a complex background. The proposed system is a top-down system; it uses the presence of skin tone pixels coupled with shape and face-specific features to locate faces in images. The main distinguishing feature of the proposed system is its use of an iterative region partitioning procedure to generate candidate face regions. To date, the performance of the system has been tested with over 200 test images of varying complexity with promising results.
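A hedged sketch of the top-down first stage: masking likely skin-tone pixels and extracting candidate regions with OpenCV. The YCrCb bounds are a common heuristic, not the paper's thresholds, and the shape and face-specific feature verification stage is omitted.

```python
# Hedged sketch: skin-tone candidate regions as a first stage of face detection.
# The Cr/Cb bounds are a common heuristic, not taken from the paper.
import cv2
import numpy as np

def skin_candidate_regions(bgr_image, min_area=400):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # heuristic bounds
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return boxes   # candidate (x, y, w, h) regions for further face verification

# usage (path is a placeholder)
# image = cv2.imread("group_photo.jpg")
# print(skin_candidate_regions(image))
```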
- Published
- 1999
- Full Text
- View/download PDF
26. Image retrieval using hierarchical self-organizing feature maps
- Author
-
Ioana L. Coman and Ishwar K. Sethi
- Subjects
Similarity (geometry), Information retrieval, Computer science, Pattern recognition, Automatic image annotation, Image texture, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Visual Word, Cluster analysis, Image retrieval, Software - Abstract
This paper presents a scheme for image retrieval that lets a user retrieve images either by exploring summary views of the image collection at different levels or by similarity retrieval using query images. The proposed scheme is based on image clustering through a hierarchy of self-organizing feature maps. While the suggested scheme can work with any kind of low-level feature representation of images, our implementation and description of the system are centered on the use of image color information. Experimental results using a database of 2100 images are presented to show the efficacy of the suggested scheme.
- Published
- 1999
- Full Text
- View/download PDF
27. Adaptive motion-vector resampling for compressed video downscaling
- Author
-
Bhaskaran Vasudev, Ishwar K. Sethi, and Bo Shen
- Subjects
Video post-processing, Computer science, Image processing, Transcoding, Motion estimation, Media Technology, Discrete cosine transform, Computer vision, Electrical and Electronic Engineering, Image resolution, Transform coding, Block-matching algorithm, Motion compensation, Digital video, Video server, Signal compression, Motion vector, JPEG, Scalable Video Coding, Video compression picture types, Uncompressed video, Motion JPEG, Video tracking, Bit rate, Video browsing, Multiview Video Coding, Data compression - Abstract
Digital video is becoming widely available in compressed form, such as a motion JPEG or MPEG coded bitstream. In applications such as video browsing or picture-in-picture, or in transcoding for a lower bit rate, there is a need to downscale the video prior to its transmission. In such instances, the conventional approach to generating a downscaled video bitstream at the video server would be to first decompress the video, perform the downscaling operation in the pixel domain, and then recompress it as, say, an MPEG bitstream for efficient delivery. This process is computationally expensive due to the motion-estimation step needed during the recompression phase. We propose an alternative compressed-domain approach that computes motion vectors for the downscaled (N/2 × N/2) video sequence directly from the original motion vectors of the N × N video sequence. We further find that the scheme produces better results by weighting the original motion vectors adaptively. The proposed approach can lead to significant computational savings compared to the conventional spatial (pixel) domain approach, and it is useful for video servers that provide quality of service in real time for heterogeneous clients.
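A minimal sketch of deriving one motion vector per downscaled block from the four co-located original vectors. The activity-based weighting shown is only one plausible adaptive rule under stated assumptions, not necessarily the paper's exact rule.

```python
# Hedged sketch: motion-vector resampling for 2x downscaling. The adaptive
# weighting (by a block-activity stand-in) is an assumption.
import numpy as np

def downscale_motion_vectors(mv, activity=None):
    """mv: (H, W, 2) field of original vectors; returns (H//2, W//2, 2)."""
    H, W, _ = mv.shape
    blocks = mv.reshape(H // 2, 2, W // 2, 2, 2).transpose(0, 2, 1, 3, 4)
    blocks = blocks.reshape(H // 2, W // 2, 4, 2)          # 4 vectors per output block
    if activity is None:
        weights = np.full((H // 2, W // 2, 4), 0.25)       # plain averaging
    else:
        act = activity.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3)
        act = act.reshape(H // 2, W // 2, 4).astype(float) + 1e-9
        weights = act / act.sum(axis=2, keepdims=True)     # adaptive weighting
    # weighted average, then halve because the frame size is halved
    return (blocks * weights[..., None]).sum(axis=2) / 2.0

rng = np.random.default_rng(9)
field = rng.integers(-8, 9, (4, 6, 2)).astype(float)
print(downscale_motion_vectors(field).shape)               # (2, 3, 2)
```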
- Published
- 1999
- Full Text
- View/download PDF
28. DCT convolution and its application in compressed domain
- Author
-
V. Bhaskaran, Bo Shen, and Ishwar K. Sethi
- Subjects
Motion compensation, Computer science, Signal compression, Image processing, JPEG, Convolution, Video editing, Media Technology, Discrete cosine transform, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Transform coding, Data compression - Abstract
Conventional processing of JPEG or MPEG compressed image or video data involves first decompressing the data and applying the desired processing function, and then the processed data are recompressed for the purposes of transmission or storage. We propose an alternate processing pipeline which involves direct manipulation of the JPEG or MPEG compressed domain representation to achieve the desired spatial domain processing. For direct manipulation in the compressed domain, we develop a discrete cosine transform (DCT)-domain convolution theorem which besides exploiting the sparseness of the DCT-domain representation also exploits the orthogonality and symmetry in the DCT-domain representation. These properties lead to efficient compressed domain-based processing methods unlike their spatial domain counterparts, where such properties are not available. This theorem can be used in a variety of image and video editing functions when the image and video data are available only as a JPEG or MPEG bitstream. We illustrate the use of the DCT-domain convolution theorem in a typical video editing application such as video bluescreen editing.
- Published
- 1998
- Full Text
- View/download PDF
29. Structure-driven induction of decision tree classifiers through neural learning
- Author
-
Ishwar K. Sethi and Jae Hung Yoo
- Subjects
Incremental decision tree, Artificial neural network, Computer science, Competitive learning, Decision tree learning, Decision tree, ID3 algorithm, Pattern recognition, Machine learning, Data set, Tree structure, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
The decision tree classifiers represent a nonparametric classification methodology that is equally popular in pattern recognition and machine learning. Such classifiers are also popular in neural networks under the label of neural trees. This paper presents a new approach for designing these classifiers. Instead of following the common top-down approach to generate a decision tree, a structure-driven approach for induction of decision trees, SDIDT, is proposed. In this approach, a tree structure of fixed size with empty internal nodes, i.e. nodes without any splitting function, and labeled terminal nodes is first assumed. Using a collection of training vectors of known classification, a neural learning scheme combining backpropagation and soft competitive learning is then used to simultaneously determine the splits for each decision tree node. The advantage of the SDIDT approach is that it generates compact trees that have multifeature splits at each internal node which are determined on global rather than local basis; consequently it produces decision trees yielding better classification and interpretation of the underlying relationships in the data. Several well-known examples of data sets of different complexities and characteristics are used to demonstrate the strengths of the SDIDT method.
- Published
- 1997
- Full Text
- View/download PDF
30. An Off-Line Cursive Handwritten Word Recognition System and Its Application to Legal Amount Interpretation
- Author
-
Ke Han and Ishwar K. Sethi
- Subjects
Vocabulary, Computer science, Speech recognition, Search engine indexing, Artificial Intelligence, Handwriting recognition, Word recognition, Computer Vision and Pattern Recognition, Artificial intelligence, Cursive, Software - Abstract
Off-line cursive script recognition has received increasing attention during the last three decades, since it is of interest in several areas such as banking and postal services. An off-line cursive handwritten word recognition system is described in this paper and used for legal amount interpretation in personal checks. The proposed recognition system uses a set of geometric and topologic features to characterize each word. By considering the spatial distribution of these features in a word image, the proposed system maps each word into two strings of finite symbols. A local associative indexing scheme is then used on these strings to organize a vocabulary. When presented with an unknown word, the system uses the same indexing scheme to retrieve a set of candidate words likely to match the input word. A verification process is then carried out to find the best match among the candidate set. The performance of the proposed system has been tested with a legal amount image database from real bankchecks. The results obtained indicate that the proposed system is able to recognize legal amounts with great accuracy.
- Published
- 1997
- Full Text
- View/download PDF
31. Video shot detection and characterization for video databases
- Author
-
Ishwar K. Sethi and Nilesh V. Patel
- Subjects
Computer science, Artificial Intelligence, Histogram, Motion estimation, Computer vision, Segmentation, Motion compensation, Database, Video processing, Video tracking, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
The organization of video information for video databases requires segmentation of a video into its constituent shots and their subsequent characterization in terms of content and camera work. In this paper, we look at these two steps using compressed video data directly. For shot detection, we suggest a scheme consisting of comparing intensity, row, and column histograms of successive I frames of MPEG video using the chi-square test. For characterization of segmented shots, we address the problem of classifying shot motion into different categories using a set of features derived from motion vectors of P and B frames of MPEG video. The central component of the proposed shot motion characterization scheme is a decision tree classifier built through a process of supervised learning. Experimental results using a variety of videos are presented to demonstrate the effectiveness of performing shot detection and characterization directly on compressed video.
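A hedged sketch of histogram-based shot detection with a chi-square comparison between successive frames. The bin count and threshold are illustrative, and the paper's use of intensity, row, and column histograms of MPEG I frames is not reproduced.

```python
# Hedged sketch: chi-square comparison of successive frame histograms.
import numpy as np

def chi_square(h1, h2, eps=1e-9):
    return float((((h1 - h2) ** 2) / (h1 + h2 + eps)).sum())

def detect_shot_boundaries(frames, bins=64, threshold=0.5):
    """frames: iterable of 2-D grayscale arrays; returns boundary indices."""
    boundaries, prev = [], None
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                      # normalize per frame
        if prev is not None and chi_square(prev, hist) > threshold:
            boundaries.append(idx)                    # shot change before this frame
        prev = hist
    return boundaries

# usage on synthetic frames with an abrupt content change at frame 5
rng = np.random.default_rng(10)
frames = [rng.integers(0, 80, (120, 160)) for _ in range(5)] + \
         [rng.integers(150, 256, (120, 160)) for _ in range(5)]
print(detect_shot_boundaries(frames))                 # expected: [5]
```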
- Published
- 1997
- Full Text
- View/download PDF
32. LIPID: Local Image Permutation Interval Descriptor
- Author
-
Yun Zhang, Ishwar K. Sethi, Delie Ming, Tian Tian, and Jiayi Ma
- Subjects
Computer science, Local binary patterns, Scale-invariant feature transform, Pattern recognition, Permutation, Image texture, Robustness (computer science), Benchmark (computing), Computer vision, Artificial intelligence, Feature detection (computer vision) - Abstract
Image representation through local descriptors is the basis of numerous computer vision applications. In the past decade, many local image descriptors such as SIFT and SURF have been proposed, yet algorithms requiring low memory and computation complexity are still preferred. Binary descriptors such as BRIEF have been suggested to satisfy this demand, showing a comparable performance but much faster computation speed. In this paper, we propose a novel local image descriptor, LIPID, which employs intensity permutation and interval division to yield an effective performance in terms of speed and recognition. Our method is inspired by LUCID, proposed by Ziegler and Christiansen [8]. An extensive evaluation on the well-known benchmark datasets reveals the robustness and effectiveness of LIPID as well as its capability to handle illumination changes and texture images.
- Published
- 2013
- Full Text
- View/download PDF
33. Preface to Data Mining in Biomedical Informatics and Healthcare
- Author
-
Juan F. Gomez, Rosa L. Figueroa, Christopher Gillies, Hamidreza Chitsaz, Jesse Lingeman, Gautam B. Singh, Adam E. Gaweda, Paul Bradley, Szilárd Vajda, Hamid Soltanian-Zadeh, Kourosh Jafari-Khouzani, Claudia Amato, Mohammad Reza Siadat, Cynthia Brandt, Paulo J. G. Lisboa, José D. Martín-Guerrero, Sameer Antani, Samah Jamal Fodeh, Flavio Mari, Doug Redd, Daniela Raicu, Maryellen L. Giger, Theophilus Ogunyemi, Ali Haddad, Carlo Barbieri, Ishwar K. Sethi, Emilio Soria, and Jacob D. Furst
- Subjects
Engineering ,Health Administration Informatics ,business.industry ,Health care ,Translational research informatics ,Data mining ,business ,computer.software_genre ,Health informatics ,Data science ,computer - Published
- 2013
- Full Text
- View/download PDF
34. Convolution-Based Edge Detection for Image/Video in Block DCT Domain
- Author
-
Bo Shen and Ishwar K. Sethi
- Subjects
Theoretical computer science, Speedup, Computation, Real image, Edge detection, Convolution, Signal Processing, Media Technology, Canny edge detector, Discrete cosine transform, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering, Algorithm, Mathematics, Block (data storage) - Abstract
This paper presents a scheme for performing convolution operation directly on compressed images without decompressing them first. The use of such a scheme is demonstrated and discussed by showing the implementation of the Laplacian-of-Gaussian operator for edge detection. We present a complete evaluation of the different parameters involved in this process and show edge detection results on several real images through our proposed scheme. In each case, it is shown that the proposed scheme of directly performing convolution on the compressed data leads to not only a significant computation speedup but also yields better edges.
- Published
- 1996
- Full Text
- View/download PDF
35. Symbolic mapping of neurons in feedforward networks
- Author
-
Ishwar K. Sethi and Jae H. Yoo
- Subjects
Artificial neural network, Computer science, Representation (systemics), Feed forward, Connection (mathematics), Artificial Intelligence, Signal Processing, Feedforward neural network, Computer Vision and Pattern Recognition, Neuron, Artificial intelligence, Software - Abstract
It is common to view multiple-layer feedforward neural networks as black boxes since the knowledge embedded in the connection weights of these networks is generally considered incomprehensible. This paper proposes a solution to this deficiency of neural networks by suggesting a mapping procedure for converting the weights of a neuron into a symbolic representation and demonstrating its use towards understanding the internal representation and the input-output mapping learned by a feedforward neural network. Several examples are presented to illustrate the proposed symbolic mapping of neurons.
- Published
- 1996
- Full Text
- View/download PDF
36. Handwritten signature retrieval and identification
- Author
-
Ke Han and Ishwar K. Sethi
- Subjects
Search engine indexing, Image processing, Pattern recognition, Signature (logic), Automatic image annotation, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Data mining, Artificial intelligence, Image retrieval, Software, Mathematics - Abstract
Many applications in image science require similarity retrieval of an image from a large collection of images. In such cases, image indexing becomes important for efficient organization and retrieval of images. This paper addresses this issue in the context of a database of handwritten signature images and describes a system for similarity retrieval and identification of handwritten signature images. The proposed system uses a set of geometric and topologic features to map a signature image into two strings of finite symbols. A local associative indexing scheme is then used on the strings to organize and search the signature image database. The advantage of the local associative indexing is that it is tolerant of missing features and allows queries even with partial signatures. The performance of the system has been tested with an image database of 120 signatures. The results obtained indicate that the proposed system is able to identify signatures with great accuracy even when a part of a signature is missing.
- Published
- 1996
- Full Text
- View/download PDF
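A toy sketch of local associative indexing in the spirit of record 36, assuming a signature has already been mapped to a symbol string: index overlapping n-grams and retrieve by voting, so a query with a missing portion still finds its match. Feature extraction and the paper's exact scheme are not shown; all names are illustrative.

```python
# n-gram inverted index with vote-based retrieval.
from collections import defaultdict

def build_index(strings, n=3):
    index = defaultdict(set)
    for sid, s in strings.items():
        for i in range(len(s) - n + 1):
            index[s[i:i + n]].add(sid)
    return index

def query(index, partial, n=3):
    votes = defaultdict(int)
    for i in range(len(partial) - n + 1):
        for sid in index.get(partial[i:i + n], ()):
            votes[sid] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])   # best match first

db = {"sig1": "abcabdeffg", "sig2": "ggfedcbaab"}
print(query(build_index(db), "cabdef"))   # a partial string still retrieves sig1
```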
37. Book reviews
- Author
-
Albert Maydeu-Olivares, Ishwar K. Sethi, Phipps Arabie, A. S. Tanguiane, K. C. Klauer, Pierre Hansen, Klaas Sijtsma, and M. P. Windham
- Subjects
Mathematics (miscellaneous) ,Psychology (miscellaneous) ,Library and Information Sciences ,Statistics, Probability and Uncertainty - Published
- 1995
- Full Text
- View/download PDF
38. Neural implementation of tree classifiers
- Author
-
Ishwar K. Sethi
- Subjects
Incremental decision tree ,Artificial neural network ,Computer science ,Entropy (statistical thermodynamics) ,business.industry ,Decision tree learning ,General Engineering ,ID3 algorithm ,Decision tree ,Pattern recognition ,Machine learning ,computer.software_genre ,Thresholding ,Data set ,Random subspace method ,Entropy (classical thermodynamics) ,Feedforward neural network ,Entropy (information theory) ,Artificial intelligence ,Entropy (energy dispersal) ,business ,Entropy (arrow of time) ,computer ,Entropy (order and disorder) - Abstract
Tree classifiers represent a popular non-parametric classification methodology that has been used successfully in many pattern recognition and learning tasks. However, the hard "is feature value ≥ threshold" tests used in tree classifiers are often sensitive to noise and to minor variations in the data, which has motivated the use of soft thresholding in decision trees. Following the decision-tree-to-feedforward-network mapping of the entropy net, this paper presents three neural implementation schemes for tree classifiers that allow soft thresholding. Results of several experiments on well-known data sets compare the performance of the proposed implementations with that of decision trees using hard thresholding. (A minimal sketch of soft thresholding follows this record.)
- Published
- 1995
- Full Text
- View/download PDF
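A minimal sketch of the soft-thresholding idea behind record 38: a sigmoid replaces the hard "feature value ≥ threshold" test, so the node's response degrades gracefully near the threshold instead of flipping. The gain and threshold values below are illustrative, not taken from the paper.

```python
import numpy as np

def hard_test(x, thresh):
    return (x >= thresh).astype(float)

def soft_test(x, thresh, gain=5.0):
    # Approaches the hard test as the gain grows; small perturbations near the
    # threshold no longer flip the output abruptly.
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

x = np.array([0.48, 0.50, 0.52])
print(hard_test(x, 0.5))        # [0. 1. 1.]
print(soft_test(x, 0.5, 25.0))  # graded values around 0.5
```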
39. Tensor-Based Temporal Behavior Analysis in Pain Medicine
- Author
-
Andy Hall, Guangzhi Qu, Craig T. Hartrick, and Ishwar K. Sethi
- Subjects
Sequence ,business.industry ,Computer science ,media_common.quotation_subject ,Pain medicine ,Medical record ,Data structure ,Data science ,Data set ,Health care ,Quality (business) ,Tensor ,Dimension (data warehouse) ,business ,media_common - Abstract
Electronic medical records provide an enormous amount of data with vast potential. Properly analyzed, medical data can be converted into knowledge that improves treatment, uncovers unexpected associations, and supports the personal experience of doctors and nurses, allowing them to make more informed decisions. Medical data is often generated by monitoring over a period of time, with new data arriving quickly or slowly, consistently or sporadically, yet many data mining methods are not designed to consider the temporal aspect of a data set. Beyond the extra dimension of time, medical processes often involve interaction among many attributes at once, complicating the discovery of relevant patterns and associations, so specialized methods for interpreting medical data can improve the quality of the knowledge extracted from it. Tensors are appropriate data structures for representing such data in a multi-dimensional format, taking the relationships among many dimensions into account at once. We can further segment the data into discrete temporal chunks, creating a sequence of tensors, and by applying dynamic tensor analysis to this tensor sequence we can reveal patterns and associations within the data set and capture their change over time. This information can be developed into medical knowledge that supports future treatment. (An illustrative sketch of the tensor-sequence layout follows this record.)
- Published
- 2012
- Full Text
- View/download PDF
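A toy sketch of the data layout described in record 39: records are segmented into a sequence of (patient × attribute) slices, one per time window, forming a three-way tensor; a per-window SVD stands in, purely for illustration, for the paper's dynamic tensor analysis. All sizes and values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_windows, n_patients, n_attrs = 6, 20, 5
tensor = rng.random((n_windows, n_patients, n_attrs))   # one slice per time window

for t, slice_t in enumerate(tensor):
    # Track how the dominant attribute pattern drifts from window to window.
    _, s, vt = np.linalg.svd(slice_t, full_matrices=False)
    print(f"window {t}: leading attribute pattern {np.round(vt[0], 2)}")
```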
40. Attacks and countermeasures in wireless cellular networks
- Author
-
Ye Zhu, Supeng Leng, Huirong Fu, Haissam Badih, Eralda Caushaj, and Ishwar K. Sethi
- Subjects
business.industry ,Computer science ,Wireless cellular networks ,Wireless WAN ,Computer security ,computer.software_genre ,Voice communication ,Cellular communication ,Wireless ,The Internet ,Architecture ,business ,Mobile device ,computer ,Computer network - Abstract
The importance of wireless cellular communication in our daily lives has grown considerably in the last decade. Besides using cell phones for voice communication, we routinely use them to access the internet, conduct monetary transactions, send text messages, and look up information such as the location of places of interest. This pervasive use of mobile devices raises many unresolved security issues. In this paper we identify possible attacks on the wireless cellular network architecture and propose appropriate countermeasures, showing that several classes of attacks can be prevented so as to preserve the security of wireless cellular communication.
- Published
- 2012
- Full Text
- View/download PDF
41. Using metadata for the intelligent browsing of structured media objects
- Author
-
William I. Grosky, Bogdan Capatina, Farshad Fotouhi, and Ishwar K. Sethi
- Subjects
Data element ,Information retrieval ,Computer science ,Search engine indexing ,Meta Data Services ,Hypermedia ,Marker interface pattern ,Metadata repository ,law.invention ,World Wide Web ,Metadata ,law ,Information system ,Software ,Database catalog ,Information Systems - Abstract
Interacting with a multimedia information system is different from interacting with a standard text-based information system. In this paper, we present the design of a system called Content-Based Hypermedia (CBH), which allows a user to utilize metadata to intelligently browse through a collection of media objects. We describe the approach we use to model data in order to make it browsable, explain our approach to browsing, which we call metadata-mediated browsing, and indicate how metadata is used in the concept of similarity. We then present the architecture of our system and discuss both indexing techniques for similarity browsing using content-based metadata and clustering approaches that generate higher-level metadata to help the user browse more effectively. (An illustrative metadata-similarity sketch follows this record.)
- Published
- 1994
- Full Text
- View/download PDF
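An illustrative sketch loosely related to record 41: ranking media objects against a focus object by Jaccard similarity of their metadata tag sets is one simple way metadata can drive similarity browsing. CBH's actual similarity model, indexing, and clustering are not reproduced; the objects and tags below are made up.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

objects = {
    "img1": {"beach", "sunset", "family"},
    "img2": {"beach", "surf"},
    "vid1": {"birthday", "family", "indoor"},
}

def browse_from(focus):
    # Rank all other objects by how much metadata they share with the focus.
    others = [(o, jaccard(objects[focus], tags))
              for o, tags in objects.items() if o != focus]
    return sorted(others, key=lambda kv: -kv[1])

print(browse_from("img1"))   # img2 and vid1 ranked by shared metadata
```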
42. A general approach for token correspondence
- Author
-
Ishwar K. Sethi, Jae Hung Yoo, and Nilesh V. Patel
- Subjects
Motion analysis ,Image processing ,High dimensional ,Security token ,Algebra ,General purpose ,Artificial Intelligence ,Signal Processing ,Simulated annealing ,Coherence (signal processing) ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Vector space ,Mathematics - Abstract
This paper presents a generalization of the correspondence approach of I. K. Sethi and R. Jain (IEEE Trans. Pattern Anal. Mach. Intell. 9, 56–73, 1987) by extending the path coherence criterion to high-dimensional vector spaces in which tokens are represented as points. Two algorithms for obtaining token correspondence are described, and experimental results for line and region tokens demonstrate the general-purpose nature of the proposed correspondence approach. (A sketch of a path-coherence measure follows this record.)
- Published
- 1994
- Full Text
- View/download PDF
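For record 42, the sketch below evaluates a commonly cited form of the path-coherence deviation for three successive token positions, written for points in a vector space of arbitrary dimension; the weights are illustrative and the paper should be consulted for its exact formulation.

```python
import numpy as np

def path_deviation(p_prev, p_curr, p_next, w1=0.1, w2=0.9):
    d1 = np.asarray(p_curr) - np.asarray(p_prev)
    d2 = np.asarray(p_next) - np.asarray(p_curr)
    n1, n2 = np.linalg.norm(d1), np.linalg.norm(d2)
    if n1 == 0 or n2 == 0:
        return w1 + w2                                      # degenerate case
    direction_term = 1.0 - np.dot(d1, d2) / (n1 * n2)       # penalize sharp turns
    speed_term = 1.0 - 2.0 * np.sqrt(n1 * n2) / (n1 + n2)   # penalize speed changes
    return w1 * direction_term + w2 * speed_term

# Smooth, constant-speed motion in a 4-D feature space gives zero deviation.
print(path_deviation([0, 0, 0, 0], [1, 1, 0, 0], [2, 2, 0, 0]))
```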
43. Making a mobile robot learn to determine its location using neural networks
- Author
-
Ishwar K. Sethi and Gening Yu
- Subjects
Artificial neural network ,Computer science ,business.industry ,Feed forward ,Mobile robot ,Robot learning ,Mobile robot navigation ,Artificial Intelligence ,Component (UML) ,Signal Processing ,Pattern recognition (psychology) ,Key (cryptography) ,Robot ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software - Abstract
The extraction of a meaningful relationship between a robot and its environment from sensed data is a key component of any intelligent robot task. In many instances, the relationship between the sensed data and the desired output from the robot is so complex that it becomes very difficult to specify algorithms for extracting it. In such situations, a non-algorithmic approach based on the procedural learning exhibited by feedforward multiple-layer neural networks appears very attractive. Motivated by this, we present in this paper the details of applying such networks to the task of robot localization. The localization scheme presented here uses an ultrasonic ring sensor to capture the environment and is intended for indoor applications. Experimental results on simulated and real sensor data are presented to show the usefulness of the proposed scheme. (An illustrative localization sketch follows this record.)
- Published
- 1994
- Full Text
- View/download PDF
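A hedged sketch in the spirit of record 43: a small feedforward network learns to map a ring of range readings to a 2-D position. The training data below is synthetic (distances to fictitious beacons plus noise) and the architecture is arbitrary; the paper's sensor model, data, and network configuration are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
positions = rng.uniform(0, 10, size=(500, 2))            # ground-truth (x, y)
beacons = rng.uniform(0, 10, size=(16, 2))               # fictitious landmarks

# Fake 16-beam range signature: noisy distances from the robot to each beacon.
readings = np.linalg.norm(positions[:, None, :] - beacons[None, :, :], axis=2)
readings += 0.05 * rng.standard_normal(readings.shape)

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(readings[:400], positions[:400])
errors = np.linalg.norm(net.predict(readings[400:]) - positions[400:], axis=1)
print("mean localization error:", errors.mean())
```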
44. An ellipse detection method from the polar and pole definition of conics
- Author
-
Ishwar K. Sethi and Jae Hung Yoo
- Subjects
Estimation theory ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Concentric ,Ellipse ,Object detection ,Hough transform ,law.invention ,Artificial Intelligence ,law ,Conic section ,Signal Processing ,Polar ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
A new multistage Hough transform approach to the problem of ellipse detection in images is presented. The proposed approach is based on the polar and pole definition of conics, and its advantage lies in avoiding the propagation of parameter estimation error over the successive stages of the Hough transform. The proposed method is capable of detecting partially visible ellipses, overlapping ellipses, and groups of concentric ellipses; a set of experimental results is presented to demonstrate these capabilities of the suggested approach. (The pole–polar relation underlying this formulation is recalled after this record.)
- Published
- 1993
- Full Text
- View/download PDF
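For record 44, the standard pole–polar relation that the method builds on can be stated compactly; this is textbook projective geometry, not the paper's multistage formulation. For a conic with symmetric matrix C in homogeneous coordinates, the polar line of a pole p is Cp, and when p lies on the conic that polar is the tangent line at p.

```latex
\[
  \text{conic: } \mathbf{x}^{\top} C\,\mathbf{x} = 0, \qquad
  \text{polar of pole } \mathbf{p}: \ \boldsymbol{\ell} = C\,\mathbf{p}, \qquad
  \mathbf{p}^{\top} C\,\mathbf{p} = 0 \;\Rightarrow\; \boldsymbol{\ell}\ \text{is the tangent at } \mathbf{p}.
\]
```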
45. Optimal multiple level decision fusion with distributed sensors
- Author
-
T. Li and Ishwar K. Sethi
- Subjects
Fusion ,Engineering ,Artificial neural network ,business.industry ,Reliability (computer networking) ,Decision theory ,Bayesian probability ,Aerospace Engineering ,Sensor fusion ,computer.software_genre ,Physics::Plasma Physics ,Fusion rules ,Data mining ,Electrical and Electronic Engineering ,business ,computer ,Global optimization - Abstract
A data fusion model consisting of several levels of parallel decision fusions is considered. Global optimization of this model is discussed to obtain the fusion rules for overall optimal performance, and a reliability analysis of the proposed model is carried out to establish its superiority over existing parallel and serial fusion models. (A sketch of a single-stage fusion rule follows this record.)
- Published
- 1993
- Full Text
- View/download PDF
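As background to record 45, the sketch below implements the classical log-likelihood-ratio rule for a single parallel stage of distributed binary detectors (the Chair–Varshney form); the paper's multi-level model and its global optimization are not reproduced, and the error probabilities are illustrative.

```python
import numpy as np

def fuse(decisions, p_fa, p_miss, prior_h1=0.5):
    """decisions: array of +/-1 local decisions; p_fa/p_miss per sensor."""
    decisions = np.asarray(decisions)
    weights = np.where(decisions > 0,
                       np.log((1 - p_miss) / p_fa),     # sensor said "target"
                       np.log(p_miss / (1 - p_fa)))     # sensor said "no target"
    statistic = np.log(prior_h1 / (1 - prior_h1)) + weights.sum()
    return 1 if statistic > 0 else -1

print(fuse([+1, +1, -1],
           p_fa=np.array([0.1, 0.2, 0.1]),
           p_miss=np.array([0.05, 0.1, 0.2])))
```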
46. Neuropathic Pain Scale Based Clustering for Subgroup Analysis in Pain Medicine
- Author
-
Guangzhi Qu, Craig T. Hartrick, Ishwar K. Sethi, and Hui Wu
- Subjects
medicine.medical_specialty ,Physical medicine and rehabilitation ,Scale (ratio) ,Computer science ,Pain medicine ,Neuropathic pain ,Chronic pain ,medicine ,Subgroup analysis ,medicine.disease ,Cluster analysis - Abstract
Neuropathic pain (NeuP) is often more difficult to treat than other types of chronic pain. The ability to predict outcomes in NeuP, such as response to specific therapies and return to work, would have tremendous value to both patients and society. In this work, we propose an adaptive clustering algorithm that uses the Neuropathic Pain Scale (NPS) to develop a set of standard patient templates, which may be useful in studying treatment response in NeuP. The approach is evaluated on baseline data from 108 subjects, and the results demonstrate the efficacy of the proposed method and of the NPS metrics it relies on. (An illustrative clustering sketch follows this record.)
- Published
- 2010
- Full Text
- View/download PDF
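Illustrative only, for record 46: plain k-means on synthetic Neuropathic Pain Scale item scores shows the kind of patient templates that cluster centres can provide. The paper's adaptive clustering algorithm is not specified here, and none of the data is real.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
nps_scores = rng.integers(0, 11, size=(108, 10)).astype(float)   # 10 items, 0-10

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(nps_scores)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster-centre templates:\n", np.round(km.cluster_centers_, 1))
```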
47. Optimizing the Location Prediction of a Moving Patient to Prevent the Accident
- Author
-
Wei-Chun Chang, Ching-Seh Wu, Chih-Chiang Fang, and Ishwar K. Sethi
- Subjects
Focus (computing) ,Engineering ,Accident (fallacy) ,Patient safety ,Operations research ,business.industry ,Survival of the fittest ,Health care ,Evolutionary algorithm ,business ,Object (philosophy) ,Evolutionary computation - Abstract
An evolutionary approach to optimally forecasting the movement of a patient is proposed in this paper. With the great changes in society and environment in modern life, high levels of stress are contributing to a growing number of mental disorders, and the safety of psychiatric patients has become a pressing issue. According to statistics from the patient safety reporting system of the Ministry of the Interior in Taiwan, over 85% of the accidents involving patients occur among patients with schizophrenia during their treatment period, which indicates the potential danger these patients face while under treatment. Preventing such accidents therefore requires an understanding of patient movement so that a warning can be issued before an accident happens; this is a practical problem in health care. Moving-object prediction addresses the problem of locating a moving object correctly, and effective forecasting of the locations of moving objects can lead to valuable designs for real-world applications involving such objects. To address this forecasting problem, we apply an evolutionary algorithm (EA), a powerful technique for stochastic search over candidate solutions. The objective of this paper is thus to devise methods that apply an EA to this practical problem. A case study is a good means to broaden the understanding of how an EA can solve such problems, and a case analysis applying the model to the accident prevention of inpatients is presented to establish its potential for solving the practical problem. (A minimal evolutionary-algorithm sketch follows this record.)
- Published
- 2010
- Full Text
- View/download PDF
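A minimal evolutionary-algorithm sketch for record 47: candidates are weight pairs for predicting the next position from the two most recent displacements, and a simple loop with elitist selection and Gaussian mutation searches for the fittest pair. The representation, fitness function, and operators are generic placeholders, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)
path = np.cumsum(rng.normal(0, 1, size=(200, 2)), axis=0)   # synthetic track

def fitness(w):
    # Predict the next position from the last two displacements; higher is better.
    d = np.diff(path, axis=0)                       # d[i] = path[i+1] - path[i]
    pred = path[2:-1] + w[0] * d[1:-1] + w[1] * d[:-2]
    return -np.mean(np.linalg.norm(path[3:] - pred, axis=1))

pop = rng.normal(0, 1, size=(30, 2))                # 30 candidate weight pairs
for gen in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 2))
    pop = np.vstack([parents, children])             # elitism plus mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best weights:", np.round(best, 2), "fitness:", round(fitness(best), 3))
```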
48. An Architecture for Collaborative Translational Research Utilizing the Honest Broker System
- Author
-
Gautam B. Singh, Nilesh Patel, Christopher Gillies, Ishwar K. Sethi, George S. Wilson, and Jan Akervall
- Subjects
Computer science ,On demand ,Management system ,Honest Broker ,Translational research ,Architecture ,Data science ,Data type ,Biobank ,Simulation ,Data warehouse - Abstract
Translational biomedical research involves a diverse group of researchers, data sources, and data types in voluminous quantities, and a collaborative research application is needed to connect these diverse elements. Two approaches are commonly used to integrate biomedical research data: a central data warehouse approach, in which all the information is imported into a single repository, and a mediator approach, which integrates the data on demand. We propose the Beaumont BioBank Integration Management System (BIMS), a collaborative research architecture based on the mediator approach and utilizing the Honest Broker System. Our system provides a collaborative, flexible, and secure environment capable of accelerating biomedical research. (A toy honest-broker sketch follows this record.)
- Published
- 2010
- Full Text
- View/download PDF
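A toy sketch of the honest-broker idea behind record 48: a trusted intermediary replaces identifying fields with stable study codes before data reaches researchers. The real system's data sources, schema, and security controls are far more involved; all field names here are hypothetical.

```python
import uuid

class HonestBroker:
    def __init__(self):
        self._codes = {}                      # patient id -> opaque study code

    def deidentify(self, record):
        pid = record["patient_id"]
        code = self._codes.setdefault(pid, uuid.uuid4().hex[:8])
        clean = {k: v for k, v in record.items()
                 if k not in ("patient_id", "name")}
        clean["study_code"] = code            # same patient always gets same code
        return clean

broker = HonestBroker()
print(broker.deidentify({"patient_id": "P001", "name": "Doe", "biopsy": "positive"}))
```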
49. Multi-agent Framework Based on Web Service in Medical Data Quality Improvement for e-Healthcare Information Systems
- Author
-
Ching-Seh Wu, Ishwar K. Sethi, Wei-Chun Chang, and Nilesh Patel
- Subjects
business.industry ,Service delivery framework ,Information quality ,Pharmacy ,computer.software_genre ,medicine.disease ,World Wide Web ,Data quality ,Health care ,Information system ,medicine ,Medical emergency ,Web service ,business ,Web intelligence ,computer - Abstract
It has been reported that between 44,000 and 98,000 deaths occur annually as a consequence of medical errors within American hospitals alone [1], and the US National Association of Boards of Pharmacy reports that as many as 7,000 deaths occur in the US each year because of incorrect prescriptions [2]. There is therefore a strong desire to improve access to new healthcare methods, and the challenge of delivering high-quality healthcare has become significant.
- Published
- 2010
- Full Text
- View/download PDF
50. Local association based recognition of two-dimensional objects
- Author
-
Nagarajan Ramesh and Ishwar K. Sethi
- Subjects
Computer science ,business.industry ,Hash function ,Geometric transformation ,Cognitive neuroscience of visual object recognition ,Computer Science Applications ,Set (abstract data type) ,Hardware and Architecture ,Test set ,Pattern recognition (psychology) ,Identity (object-oriented programming) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Associative property - Abstract
A model-based two-dimensional object recognition system capable of performing under occlusion and geometric transformation is described in this paper. The system is based on the concept of associative search using overlapping local features. During the training phase, the local features are hashed to set up associations between features and models; in the recognition phase, the same hashing procedure is used to retrieve associations, which participate in a voting process to determine the identity of the shape. Two associative retrieval techniques, for discrete and continuous features respectively, are described. The performance of the system is studied using a test set of 1,000 shapes that are corrupted versions of 100 models in the shape database, and it is shown that incorporating a verification phase to confirm the retrieved associations can provide zero-error performance with a small reject rate. (A sketch of feature hashing and voting follows this record.)
- Published
- 1992
- Full Text
- View/download PDF
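A sketch of training-time association and recognition-time voting for record 50, with a deliberately crude hash (quantizing a scalar feature); feature extraction, geometric invariance, and the verification phase are omitted, and the models and feature values are invented.

```python
from collections import defaultdict

def train(model_features):                     # {model_id: [feature, ...]}
    table = defaultdict(list)
    for model_id, feats in model_features.items():
        for f in feats:
            table[round(f, 1)].append(model_id)     # crude hash: quantize
    return table

def recognize(table, scene_features):
    votes = defaultdict(int)
    for f in scene_features:
        for model_id in table.get(round(f, 1), ()):
            votes[model_id] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

table = train({"wrench": [0.12, 0.53, 0.91], "gasket": [0.33, 0.48, 0.77]})
print(recognize(table, [0.11, 0.52, 0.90]))     # corrupted wrench view still wins
```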