236 results for "Nutter, Brian"
Search Results
102. A Task-related & Resting State Realistic fMRI Simulator for fMRI Data Validation.
- Author
-
Hill, Jason E., Liu, Xiangyu, Nutter, Brian, and Mitra, Sunanda
- Published
- 2017
103. Low-memory-usage image coding with line-based wavelet transform.
- Author
-
Ye, Linning, Guo, Jiangling, Nutter, Brian, and Mitra, Sunanda
- Subjects
IMAGE processing, CODING theory, WAVELETS (Mathematics), MATHEMATICAL transformations, COMPUTER storage devices, JPEG (Image coding standard), ALGORITHMS, COMPUTATIONAL complexity - Abstract
When compared to the traditional row-column wavelet transform, the line-based wavelet transform can achieve significant memory savings. However, the design of an image codec using the line-based wavelet transform is an intricate task because of the irregular order in which the wavelet coefficients are generated. The independent block coding feature of JPEG2000 makes it work effectively with the line-based wavelet transform. However, with wavelet tree-based image codecs, such as set partitioning in hierarchical trees, the memory usage of the codecs does not realize a significant advantage with the line-based wavelet transform because many wavelet coefficients must be buffered before the coding starts. In this paper, the line-based wavelet transform was utilized to facilitate backward coding of wavelet trees (BCWT). Although the BCWT algorithm is a wavelet tree-based algorithm, its coding order differs from that of the traditional wavelet tree-based algorithms, which allows the proposed line-based image codec to become more memory efficient than other line-based image codecs, including line-based JPEG2000, while still offering comparable rate distortion performance and much lower system complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2011
104. Design of predictive vector quantizer for image coding
- Author
-
Yin, Jie, Karp, Tanja, Nutter, Brian S., and Mitra, Sunanda
- Subjects
Vector quantizer, Image compression, Source coding - Abstract
Due to the prediction loop, instability is a major difficulty in the design of a predictive quantizer. The conventional open-loop method enjoys good stability but suffers from poor performance. In contrast, the closed-loop method may be able to perform better, but it cannot guarantee complete convergence and is therefore unstable. Recently, the asymptotic closed-loop approach was proposed to benefit from the stability of open-loop design while asymptotically optimizing the actual closed-loop system. In this work, all three of the above design algorithms for predictive quantization are discussed and applied to image coding. Based on the analysis of simulation results, modifications to the closed-loop design and the asymptotic closed-loop design are proposed for further improvement in design quality and reliability.
- Published
- 2005
105. Use of a sequential forward floating search algorithm to detect subgroup features in heterogeneous data sets
- Author
-
Anderson, Ronald C., Mitra, Sunanda, Ghosh, Bijoy K., Nutter, Brian, and Baker, Mary C.
- Subjects
SFFS, Sequential search techniques, Feature selection, Heterogeneous data - Abstract
In pattern recognition problems, a feature is defined as any property that may be useful in differentiating among classes of inputs. A classic example entails using color and shape as features to sort apples and bananas into their respective classes. Both the generation and assessment of features have become an essential part of modern machine learning and pattern classification endeavors, particularly with the continued growth of feature spaces in recent years. In support of such problems, many feature selection algorithms with distinct benefits and drawbacks have been devised. Despite these advancements in the field, questions remain about feature subset selection and performance in certain scenarios. Many classes, such as some found in the medical domain, are not reliably described by a single defining feature. In these heterogeneous classes, detecting feature synergies is essential to high performance classification. A confounding issue occurs when sample sizes for heterogeneous classes are small. Small sample size is often unavoidable for various reasons, and such situations may present challenges to commonly employed techniques of feature selection, such as statistical methods. In this dissertation, a sequential forward floating search (SFFS) algorithm is investigated as a feature subset selection technique on heterogeneous data sets of small sample size. The objective is to assess SFFS performance against statistical selection techniques that are common in the literature. To this end, an exemplar data set from the neuroimaging domain is used to represent a typical heterogeneous data set. This data set is analyzed using both traditional approaches and the SFFS algorithm. The findings of this investigation are then used to inform synthetic data simulations to establish the ground-truth performance of SFFS with respect to such heterogeneous data. Results of this investigation show SFFS to be a sensitive technique for feature subset selection in data sets. In many cases, SFFS is shown to outperform the t-test in individual feature detectability. However, potential outliers and increased variability in small sample size data remain a concern for successful assessment of classification results in specific problems.
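For readers unfamiliar with the floating search, a minimal Python sketch of the SFFS loop is shown below, assuming a scikit-learn environment; the k-NN classifier, cross-validated accuracy criterion, and synthetic data are illustrative choices, not the dissertation's experimental setup.

```python
# Minimal sketch of sequential forward floating search (SFFS).
# Assumptions: the selection criterion (cross-validated k-NN accuracy) and the
# synthetic data below are illustrative; they are not the dissertation's setup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def criterion(X, y, subset):
    """Score a candidate feature subset by cross-validated accuracy."""
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, subset], y, cv=5).mean()

def sffs(X, y, k_target):
    selected, best = [], {}
    remaining = set(range(X.shape[1]))
    while len(selected) < k_target:
        # Forward step: add the single feature that helps most.
        f = max(remaining, key=lambda j: criterion(X, y, selected + [j]))
        selected.append(f)
        remaining.remove(f)
        best[len(selected)] = criterion(X, y, selected)
        # Floating (conditional backward) steps: drop a feature only if the
        # reduced subset beats the best subset previously found at that size.
        while len(selected) > 2:
            drop = max(selected, key=lambda j: criterion(X, y, [s for s in selected if s != j]))
            reduced = [s for s in selected if s != drop]
            score = criterion(X, y, reduced)
            if score > best.get(len(reduced), -np.inf):
                selected, best[len(reduced)] = reduced, score
                remaining.add(drop)
            else:
                break
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                      # small-sample, many-feature setting
y = (X[:, 3] + X[:, 7] > 0).astype(int)            # class defined by a feature synergy
print(sffs(X, y, k_target=4))
```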
- Published
- 2018
106. Enhancing Smartgrid Situational Awareness and Resiliency to Data Attack using Dynamic State Estimation
- Author
-
Ghosal, Malini, Rao, Vittal S., Iyer, Ram V., Nutter, Brian, Ren, Bei Bei, and He, Miao
- Subjects
Power systems, Dynamic state estimation, Multi-rate sensor fusion, Unscented Kalman filter, Data integrity attack, Stealth attack, Topology error - Abstract
Present-day large-scale power systems employ static state estimation (voltage magnitude and angle at each bus) based on a quasi-static assumption of the system. However, with ever increasing complexity and renewable integration, such quasi-static assumptions are bound to fail. It is therefore important to obtain real-time information about the dynamic states that dictate transient stability phenomena (e.g., relative rotor angle and speed). With the advent of fast and synchronized measurements from Phasor Measurement Units (PMUs) and using dynamic state estimation tools such as Kalman filtering, it is now possible to estimate these nonlinear dynamic states through power system dynamic state estimation (PSDSE). This dissertation is about improving the situational awareness of large-scale power systems by estimating the dynamic states associated with electro-mechanical transients using modern sensor data (i.e., PMUs) and dynamic state estimation tools such as Kalman filtering, and about protecting the PSDSE against data-integrity attacks. Existing power system dynamic state estimation algorithms are improved by introducing a framework for the treatment of misaligned multi-rate sensor data using a multi-sensor data fusion algorithm. Final estimated states are obtained at the time scale of the finest available group of sensors, and the fused estimator is shown to perform much better than traditional state estimators based on individual groups of sensors. The power system dynamic state estimator is further protected against data-integrity attacks by considering three kinds of anomalies: 1) a fast switching attack is mitigated by taking a weighted average of estimates based on a Bayesian-rule-based 'trustiness' metric, 2) stealthy attacks are detected using subspace analysis, and 3) false alarms in the bad data detector triggered by modeling error are identified using another novel subspace-analysis-based algorithm.
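As a rough illustration of the multi-rate fusion idea, the sketch below runs a plain linear Kalman filter at the rate of the fastest sensor and folds in a slower sensor only when its sample arrives; the linear filter, toy two-state model, and noise levels are stand-ins for the unscented Kalman filter and power-system model used in the dissertation.

```python
# Sketch of multi-rate measurement fusion: predict at the finest sensor rate,
# update with whichever sensors have samples available at that step.
# A linear Kalman filter and a toy two-state model stand in for the UKF and
# the power-system dynamics; all matrices here are illustrative assumptions.
import numpy as np

A = np.array([[1.0, 0.01], [0.0, 1.0]])               # toy state transition (angle, speed)
Q = 1e-5 * np.eye(2)                                   # process noise covariance
H_fast = np.array([[1.0, 0.0]])                        # fast (PMU-like) sensor: first state
H_slow = np.eye(2)                                     # slow (SCADA-like) sensor: both states
R_fast, R_slow = np.array([[1e-3]]), 1e-2 * np.eye(2)

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_x = np.array([0.1, 0.0])
x, P = np.zeros(2), np.eye(2)
for k in range(100):
    true_x = A @ true_x                                # simulate the plant
    x, P = A @ x, A @ P @ A.T + Q                      # predict at the finest rate
    z_fast = H_fast @ true_x + rng.normal(scale=0.03, size=1)
    x, P = kf_update(x, P, z_fast, H_fast, R_fast)     # fast sensor: every step
    if k % 10 == 0:                                    # slow sensor: every 10th step
        z_slow = H_slow @ true_x + rng.normal(scale=0.1, size=2)
        x, P = kf_update(x, P, z_slow, H_slow, R_slow)
print(x, true_x)                                       # fused estimate vs. true state
```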
- Published
- 2017
107. Finite-state transition models for path planning, packet screening and control of networked industrial control systems
- Author
-
Pothuwila, Kalana L., Ren, Beibei, Nutter, Brian, de Farias, Ismael R., Jr., and Berg, Jordan M.
- Subjects
Packet screening, FSTS, Packet sniffing, Networked industrial control systems - Abstract
This research project investigates the use of finite-state transition representations for path planning, cyber security, and closed-loop control of networked industrial control systems. In particular, we consider systems consisting of physical components that are controlled by networked digital devices, such as programmable logic controllers (PLCs). The physical components are typically described in continuous time using differential equations, while the communication and control components operate in discrete time. Such combinations of continuous and discrete elements are called "hybrid systems." Although there is a substantial body of literature for analyzing either continuous or discrete systems, the techniques applicable to one are generally not compatible with the other, making analysis and control of hybrid systems a challenging task. The approach taken here is to approximate the continuous dynamics by finite-state, discrete-time transition models. These can be naturally integrated with the digital control and communication infrastructure, and the resulting unified finite-state transition models can be analyzed using powerful algorithmic tools. In this work, the tasks of interest are 1) planning open-loop set-point changes of the system state, 2) detecting and neutralizing harmful control actions, and 3) implementing closed-loop control for uncertainty and disturbance rejection. We first extend existing methods for finite-state transition modeling to include multiple equilibrium points with both stable and unstable points. This extension is demonstrated on the classic inverted pendulum on a cart (IPC). We show the path planning and closed-loop control capabilities by swinging up the IPC to its unstable, upward-pointing position and stabilizing it there. Finally, we extend the current state of the art in cyber security for networked industrial control systems by using the finite-state transition model to create a state-aware packet monitoring system to interpret control inputs in the context of the current system configuration.
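The sketch below illustrates the basic finite-state transition workflow on a toy problem, assuming a simple double-integrator plant rather than the inverted pendulum: sample the state space on a grid, simulate one control step from each cell center to build the transition graph, then search it for an open-loop set-point change.

```python
# Sketch of the finite-state transition idea for open-loop path planning.  The
# toy double integrator, grid resolution, and control set are illustrative
# assumptions, not the IPC model or tooling from the dissertation.
import itertools
from collections import deque
import numpy as np

def step(x, u, dt=0.2):
    """One discrete-time step of a toy double integrator: x = [position, velocity]."""
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

pos_bins = np.linspace(-1, 1, 21)
vel_bins = np.linspace(-1, 1, 21)
controls = [-1.0, 0.0, 1.0]

def quantize(x):
    """Map a continuous state to the index pair of the nearest grid cell."""
    return (int(np.argmin(abs(pos_bins - x[0]))), int(np.argmin(abs(vel_bins - x[1]))))

# Finite-state transition graph: cell -> list of (next_cell, control).
graph = {}
for i, j in itertools.product(range(len(pos_bins)), range(len(vel_bins))):
    x = np.array([pos_bins[i], vel_bins[j]])
    graph[(i, j)] = [(quantize(step(x, u)), u) for u in controls]

def plan(start, goal):
    """Breadth-first search over the transition graph; returns a control sequence."""
    frontier, seen = deque([(quantize(start), [])]), {quantize(start)}
    while frontier:
        node, controls_so_far = frontier.popleft()
        if node == quantize(goal):
            return controls_so_far
        for nxt, u in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, controls_so_far + [u]))
    return None

print(plan(np.array([-0.5, 0.0]), np.array([0.5, 0.0])))   # open-loop set-point change
```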
- Published
- 2017
108. Detection and segmentation of overlapping red blood cells in microscopic medical images of stained peripheral blood smears
- Author
-
Moallem, Golnaz, Nutter, Brian, Gale, Richard O., and Sari-Sarraf, Hamed
- Subjects
Meanshift clustering algorithm, Cell detection, Overlapping cells, Overlapping red blood cells, Cell segmentation, Medical image processing, Red blood cells, Snakes active contour models, GVF snakes, Thin blood smears - Abstract
Automated image analysis of slides of stained peripheral blood smears assists with early diagnosis of blood disorders. Automated detection and segmentation of the cells is a prerequisite for any subsequent quantitative analysis. Overlapping cell regions introduce considerable challenges to detection and segmentation techniques. Throughout this thesis, we propose a novel algorithm that can successfully detect and segment overlapping cells in microscopic images of stained peripheral blood smears. The algorithm is composed of three steps. In the first step, the input image is binarized to obtain the binary mask of the image. The second step accomplishes a reliable cell center localization approach that employs adaptive mean-shift clustering. The third step fulfills the cell segmentation purpose by obtaining the boundary of each cell utilizing a Gradient Vector Flow (GVF) driven snake algorithm. We compare the experimental results of our methodology with those reported in the most current literature. Additionally, performance of the proposed method is evaluated by comparing both cell detection and cell segmentation results with those produced manually. The method is systematically tested on 100 image patches comprising overlapping cell regions and containing more than 3800 cells. We evaluate the performance of the proposed cell detection step using precision and TP, FP, and FN rates. Moreover, the cell segmentation step is assessed employing sensitivity, specificity, and the Jaccard index.
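A minimal sketch of the second step (cell-center localization via mean-shift) is shown below; the synthetic two-disk mask and the fixed bandwidth are stand-ins for the thesis's thresholded smear image and adaptive mean-shift scheme.

```python
# Sketch of cell-center localization: mean-shift clustering of the foreground
# pixel coordinates of a binary mask, with cluster centers taken as candidate
# cell centers.  The mask and bandwidth below are illustrative assumptions.
import numpy as np
from sklearn.cluster import MeanShift

# Step 1 stand-in: a binary mask with two overlapping disks playing the role
# of touching red blood cells.
yy, xx = np.mgrid[0:200, 0:200]
mask = ((yy - 80) ** 2 + (xx - 90) ** 2 < 30 ** 2) | ((yy - 100) ** 2 + (xx - 135) ** 2 < 30 ** 2)

# Step 2: mean-shift over the foreground pixel coordinates.
coords = np.column_stack(np.nonzero(mask))           # (row, col) of foreground pixels
ms = MeanShift(bandwidth=25, bin_seeding=True)       # fixed bandwidth stands in for adaptive
ms.fit(coords)
print(ms.cluster_centers_)                           # candidate cell centers
```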
- Published
- 2017
109. Content-based image retrieval and speech enhancement system using deep learning structure
- Author
-
Zhao, Xiangyuan, Mitra, Sunanda, Pal, Ranadip, and Nutter, Brian
- Subjects
Deep learning, CBIR, Speech enhancement, Convolutional neural network, Autoencoder - Abstract
Deep learning has recently attracted much attention in image processing and signal processing. It shows great potential for reducing the dimensionality of high-dimensional data while abstracting the key information within. This characteristic makes deep learning very powerful for content-based image retrieval (CBIR) and speech enhancement (SE), because both require high-quality, low-dimensional semantic features. By using the code layer of a deep autoencoder (DAE), which is a fully connected deep learning model, the CBIR and SE systems can achieve decent results. For CBIR, our newly designed multiple-input multiple-task DAE (MIMT-DAE) using wavelet coefficients achieves even better performance than the single-input single-task DAE while using fewer trainable parameters. However, for image processing the fully connected structure shows limitations, so a locally connected structure, the convolutional neural network (CNN), together with a hybrid structure, is proposed in this dissertation. The CNN, acting as a preprocessing stage for the autoencoder, can provide better input features than the raw images because of its locally connected weights. The hybrid structure boosts retrieval performance substantially for both grayscale and color image retrieval. For the SE system, the fully connected DAE trained only on a mask approximation (MA) cost function does not achieve the desired performance. We design a multiple-task structure that adds a signal approximation (SA) cost function during training to reduce false positives. Training on both cost functions simultaneously gives much better performance than training only on the MA function, or even than the latest method that fine-tunes on the SA function. We also explore the long short-term memory structure and propose it as future work.
- Published
- 2017
110. 3D augmented reality for medical application
- Author
-
Dang, Duc Tran Minh, Mitra, Sunanda, and Nutter, Brian
- Subjects
Augmented reality, Android, Computerized axial tomography, 3D human anatomy model - Abstract
Augmented Reality (AR) is a technology that augments the user's reality with computer-generated data such as GPS, audio, video, or 3D objects. In comparison with virtual reality (VR), where the user's view is totally replaced with a computer-generated environment, Augmented Reality focuses on enhancing the user's experience of reality with superimposed information. Applications of Augmented Reality extend beyond military and advertising uses to education and medicine. This thesis focuses on the development of an Augmented Reality application on the Android platform to view computerized axial tomography (CAT) scan images and 3D atlas models superimposed over the physical world, displayed on the device screen in real time. The developed application is based on the AndAR framework, a Java API that provides a foundation for developing Augmented Reality projects on the Android platform.
- Published
- 2016
111. Hand gesture & proximity sensing using wireless power transfer coil: Analysis and application
- Author
-
Juneja, Supreet Kaur, Nutter, Brian, and Li, Changzhi
- Subjects
Wireless technology, Near field communication, sensitivity, iPhone application - Abstract
After Nikola Tesla's initial research in wireless power transmission, few improvements or developments were made in this field until the recent rise in popularity of wireless technology. As a step forward in the application of wireless power transmission and near-field communication, this research investigated non-contact human interaction using this technology. As the hand or finger changes position or gestures, the change in coil inductance is observed, and this change is used to perform the desired output operations. Initially, research was conducted to improve the sensitivity of a differential structure based on oscillators and mixers and to reduce the circuitry required, so that the wireless power transmission coil not only transfers power but also acts as a sensor. Later, an application was developed to bring this to reality and to demonstrate how the WPT coil senses the presence of a hand or finger and interacts with humans. Although the sensitivity of the coil is limited, it was successfully tested as a sensor and was able to detect the presence of a hand and display different pictures depending on where the hand is placed. Also, motivated by the less reliable Apple iPhone home button and the constant presence of the home button icon on the phone's screen, an application was developed to replace the button with the coil sensor and perform the same task as the home button, removing the icon from the screen.
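As a back-of-the-envelope illustration of the sensing principle only: a nearby hand changes the coil inductance, which shifts the resonant frequency of the tank circuit. The nominal L, C, and the size of the inductance change in the sketch below are assumed values, not measurements from the thesis.

```python
# Illustration of the sensing principle: f = 1 / (2*pi*sqrt(L*C)), so a small
# hand-induced inductance change produces a measurable frequency shift.
# L0, C, and the shift sizes are assumed values for illustration.
import math

C = 100e-12                              # tank capacitance: 100 pF (assumed)
L0 = 10e-6                               # nominal coil inductance: 10 uH (assumed)

def resonant_freq(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = resonant_freq(L0, C)
for shift in (0.001, 0.005, 0.01):                      # 0.1% .. 1% inductance change
    f = resonant_freq(L0 * (1.0 - shift), C)
    print(f"dL = -{shift:.1%}: f shifts {f - f0:+.0f} Hz from {f0 / 1e6:.3f} MHz")
```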
- Published
- 2016
112. Complimentary based logic design for arithmetic building blocks
- Author
-
Patil, Nikhil, Nutter, Brian, Bayne, Stephen B., and Nikoubin, Tooraj
- Subjects
BCD, Excess-1, BEC, CSLA - Abstract
High-performance, low-power arithmetic circuits with reduced area are essential for advanced arithmetic processes. This thesis proposes a modification of arithmetic units using a new design technique called Complimentary Based Logic Design (CBLD). CBLD is a design technique that minimizes the number of gates and hence improves the area efficiency of the overall circuit. This new design approach can be used in an arithmetic unit in which multiple blocks perform conditional operations in parallel at one stage or in series with dependency. When multiple modules operate in parallel, only one module functions while the others stay idle. CBLD removes such idle modules by compressing all of them into a single module. The functionality of CBLD is verified by implementing it on an optimized BCD adder module and a BCD adder/subtractor module. Comparison of the CBLD design with its older counterpart shows significant area, power, and delay efficiency. The second part of this thesis proposes a novel design of a carry select adder implementing CBLD. The proposed design of a conditional BEC-CSLA, or modified ripple carry adder, is compared with recently published area-, power-, and time-efficient carry select adders.
- Published
- 2016
113. Design of secure and stable power system integrated with renewable energy sources
- Author
-
Arvani, Ata, Nutter, Brian, He, Miao, Iyer, Ram V., and Rao, Vittal
- Subjects
Wind turbine, Small signal stability, Transient stability, Power oscillation damping controller, Transient energy function, Cyber security, Intrusion detection, Discrete wavelet transform - Abstract
Integration of wind power plants into power systems requires long-distance transmission lines to transmit wind power to the market. Low-frequency interarea oscillation modes are an important issue in long-distance power transmission because they can limit the electric power transfer between areas. Insufficient damping of interarea oscillation modes can lead to rotor angle separation and blackouts. With this motivation, the present study aims to design a systematic methodology to enhance the damping of interarea oscillation modes of a large interconnected power system integrated with a wind power plant. A lead-lag Power Oscillation Damping (POD) controller is evaluated on a two-area test system for a single interarea mode. In addition, an observer-based POD controller is proposed for a large interconnected power system with multiple interarea modes. A transient energy function is proposed to determine the Critical Clearing Time (CCT) of the power system integrated with a wind power plant. This technique enables the CCT to be determined easily in comparison with conventional, time-consuming numerical analysis. Modern power systems integrated with synchrophasors, including Phasor Measurement Units (PMUs) and Phasor Data Concentrators (PDCs), allow system-level information to be collected in real time. In future electrical power systems, the wide use of PMUs is inevitable and thus raises the importance of cyber security. Conventional model-based methodologies are used to identify false data injection. A signal-based technique using the Discrete Wavelet Transform (DWT) is proposed to detect anomalies. Hardware-in-the-loop simulation of the power system is demonstrated at the Smart Grid Energy Center Lab at Texas Tech University. This technique allows malicious activities to be detected in real time at various resolutions.
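To make the DWT-based anomaly detection idea concrete, the sketch below decomposes a synthetic measurement stream and flags samples whose finest-scale detail coefficients exceed a robust threshold; the signal, wavelet choice, and threshold rule are illustrative assumptions, not the dissertation's tuning.

```python
# Sketch of signal-based anomaly detection with the discrete wavelet transform:
# step-like bad data shows up as outliers in the finest detail coefficients.
# The synthetic signal, 'db4' wavelet, and 5-sigma rule are assumptions.
import numpy as np
import pywt

fs = 600
t = np.arange(0, 2, 1 / fs)                            # 2 s of a PMU-like stream
signal = np.sin(2 * np.pi * 1.5 * t)                   # slow electromechanical swing
signal[700:705] += 0.4                                 # injected step-like bad data

coeffs = pywt.wavedec(signal, "db4", level=4)
detail1 = coeffs[-1]                                   # finest-scale detail coefficients
sigma = np.median(np.abs(detail1)) / 0.6745            # robust noise estimate (MAD)
suspects = np.where(np.abs(detail1) > 5 * sigma)[0]
print("suspect samples near:", suspects * 2)           # each level-1 coeff spans ~2 samples
```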
- Published
- 2016
114. Wearable system for obstacle detection and human assistance using ultrasonic sensor array
- Author
-
Patankar, Ashish, Nutter, Brian, Bayne, Stephen B., and Nikoubin, Tooraj
- Subjects
NI myRIO1900, Obstacle detection, Portable systems, Human assistance - Abstract
Generally, the basic requirements for any human-assistance system are that it be simple, user friendly, and portable. Because the person using the system need not be aware of all the operations taking place behind the scenes, ease of access for the user must be considered. Most human-assistance systems are proposed to assist physically disabled people. In this project, we have proposed and tested a navigation-based system capable of providing information about the possible mapping of obstacles in the surroundings. The system consists of three phases. The first phase is associated with generation of the chirp signal and its echoes; an ultrasonic sensor array is used for this phase of the system. The second phase is associated with the signal processing unit. A National Instruments myRIO-1900, which is a reconfigurable input-output device, is used to access all the sensor pins. The device is made portable by using an external power supply and a Wi-Fi module, and it can be accessed and programmed using the LabVIEW platform. The third phase is the output phase, in which the output is represented as 2D maps generated using LabVIEW and 3D maps generated using MATLAB. A design is proposed that provides additional features, making the system more reliable and able to perform multiple tasks at the same time.
- Published
- 2016
115. Standard cell library design and optimization with CDM for deeply scaled FinFET devices
- Author
-
Joshi, Ashish, Nutter, Brian, Bayne, Stephen B., and Nikoubin, Tooraj
- Subjects
Standard Cell Library, FinFET, Low Power Design - Abstract
In this thesis, we propose a new methodology to achieve minimum-glitch standard cell based design. The standard cell library has been designed using logic cells in the CDM logic style. The CDM logic style has been analyzed and compared with the conventional CMOS logic style using FinFET devices in super-threshold operation. Standard cell libraries with FinFET logic gates in the CDM and static CMOS logic styles have been developed in several selected technologies (7nm, 10nm, 14nm, 16nm & 20nm) and used to synthesize the ISCAS'85 benchmark designs to evaluate the performance improvement. The Synopsys SiliconSmart and Library Compiler tools have been used to generate the standard cell libraries from PTM FinFET device models, and Design Compiler has been used to synthesize the designs with the developed standard cell libraries. The simulation results show that the CDM-based standard cell library achieves an average power improvement of 17-21% and an average PDP improvement of 7-26% across all benchmark designs compared with the conventional CMOS standard cell library in the 7nm, 10nm, 14nm, 16nm, and 20nm technology nodes, respectively. Hence, we demonstrate that our low-power standard cell design is comparable to the contemporary custom design optimization techniques used to save power in a design.
- Published
- 2016
116. Analysis, design and optimization of binary to BCD converters
- Author
-
Rangisetti, Sri Rathan, Li, Changzhi, Nutter, Brian, and Nikoubin, Tooraj
- Subjects
Binary To BCD converter, Decimal multiplication, Area-efficient, Power-efficient - Abstract
Decimal data processing applications have grown exponentially in recent years, increasing the need for hardware support for decimal arithmetic. The binary to BCD converter is the basic block of decimal digit multipliers. Decimal multiplication involves performing digit-by-digit multiplication in binary and then converting the resulting partial products to decimal. The decimal partial products are then added as appropriate to form the final decimal product. For this approach to multiplication, the area and power consumption of the binary to BCD conversion circuits are essential performance parameters. As the number of digits multiplied grows, the size of the circuit grows, and the power increases similarly. The conversion delay of the overall multiplication circuit is the same as that of a single binary to BCD conversion circuit, because all partial products are converted in parallel. Thus, optimizing the power and area parameters is the most important goal for such a multiplication circuit. In this project, we analyzed and optimized the existing architectures and proposed a novel implementation of the shift-add algorithm using an add-by-a-constant technique that makes this design area efficient in comparison with existing architectures. The final architecture presented is the implementation of a novel algorithm, which we call the Range Detection Algorithm in this report. This Range Detection circuit is power efficient compared with existing architectures. We further designed and implemented area-efficient and power-efficient two-digit binary to BCD converters that convert all possible states from 0 to 99.
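For reference, the sketch below is a plain software model of the textbook shift-add-3 ("double dabble") binary-to-BCD conversion that such hardware implements; it is a generic illustration of the algorithm, not the thesis's optimized circuit.

```python
# Textbook shift-add-3 binary-to-BCD conversion: shift the binary value in MSB
# first, and add 3 to any BCD digit that is 5 or more before each shift.
def binary_to_bcd(value, digits=2):
    """Convert an unsigned integer (0 .. 10**digits - 1) to packed BCD."""
    bcd = 0
    for i in reversed(range(max(value.bit_length(), 1))):
        # Correction step: add 3 to every BCD digit >= 5 so the upcoming shift
        # carries correctly into the next decimal digit.
        for d in range(digits):
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        # Shift step: bring in the next binary bit, MSB first.
        bcd = (bcd << 1) | ((value >> i) & 1)
    return bcd

for n in (0, 9, 45, 99):
    print(n, hex(binary_to_bcd(n)))   # e.g. 45 -> 0x45, 99 -> 0x99
```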
- Published
- 2015
117. Timing behavior modeling and analysis for Hybrid Logic full adders in bulk CMOS and FinFET
- Author
-
Challa, Sai Krishna Karthikeya, Li, Changzhi, Nutter, Brian, and Nikoubin, Tooraj
- Subjects
Hybrid logic, C-CMOS, Timing behavior, FinFET - Abstract
Full adders form a very important part of arithmetic circuits. Full adders are built in either the C-CMOS logic style or the hybrid CMOS logic style. C-CMOS is a conventional way to build circuits, with a strict methodology of having a pull-up and a pull-down network. There is also the concept of Logical Effort for effective transistor sizing. The hybrid logic style is a mixture of different logic styles such as CPL, PTL, and transmission gates. The concept of Logical Effort provides a specific modeling technique for circuits built in the C-CMOS logic style, which enables us to understand their behavior and estimate the delay in a single test bench or in multistage networks. In the case of hybrid circuits, because of their irregular structure, it becomes impossible to estimate their behavior in the same way. Hence, this paper presents a 'Timing Behavior Modeling' of these hybrid logic full adders that allows us to estimate their performance in multistage networks. The full adders have been implemented in 32nm bulk CMOS and 32nm FinFET PTM models.
- Published
- 2015
118. Anti-cancer drug sensitivity modeling using genomic characterization and functional data
- Author
-
Haider, Saad Md J., Mitra, Sunanda, Nutter, Brian, and Pal, Ranadip
- Subjects
Machine learning, Genomic data, Predictive modeling, Random forest - Abstract
A framework for design of personalized cancer therapy requires the ability to predict the sensitivity of a tumor to anti-cancer drugs. The predictive modeling of tumor sensitivity to anti-cancer drugs has primarily focused on generating functions that map gene expressions and genetic mutation profiles to drug sensitivity. In this dissertation, we have explored a new approach for drug sensitivity prediction and combination therapy design based on integrated functional and genomic characterizations. We have also proposed two novel approaches for drug sensitivity prediction based on genetic characterization only. The proposed modeling approach involving integrated functional and genomic characterizations, when applied to data from the Cancer Cell Line Encyclopedia, shows a significant gain in prediction accuracy as compared to elastic net and random forest techniques based on genomic characterizations. Utilizing a Mouse Embryonal Rhabdomyosarcoma cell culture and a drug screen of 60 targeted drugs, we show that predictive modeling based on functional data alone can also produce high accuracy predictions. The framework also allows us to generate personalized tumor proliferation circuits to gain further insights into the individualized biological pathway. Among myriad approaches proposed for mapping genomic characterization to drug sensitivity, ensemble based learning techniques such as random forests (RF) have turned out to be a top performer. The majority of current approaches infer a predictive model for each drug individually, but correlation between different drug sensitivities suggests that multiple response prediction incorporating the co-variance of the different drug responses can possibly improve the prediction accuracy. We present a prediction and analysis framework based on Multivariate Random Forests (MRF) that incorporates the correlation between different drug sensitivities as one of our novel approaches. The application of the MRF framework on the Genomics of Drug Sensitivity in Cancer project and Cancer Cell Line Encyclopedia datasets shows accuracy improvement over RF and Elastic Net approaches. The results also illustrate that the higher accuracy of MRF as compared to RF is maintained for different sets of model parameters. Furthermore, the proposed framework enables the prediction of multivariate distributions of drug sensitivities that can be utilized to improve prediction accuracy by generating conditional expected values of drug efficacy based on the knowledge of cell viabilities of related drugs. Random Forests generate a deterministic predictive model for each drug based on the genetic characterization of the cell lines and ignore the relationship between different drug sensitivities during model generation. Thus, there is a need for multivariate ensemble learning techniques that can increase predictive accuracy and improve variable importance ranking by incorporating the relationships between different drug responses. As the second novel approach, we propose a novel cost criterion that captures the dissimilarity in the output response structure between the training data and node samples as the difference in the two empirical copulas. We illustrate that copulas are suitable for capturing the multivariate structure of output responses independent of the marginal distributions, and the copula based multivariate random forest framework can provide higher accuracy prediction and improved variable selection.
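A minimal sketch of the multi-response idea is shown below, with scikit-learn's multi-output random forest standing in for the Multivariate Random Forest and copula criterion described above; the synthetic genomic matrix and drug responses are purely illustrative.

```python
# Sketch of joint (multi-response) drug-sensitivity prediction: one forest
# predicts several correlated drug sensitivities from genomic features.
# The multi-output RandomForestRegressor is a stand-in for MRF; data is toy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 200))                      # 300 cell lines, 200 genomic features
latent = X[:, :5] @ rng.normal(size=(5, 1))          # shared pathway signal
Y = np.hstack([latent + 0.3 * rng.normal(size=(300, 1)) for _ in range(3)])  # 3 correlated drugs

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_tr, Y_tr)                               # multi-output fit: one column per drug
print("mean R^2 across drugs:", forest.score(X_te, Y_te))
```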
The proposed framework has been validated on the Genomics of Drug Sensitivity in Cancer database.
- Published
- 2015
119. Calculating the weight of a pig through facial geometry using 2-dimensional image processing
- Author
-
Clark, Alexander W., Mitra, Sunanda, and Nutter, Brian
- Subjects
MATLAB, Swine, Classifier, Clustering, Unsupervised, Transformation, Least squares, Image processing, LBP, Pattern recognition, Facial features, Local binary patterns, OpenCV, Bicubic, Probability, Pig, Bilinear, Agriculture, Estimate, Facial recognition, Weight, Supervised, Regression, Recognition, Cascade classifiers, Cluster, Face, Perspective, Pigs, Estimation, Software - Abstract
This thesis will outline the groundwork of facial detection and recognition software to be used with pigs in order to estimate their weight from a digital image. The facial detection of the pig is achieved through identification of the features using the Viola-Jones method for cascade classifiers and basic likelihood functions. The document will cover both the general theory behind these concepts and the actual implementation as used in the software. Next, the need to transform the newly detected pig face for use in facial recognition is addressed through perspective transformation and bicubic pixel interpolation of the facial geometries. After this, the thesis will discuss the use of local binary patterns to sort the photos of the pigs with an unsupervised clustering technique. Next, the implementation of least squares regression is covered to predict the weight of a pig from the facial features. Finally, the thesis will conclude with a discussion on the multiple error-checking and outlier correction techniques used to make the software more robust.
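The sketch below illustrates the last two stages in miniature: uniform local binary pattern histograms as facial features, then an ordinary least squares fit from features to weight. The random "face" images and weights are placeholders for the aligned pig-face crops and measured weights used in the thesis.

```python
# Sketch of LBP feature extraction followed by least squares weight regression.
# Inputs are synthetic stand-ins; only the processing chain is illustrated.
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
faces = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(6)]   # stand-in crops
weights_kg = 90 + 10 * rng.random(6)                                        # stand-in weights

def lbp_histogram(gray, P=8, R=1.0):
    """Normalized histogram of uniform LBP codes over one image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.vstack([lbp_histogram(f) for f in faces])
X = np.hstack([X, np.ones((len(X), 1))])                 # bias column
beta, *_ = np.linalg.lstsq(X, weights_kg, rcond=None)    # least squares regression fit
print(X @ beta)                                          # fitted weights for the toy data
```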
- Published
- 2015
120. Computational approaches to drug sensitivity prediction and personalized cancer therapy
- Author
-
Berlow, Noah Everett, Sari-Sarraf, Hamed, Nutter, Brian, and Pal, Ranadip
- Subjects
Targeted therapy, Computational modeling, Probabilistic models, Personalized therapy, Personalized medicine, Cancer - Abstract
This dissertation represents the accumulated research in the field of Personalized Cancer Therapy performed as a Graduate Student at Texas Tech University. This research has focused on the design, implementation, and validation of computational, data-driven models of drug sensitivity and their application to personalized cancer therapy. This has resulted in projects of varying depth in the following subjects. Probabilistic Computational Modeling of Tumor Sensitivity to Targeted Therapeutic Compounds: A key open problem in the field of Systems Medicine is the drug sensitivity prediction problem: given a new cancer patient and a list of drugs or drug combinations, what will the patient response (sensitivity) to the different compounds be. Robust solutions to this problem translate to viable approaches to Personalized Therapy, where therapy assignment to cancer patients is based on the underlying patient and cancer biology, instead of a one-size-fits-all approach. The transition to personalized therapy is a primary need for the clinical oncologist community, who are often faced with a dearth of viable treatment options for relapsed, unresponsive, or high risk patients. An integrative model of drug sensitivity, focusing on functional drug screen data and informed by available genetic data, was developed to address this issue; in silico modeling results are presented in this section. Regression modeling of drug sensitivity from the CCLE Database: As another form of in silico validation, the change in drug sensitivity prediction following integration of drug-target inhibition data into an existing dataset was tested. The Cancer Cell Line Encyclopedia (CCLE) database consists of 24 anticancer drugs profiled across 479 human-origin cancer cell lines. These cell lines underwent thorough genetic characterization, with exome sequencing, copy number variation, and gene expression sequencing data available. A few of these anti-cancer agents also have known drug-target inhibition profiles; these commonalities are utilized to show that integration of the datasets improves sensitivity prediction; in silico modeling results are presented in this section. Model-driven combination therapy design: in vitro and in vivo validation: The in silico validation of the tumor sensitivity modeling constituted the first step in development of this computational approach. The next key step was translation from in silico validation to in vitro (in glass) and in vivo (in life) validation. Biological experimentation was required to move the computational approach closer to clinical viability. As part of this research, a year was spent in the laboratory of key collaborator, Dr. Charles Keller. The biological validations performed showed that functional data-based modeling was capable of translating to successful biological outcomes. Design of Dynamic Network Models from Static Models and Expression experiments: The computational approach presented here is based on data that acts as a single timepoint snapshot of a biological system. However, tumor cells are never at rest; they are constantly undergoing a myriad of necessary biological processes. The cellular processes exist on numerous biological pathways and have a vast number of potential ways to interact. Because of this, there are upstream and downstream biological processes; by intervening in upstream processes, the downstream processes may respond without need for intervention.
The static computational model was informed with a small set of gene expression experiments to construct dynamic, upstream-downstream and parallel process models of tumor sensitivity, and in silico validations of the approach were performed. Analysis of Drug Screen Information Gain and Drug Screen Design: The monetary and time cost of producing functional drug screens for high-throughput screening of new patient cancer samples, as well as the limited population of patient cancer cells available for testing, are key practical constraints in preclinical testing scenarios. As such, maximizing the usable information gained from a functional drug screen is extremely important when the data is used to inform clinical decisions for patients. This work establishes a metric for comparing expected information gain from a drug screen of an arbitrary size, and establishes a framework for drug selection for new drug screens.
- Published
- 2015
121. Digital signal processor based voice recognition
- Author
-
Hu, Bo, Li, Changzhi, and Nutter, Brian
- Subjects
C6713 DSK, Voice recognition, Dynamic time warping (DTW) - Abstract
Language is the most convenient and natural way to communicate. Voice recognition is a powerful means for people to communicate with computers and machines through language. This thesis discusses a speaker-dependent isolated-word voice recognition system. Software design is discussed based on the characteristics of voice signals. Primary procedures are pre-emphasis, start and end point detection, feature parameter extraction, and pattern matching. The critical bands feature vector is used in feature parameter extraction, and a dynamic time warping algorithm is used in pattern matching. The system is built on a TMS320C6713 DSP DSK. The TMS320C6713 DSP DSK provides all the necessary hardware for the system, such as the digital signal processor, a codec with ADC and DAC, a CPLD, LEDs, and DIP switches. The codec converts the input voice signal. The C6713 DSP analyzes and recognizes the signal. The LEDs show the result and system working status. The DIP switches control the system.
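A minimal sketch of the pattern-matching step is shown below: the dynamic time warping distance between a test utterance's frame-feature sequence and a stored word template. The toy 1-D sequences are illustrative; on the DSP each frame is a critical-band feature vector.

```python
# Classic dynamic time warping (DTW) distance between two feature sequences;
# the toy sine/cosine sequences stand in for real utterance features.
import numpy as np

def dtw_distance(a, b):
    """DTW distance; a and b are arrays of shape (frames, features)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])      # local frame distance
            D[i, j] = cost + min(D[i - 1, j],                # insertion
                                 D[i, j - 1],                # deletion
                                 D[i - 1, j - 1])            # match
    return D[n, m]

template = np.sin(np.linspace(0, 3, 40))[:, None]            # stored word template
same_word = np.sin(np.linspace(0, 3, 55))[:, None]           # same word, spoken slower
other_word = np.cos(np.linspace(0, 5, 50))[:, None]          # a different word
print(dtw_distance(same_word, template), dtw_distance(other_word, template))
```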
- Published
- 2015
122. Advanced modular testing methods for integrated circuits
- Author
-
Hall, Benjamin, Nutter, Brian, and Gale, Richard O.
- Subjects
Electrical engineering, Semiconductor, Curriculum, Test engineering - Abstract
The Program for Semiconductor and Product Engineering (PSPE) at Texas Tech University strives to prepare students for a job in the semiconductor field. One crucial partner of PSPE is National Instruments (NI), which has donated state-of-the-art Automated Test Equipment (ATE) called the Semiconductor Test System (STS). This thesis discusses the development of the curriculum for ECE 5332: Advanced Modular Testing Methods for Integrated Circuits, which focuses on training students how to properly utilize the STS for their own projects.
- Published
- 2015
123. Modeling and evaluation of high voltage, high power 4h-Silicon carbide insulated-gate bipolar transistors
- Author
-
Hinojosa, Miguel, Gale, Richard O., Nutter, Brian, Giesselmann, Michael G., and Bayne, Stephen B.
- Subjects
4H-SiC, Bipolar, Wide bandgap, Silicon carbide, IGBT - Abstract
In this study the current state of 12 kV, N-channel, 4H-Silicon carbide Insulated-Gate Bipolar transistors (IGBTs) was investigated for use in high voltage and high power applications. These next-generation switches offer many potential benefits to the Army including cost, weight, and space reductions in power electronics as well as an increase in mission capabilities. Silicon carbide provides superior electrical, thermal, and mechanical properties at high voltages, but its fabrication process is relatively new in comparison to silicon. For this work, two methods were used to understand the IGBT’s internal operation, to identify failure mechanisms, and to quantify the device’s performance. The first method involved the creation of a calibrated, physics-based model for device and circuit simulations. The second method involved the development of high voltage infrastructure to enable the collection of laboratory parametric measurements. The impact of this study will be advantageous to the development of robust devices, the creation of new applications, and the improvement of current processes and circuit designs.
- Published
- 2014
124. An ensemble based approach for drug sensitivity prediction
- Author
-
Wan, Qian, Nutter, Brian, Roeger, Lih-Ing, and Pal, Ranadip
- Subjects
Drug sensitivity prediction, Multivariate random forests - Abstract
Drug sensitivity prediction based on genomic characterization remains a significant challenge in the area of systems medicine. Multiple approaches have been proposed for mapping genomic characterization to drug sensitivity, and among them ensemble based learning techniques such as random forests have turned out to be a top performer. In the first part of this thesis, we consider the problem of predicting sensitivity of cancer cell lines to new drugs based on supervised learning on genomic profiles. The genetic and epigenetic characterization of a cell line provides observations on various aspects of regulation including DNA copy number variations, gene expression, DNA methylation and protein abundance. To extract relevant information from the various data types, we applied a Random Forests based approach to generate sensitivity predictions from each type of data and combined the predictions in a linear regression model to generate the final drug sensitivity prediction. Our approach, when applied to the NCI-DREAM drug sensitivity prediction challenge, was a top performer among 47 teams and produced high accuracy predictions. Our results show that the incorporation of multiple genomic characterizations lowered the mean and variance of the estimated bootstrap prediction error. We also applied our approach to the Cancer Cell Line Encyclopedia database, and it produced high accuracy drug sensitivity prediction with the ability to extract the top targets of an anti-cancer drug. The results illustrate the effectiveness of our approach in predicting drug sensitivity from heterogeneous genomic datasets. For the purpose of further exploring the predictability of anti-cancer drug sensitivities, we observe that the majority of current approaches infer a predictive model for each drug individually, but correlation between different drug sensitivities suggests that multiple response prediction incorporating the co-variance of the different drug responses can possibly improve the prediction accuracy. In the second part, we present a prediction and analysis framework based on Multivariate Random Forests that incorporates the correlation between different drug sensitivities. The results of application of our framework to the Genomics of Drug Sensitivity in Cancer project dataset and Cancer Cell Line Encyclopedia data show marked improvement over regular Random Forests and Elastic Net. The presented framework was also utilized to generate multivariate probability distributions of the predicted output responses. Experimental results show that conditional expectation based on the multivariate probability distribution and knowledge of the response of a correlated drug can be utilized to considerably improve prediction accuracy.
- Published
- 2014
125. Retinal image segmentation and 3D visualization
- Author
-
Gatti, Vijay, Nutter, Brian, and Mitra, Sunanda
- Subjects
3 Dimensional (3D) visualization, Optical flow, Demons registration, Glaucoma, Cup to disc area ratio (CAR), Optic nerve head, Cup to disc diameter ratio (CDR), Diffeomorphism - Abstract
One of the novel methods prevailing in the clinical world for early detection of glaucoma progression is determining the deformation of the optic nerve head based on fundus images. In order to overcome the subjective hand-sketched markings recorded in a clinic, research into various methods for automatic detection of the optic disc and cup boundaries in glaucomatous fundus images began to evolve. One of the earlier research efforts was based on creating a 3D model of the optic nerve head for glaucoma assessment. A recent approach based on the demons non-rigid registration method estimates the cupping of the optic disc by finding the cup boundary instead of the entire cup region, using an optical flow technique. This is achieved by quantitative estimation of depth discontinuities along the boundary region by computing the relative motion field between sequential images. Sequentially acquired monocular fundus images can be used in the estimation of depth discontinuity instead of the stereo image pairs required for 3D visualization. A comparison of the cup boundary parameters along with cup to disc diameter ratio (CDR) and cup to disc area ratio (CAR) values estimated by the above two approaches is presented and validated with manual marking of the cup boundary by ophthalmologists.
- Published
- 2014
126. EEG artifact removal and detection via clustering
- Author
-
Hames, Elizabeth C., Karp, Tanja, Gale, Richard O., Nutter, Brian, O'Boyle, Michael W., and Baker, Mary C.
- Subjects
Artifact removal, Independent component analysis (ICA), Isodata, Electroencephalography (EEG), Clustering - Abstract
An automatic method for detecting and cleaning EEG artifactual ICA components is presented in this dissertation. Unsupervised learning is utilized for the detection of artifactual components. The artifact removal method is implemented in a six step process called ABEAR. The six steps of ABEAR are bad epoch removal, ICA, generation of component features, component clustering, cluster labeling, and component cleaning. Each step of ABEAR is evaluated using a recorded dataset with manually labeled components. A simulated dataset is also created to test the benefits of cleaning components compared to removing components. The simulated dataset reveals that cleaning components presents benefits when potential cerebral signal is included in artifactual components. ABEAR successfully detects and removes artifactual contributions to EEG signal caused by eye movements, electrocardiogram signals, electromyogram signals, movement, and bad channels.
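The sketch below illustrates the ICA stage of this kind of pipeline in miniature: unmix multichannel EEG into components, suppress a component flagged as artifactual, and reconstruct the channels. The synthetic 3-channel mixture and the crude "largest spike" flag are illustrative; ABEAR flags components by clustering and, where possible, cleans them rather than removing them outright.

```python
# Sketch of ICA-based EEG artifact suppression on synthetic data.  The flagging
# rule and mixing model are assumptions; only the unmix/zero/reconstruct flow
# is illustrated.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
brain = np.sin(2 * np.pi * 10 * t)                           # 10 Hz alpha-like activity
blink = (np.abs(np.sin(2 * np.pi * 0.3 * t)) > 0.99) * 5.0   # sparse blink-like spikes
mixing = np.array([[1.0, 0.8], [0.9, 0.2], [0.7, 1.5]])      # 3 channels, 2 sources
eeg = np.column_stack([brain, blink]) @ mixing.T + 0.05 * rng.normal(size=(2000, 3))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg)                          # unmix into components
artifact = np.argmax(np.max(np.abs(components), axis=0))     # crude spike-based flag
components[:, artifact] = 0.0                                # remove the flagged component
cleaned = ica.inverse_transform(components)                  # back to channel space
print(cleaned.shape)
```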
- Published
- 2014
127. Fast and efficient lossless image compression based on CUDA Parallel Wavelet Tree encoding
- Author
-
Ao, Jingqi, Pal, Ranadip, Mitra, Sunanda, and Nutter, Brian
- Subjects
Wavelet tree, Compute unified device architecture (CUDA), JPEG-XR algorithm, Lossless image compression - Abstract
Lossless compression is still in high demand in medical image applications despite improvements in computing capability and decreases in storage cost in recent years. With the development of General Purpose Graphic Processing Unit (GPGPU) computing techniques, sequential lossless image compression algorithms can be modified to achieve more efficiency and speed. Backward Coding of Wavelet Trees (BCWT) is an efficient and fast algorithm utilizing Maximum Quantization of Descendants (MQD), and it is quite suitable for lossless parallel compression because of its intrinsic parallelism and simplicity. However, the original implementation of BCWT is a CPU-based sequential codec, and that implementation has multiple drawbacks that hinder the parallel extension of BCWT. Parallel Coding of Wavelet Trees (PCWT) modifies BCWT from its theoretical workflow down to its implementation details. PCWT introduces multiple new parallel stages, including a parallel wavelet transform stage, a parallel MQD calculation stage, a parallel Qmax search stage, a parallel element encoding stage, and a parallel group encoding stage, and changes the encoding sequence from backward to forward. All of these stages are designed to accelerate the compression process. The PCWT implementation is designed with consideration of Compute Unified Device Architecture (CUDA) hardware constraints and implementation scalability. With a newly designed workflow and highly optimized parallel stages, PCWT performs faster than the lossless JPEG-XR algorithm, the current standard, with comparable compression ratios. Multiple possible improvements in the speed and flexibility of PCWT are also proposed as future work.
- Published
- 2014
128. A line-based lossless backward coding of wavelet trees (BCWT) and BCWT improvements for application
- Author
-
Li, Bian, Mitra, Sunanda, Pal, Ranadip, and Nutter, Brian
- Subjects
Zero tree detection, Backward coding of wavelet trees (BCWT), Adaptive arithmetic coding, Integer to integer wavelet transform, Lossless image compression, Line-based wavelet transform - Abstract
Image compression has developed over many years in order to approach the lower limit of compression ratio bounded by the entropy at the lowest possible system cost. Compression techniques such as JPEG, JPEG-LS, and JPEG2000 were accepted as international compression standards for continuous-tone images because of their excellent performance. In 2009, JPEG-XR was announced as a new image standard for lossy and lossless image compression. However, because of compatibility issues, such as its incompatibility with previous standards and non-Microsoft products, JPEG-XR is not broadly employed. The previous standards are still employed for both lossy and lossless image compression. Wavelet-based codecs such as JPEG2000 provide abundant functionality and excellent compression efficiency compared to other codecs, but with more complexity. A wavelet-based algorithm, BCWT, has been developed to offer very low complexity while still providing the excellent compression efficiency and functionality found in JPEG2000. It provides excellent performance, but several limitations hinder its practical application. In order to address the limitations of the BCWT application, in this dissertation a 'set to zeros' method and a 'zero tree detection' algorithm are proposed and incorporated into the BCWT algorithm, which greatly enhance the compression ratio while preserving the algorithm's advantages and without significantly increasing its complexity. An efficient line-based lossless BCWT with very low computational cost is proposed to meet the lossless requirements of specific applications. For further improvement of the compression ratio, statistical models are investigated so that the adaptive arithmetic coding technique can be applied effectively to the output bit-stream. Tests and analysis results show that the improved BCWT algorithm consumes less memory and fewer computational resources and obtains a higher compression ratio without a significant increase in system complexity as compared to the original BCWT. The improved BCWT algorithm and the proposed lossless version have been successfully applied to industrial embedded applications.
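For context on the "integer to integer wavelet transform" listed in the subjects, the sketch below shows one level of the textbook reversible (5/3) lifting transform applied to a single image line, with simple symmetric boundary extension; it is a generic illustration, not the codec's own implementation.

```python
# One level of the reversible integer-to-integer (5/3) lifting wavelet transform
# on a 1-D signal (textbook form with symmetric boundary extension).
import numpy as np

def lifting_53_forward(line):
    """Split a 1-D integer signal into low-pass (s) and high-pass (d) bands."""
    x = np.asarray(line, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left_even + right_even) / 2).
    right_even = np.append(even[1:], even[-1])                       # boundary extension
    d = odd - ((even[: len(odd)] + right_even[: len(odd)]) >> 1)
    # Update: approx = even + floor((left_detail + right_detail + 2) / 4).
    left_d = np.insert(d, 0, d[0])[: len(even)]
    right_d = np.append(d, d[-1])[: len(even)]
    s = even + ((left_d + right_d + 2) >> 2)
    return s, d

s, d = lifting_53_forward([10, 12, 11, 13, 15, 14, 16, 18])
print(s, d)   # integer approximation and detail bands (losslessly invertible)
```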
- Published
- 2014
129. A performance comparison between oversampled filter banks and the 3GPP long term evolution
- Author
-
Riley, Matthew E, Nutter, Brian, and Karp, Tanja
- Subjects
Symbol error rate (SER), Long term evolution (LTE), Least squares (LS), Orthogonal frequency-division multiplexing (OFDM), Channel estimation, Oversampled filter banks, Normalized least mean squared (NLMS) - Abstract
Recent attention is being paid in the field of wireless communications to the Long Term Evolution (LTE) standard developed by the Third Generation Partnership Project (3GPP). The LTE downlink uses an efficient transmission scheme called orthogonal frequency-division multiplexing (OFDM). Despite many advantages to OFDM, the system suffers from poor stopband attenuation, making it susceptible to inter-carrier interference (ICI). This thesis presents a multicarrier system based on oversampled filter banks (OFB) that is comparable to LTE in terms of occupied bandwidth and symbol transmission rate while achieving superior stopband attenuation and subcarrier separation. In addition, a new treatment of the LTE pilot symbol configuration is proposed that simplifies the interpolation process. Using the new pilot symbol treatment, the channel equalization performance of the two systems will be measured using the 1-tap equalizer through symbol error rate (SER) curves. The channel estimation process uses two different estimation and interpolation techniques. An initial channel estimate is obtained using the least squares (LS) estimator or the normalized least mean squared (NLMS) algorithm, and the full estimate is obtained using linear or cubic spline interpolation between pilot symbols. The goal is to provide a performance evaluation of oversampled filter banks in comparison to the LTE standard.
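A compact sketch of the pilot-based estimation chain compared in the thesis appears below: a least-squares estimate at pilot subcarriers, cubic spline interpolation across the remaining subcarriers, and a 1-tap equalizer. The subcarrier count, pilot spacing, channel taps, and QPSK symbols are illustrative assumptions, not the LTE downlink grid itself.

```python
# LS channel estimation at pilots + cubic spline interpolation + 1-tap equalizer.
# All OFDM/channel parameters below are illustrative assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
n_sc, pilot_step = 72, 6
pilot_idx = np.arange(0, n_sc, pilot_step)
pilots = (1 + 1j) / np.sqrt(2) * np.ones(len(pilot_idx))        # known pilot symbols

h_time = np.array([1.0, 0.5 + 0.3j, 0.2j])                      # toy 3-tap multipath channel
H_true = np.fft.fft(h_time, n_sc)
noise = 0.02 * (rng.normal(size=len(pilot_idx)) + 1j * rng.normal(size=len(pilot_idx)))
rx_pilots = H_true[pilot_idx] * pilots + noise

H_ls = rx_pilots / pilots                                       # LS estimate at the pilots
k = np.arange(n_sc)
H_hat = CubicSpline(pilot_idx, H_ls.real)(k) + 1j * CubicSpline(pilot_idx, H_ls.imag)(k)

data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sc) / np.sqrt(2)
equalized = (H_true * data) / H_hat                             # 1-tap zero-forcing equalizer
print(np.mean(np.abs(equalized - data) ** 2))                   # residual error after equalization
```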
- Published
- 2013
130. Effects of simultaneous delivery of silver and zinc oxide on the efficacy of healing acute wounds
- Author
-
Halldorson, Michael, Hamood, Abdul N., Rivero, Iris V., and Nutter, Brian
- Subjects
Zinc oxide (ZnO), Wound healing, Silver (Ag), Microspheres, Polylactic acid (PLA) - Abstract
This thesis aims to establish the efficiency of the simultaneous delivery of Silver (Ag), a proven antibacterial, and Zinc Oxide (ZnO), a compound shown to assist wound healing, in treating wounds, and to show how the extended-release aspect of microspheres improves upon currently available healing agents. A review of existing literature provided a foundation for the process of planning the healing agent design. Original research developed through this study is presented in the form of a journal article in Chapter II. This research looks at a new method of simultaneous delivery of Ag and ZnO. The research was planned to show how various ratios of these elements could avoid toxicity, maintain bacteriostatic properties, and decrease the amount of time it would take for a wound to heal. By using PLA as the encapsulating polymer and petrolatum gel as a base carrier, we were able to present a healing gel that provided results showing significant inhibition of bacteria, a significant increase in wound closure, and a treatment that can last for an extended period of time. The discussion of future work provides a brief discussion of how alternative base carriers can be used to deliver the microspheres. The results presented in this work provide the foundation for a potential new form of healing gel that can be used on all forms of acute wounds.
- Published
- 2013
131. Application of information theoretic unsupervised learning to medical image analysis
- Author
-
Hill, Jason E, Nutter, Brian, and Mitra, Sunanda
- Subjects
Medical images, Spectral clustering, Unsupervised learning - Abstract
Automated segmentation of medical images is a challenging problem. The number of segments in a medical image may be unknown a priori, due to the presence or absence of pathological anomalies. Some unsupervised learning techniques that take advantage of information theory concepts may provide a solid approach to the solution of this problem. To this end, there has been the recent development of the Improved “Jump” Method (IJM), a technique that efficiently finds a suitable number of clusters representing different tissue characteristics in a medical image. The IJM works by optimizing an objective function, the margin, that quantifies the quality of particular cluster configurations. Recent developments involving interesting relationships between Spectral Clustering (SC) and kernel Principal Component Analysis (kPCA) are used by the implementation of the IJM to cover the non-linear domain. In this novel SC approach, the data is mapped to a new space where the points belonging to the same cluster are collinear if the parameters of a Radial Basis Function (RBF) kernel are adequately selected. After projecting these points onto the unit sphere, IJM measures the quality of different cluster configurations, yielding an algorithm that simultaneously selects the number of clusters and the RBF kernel parameter. Validation of this method is sought via segmentation of MR brain images in a combination of all major modalities. Such labeled MRI datasets serve as benchmarks for any segmentation algorithm. The effectiveness of the nonlinear IJM is demonstrated in the segmentation of uterine cervix color images for early identification of cervical neoplasia, as an aid to cervical cancer diagnosis. Limitations of the current implementation of IJM are encountered when attempting to segment MR brain images with multiple sclerosis (MS) lesions. These limitations and a strategy to overcome them are discussed. Finally, an outlook on applying this method to the segmentation of cells in Pap smear test micrographs is laid out.
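The sketch below shows only the generic spectral-clustering machinery the IJM builds on: an RBF affinity between feature vectors followed by k-way spectral clustering. scikit-learn's SpectralClustering is a stand-in here; the IJM itself additionally selects the number of clusters and the RBF parameter by optimizing its margin criterion.

```python
# Generic RBF-affinity spectral clustering on toy 1-D "tissue intensity"
# features; the data, gamma, and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
features = np.concatenate([rng.normal(0.2, 0.03, 300),
                           rng.normal(0.5, 0.03, 300),
                           rng.normal(0.8, 0.03, 300)])[:, None]

sc = SpectralClustering(n_clusters=3, affinity="rbf", gamma=50.0, random_state=0)
labels = sc.fit_predict(features)
print(np.bincount(labels))        # sample counts assigned to each cluster
```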
- Published
- 2013
132. OFDM physical layer architecture and real-time multi-path fading channel emulation for the 3GPP long term evolution downlink
- Author
-
Briggs, Elliot S, Mitra, Sunanda, Nutter, Brian, and Karp, Tanja
- Subjects
Signal processing ,Resampling ,Channel emulation ,Orthogonal frequency-division multiplexing (OFDM) ,Channel estimation ,Long-term evolution (LTE) ,Synchronization ,Regression ,Machine learning ,Equalization ,Adaptive filters ,Multi-rate ,Cellular ,Farrow ,Field-programmable gate array (FPGA) - Abstract
This dissertation is focused on OFDM receiver algorithms, particularly involving receiver synchronization and channel equalization. These two topics are critical components in an LTE downlink receiver. The various aspects of receiver synchronization are presented, and their impact on reception quality is quantitatively defined. Building on this information, a receiver architecture is constructed that is capable of simultaneously correcting symbol timing and sampling frequency offset using a feedback-controlled arbitrary-ratio resampler. The topic of channel estimation is presented by first investigating MMSE algorithms, leading to the more practical family of algorithms that use stochastic optimization techniques. A new family of algorithms based on locally weighted linear regression is explored. The regression algorithm uses an optimum parameterized kernel, found using offline training. Throughout the dissertation, algorithms are tested using realistic models that emulate typical time-varying multi-path fading channel scenarios defined by the LTE standard for conformance testing. To perform extended simulations in real time, a channel emulator architecture is developed, implemented, and tested in FPGA hardware. The developed architecture allows online programming of the desired spatial and temporal correlation properties of the channel and has been designed to be scalable to the desired spatial or temporal dimensions. The primary goal of the dissertation is to offer high performance while maintaining a low-complexity, cost-effective hardware implementation. Although implementation details target an FPGA-based design, the concepts can be extrapolated to ASIC or even software-based targets.
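As a rough illustration of channel estimation by locally weighted linear regression (not the dissertation's trained-kernel method), the sketch below interpolates least-squares pilot estimates across all subcarriers with a Gaussian kernel; the bandwidth, variable names, and the plain Gaussian weighting are assumptions.

```python
import numpy as np

def lwlr_channel_estimate(pilot_idx, h_pilot, n_sc, bandwidth=4.0):
    """Interpolate a channel estimate over n_sc subcarriers from pilot LS estimates
    using locally weighted linear regression with a Gaussian kernel."""
    pilot_idx = np.asarray(pilot_idx, dtype=float)
    h_hat = np.zeros(n_sc, dtype=complex)
    X = np.column_stack([np.ones_like(pilot_idx), pilot_idx])      # local linear model [1, f]
    for k in range(n_sc):
        w = np.exp(-0.5 * ((pilot_idx - k) / bandwidth) ** 2)      # kernel weights around subcarrier k
        W = np.diag(w)
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ h_pilot)    # weighted least squares
        h_hat[k] = theta[0] + theta[1] * k                         # evaluate local line at k
    return h_hat
```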
- Published
- 2012
133. VPX based data acquisition and processing system
- Author
-
Tile, Milind, Nutter, Brian, and Giesselmann, Michael G.
- Subjects
Data acquisition ,VPX system ,OpenVPX system - Abstract
The VPX based data acquisition and processing system is a 3U OpenVPX based embedded computer that has been specifically designed for instrumentation and signal processing applications. It has features that make it ideal for use in high-speed data acquisition, data streaming, and real-time data processing applications. The development of this project involved step-by-step modular debugging. The debugging process helped in identifying and isolating the problems in the VPX system. The thesis explains the stepwise procedure of the design, development, and debugging of the VPX based data acquisition and processing system, a project assigned to the author during his internship at Innovative Integration.
- Published
- 2012
134. Evolution of current mode control approaches for implementation in rapid capacitor charger technology
- Author
-
Vollmer, Travis T., Bayne, Stephen B., Nutter, Brian, and Giesselmann, Michael G.
- Subjects
Power electronics ,Capacitor charger ,Compact pulsed power ,Peak current mode control - Abstract
With the interest in RF (radio frequency) and HPM (high power microwave) directed energy applications growing, rapid capacitor charger technology has advanced to meet the input power management parameters of these systems. Current mode control is seen as a desirable control method for the power inverter to quickly charge the capacitor bank. An analog current mode control platform has been demonstrated at the P3E (pulsed power and power electronics) Laboratory. Engineering design advancements have resulted in optimization for power density. Further developments in current mode control have yielded: 1) a current mode control approach that allows multiple inverters to be stacked for high power (100 kW capability) applications, and 2) a digital peak current mode control approach that allows for adaptive slope compensation with a reduction of analog peripheral circuitry. The method to handle slope compensation for the peak current mode control has also progressed along with the hardware developments. To artificially adjust the current upslope into the CS pin of the analog IC, the initial use of a BJT emitter follower matured into an op-amp circuit for a more elegant solution. The digital peak current mode control was then implemented with a dsPIC controller and demonstrated specifically with a pulse forming network charging application. The digital control method continuously monitors the peak output current and adjusts the current limit in relation to the PWM duty cycle. The continued development of these control methods has led to a digital control platform that shows improvements over the analog method while still providing peak current mode control with stable operation at duty cycles greater than 50%.
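The adaptive behavior described for the digital controller, trimming the peak current limit as the PWM duty cycle rises, can be sketched as below; the constants and the linear trim law are purely hypothetical and only stand in for the dsPIC firmware, which is not reproduced here.

```python
def compensated_current_limit(i_peak_cmd, duty, slope_gain=0.35):
    """Reduce the commanded peak current limit in proportion to duty cycle beyond 50%,
    emulating slope compensation so peak current mode control stays stable at high duty.
    slope_gain is a hypothetical tuning constant, not a value from the thesis."""
    trim = slope_gain * max(duty - 0.5, 0.0)          # compensate only above 50% duty
    return max(i_peak_cmd * (1.0 - trim), 0.0)

# Example: at 70% duty the 50 A command is trimmed by slope_gain * 0.2
limit = compensated_current_limit(i_peak_cmd=50.0, duty=0.7)
```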
- Published
- 2012
135. Reverse-engineering biological pathways
- Author
-
Jaafari, Nima, Nutter, Brian, and Pal, Ranadip
- Subjects
Gene regulatory network inference ,Reverse engineering of biological pathways - Abstract
Cancer can alter the way in which fundamental cellular functions transpire in the human body. Well-studied signaling pathway models that normally provide an adequate structure for cell functions can become unreliable as a result of mutations in cell DNA that bring about unknown changes of inter-gene regulation in the model. Thus, inference of the distorted pathway is critical to treating a patient effectively, and the majority of current procedures experiment by applying all combinations, or random combinations, of drugs to discover the unique changes and ways to alter the diseased pathway. Without a guided, systematic approach, this type of procedure can become very costly. In this thesis, we discuss optimization methods for reverse-engineering the steady-state inter-gene relationships of unique signaling pathways. We first show how the problem can be modeled and structured for simulation; then we discuss how a priori biological knowledge and experimental gene expression measurements can aid in selecting a reduced number of required tests to discover the altered inter-gene dynamics. Finally, the algorithm is tested on pathways known a priori, and its performance is evaluated.
- Published
- 2012
136. Dynamic causal modeling of brain networks in spatial reasoning tasks
- Author
-
Kapse, Kushal, Nutter, Brian, and Baker, Mary C.
- Subjects
Statistical parametric mapping (SPM8) ,Dynamic casual modeling (DCM) ,Math-gifted ,Effective connectivity - Abstract
DCM is a connectivity analysis technique applied to study neural networks in the human brain. This thesis demonstrates a basic understanding of the mathematics of DCM and its applications, and presents conclusions drawn from an fMRI dataset using DCM.
- Published
- 2012
137. Unsupervised learning methods: An efficient clustering framework with integrated model selection
- Author
-
Corona, Enrique, Mitra, Sunanda, Pal, Ranadip, López-Benitez, Noé, and Nutter, Brian
- Subjects
Information theory ,Spectral clustering ,Kernel methods ,Clustering validation ,Least squares support vector machines ,Unsupervised learning ,Clustering - Abstract
Classification is one of the most important practices in data analysis. In the context of machine learning, this practice can be viewed as the problem of identifying representative data patterns in such a manner that coherent groups are formed. If the data structure is readily available (e.g. supervised learning), it is usually used to establish classification rules for discrimination. However, when the data is unlabeled, its underlying structure must be unveiled first. Consequently, unsupervised classification poses more challenges. Among them, the fundamental question of an appropriate number of groups or clusters in the data must be addressed. In this context, the "jump" method, an efficient but limited linear approach that finds plausible answers to the number of clusters in a dataset, is improved via the optimization of an appropriate objective function that quantifies the quality of particular cluster configurations. Recent developments showing interesting associations between spectral clustering (SC) and kernel principal component analysis (KPCA) are used to extend the improved method to the non-linear domain. This is achieved by mapping the input data to a new space where the original clusters appear as linear structures. The characteristics of this mapping depend to a large extent on the parameters of the kernel function selected. By projecting these linear structures to the unit sphere, the proposed method is able to measure the quality of the resulting cluster configurations. These quality scores aid in the simultaneous decision of the kernel parameters (i.e. model selection) and the number of clusters present in the dataset. Results of the enhanced jump method are compared to other relative validation criteria such as minimum description length (MDL), Akaike's information criterion (AIC) and consistent Akaike's information criterion (CAIC). The extension of the method is tested with other cluster validity indices, in similar settings, such as the adjusted Rand index (ARI) and the balanced line fit (BLF). Finally, image segmentation examples are shown as a real world application of the technique.
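For context, the classic "jump" method that the improved version builds on can be sketched as follows; this is the textbook formulation (k-means distortion transformed by a negative power, largest jump wins), not the enhanced method with the margin objective, and the transformation power and loop bounds are conventional assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def jump_method(X, k_max=10):
    """Classic 'jump' estimate of the number of clusters:
    transform the per-dimension k-means distortion by a negative power
    and pick the k with the largest jump."""
    p = X.shape[1] / 2.0                                   # standard transformation power
    d = np.empty(k_max)
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10).fit(X)
        d[k - 1] = km.inertia_ / (X.shape[0] * X.shape[1]) # average per-dimension distortion
    y = np.concatenate(([0.0], d ** (-p)))                 # convention: transformed value at k=0 is 0
    jumps = np.diff(y)                                     # jump J_k = Y_k - Y_{k-1}
    return int(np.argmax(jumps)) + 1                       # k with the largest jump
```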
- Published
- 2012
138. Feature generation of EEG data using wavelet analysis
- Author
-
Chesnutt, Catherine F, O'Boyle, Michael W., Nutter, Brian, and Baker, Mary C.
- Subjects
Wavelet analysis and its applications ,Electroglottography ,Autism spectrum disorders - Abstract
Wavelet analysis is a modern method of time-frequency analysis that can be used to analyze EEG signals. There are several popular methods of generating wavelet-based features for the purposes of classification and brain modeling. These methods generate one feature per wavelet decomposition level, effectively averaging out the temporal information contained in the wavelet transform. This thesis proposes a method of generating features based on segments of the continuous wavelet transform and provides a Matlab software tool capable of generating features of EEG data using this and a number of other methods. The methods are then tested in an example study on attention networks in individuals with autism spectrum disorder (ASD). There is evidence of a selective attention abnormality in autism that is identified by the attention network task (ANT). The primary area of activation in the brain related to selective attention is the prefrontal cortex and anterior cingulate. The ANT task was given to a group of five participants diagnosed with ASD and a control group of five neuro-typical participants. The EEGs were recorded using a 64-channel EGI system and preprocessed using EEGLab. The Matlab software tool proposed herein was used to generate features of the data using coherence, conventional average power, wavelet power, and time-segmented wavelet power. The results are examined by comparing the number of features that pass a t-test for each method. The time-averaged wavelet power method produced more significant features than conventional average power, and the time-segmented wavelet power method produced more features than the time-averaged wavelet power method. As hypothesized, the prefrontal cortex and anterior cingulate were the most significant areas of activation for the wavelet-based methods. The average values of the power features were larger in the autistic group, while the average values of coherence were larger in the control group. The occipital lobe was also an area of significant difference between the autistic and control groups but not within the groups, supporting evidence of hypersensitivity to visual stimuli in autistic individuals. While the time-averaged wavelet method produced a small number of significant features, the time-segmented wavelet method produced a much larger number of significant features that create a model of the unfolding nature of the processes of the brain.
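A minimal sketch of the time-segmented wavelet power idea, assuming a Python/PyWavelets environment rather than the Matlab tool described in the thesis; the wavelet choice, scales, and segment count are illustrative assumptions.

```python
import numpy as np
import pywt

def time_segmented_wavelet_power(signal, fs, scales, n_segments=8, wavelet='morl'):
    """Compute a CWT and average |coefficients|^2 within each time segment,
    yielding one power feature per (scale, segment) instead of one per scale."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    power = np.abs(coeffs) ** 2                              # shape: (n_scales, n_samples)
    segments = np.array_split(np.arange(signal.size), n_segments)
    feats = np.stack([power[:, idx].mean(axis=1) for idx in segments], axis=1)
    return feats                                             # shape: (n_scales, n_segments)
```

Averaging over the full time axis instead of per segment recovers the conventional one-feature-per-level behavior that the thesis compares against.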
- Published
- 2012
139. Light transport simulation in reflective displays
- Author
-
Feng, Zhanpeng, Nutter, Brian, Mitra, Sunanda, Karp, Tanja, Gale, Richard O., and Westfall, Peter
- Subjects
Monte Carlo method ,Reflectors, Lighting ,Reflection (Optics) ,Simulation - Abstract
In the last several years, reflective displays have gained substantial popularity in mobile devices such as e-readers, because of their significant advantages in power consumption and sunlight readability. A typical reflective display consists of a stack of optical layers. Accurate and efficient simulation of light transport in these layers provides valuable information for optical design and analysis. Physically based ray tracing algorithms are able to produce simulation results that mirror the real world display performance in a wide range of illumination conditions, viewing angles, and distances. These simulation outcomes help system architects make far reaching decisions as early as possible in the design process. In this dissertation, a reflective display is modeled as a layered material, with a FOS (front of screen) layer on the top, a diffusive layer (diffuser) underneath the FOS, a transparent layer (glass) in the middle, and a wavelength-dependent reflective layer (pixel array) at the bottom. A set of simple and efficient spectral functions is developed to model the reflectance and absorption of FOS. A novel hybrid approach combining both spectro-radiometer based and imaging based measurement methods is developed to acquire high resolution reflectance data in both angular and spectral domains. A BTDF (bidirectional transmittance distribution function) is generated from the measured data to model the diffuser. A wavelength dependent BRDF (bidirectional reflectance distribution function) is used to model the pixels. Realistic light transport simulation requires interplay of three factors: surface geometry, lighting, and material reflectance. Monte Carlo ray tracing methods are used to link these factors together. Path tracing is employed to provide unbiased results. Stratified sampling and importance sampling are used for effective variance reduction. Stratified sampling produces well distributed random samples, and importance sampling helps Monte Carlo simulation converge more quickly. Different importance sampling methods are compared and analyzed. Simulation results of display performance, including reflectance, color gamut, contrast ratio, and daylight readability, are presented. The impact of different lighting conditions, diffusers, and FOS designs are studied. Measurement data and physically based analyses are used to confirm the validity of the simulation tool. The simulation tool provides the desired accuracy and predictability for display design in a wide range of lighting conditions, which makes it a valuable mechanism for display designers to find the optimal solution for real world applications.
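As an illustration of the variance-reduction role of importance sampling mentioned above, the sketch below estimates Lambertian reflected radiance with cosine-weighted hemisphere sampling, so the cosine term cancels against the sampling density; this is a toy example under stated assumptions, not the display simulator's path tracer.

```python
import numpy as np

def cosine_weighted_sample(n, rng):
    """Importance-sample hemisphere directions with pdf(theta, phi) = cos(theta) / pi."""
    u1, u2 = rng.random(n), rng.random(n)
    theta = np.arccos(np.sqrt(1.0 - u1))
    phi = 2.0 * np.pi * u2
    return theta, phi

def estimate_reflected_radiance(L_in, albedo, n=10_000, seed=0):
    """Estimate the Lambertian reflection integral of (albedo/pi) * L_in * cos(theta);
    with cosine-weighted sampling the cos(theta)/pi factor cancels against the pdf."""
    rng = np.random.default_rng(seed)
    theta, phi = cosine_weighted_sample(n, rng)
    samples = albedo * L_in(theta, phi)       # per-sample estimator after cancellation
    return samples.mean()

# Example: a uniform environment of radiance 1.0 seen by a surface of albedo 0.5 reflects 0.5
est = estimate_reflected_radiance(lambda t, p: np.ones_like(t), albedo=0.5)
```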
- Published
- 2012
140. Cache resident self testing on passive loopback board
- Author
-
Pakala, Pavan, Gale, Richard O., and Nutter, Brian
- Subjects
Cache, Testing, Microprocessor - Abstract
Testing of devices is an important factor in the semiconductor industry. There is a constant effort by major semiconductor companies to bring down test cost and time without compromising test quality. Implementation of built-in self-test (BIST) techniques is required, especially for complex components like microprocessors. Several challenges are associated with the development of BIST techniques, and developing such techniques on the ATE is time consuming. This thesis project is an attempt to address the challenges associated with the development of a particular BIST, called cache resident self testing (CReST), developed at AMD [5]. In CReST, test vectors are loaded into the cache of the microprocessor, and the processor is used to test itself. In this work, high-speed IO links in the processor are tested. The device under test is an AMD processor with a G34 package, having four HyperTransport links. The work includes debugging an engineering device interface board (DIB), developed to implement the loopback test while avoiding certain tester channels. This passive loopback DIB gives better performance and is expected to be used in production testing soon. A comparison of the loopback and the production DIB is presented. The aspects of loopback testing and the principles of CReST are also discussed, along with an overview of the ATE used for this process.
- Published
- 2011
141. Multispectral imager using band pass optical filters and image illumination correction
- Author
-
Chavan, Akshay R., Mitra, Sunanda, and Nutter, Brian
- Subjects
Hyperspectral ,Multispectral ,Optical filters ,Illumination correction - Abstract
Commercially available multispectral or hyperspectral imaging systems are designed to capture images at predefined wavelength intervals. Various mechanisms are implemented for changing the wavelength intervals of light used for capturing spectral images, and any variations in these frequency intervals require major alterations. Dispersion of light is carried out either by diffraction gratings or by electronically tunable filters. Apparatus with diffraction gratings are inexpensive but have a slow response time. Electronically tunable filters have a quick response time but make the apparatus expensive. Imaging systems in which optical filters are mounted on a rotating disc to capture hyperspectral images have a predefined utility because there is no means to select or adjust the filters placed on the disc for a given experiment. The proposed approach includes the use of step-up rings to attach the optical filters, with threaded C-mount attachments, to the holes in a metal disc. The use of step-up rings to mount the optical filters provides a way to replace filters effortlessly. As a result, the system has additional research functionality to be used as a test structure to evaluate the performance of the filters, based on their response and the selection of central frequency according to the requirements of an experiment. The precision processing of light required while using diffraction gratings or tunable filters can be avoided by the use of optical filters. The assembly with optical filters does not require precise control of the disc, because the filters need only be positioned in front of the camera. The proposed assembly also provides a prototype that can be improved to accelerate the process of image acquisition, which will make the apparatus faster than one with a diffraction grating and less expensive than one with tunable filters.
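The illumination correction referred to in the title is commonly performed with a flat-field correction per spectral band; the sketch below shows that generic technique under the assumption of available dark and flat reference frames, and is not claimed to be the thesis' exact procedure.

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Classic flat-field illumination correction for one spectral band:
    subtract dark current, then divide out the illumination/optical non-uniformity."""
    num = raw.astype(float) - dark
    den = flat.astype(float) - dark
    gain = np.mean(den) / np.clip(den, 1e-6, None)   # per-pixel gain; clip avoids division by zero
    return np.clip(num * gain, 0, None)
```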
- Published
- 2011
142. Automated analysis of linear array images for the detection of human papillomavirus genotypes
- Author
-
Wilhelm, Matthew S., Mitra, Sunanda, and Nutter, Brian
- Subjects
Automated ,Human papillomavirus (HPV) ,Detection ,Image ,Linear array ,Analysis - Abstract
Persistent infections with carcinogenic Human Papillomavirus (HPV) are a necessary cause for cervical cancer, which is the fifth most deadly cancer for women worldwide. Approximately 20 million Americans are currently infected with HPV, but only a subset will develop cervical cancer. While a negative HPV test indicates a very low risk for cervical cancer, a positive test cannot discriminate between an innocuous transient infection and a prevalent cancer. Additional information such as HPV genotype and HPV viral load is thought to improve the ability to predict which women will develop cervical cancer. The visual interpretation of hybridization-strip-based HPV genotyping results, however, is heterogeneous and poorly standardized. The need for accurate and repeatable results has led to work toward the development of a robust automated image analysis package for HPV genotyping strips.
- Published
- 2011
143. LCD/LED Digit recognition by iPhone
- Author
-
Li, Xian, Mitra, Sunanda, and Nutter, Brian
- Subjects
iPhone programming ,Digital recognition - Abstract
Home medical devices with LCD or LED screens are quite common. While they offer many conveniences, the data is usually not retained. In this thesis, an iPhone application is introduced that recognizes the digits on LCD or LED screens. The user takes a picture of the desired LCD or LED, and the app converts the image to text in seconds. The app also has an email feature that allows the user to send the picture and the text conveniently to a secure medical data logging database. A contour-finding algorithm is used in this application for image preprocessing. This algorithm is faster and more efficient than typical image preprocessing techniques. The Tesseract optical character recognition engine is implemented for recognition of digits. A two-step recognition process improves the accuracy of the recognition. In addition to the image processing techniques, cross-compiling, Objective-C coding, and Cocoa Touch™ event handling are discussed in this application.
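A desktop-side sketch of the same pipeline, contour-based localization followed by Tesseract restricted to digits, is given below using OpenCV and pytesseract as stand-ins for the Objective-C implementation; the OpenCV 4.x API usage and the Tesseract configuration string are assumptions, not details taken from the thesis.

```python
import cv2
import numpy as np
import pytesseract

def read_lcd_digits(path):
    """Threshold the image, locate the digit region from external contours,
    then run Tesseract restricted to numeric characters."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return ""
    x, y, w, h = cv2.boundingRect(np.vstack(contours))   # box around all detected contours
    roi = gray[y:y + h, x:x + w]
    config = "--psm 7 -c tessedit_char_whitelist=0123456789."
    return pytesseract.image_to_string(roi, config=config).strip()
```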
- Published
- 2011
144. Implementation of BCWT in GUI wavelet toolbox
- Author
-
Kongara, Spandana, Nutter, Brian, Mitra, Sunanda, and Karp, Tanja
- Subjects
Image compression ,Graphical user interface (GUI) wavelet toolbox ,Backward coding of wavelet trees (BCWT) ,Command line - Abstract
MATLAB has different tools available for image processing applications, such as the Image Processing Toolbox and the Wavelet Toolbox. The Wavelet Toolbox has different GUI interfaces for various wavelet applications, which can be accessed with the command ‘wavemenu’. For image compression, the Wavelet Toolbox has a GUI tool named True Compression 2D, which can also be accessed with the command ‘wc2dtool’. The user can also access a command line function instead of the GUI toolbox using the command ‘wcompress’. The toolbox and the command line function have different compression algorithms available for compressing images, such as EZW and SPIHT. The user can select the desired method for a particular application and compress the images. The BCWT compression algorithm, proposed by Jiangling Guo, is advantageous compared to some of the existing algorithms in the Wavelet Toolbox, such as EZW and SPIHT. The BCWT algorithm is less complex, faster, and uses less memory than EZW and SPIHT. BCWT is added to the available compression algorithms in the toolbox and the command line function so that users can access it in the same way as the other compression methods. BCWT is made accessible to all users by integrating it into the GUI Wavelet Toolbox.
- Published
- 2010
145. Pessimism of memory built in self test screening with elevated back bias and core voltage
- Author
-
Heasley, Brian J., Gale, Richard O., and Nutter, Brian
- Subjects
Semiconductor reliability ,Negative bias temperature instability (NBTI) ,SRAM reliability - Abstract
V N-well (VNW) biasing is a screening methodology for sub-65 nm silicon semiconductors that provides a means of detecting the effects of Vmin drift often associated with burn-in and time-dependent wear-out mechanisms. The following thesis explores the use of VNW biasing and the manipulation of the core operating voltage VDD to model and predict parametric Vmin drift in the embedded SRAM arrays of large processors. The goal of this thesis is to quantify the overall effectiveness and coverage of implementing a VNW SRAM screen.
- Published
- 2010
146. Joint solution of urban structure detection from hyperion hyperspectral images
- Author
-
Cong, Lin, Liang, Daan, Mitra, Sunanda, and Nutter, Brian
- Subjects
Hyperspectral image processing ,Fourier transform ,Spectral-spatial feature extraction ,Co-occurrence matrix ,Hyperion - Abstract
Hyperspectral remote sensing has shown great potential for disaster analysis. In post-disaster urban damage assessment, residential areas and buildings must be accurately identified in the images before and after the disaster. However, the traditional spectral-only or spatial-only solutions prove ineffective for residence detection from low resolution hyperspectral images, such as Hyperion data. To solve this problem, a joint solution for residential area classification, based on both spectral signature and spatial texture, is proposed in this thesis. Correlations between every pixel spectrum and the selected endmembers’ spectra, together with the most significant PCA (Principal Component Analysis) components of the spectral data, provide spectral features for every pixel. A hierarchical Fourier Transform – Co-occurrence Matrix approach is designed to help capture spatial textures. Eight second-order texture measures are calculated based on the co-occurrence matrix, and K-fold cross validation is performed on the training data to select the best combination of features for the proposed algorithm. Compared with most existing methods that focus exclusively on spectral or spatial information and rely on high spatial resolution hyperspectral images that are usually taken by airborne sensors, our solution makes use of both the spectral signature and the macroscopic grid patterns of the residential areas and hence works well for low resolution Hyperion imagery.
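The second-order co-occurrence texture measures mentioned above can be illustrated with a small gray-level co-occurrence matrix sketch; the quantization level count, the single offset, and the two measures shown (contrast and energy) are illustrative choices, not the full set of eight used in the thesis.

```python
import numpy as np

def glcm_features(img, levels=16, dx=1, dy=0):
    """Quantize a band to a few gray levels, build the co-occurrence matrix for one
    (dx, dy) offset, and derive two second-order texture measures."""
    q = np.floor(img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]   # reference pixels
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]   # offset neighbors
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
    glcm /= glcm.sum()                              # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)          # second-order contrast
    energy = np.sum(glcm ** 2)                      # angular second moment (energy)
    return contrast, energy
```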
- Published
- 2010
147. Estimating volume of an object from two profile images
- Author
-
Block, Scott T., Nutter, Brian, and Pal, Ranadip
- Subjects
Image segmentation ,Volume estimation ,Simple 3D object rendering - Abstract
Estimating the volume of an object from two-dimensional cross-sectional images of the object has applications in preventive and therapeutic medicine, automated industrial processing, and defense. In this thesis, we present various approaches to achieve volume estimation from two profile images of the coronal (front) and sagittal (side) planes. The initial step in the process is segmenting the image to extract the object information. Second, the binary profile images are used to represent object slices based on an elliptical or rectangular cross section. The next step in estimating the volume is based on summing up the individual slice volumes along the height of the object. The known height of the object is used to give a relationship between the voxel volume and the actual volume. Finally, objects of known volume are used to correct errors that may have occurred during segmentation and some of the optical effects of the camera. The thesis presents existing and modified approaches to achieve fast volume estimation from profile images.
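A minimal sketch of the elliptical-slice variant described above, assuming two registered binary silhouettes of equal height and a known millimeters-per-pixel scale; the function and parameter names are illustrative.

```python
import numpy as np

def volume_from_profiles(coronal, sagittal, mm_per_px):
    """Estimate volume by stacking elliptical slices: for each image row, the widths
    of the coronal and sagittal silhouettes serve as the two axes of an ellipse,
    and the slice volumes are summed over the object's height."""
    assert coronal.shape[0] == sagittal.shape[0], "profiles must share the same height"
    w_cor = coronal.astype(bool).sum(axis=1) * mm_per_px    # width per row (mm)
    w_sag = sagittal.astype(bool).sum(axis=1) * mm_per_px
    slice_area = np.pi / 4.0 * w_cor * w_sag                # ellipse area per row (mm^2)
    return slice_area.sum() * mm_per_px                     # multiply by slice thickness (mm^3)
```

A rectangular cross-section variant simply replaces the ellipse area with `w_cor * w_sag`.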
- Published
- 2010
148. Joint source channel coding using complex number DFT block codes
- Author
-
Mallela, Sandeep, Nutter, Brian, and Karp, Tanja
- Subjects
Complex number codes ,Joint source channel coding ,Masking technique ,Error correction coding ,Complex BCH codes ,GBG channel model ,DFT codes - Abstract
The rapid growth in communication systems has brought an increasing requirement for efficient and robust error control coding. The main aim of this thesis is first to review some of the available coding and decoding procedures for complex number DFT block codes and then, in a second step, to make the necessary modifications to improve the efficiency and error correction capability. A new technique designed to achieve better bandwidth efficiency by reducing redundancy while maintaining the error correction capability is also proposed and investigated in this thesis. Error control coding over complex numbers is investigated; complex numbers can effectively represent most signals and lend themselves to a better implementation of joint source channel coding when compared to error correction codes using finite fields. The coding and decoding procedures proposed by Redinbo are studied and implemented. The design of error control codes is studied by considering a complex valued channel model called the Gaussian–Bernoulli-Gaussian (GBG) channel model. Various decoding algorithms, such as the Peterson–Gorenstein–Zierler (PGZ) decoder for detecting error locations and values, a Bayes hypothesis tester for locating error positions, and a Wiener estimator for estimating error values, are studied. These methods are compared for their error correction capability (symbol error rates) with extensive simulations and analysis, and a suitable modification to the decoder is proposed, analyzed, and verified for its superior performance when compared to the other studied decoding algorithms. A new bandwidth-efficient technique called the “Masking technique” is developed and compared with the existing algorithms for its performance.
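The basic construction behind complex-number DFT block codes, data symbols in part of the spectrum, forced zeros at the parity frequencies, and a syndrome taken at those frequencies, can be sketched as follows; this shows only the generic encode/syndrome step, not Redinbo's procedures, the PGZ/Bayes/Wiener decoders, or the proposed masking technique, and the choice of parity positions is an assumption.

```python
import numpy as np

def dft_encode(data, n):
    """(n, k) complex DFT block code: place the k data symbols in the spectrum,
    force the remaining n-k parity frequencies to zero, and take the IDFT."""
    k = data.size
    spectrum = np.zeros(n, dtype=complex)
    spectrum[:k] = data                       # data frequencies
    return np.fft.ifft(spectrum) * np.sqrt(n) # unit-energy scaling

def syndrome(received, n, k):
    """DFT of the received word restricted to the parity frequencies.
    Nonzero entries indicate channel errors such as impulsive noise on some samples."""
    spectrum = np.fft.fft(received) / np.sqrt(n)
    return spectrum[k:]                       # ~0 in the error-free case
```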
- Published
- 2010
149. Automated temperature characterization for packaged integrated circuits using on-chip electrostatic discharge structures
- Author
-
Ahuja, Ashish, Gale, Richard O., and Nutter, Brian
- Abstract
This thesis proposes an improved approach to perform temperature characterization for packaged integrated circuits. In the semiconductor industry, releasing parts late to market reduces early revenue for any given product line. The objective of this work is to lower overall project development cost and to reduce time to market by reducing the time required to perform temperature characterization at multiple temperature points. This goal has been achieved by eliminating the soak time, resulting in time savings of about 750 minutes, upfront, for a pre-production characterization sample of 30 units at 7 different temperatures across a 200 °C temperature range. Also, this method provides extended capability to collect more data with no additional test time, providing a better understanding of the device being characterized. This method is also more user friendly and convenient. On-chip Electrostatic Discharge (ESD) protection diodes have been used to measure the die temperature, an Automated Test Equipment (ATE) and a Precision Temperature Forcing System (PTFS) have been interfaced using the General Purpose Interface Bus (GPIB), and Pascal routines have been written to control the PTFS from the ATE test program. Conventional temperature characterization involves first soaking the device at a given temperature for a fixed time, followed by executing the device’s test sequence on the ATE. Soaking is done to bring the device to thermal equilibrium. The soak time becomes significant as the number of temperature points and the sample size increase. This thesis suggests monitoring an on-chip temperature-sensitive parameter, for example, the voltage drop across a forward-biased ESD diode, while simultaneously heating/cooling the device to a known temperature at a known rate. When the desired temperature is achieved, the device’s test sequence is executed. A 20-pin buck voltage converter housed in a TSSOP-PWP package has been used to demonstrate this approach. For accurate temperature measurements, ESD protection diodes were first calibrated in an oven at thermal equilibrium at 9 different temperature points across 200 °C. The least-squares estimation method was used to obtain a linear relation between the forward voltage (Vd) and the junction temperature (T) for the diode. The effect of temperature variation across the die was minimized by taking temperature measurements at different pins across the die. After analyzing the preliminary results, different temperature ramping profiles were implemented, and recommendations were made. The approach has been optimized for a given ATE test program for the buck voltage regulator used in the experiments. Different temperature ramping methodologies were implemented. Based upon the experimental results, comparisons between the different methodologies were made. Also, the effect of increasing the temperature ramp rate on total characterization time and measurement variation was experimentally determined.
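The least-squares calibration step described above amounts to a linear fit of Vd against junction temperature that is then inverted during the ramp to read the die temperature; a sketch with hypothetical calibration points (the real calibration used nine oven temperatures) follows.

```python
import numpy as np

def calibrate_diode(temps_c, vd_volts):
    """Least-squares linear fit Vd = a*T + b from oven calibration points."""
    a, b = np.polyfit(np.asarray(temps_c, float), np.asarray(vd_volts, float), deg=1)
    return a, b

def die_temperature(vd, a, b):
    """Invert the calibration to read the junction temperature from a measured Vd."""
    return (vd - b) / a

# Example with hypothetical calibration data (Vd typically drops roughly 2 mV per °C)
a, b = calibrate_diode([-40, 25, 85, 125], [0.78, 0.65, 0.53, 0.45])
print(die_temperature(0.60, a, b))
```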
- Published
- 2009
150. Next generation assisting clinical applications by using semantic-aware electronic health records
- Author
-
Rik Van de Walle, Erik Mannens, Pedro Debevere, Pieterjan De Potter, and Nutter, Brian
- Subjects
Data processing ,Technology and Engineering ,business.industry ,Computer science ,Interoperability ,Ontology (information science) ,Health records ,Data structure ,clinical applications ,Data science ,health care ,Health care ,Architecture ,business ,Implementation - Abstract
The health care sector is no longer imaginable without electronic health records. However, since the original idea of electronic health records was focused on data storage and not on data processing, a lot of current implementations do not take full advantage of the opportunities provided by computerization. This paper introduces the Patient Summary Ontology for the representation of electronic health records and demonstrates the possibility to create next generation assisting clinical applications based on these semantic-aware electronic health records. Also, an architecture to interoperate with electronic health records formatted using other standards is presented.
- Published
- 2009