72 results for "Zurada JM"
Search Results
2. Synchronizing Speech Mixtures in Speech Separation Problems under Reverberant Conditions
- Author
-
Llerena, Cosme, Gil-Pita, Roberto, Alvarez, Lorena, Rosa-Zurera, Manuel, Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
3. Speech Enhancement in Noisy Environments in Hearing Aids Driven by a Tailored Gain Function Based on a Gaussian Mixture Model
- Author
-
Alvarez, Lorena, Alexandre, Enrique, Llerena, Cosme, Gil-Pita, Roberto, Cuadra, Lucas, Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
4. Do We Need Whatever More Than k-NN?
- Author
-
Kordos, Miroslaw, Blachnik, Marcin, Strzempa, Dawid, Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
5. Instance Selection in Logical Rule Extraction for Regression Problems
- Author
-
Kordos, Miroslaw, Bialka, Szymon, Blachnik, Marcin, Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
6. Bagging of Instance Selection Algorithms
- Author
-
Blachnik, Marcin, Kordos, Miroslaw, Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
7. Selecting Representative Prototypes for Predicting the Oxygen Activity in Electric Arc Furnace
- Author
-
Blachnik, Marcin, Kordos, Miroslaw, Wieczorek, Tadeusz, Golak, Slawomir, Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
8. Feasibility of Error-Related Potential Detection as Novelty Detection Problem in P300 Mind Spelling
- Author
-
Chumerin, Nikolay, Manyakov, Nikolay V., Combaz, Adrien, Robben, Arne, Van Hulle, Marc M., van Vliet, Marijn, Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L. A., and Zurada, J. M.
- Subjects
- Computer science, Interface (computing), Speech recognition, Realization (linguistics), Electroencephalography, Novelty detection, Spelling, Statistical classification, Artificial intelligence & image processing, Neurology & neurosurgery
- Abstract
In this paper, we report on the feasibility of integrating the Error-Related Potential (ErrP) into a particular type of Brain-Computer Interface (BCI) called the P300 Mind Speller. With the latter, the subject can type text by means of his/her brain activity alone, without having to rely on speech or muscular activity. To this end, electroencephalography (EEG) signals are recorded from the subject's scalp. But, as with any BCI paradigm, decoding mistakes occur, and when they do, an EEG potential known as the Error-Related Potential (ErrP) is evoked, time-locked to the subject's realization of the mistake. If the BCI were also able to detect the ErrP, the last typed character could be corrected automatically. However, since the P300 Mind Speller is optimized to operate correctly in the first place, far fewer ErrPs occur than responses to correctly typed characters. In fact, precisely because it is a rare phenomenon, we advocate that ErrP detection can be treated as a novelty detection problem. We consider in this paper different one-class classification algorithms based on novelty detection, together with a correction algorithm for the P300 Mind Speller. Part of: Lecture Notes in Computer Science (LNCS), vol. 7268, no. 2, pp. 293-301; International Conference on Artificial Intelligence and Soft Computing (ICAISC), Zakopane, Poland, 29 Apr - 3 May 2012; status: published
- Published
- 2012
9. Convergence Analysis of Online Gradient Method for High-Order Neural Networks and Their Sparse Optimization.
- Author
-
Fan Q, Kang Q, Zurada JM, Huang T, and Xu D
- Abstract
In this article, we investigate the boundedness and convergence of the online gradient method with smoothing group L1/2 regularization for the sigma-pi-sigma neural network (SPSNN). This enhances the sparseness of the network and improves its generalization ability. For the original group L1/2 regularization, the error function is nonconvex and nonsmooth, which can cause oscillation of the error function. To ameliorate this drawback, we propose a simple and effective smoothing technique that eliminates this deficiency of the original group L1/2 regularization. The group L1/2 regularization optimizes the network structure in two respects: redundant hidden nodes tend to zero, and redundant weights of the surviving hidden nodes also tend to zero. This article presents strong and weak convergence results for the proposed method and proves the boundedness of the weights. Experimental results clearly demonstrate the capability of the proposed method and the effectiveness of redundancy control, and the simulation results support the theoretical findings.
- Published
- 2023
- Full Text
- View/download PDF
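The smoothing idea described in the abstract above can be sketched in a few lines. The quartic-root form below is one illustrative way to smooth a group L1/2 penalty; the paper's exact smoothing function, the ε parameter, and the function names here are assumptions for illustration only.

```python
def smoothed_group_l12(groups, eps=1e-2):
    """Illustrative smoothed group-L1/2 penalty: sum_g (||w_g||^2 + eps^2)^(1/4).

    The original group-L1/2 term ||w_g||^(1/2) is nonsmooth at w_g = 0,
    which can make the error function oscillate; adding eps^2 under the
    root gives a penalty that is differentiable everywhere.
    """
    return sum((sum(w * w for w in g) + eps * eps) ** 0.25 for g in groups)

# A group driven to zero contributes only sqrt(eps), with no kink at the origin:
penalty = smoothed_group_l12([[0.5, -0.2], [0.0, 0.0]], eps=1e-2)
```

During training, a penalty of this shape would be added to the network error so that whole weight groups (e.g. one hidden node's fan-in) are pushed toward zero and can then be pruned.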
10. Patient-Specific Modeling and Model Predictive Control Approach to Personalized Optimal Anemia Management.
- Author
-
Affan A, Zurada JM, and Inanc T
- Subjects
- Humans, Patient-Specific Modeling, Kidney, Anemia drug therapy, Erythropoietin therapeutic use, Renal Insufficiency, Chronic
- Abstract
In anemia of chronic kidney disease (CKD), the kidneys produce less of the erythropoietin hormone that stimulates the bone marrow to make red blood cells (RBC), leading to a reduced hemoglobin (Hgb) level. External recombinant human erythropoietin (EPO) is administered to maintain a healthy Hgb level, i.e., 10-12 g/dl. The semi-blind robust model identification method is used to obtain a personalized patient model from a minimum number of dose-response data points. The identified patient models are used as predictive models in the model predictive control (MPC) framework. The simulation results of MPC for different CKD patients are compared with those obtained from the existing clinical method, known as the anemia management protocol (AMP), used in hospitals. The in-silico results show that MPC outperforms AMP in maintaining healthy Hgb levels without over- or undershoots. This offers a considerable performance improvement over AMP, which is unable to stabilize the EPO dosage and shows oscillations in Hgb levels throughout the treatment. Clinical Relevance: This research provides a framework to help clinicians in decision-making for personalized EPO dose guidance, using MPC with semi-blind robust model identification from minimal clinical patient dose-response data.
- Published
- 2023
- Full Text
- View/download PDF
11. Control-Relevant Adaptive Personalized Modeling From Limited Clinical Data for Precise Warfarin Management.
- Author
-
Affan A, Zurada JM, and Inanc T
- Abstract
Warfarin is a challenging drug to administer due to the narrow therapeutic index of the International Normalized Ratio (INR), inter- and intra-patient variability, limited clinical data, genetics, and the effects of other medications. Goal: To predict the optimal warfarin dosage in the presence of the aforementioned challenges, we present an adaptive individualized modeling framework based on model (in)validation and semi-blind robust system identification. The model (in)validation technique adapts the identified individualized patient model according to changes in the patient's status, to ensure the model's suitability for prediction and controller design. Results: To implement the proposed adaptive modeling framework, clinical warfarin-INR data of forty-four patients were collected at the Robley Rex Veterans Administration Medical Center, Louisville. The proposed algorithm is compared with recursive ARX and ARMAX model identification methods. The results of the identified models, evaluated using one-step-ahead prediction and minimum mean squared error (MMSE) analysis, show that the proposed framework effectively predicts the warfarin dosage to keep the INR values within the desired range and adapts the individualized patient model to reflect the true status of the patient throughout treatment. Conclusion: This paper proposes an adaptive personalized patient modeling framework built from limited patient-specific clinical data. Rigorous simulations show that the proposed framework can accurately predict a patient's dose-response characteristics, alert the clinician whenever identified models are no longer suitable for prediction, and adapt the model to the current status of the patient to reduce the prediction error.
- Published
- 2023
- Full Text
- View/download PDF
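As context for the recursive ARX baseline mentioned above, a first-order ARX model and its one-step-ahead prediction can be sketched as follows. This is a toy least-squares fit on simulated data; the model order, the signals, and all names are illustrative assumptions, not the paper's actual setup.

```python
def fit_arx1(y, u):
    """Least-squares fit of a first-order ARX model y[k] = a*y[k-1] + b*u[k-1],
    solving the 2x2 normal equations directly."""
    syy = syu = suu = sy1y = su1y = 0.0
    for k in range(1, len(y)):
        y1, u1 = y[k - 1], u[k - 1]
        syy += y1 * y1
        syu += y1 * u1
        suu += u1 * u1
        sy1y += y1 * y[k]
        su1y += u1 * y[k]
    det = syy * suu - syu * syu
    a = (suu * sy1y - syu * su1y) / det
    b = (syy * su1y - syu * sy1y) / det
    return a, b

def predict_one_step(a, b, y, u):
    """One-step-ahead predictions for k = 1 .. len(y)-1."""
    return [a * y[k - 1] + b * u[k - 1] for k in range(1, len(y))]

# Recover a known model from noiseless simulated dose-response data:
a_true, b_true = 0.9, 0.5
u = [1.0, 0.0, 1.0, 0.0]  # hypothetical dose sequence
y = [1.0]                 # hypothetical initial response
for k in range(len(u)):
    y.append(a_true * y[k] + b_true * u[k])
a_hat, b_hat = fit_arx1(y, u)
```

With noiseless, persistently exciting data the least-squares fit recovers the true parameters; on real clinical data, the one-step-ahead residuals are what the MMSE comparison in the abstract evaluates.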
12. Analysis of Equilibria for a Class of Recurrent Neural Networks With Two Subnetworks.
- Author
-
Xu F, Liu L, Zurada JM, and Yi Z
- Subjects
- Computer Simulation, Neural Networks, Computer
- Abstract
This article is concerned with the problem of the number and dynamical properties of equilibria for a class of connected recurrent networks with two switching subnetworks. In this network model, parameters serve as switches that allow two subnetworks to be turned ON or OFF among different dynamic states. The two subnetworks are described by a nonlinear coupled equation with a complicated relation among network parameters. Thus, the number and dynamical properties of equilibria have been very hard to investigate. By using Sturm's theorem, together with the geometrical properties of the network equation, we give a complete analysis of equilibria, including the existence, number, and dynamical properties. Necessary and sufficient conditions for the existence and exact number of equilibria are established. Moreover, the dynamical property of each equilibrium point is discussed without prior assumption of their locations. Finally, simulation examples are given to illustrate the theoretical results in this article.
- Published
- 2022
- Full Text
- View/download PDF
13. Adaptive Individualized Drug-Dose Response Modeling from a Limited Clinical Data: Case of Warfarin Management.
- Author
-
Affan A, Zurada JM, Brier ME, and Inanc T
- Subjects
- Anticoagulants, Humans, Pharmaceutical Preparations, Warfarin
- Abstract
Administration of drugs requires sophisticated methods to determine the drug quantity for optimal results, which has been a challenging task for a number of diseases. To address these challenges, in this paper we present the semi-blind robust model identification technique to find individualized patient models from the minimum number of clinically acquired patient-specific data points, in order to determine the optimal drug dosage. To ensure the usability of these models for dosage prediction and controller design, the model (in)validation technique is also investigated. As a case study, patients treated with warfarin are used to demonstrate the semi-blind robust identification and model (in)validation techniques. The performance of the models is assessed by calculating the minimum mean squared error (MMSE). Clinical Relevance: This work establishes a general framework for adaptive individualized drug-dose response modeling from a limited amount of patient-specific clinical data. It will help clinicians in decision-making for improved drug dosing, patient care, and limiting patient exposure to agents with a narrow therapeutic range.
- Published
- 2021
- Full Text
- View/download PDF
14. Precise Warfarin Management through Personalized Modeling and Control with Limited Clinical Data.
- Author
-
Ali Meerza SI, Affan A, Mirinejad H, Brier ME, Zurada JM, and Inanc T
- Subjects
- Anticoagulants therapeutic use, Humans, Warfarin therapeutic use, Pulmonary Embolism, Thrombosis, Venous Thrombosis
- Abstract
Warfarin belongs to a medication class called anticoagulants, or blood thinners, used to prevent blood clots from forming or growing larger. It is prescribed for patients with venous thrombosis or pulmonary embolism, and for those who have suffered a heart attack, have an irregular heartbeat, or have prosthetic heart valves. Finding optimal doses is challenging due to inter- and intra-patient variability and the drug's narrow therapeutic index. This work presents an individualized warfarin dosing method that utilizes individual patient models generated from limited clinical data of patients with chronic conditions under warfarin anticoagulation treatment. Individual precise warfarin dosing is then formalized as an optimal control problem, which is solved using the DORBF control approach. The efficiency of the proposed approach is compared with results obtained from the clinical protocol in current practice.
- Published
- 2021
- Full Text
- View/download PDF
15. Redundant feature pruning for accelerated inference in deep neural networks.
- Author
-
Ayinde BO, Inanc T, and Zurada JM
- Subjects
- Humans, Deep Learning trends, Neural Networks, Computer
- Abstract
This paper presents an efficient technique to reduce the inference cost of deep and/or wide convolutional neural network models by pruning redundant features (or filters). Previous studies have shown that over-sized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar and show little or no variation, resulting in filtering redundancy. We propose to prune these redundant features, along with their related feature maps, according to their relative cosine distances in the feature space, leading to smaller networks with reduced post-training inference costs and competitive performance. We empirically show on select models (VGG-16, ResNet-56, ResNet-110, and ResNet-34) and datasets (MNIST handwritten digits, CIFAR-10, and ImageNet) that inference costs (in FLOPS) can be significantly reduced while overall performance remains competitive with the state of the art. (Copyright © 2019 Elsevier Ltd. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
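The core pruning criterion above — drop a filter that is nearly a duplicate of one already kept, judged in the feature space — can be sketched greedily as follows. The threshold value and function names are assumptions, and this simplification uses a plain cosine-similarity cutoff rather than the paper's relative cosine distances.

```python
import math

def prune_redundant_filters(filters, sim_threshold=0.95):
    """Return indices of filters to keep: a filter is pruned when its
    absolute cosine similarity to an already-kept filter exceeds
    sim_threshold. `filters` is a list of flattened weight vectors."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den

    kept = []
    for i, f in enumerate(filters):
        if all(abs(cos(f, filters[j])) <= sim_threshold for j in kept):
            kept.append(i)
    return kept

# The second filter is almost a copy of the first, so only two survive:
kept = prune_redundant_filters([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
```

Pruned filters take their feature maps (and the downstream channels that consume them) with them, which is where the FLOPS reduction comes from.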
16. Regularizing Deep Neural Networks by Enhancing Diversity in Feature Extraction.
- Author
-
Ayinde BO, Inanc T, and Zurada JM
- Abstract
This paper proposes a new and efficient technique to regularize neural networks in the context of deep learning using correlations among features. Previous studies have shown that oversized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar and show little or no variation, resulting in redundant filtering. We propose a way to address this problem and show that such redundancy can be avoided using regularization and an adaptive feature dropout mechanism. We show that regularizing both negatively and positively correlated features according to their differentiation, and based on their relative cosine distances, yields a network that extracts dissimilar features, with less overfitting and better generalization. The concept is illustrated with a deep multilayer perceptron, a convolutional neural network, a sparse autoencoder, a gated recurrent unit, and long short-term memory on the MNIST digit recognition, CIFAR-10, ImageNet, and Stanford Natural Language Inference data sets.
- Published
- 2019
- Full Text
- View/download PDF
17. Deep Learning of Constrained Autoencoders for Enhanced Understanding of Data.
- Author
-
Ayinde BO and Zurada JM
- Abstract
Unsupervised feature extractors are known to perform an efficient and discriminative representation of data. Insight into the mappings they perform, and the human ability to understand them, however, remain very limited. This is especially prominent when multilayer deep learning architectures are used. This paper demonstrates how to remove these bottlenecks within the architecture of a non-negativity-constrained autoencoder. It is shown that by using both L1 and L2 regularization to induce non-negativity of weights, most of the weights in the network become constrained to be non-negative, resulting in a more understandable structure with only minute deterioration in classification accuracy. The proposed approach also extracts features that are more sparse and produces additional output-layer sparsification. The method is analyzed for accuracy and feature interpretation on the MNIST data, the NORB normalized uniform object data, and the Reuters text categorization data set.
- Published
- 2018
- Full Text
- View/download PDF
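The weight constraint described above can be imposed softly through the loss. The sketch below charges only negative weights with combined L1/L2 terms; the coefficient values and names are illustrative assumptions, and the paper's exact regularizer may differ.

```python
def nonneg_penalty(weights, l1=1e-3, l2=1e-3):
    """Penalty pushing weights toward nonnegativity: only negative entries
    are charged, with both an L1 term and an L2 term. Positive weights
    incur no cost, so training drifts toward a nonnegative decomposition."""
    neg = [w for w in weights if w < 0.0]
    return l1 * sum(-w for w in neg) + l2 * sum(w * w for w in neg)

# Only the -2.0 entry is penalized:
cost = nonneg_penalty([1.0, -2.0, 0.5], l1=1.0, l2=1.0)
```

Because negative weights become expensive while nonnegative ones are free, trained features tend toward additive, part-based combinations, which is what makes the learned structure easier to interpret.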
18. The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.
- Author
-
Zhao J, Zurada JM, Yang J, and Wu W
- Subjects
- Breast Neoplasms epidemiology, Female, Humans, Statistics as Topic trends, Algorithms, Neural Networks, Computer, Statistics as Topic methods
- Abstract
Unlike first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel SpikeProp algorithm for SNNs that introduces a smoothing L1∕2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of the algorithm is proved under some reasonable conditions. The proposed algorithms have been tested for convergence speed, convergence rate, and generalization on the classical XOR problem, the Iris problem, and Wisconsin Breast Cancer classification. (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
19. A Novel Pruning Algorithm for Smoothing Feedforward Neural Networks Based on Group Lasso Method.
- Author
-
Wang J, Xu C, Yang X, and Zurada JM
- Abstract
In this paper, we propose four new variants of the backpropagation algorithm to improve the generalization ability of feedforward neural networks. The basic idea of these methods stems from the Group Lasso concept, which deals with the variable selection problem at the group level. Directly employing the Group Lasso penalty during network training has two main drawbacks: numerical oscillations and theoretical challenges in computing the gradients at the origin. To overcome these obstacles, smoothing functions are introduced that approximate the Group Lasso penalty. Numerical experiments on classification and regression problems demonstrate that the proposed algorithms outperform three classical penalization methods, Weight Decay, Weight Elimination, and Approximate Smoother, in both generalization and pruning efficiency. In addition, detailed simulations on a specific data set compare the approach with other common pruning strategies, verifying its advantages. The pruning ability of the proposed strategy has also been investigated in detail on a relatively large data set, MNIST, for various smoothing approximation cases.
- Published
- 2018
- Full Text
- View/download PDF
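The two ingredients above — a smoothed Group Lasso penalty and group-level pruning — can be sketched as follows. The sqrt(·² + ε²) smoothing is one standard choice; the paper proposes several smoothing functions, and the names and tolerances here are assumptions.

```python
import math

def smoothed_group_lasso(groups, eps=1e-3):
    """Smoothed Group Lasso penalty sum_g sqrt(||w_g||^2 + eps^2), which
    avoids the non-differentiability of ||w_g|| at the origin (the source
    of the numerical oscillations mentioned in the abstract)."""
    return sum(math.sqrt(sum(w * w for w in g) + eps * eps) for g in groups)

def prunable_groups(groups, tol=1e-2):
    """Indices of weight groups (e.g. one hidden unit's incoming weights)
    whose norm the penalty has driven essentially to zero; the
    corresponding units can be pruned from the network."""
    return [i for i, g in enumerate(groups)
            if math.sqrt(sum(w * w for w in g)) < tol]
```

Grouping all weights of one hidden unit together is what lets the penalty remove whole units rather than scattered individual weights, which is the point of group-level (as opposed to elementwise) regularization.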
20. Nonredundant sparse feature extraction using autoencoders with receptive fields clustering.
- Author
-
Ayinde BO and Zurada JM
- Subjects
- Cluster Analysis, Databases, Factual, Learning, Information Storage and Retrieval methods, Pattern Recognition, Automated methods
- Abstract
This paper proposes new techniques for data representation in the context of deep learning using agglomerative clustering. Existing autoencoder-based data representation techniques tend to produce duplicative encoding and decoding receptive fields in layered autoencoders, which leads to the extraction of similar features and thus to filtering redundancy. We propose a way to address this problem and show that such redundancy can be eliminated. This yields smaller networks and produces unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on weights extract fewer redundant features than conventional sparse autoencoders. The concept is illustrated using a conventional sparse autoencoder and nonnegativity-constrained autoencoders on MNIST digits recognition, the NORB normalized-uniform object data, and the Yale face dataset. (Copyright © 2017 Elsevier Ltd. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
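The clustering step described above can be sketched with a single-linkage agglomerative grouping cut at a distance threshold, keeping one representative receptive field per cluster. The distance measure, threshold, and names are assumptions for illustration; the paper's actual clustering procedure may differ.

```python
import math

def cluster_receptive_fields(fields, dist_threshold=0.3):
    """Single-linkage agglomerative grouping of receptive fields by cosine
    distance, cut at dist_threshold; returns one representative index per
    cluster, so near-duplicate fields collapse to a single survivor."""
    def cos_dist(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return 1.0 - num / den

    parent = list(range(len(fields)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Merge every pair closer than the threshold (single linkage):
    for i in range(len(fields)):
        for j in range(i + 1, len(fields)):
            if cos_dist(fields[i], fields[j]) < dist_threshold:
                parent[find(i)] = find(j)

    reps = {}
    for i in range(len(fields)):
        reps.setdefault(find(i), i)  # first member represents its cluster
    return sorted(reps.values())
```

Keeping only the representatives yields the smaller network with distinct receptive fields that the abstract describes.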
21. Individualized drug dosing using RBF-Galerkin method: Case of anemia management in chronic kidney disease.
- Author
-
Mirinejad H, Gaweda AE, Brier ME, Zurada JM, and Inanc T
- Subjects
- Anemia complications, Dose-Response Relationship, Drug, Hemoglobins, Humans, Models, Theoretical, Renal Insufficiency, Chronic blood, Anemia drug therapy, Erythropoietin administration & dosage, Hematinics administration & dosage, Renal Insufficiency, Chronic complications
- Abstract
Background and Objective: Anemia is a common comorbidity in patients with chronic kidney disease (CKD) and is frequently associated with a decreased physical component of quality of life, as well as adverse cardiovascular events. Current treatment methods for renal anemia are mostly population-based approaches treating individual patients with a one-size-fits-all model. However, FDA recommendations stipulate individualized anemia treatment with precise control of the hemoglobin concentration and minimal drug utilization. In accordance with these recommendations, this work presents an individualized drug dosing approach to anemia management by leveraging the theory of optimal control. Methods: A Multiple Receding Horizon Control (MRHC) approach based on the RBF-Galerkin optimization method is proposed for individualized anemia management in CKD patients. Recently developed by the authors, the RBF-Galerkin method uses radial basis function approximation along with Galerkin error projection to numerically solve constrained optimal control problems. The proposed approach is applied to generate optimal dosing recommendations for individual patients. Results: Performance of the proposed MRHC approach is compared in silico to that of a population-based anemia management protocol and an individualized multiple model predictive control method for two case scenarios: hemoglobin measurement with and without observational errors. The in silico comparison indicates that the hemoglobin concentration varies least under MRHC among the methods, especially in the presence of measurement errors. In addition, the average achieved hemoglobin level under MRHC is significantly closer to the target hemoglobin than that of the other two methods, according to an analysis of variance (ANOVA) statistical test. Furthermore, drug dosages recommended by the MRHC are more stable and accurate and reach the steady-state value notably faster than those generated by the other two methods. Conclusions: The proposed method is highly efficient for the control of hemoglobin levels and provides accurate dosage adjustments in the treatment of CKD anemia. (Copyright © 2017 Elsevier B.V. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
22. Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints.
- Author
-
Hosseini-Asl E, Zurada JM, and Nasraoui O
- Abstract
We demonstrate a new deep learning autoencoder network, trained by a nonnegativity-constraint algorithm (the nonnegativity-constrained autoencoder), that learns features exhibiting a part-based representation of data. The learning algorithm is based on constraining negative weights. The performance of the algorithm is assessed by decomposing data into parts, and its prediction performance is tested on three standard image data sets and one text data set. The results indicate that the nonnegativity constraint forces the autoencoder to learn features that amount to a part-based representation of data, while improving sparsity and reconstruction quality in comparison with the traditional sparse autoencoder and nonnegative matrix factorization. It is also shown that this newly acquired representation improves the prediction performance of a deep neural network.
- Published
- 2016
- Full Text
- View/download PDF
23. Boundedness and convergence analysis of weight elimination for cyclic training of neural networks.
- Author
-
Wang J, Ye Z, Gao W, and Zurada JM
- Subjects
- Algorithms, Machine Learning, Neural Networks, Computer
- Abstract
Weight elimination offers a simple and efficient improvement to the training algorithms of feedforward neural networks. Thanks to its flexible scaling parameter, it is a general regularization technique; for a large scaling parameter it reduces to weight decay regularization. Many applications of this technique and its improvements have been reported. However, little research has concentrated on its convergence behavior. In this paper, we theoretically analyze weight elimination for the cyclic learning method and determine the conditions for the uniform boundedness of the weight sequence and for weak and strong convergence. Based on the assumed network parameters, the optimal choice of the scaling parameter can also be determined. Moreover, two illustrative simulations support the theoretical explorations. (Copyright © 2016 Elsevier Ltd. All rights reserved.)
- Published
- 2016
- Full Text
- View/download PDF
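The weight-elimination penalty analyzed above has a standard closed form, which also makes the "reduces to weight decay for a large scaling parameter" remark concrete: for |w| much smaller than c the term behaves like w²/c². Function and parameter names below are illustrative.

```python
def weight_elimination(weights, c=1.0):
    """Weight-elimination penalty sum_i w_i^2 / (c^2 + w_i^2).

    For |w| << c each term behaves like weight decay (~ w^2 / c^2);
    for |w| >> c it saturates at 1, so small weights are pushed toward
    zero (and can be eliminated) while large weights are spared further
    shrinkage.
    """
    return sum(w * w / (c * c + w * w) for w in weights)
```

This saturation is what distinguishes weight elimination from plain weight decay: decay penalizes large weights ever more strongly, whereas elimination mainly prunes the small ones.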
24. Infant Brain Extraction in T1-Weighted MR Images Using BET and Refinement Using LCDG and MGRF Models.
- Author
-
Alansary A, Ismail M, Soliman A, Khalifa F, Nitzken M, Elnakib A, Mostapha M, Black A, Stinebruner K, Casanova MF, Zurada JM, and El-Baz A
- Subjects
- Algorithms, Humans, Infant, Brain diagnostic imaging, Imaging, Three-Dimensional methods, Magnetic Resonance Imaging methods, Models, Statistical
- Abstract
In this paper, we propose a novel framework for the automated extraction of the brain from T1-weighted MR images. The proposed approach is primarily based on the integration of a stochastic model [a two-level Markov-Gibbs random field (MGRF)] that serves to learn the visual appearance of the brain texture, and a geometric model (the brain isosurfaces) that preserves the brain geometry during the extraction process. The proposed framework consists of three main steps: 1) Following bias correction of the brain, a new three-dimensional (3-D) MGRF having a 26-pairwise interaction model is applied to enhance the homogeneity of MR images and preserve the 3-D edges between different brain tissues. 2) The nonbrain tissue found in the MR images is initially removed using the brain extraction tool (BET), and then the brain is parceled to nested isosurfaces using a fast marching level set method. 3) Finally, a classification step is applied in order to accurately remove the remaining parts of the skull without distorting the brain geometry. The classification of each voxel found on the isosurfaces is made based on the first- and second-order visual appearance features. The first-order visual appearance is estimated using a linear combination of discrete Gaussians (LCDG) to model the intensity distribution of the brain signals. The second-order visual appearance is constructed using an MGRF model with analytically estimated parameters. The fusion of the LCDG and MGRF, along with their analytical estimation, allows the approach to be fast and accurate for use in clinical applications. The proposed approach was tested on in vivo data using 300 infant 3-D MR brain scans, which were qualitatively validated by an MR expert. In addition, it was quantitatively validated using 30 datasets based on three metrics: the Dice coefficient, the 95% modified Hausdorff distance, and absolute brain volume difference. 
Results showed the capability of the proposed approach, which outperformed four widely used brain extraction tools: BET, BET2, the brain surface extractor, and the infant brain extraction and analysis toolbox. Experiments also showed that the proposed framework can be generalized to adult brain extraction.
- Published
- 2016
- Full Text
- View/download PDF
25. 3-D Lung Segmentation by Incremental Constrained Nonnegative Matrix Factorization.
- Author
-
Hosseini-Asl E, Zurada JM, Gimelfarb G, and El-Baz A
- Subjects
- Algorithms, Databases, Factual, Humans, Image Interpretation, Computer-Assisted, Lung Neoplasms diagnosis, Imaging, Three-Dimensional methods, Lung diagnostic imaging
- Abstract
Accurate lung segmentation from large-size 3-D chest-computed tomography images is crucial for computer-assisted cancer diagnostics. To efficiently segment a 3-D lung, we extract voxel-wise features of spatial image contexts by unsupervised learning with a proposed incremental constrained nonnegative matrix factorization (ICNMF). The method applies smoothness constraints to learn the features, which are more robust to lung tissue inhomogeneities, and thus, help to better segment internal lung pathologies than the known state-of-the-art techniques. Compared to the latter, the ICNMF depends less on the domain expert knowledge and is more easily tuned due to only a few control parameters. Also, the proposed slice-wise incremental learning with due regard for interslice signal dependencies decreases the computational complexity of the NMF-based segmentation and is scalable to very large 3-D lung images. The method is quantitatively validated on simulated realistic lung phantoms that mimic different lung pathologies (seven datasets), in vivo datasets for 17 subjects, and 55 datasets from the Lobe and Lung Analysis 2011 (LOLA11) study. For the in vivo data, the accuracy of our segmentation w.r.t. the ground truth is 0.96 by the Dice similarity coefficient, 9.0 mm by the modified Hausdorff distance, and 0.87% by the absolute lung volume difference, which is significantly better than for the NMF-based segmentation. In spite of not being designed for lungs with severe pathologies and of no agreement between radiologists on the ground truth in such cases, the ICNMF with its total accuracy of 0.965 was ranked fifth among all others in the LOLA11. After excluding the nine too pathological cases from the LOLA11 dataset, the ICNMF accuracy increased to 0.986.
- Published
- 2016
- Full Text
- View/download PDF
26. Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks.
- Author
-
Fan Q, Wu W, and Zurada JM
- Abstract
This paper presents new theoretical results on the backpropagation algorithm with smoothing [Formula: see text] regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and the weight sequence goes to a fixed point as the number of iteration steps n tends to infinity. Our results are more general in that we do not require the error function to be quadratic or uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the novel algorithm produces a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed afterward, which simplifies the network structure and lowers operation time. Finally, two numerical experiments illustrate the main results in detail.
- Published
- 2016
- Full Text
- View/download PDF
27. Self-organizing neural networks integrating domain knowledge and reinforcement learning.
- Author
-
Teng TH, Tan AH, and Zurada JM
- Subjects
- Algorithms, Cognition, Humans, Pattern Recognition, Automated, Computer Simulation, Knowledge, Models, Theoretical, Neural Networks, Computer, Reinforcement, Psychology
- Abstract
The use of domain knowledge in learning systems is expected to improve learning efficiency and reduce model complexity. However, due to its incompatibility with the knowledge structure of the learning systems and the real-time exploratory nature of reinforcement learning (RL), domain knowledge cannot be inserted directly. In this paper, we show how self-organizing neural networks designed for online and incremental adaptation can integrate domain knowledge and RL. Specifically, symbol-based domain knowledge is translated into numeric patterns before insertion into the self-organizing neural networks. To ensure effective use of domain knowledge, we analyze how the inserted knowledge is used by the self-organizing neural networks during RL. To this end, we propose a vigilance adaptation and greedy exploitation strategy to maximize exploitation of the inserted domain knowledge while retaining the plasticity to learn and use new knowledge. Our experimental results on the pursuit-evasion and minefield navigation problem domains show that such self-organizing neural networks can make effective use of domain knowledge to improve learning efficiency and reduce model complexity.
- Published
- 2015
- Full Text
- View/download PDF
28. Learning understandable neural networks with nonnegative weight constraints.
- Author
-
Chorowski J and Zurada JM
- Abstract
People can understand complex structures when they can relate them to simpler, more isolated yet understandable concepts. Despite this fact, popular pattern recognition tools, such as decision tree or production rule learners, produce only flat models which do not build intermediate data representations. On the other hand, neural networks typically learn hierarchical but opaque models. We show how constraining neurons' weights to be nonnegative improves the interpretability of a network's operation. We analyze the proposed method on large data sets: the MNIST digit recognition data and the Reuters text categorization data. The patterns learned by traditional and constrained networks are contrasted to those learned with principal component analysis and nonnegative matrix factorization.
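The nonnegativity constraint itself is easy to illustrate: the sketch below trains a single logistic unit (rather than a full multilayer network) by projected gradient descent, clipping negative weights to zero after every step. The projection step is one common way to enforce such a constraint and is an assumption here, not necessarily the paper's training procedure:

```python
import numpy as np

def train_nonneg_logistic(X, y, eta=0.1, steps=300, seed=0):
    """Logistic regression under a nonnegative weight constraint.

    After each gradient step, negative weights are projected back onto
    the feasible set w >= 0 (projected gradient descent)."""
    rng = np.random.default_rng(seed)
    w = np.abs(rng.normal(scale=0.1, size=X.shape[1]))
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        g = X.T @ (p - y) / len(y)               # gradient of log-loss
        w = np.maximum(w - eta * g, 0.0)         # projection onto w >= 0
        b -= eta * np.mean(p - y)                # bias stays unconstrained
    return w, b
```

Because every weight is nonnegative, each input can only add evidence for the positive class, which is the kind of additive, parts-based behavior that makes the model easier to read.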
- Published
- 2015
- Full Text
- View/download PDF
29. Individualized model discovery: the case of anemia patients.
- Author
-
Akabua E, Inanc T, Gaweda A, Brier ME, Kim S, and Zurada JM
- Subjects
- Algorithms, Anemia etiology, Dose-Response Relationship, Drug, Hemoglobins metabolism, Humans, Kidney Failure, Chronic complications, Linear Models, Recombinant Proteins administration & dosage, Anemia blood, Anemia drug therapy, Erythropoietin administration & dosage, Patient-Specific Modeling statistics & numerical data
- Abstract
Anemia is a universal sequela of chronic kidney disease (CKD). In anemia patients, the kidneys are incapable of performing certain basic functions, such as sensing oxygen levels and secreting erythropoietin when red blood cell counts are low. Under such conditions, recombinant human erythropoietin (EPO) is administered externally to improve the condition of CKD patients by raising their hemoglobin (Hb) levels to a given therapeutic range. Presently, EPO dosing strategies rely extensively on package inserts and on "average" responses to the medication from previous patients. Clearly, dosage strategies based on these approaches are at best nonoptimal and potentially dangerous to patients who do not adhere to the notion of an expected "average" response. In this work, a technique called semi-blind robust identification is presented to uniquely identify models of individual anemia patients based on their actual Hb responses and EPO administration. Using the a priori information and the measured input-output data of the individual patients, the procedure identifies a unique model, consisting of a nominal model and the associated model uncertainty, for each patient. By incorporating the effects of unknown system initial conditions, considerably smaller measurement samples can be used in the modeling process., (Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
30. Shape analysis of the human brain: a brief survey.
- Author
-
Nitzken MJ, Casanova MF, Gimelfarb G, Inanc T, Zurada JM, and El-Baz A
- Subjects
- Algorithms, Female, Humans, Magnetic Resonance Imaging, Male, Models, Statistical, Brain anatomy & histology, Image Processing, Computer-Assisted methods, Neuroimaging methods
- Abstract
The survey outlines and compares popular computational techniques for quantitative description of shapes of major structural parts of the human brain, including medial axis and skeletal analysis, geodesic distances, Procrustes analysis, deformable models, spherical harmonics, and deformation morphometry, as well as other less widely used techniques. Their advantages, drawbacks, and emerging trends, as well as results of applications, in particular, for computer-aided diagnostics, are discussed.
- Published
- 2014
- Full Text
- View/download PDF
31. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.
- Author
-
Wu W, Fan Q, Zurada JM, Wang J, Yang D, and Liu Y
- Subjects
- Computer Simulation, Feedback, Humans, Algorithms, Learning, Neural Networks, Computer
- Abstract
The aim of this paper is to develop a novel method to prune feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during the training so that they can eventually be removed after the training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during the training. A key point of this paper is to modify the usual L1/2 regularization term by smoothing it at the origin. This approach offers three advantages: first, it removes the oscillation of the gradient value; secondly, it gives better pruning, namely the final weights to be removed are smaller than those produced through the usual L1/2 regularization; thirdly, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided., (Copyright © 2013 Elsevier Ltd. All rights reserved.)
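The key point, a bounded gradient of the smoothed term near the origin, can be checked numerically. The function `(w^2 + eps)^(1/4)` below is one simple smoothing of `|w|^(1/2)`, not necessarily the exact smoothing function used in the paper:

```python
import numpy as np

def l12_grad(w):
    # Gradient of the plain |w|^(1/2) term: sign(w) / (2 sqrt(|w|)).
    # It blows up as w -> 0, which is what causes the gradient oscillation.
    return np.sign(w) * 0.5 / np.sqrt(np.abs(w))

def smoothed_l12_grad(w, eps=1e-4):
    # Gradient of the smoothed term (w^2 + eps)^(1/4): defined and bounded
    # at the origin, so the training gradient no longer oscillates there.
    return 0.5 * w * (w**2 + eps) ** -0.75
```

Evaluating both near zero (e.g. at `w = 1e-6`) shows the plain gradient exploding while the smoothed one stays small, which is exactly the behavior the abstract's first advantage describes.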
- Published
- 2014
- Full Text
- View/download PDF
32. A competitive layer model for cellular neural networks.
- Author
-
Zhou W and Zurada JM
- Subjects
- Image Processing, Computer-Assisted methods, Neural Networks, Computer
- Abstract
This paper discusses a Competitive Layer Model (CLM) for a class of recurrent Cellular Neural Networks (CNNs), extending it from the continuous-time to the discrete-time setting. The objective of the CLM is to partition a set of input features into salient groups. The complete convergence of such networks in the continuous-time case is discussed first. We give a necessary condition, and a necessary and sufficient condition, for the existence of the CLM property in our networks. We also discuss the properties of such networks in the discrete-time case and propose a novel CLM iteration method. This method shows similar performance and storage allocation but faster convergence compared with the previous CLM iteration method (Wersing, Steil, & Ritter, 2001a). Especially for a large-scale network with many features and layers, it can significantly reduce the computing time. Examples and simulation results are used to illustrate the developed theory, the comparison between the two CLM iteration methods, and the application to image segmentation., (Copyright © 2012 Elsevier Ltd. All rights reserved.)
- Published
- 2012
- Full Text
- View/download PDF
33. Computational properties and convergence analysis of BPNN for cyclic and almost cyclic learning with penalty.
- Author
-
Wang J, Wu W, and Zurada JM
- Subjects
- Computer Simulation, Computational Biology methods, Learning physiology, Neural Networks, Computer
- Abstract
The weight decay method, one of the classical complexity regularization methods, is simple and appears to work well in some applications of backpropagation neural networks (BPNN). This paper shows weak and strong convergence results for cyclic and almost-cyclic learning in BPNN with a penalty term (CBP-P and ACBP-P). The convergence is guaranteed under certain relaxed conditions on the activation functions and the learning rate, and under an assumption on the stationary set of the error function. Furthermore, the boundedness of the weights during the training procedure is obtained in a simple and clear way. Numerical simulations are implemented to support our theoretical results and demonstrate that ACBP-P outperforms CBP-P in both convergence speed and generalization ability., (Copyright © 2012 Elsevier Ltd. All rights reserved.)
- Published
- 2012
- Full Text
- View/download PDF
34. Guest editorial: special section on white box nonlinear prediction models.
- Author
-
Baesens B, Martens D, Setiono R, and Zurada JM
- Subjects
- Computer Simulation, Algorithms, Artificial Intelligence, Data Mining methods, Databases, Factual, Nonlinear Dynamics
- Published
- 2011
- Full Text
- View/download PDF
35. Extracting rules from neural networks as decision diagrams.
- Author
-
Chorowski J and Zurada JM
- Subjects
- Computer Simulation, Algorithms, Decision Support Techniques, Neural Networks, Computer, Nonlinear Dynamics
- Abstract
Rule extraction from neural networks (NNs) solves two fundamental problems: it gives insight into the logic behind the network and, in many cases, it improves the network's ability to generalize the acquired knowledge. This paper presents a novel eclectic approach to rule extraction from NNs, named LOcal Rule Extraction (LORE), suited for multilayer perceptron networks with discrete (logical or categorical) inputs. The extracted rules mimic network behavior on the training set and relax this condition on the remaining input space. First, a multilayer perceptron network is trained under a standard regime. It is then transformed into an equivalent form, returning the same numerical result as the original network, yet able to produce rules generalizing the network output for cases similar to a given input. The partial rules extracted for every training set sample are then merged to form a decision diagram (DD) from which logic rules can be extracted. A rule format explicitly separating the subsets of inputs for which an answer is known from those with an undetermined answer is presented. A special data structure, the decision diagram, allowing efficient partial rule merging, is introduced. With regard to rule complexity and generalization ability, LORE gives results comparable to those reported previously. An algorithm transforming DDs into interpretable Boolean expressions is described. Experimental running times of rule extraction are proportional to the network's training time.
- Published
- 2011
- Full Text
- View/download PDF
36. Toward better understanding of protein secondary structure: extracting prediction rules.
- Author
-
Nguyen MN, Zurada JM, and Rajapakse JC
- Subjects
- Algorithms, Data Mining, Databases, Protein, Decision Trees, Proteins classification, Artificial Intelligence, Computational Biology methods, Protein Structure, Secondary, Proteins chemistry
- Abstract
Although numerous computational techniques have been applied to predict protein secondary structure (PSS), only limited studies have dealt with discovery of logic rules underlying the prediction itself. Such rules offer interesting links between the prediction model and the underlying biology. In addition, they enhance interpretability of PSS prediction by providing a degree of transparency to the predicting model usually regarded as a black box. In this paper, we explore the generation and use of C4.5 decision trees to extract relevant rules from PSS predictions modeled with two-stage support vector machines (TS-SVM). The proposed rules were derived on the RS126 data set of 126 nonhomologous globular proteins and on the PSIPRED data set of 1,923 protein sequences. Our approach has produced sets of comprehensible, and often interpretable, rules underlying the PSS predictions. Moreover, many of the rules seem to be strongly supported by biological evidence. Further, our approach resulted in good prediction accuracy, few and usually compact rules, and rules that are generally of higher confidence levels than those generated by other rule extraction techniques.
- Published
- 2011
- Full Text
- View/download PDF
37. Competitive layer model of discrete-time recurrent neural networks with LT neurons.
- Author
-
Zhou W and Zurada JM
- Subjects
- Computer Simulation, Models, Neurological, Neural Networks, Computer, Neurons physiology
- Abstract
This letter discusses the competitive layer model (CLM) for a class of discrete-time recurrent neural networks with linear threshold (LT) neurons. It first addresses the boundedness, global attractivity, and complete stability of the networks. Two theorems are then presented for the networks to have the CLM property. We also present an analysis of the network dynamics, which exhibit column-wise winner-take-all behavior and grouping selection among different layers. Furthermore, we propose a novel synchronous CLM iteration method, which has similar performance and storage allocation but faster convergence compared with the previous asynchronous CLM iteration method (Wersing, Steil, & Ritter, 2001). Examples and simulation results are used to illustrate the developed theory, the comparison between the two CLM iteration methods, and the application to image segmentation.
- Published
- 2010
- Full Text
- View/download PDF
38. Identification of full and partial class relevant genes.
- Author
-
Zhu Z, Ong YS, and Zurada JM
- Subjects
- Biomarkers, Tumor, Computer Simulation, Databases, Genetic, Humans, Markov Chains, Oligonucleotide Array Sequence Analysis, Algorithms, Computational Biology methods, Genes, Neoplasms genetics
- Abstract
Multiclass cancer classification on microarray data has made cancer diagnosis across all of the common malignancies in parallel feasible. Using multiclass cancer feature selection approaches, it is now possible to identify genes relevant to a set of cancer types. However, besides identifying the genes relevant to the set of all cancer types, it would be more informative to biologists if the relevance of each gene to a specific cancer or subset of cancer types could be revealed or pinpointed. In this paper, we introduce two new definitions of multiclass relevancy features: full class relevant (FCR) and partial class relevant (PCR) features. FCR denotes genes that serve as candidate biomarkers for discriminating all cancer types; PCR genes, on the other hand, distinguish subsets of cancer types. Subsequently, a Markov blanket embedded memetic algorithm is proposed for the simultaneous identification of both FCR and PCR genes. Results obtained on commonly used synthetic and real-world microarray data sets show that the proposed approach converges to valid FCR and PCR genes that would assist biologists in their research work. The identification of both FCR and PCR genes is found to improve classification accuracy on many microarray data sets. A further comparison with existing state-of-the-art feature selection algorithms also reveals the effectiveness and efficiency of the proposed approach.
- Published
- 2010
- Full Text
- View/download PDF
39. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.
- Author
-
Mazurowski MA, Zurada JM, and Tourassi GD
- Subjects
- Algorithms, Artificial Intelligence, Automation, Databases, Factual, Female, Humans, Reproducibility of Results, Breast Neoplasms diagnostic imaging, Diagnosis, Computer-Assisted, Image Interpretation, Computer-Assisted methods, Mammography methods
- Abstract
Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
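One way to read the "adaptive incremental" idea above is a grow-until-stale loop: keep adding subclassifiers while held-out performance improves. The patience-based stopping rule below is an illustrative assumption, not the paper's exact criterion, and `make_member`/`evaluate` are caller-supplied stand-ins for subclassifier construction and validation scoring (e.g. AUC):

```python
import numpy as np

def grow_ensemble(make_member, evaluate, max_size=50, patience=3):
    """Adaptive incremental ensemble sizing.

    Adds subclassifiers one at a time and stops after `patience`
    consecutive additions that fail to improve the evaluation score,
    so the ensemble size adapts to the problem at hand."""
    members, best, stale = [], -np.inf, 0
    while len(members) < max_size and stale < patience:
        members.append(make_member(len(members)))
        score = evaluate(members)
        if score > best + 1e-12:
            best, stale = score, 0
        else:
            stale += 1
    return members, best
```

With a resampling scheme such as the random division or random selection mentioned above, `make_member` would train each subclassifier on its own subsample of the development set.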
- Published
- 2009
- Full Text
- View/download PDF
40. Normalized mutual information feature selection.
- Author
-
Estévez PA, Tesmer M, Perez CA, and Zurada JM
- Abstract
A filter method of feature selection based on mutual information, called normalized mutual information feature selection (NMIFS), is presented. NMIFS is an enhancement over Battiti's MIFS, MIFS-U, and mRMR methods. The average normalized mutual information is proposed as a measure of redundancy among features. NMIFS outperformed MIFS, MIFS-U, and mRMR on several artificial and benchmark data sets without requiring a user-defined parameter. In addition, NMIFS is combined with a genetic algorithm to form a hybrid filter/wrapper method called GAMIFS, which includes an initialization procedure and a mutation operator based on NMIFS to speed up the convergence of the genetic algorithm. GAMIFS overcomes the limitations of incremental search algorithms that are unable to find dependencies between groups of features.
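The NMIFS criterion translates directly into a short greedy loop over discrete features: at each step, pick the feature maximizing relevance to the class minus the average normalized redundancy with the already-selected features. This is a compact sketch of that criterion, not the authors' implementation:

```python
import numpy as np
from collections import Counter

def entropy(x):
    n = len(x)
    return -sum(c / n * np.log(c / n) for c in Counter(x).values())

def mutual_info(x, y):
    """Plug-in mutual information (in nats) between two discrete sequences."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def nmifs(features, target, k):
    """Greedy NMIFS: maximize I(f; C) - mean normalized MI with selected set,
    where normalized MI is I(f; s) / min(H(f), H(s))."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k:
        def score(i):
            rel = mutual_info(features[i], target)
            if not selected:
                return rel
            red = np.mean([mutual_info(features[i], features[j])
                           / min(entropy(features[i]), entropy(features[j]))
                           for j in selected])
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

A redundant copy of an already-selected feature scores a normalized redundancy near 1 and is therefore skipped in favor of features carrying new information.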
- Published
- 2009
- Full Text
- View/download PDF
41. Surgical management of a dermal lymphatic malformation of the lower extremity.
- Author
-
Schneider LF, Chen CM, Zurada JM, Walther R, and Grant RT
- Abstract
Dermal lymphatic malformations are rare congenital hamartomas of superficial lymphatics characterized by high recurrence rates after excision. The standard therapy for a single lesion is surgical excision with wide margins, which reduces recurrence but can have a potentially unacceptable aesthetic outcome. A case of a 24-year-old woman with a 6 cm x 5 cm dermal lymphatic malformation on her right thigh, diagnosed by clinical history, physical examination, magnetic resonance imaging and pathological findings, is reported. The patient underwent wide local excision with split-thickness skin grafting. After pathological examination revealed negative margins, the patient underwent tissue expander placement and excision of the skin graft with primary closure. The lesion did not recur, and the patient achieved a satisfactory aesthetic result. The present case represents the first report of the use of tissue expanders to treat dermal lymphatic malformations in the lower extremity and demonstrates a safe, staged approach to successful treatment.
- Published
- 2008
- Full Text
- View/download PDF
42. Selection of examples in case-based computer-aided decision systems.
- Author
-
Mazurowski MA, Zurada JM, and Tourassi GD
- Subjects
- Computer Storage Devices, Reproducibility of Results, Decision Making, Computer-Assisted
- Abstract
Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system's diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system's performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2-4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system.
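Of the three techniques above, random mutation hill climbing is the simplest to sketch: mutate the current fixed-size subset by one random swap and keep the mutation only if fitness does not decrease. The `fitness` callable below is an arbitrary stand-in for the CAD system's diagnostic performance on a validation set:

```python
import random

def rmhc_select(cases, fitness, n_keep, iters=200, seed=0):
    """Random mutation hill climbing over fixed-size case subsets.

    Starts from a random subset of `n_keep` case indices; each iteration
    swaps one member for one outsider and accepts the swap only if the
    fitness does not decrease."""
    rng = random.Random(seed)
    pool = list(range(len(cases)))
    subset = set(rng.sample(pool, n_keep))
    best = fitness(subset)
    for _ in range(iters):
        out = rng.choice(sorted(subset))
        cand = rng.choice([i for i in pool if i not in subset])
        trial = (subset - {out}) | {cand}
        f = fitness(trial)
        if f >= best:
            subset, best = trial, f
    return subset, best
```

Because only the subset membership mutates, the case base shrinks to `n_keep` examples while the fitness (here a toy surrogate) steers which examples survive.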
- Published
- 2008
- Full Text
- View/download PDF
43. Blur identification by multilayer neural network based on multivalued neurons.
- Author
-
Aizenberg I, Paliy DV, Zurada JM, and Astola JT
- Subjects
- Algorithms, Artificial Intelligence, Feedback, Image Processing, Computer-Assisted, Normal Distribution, Neural Networks, Computer, Neurons physiology
- Abstract
A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
- Published
- 2008
- Full Text
- View/download PDF
44. Training neural network classifiers for medical decision making: the effects of imbalanced datasets on classification performance.
- Author
-
Mazurowski MA, Habas PA, Zurada JM, Lo JY, Baker JA, and Tourassi GD
- Subjects
- Algorithms, Breast Neoplasms classification, Breast Neoplasms diagnosis, Computer Simulation, Diagnosis, Computer-Assisted methods, Electronic Data Processing, Humans, ROC Curve, Artificial Intelligence, Decision Making, Feedback, Neural Networks, Computer
- Abstract
This study investigates the effect of class imbalance in training data when developing neural network classifiers for computer-aided medical diagnosis. The investigation is performed in the presence of other characteristics that are typical among medical data, namely small training sample size, large number of features, and correlations between features. Two methods of neural network training are explored: classical backpropagation (BP) and particle swarm optimization (PSO) with clinically relevant training criteria. An experimental study is performed using simulated data and the conclusions are further validated on real clinical data for breast cancer diagnosis. The results show that classifier performance deteriorates with even modest class imbalance in the training data. Further, it is shown that BP is generally preferable over PSO for imbalanced training data, especially with a small data sample and large number of features. Finally, it is shown that there is no clear preference between oversampling and the no-compensation approach, and some guidance is provided regarding a proper selection.
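For reference, the oversampling compensation weighed in the conclusion can be as simple as resampling the minority class with replacement until the class counts match; this generic sketch is not tied to the authors' experimental setup:

```python
import numpy as np

def oversample(X, y, seed=0):
    """Balance a labeled dataset by resampling each minority class with
    replacement until every class matches the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        extra = rng.choice(members, size=n_max - n, replace=True)
        idx.extend(members)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], y[idx]
```

The resampled set keeps every original example and duplicates minority cases, so the classifier sees balanced class frequencies during training.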
- Published
- 2008
- Full Text
- View/download PDF
45. Decision optimization of case-based computer-aided decision systems using genetic algorithms with application to mammography.
- Author
-
Mazurowski MA, Habas PA, Zurada JM, and Tourassi GD
- Subjects
- Case-Control Studies, Databases, Factual, ROC Curve, Algorithms, Decision Support Systems, Clinical, Diagnosis, Computer-Assisted methods, Mammography methods
- Abstract
This paper presents an optimization framework for improving case-based computer-aided decision (CB-CAD) systems. The underlying hypothesis of the study is that each example in the knowledge database of a medical decision support system has different importance in the decision making process. A new decision algorithm incorporating an importance weight for each example is proposed to account for these differences. The search for the best set of importance weights is defined as an optimization problem and a genetic algorithm is employed to solve it. The optimization process is tailored to maximize the system's performance according to clinically relevant evaluation criteria. The study was performed using a CAD system developed for the classification of regions of interests (ROIs) in mammograms as depicting masses or normal tissue. The system was constructed and evaluated using a dataset of ROIs extracted from the Digital Database for Screening Mammography (DDSM). Experimental results show that, according to receiver operator characteristic (ROC) analysis, the proposed method significantly improves the overall performance of the CAD system as well as its average specificity for high breast mass detection rates.
- Published
- 2008
- Full Text
- View/download PDF
46. A class of single-class minimax probability machines for novelty detection.
- Author
-
Kwok JT, Tsang IW, and Zurada JM
- Subjects
- Computer Simulation, Information Storage and Retrieval methods, Neural Networks, Computer, Algorithms, Artificial Intelligence, Creativity, Decision Support Techniques, Game Theory, Models, Statistical, Pattern Recognition, Automated methods
- Abstract
Single-class minimax probability machines (MPMs) offer robust novelty detection with distribution-free worst case bounds on the probability that a pattern will fall inside the normal region. However, in practice, they are too cautious in labeling patterns as outlying and so have a high false negative rate (FNR). In this paper, we propose a more aggressive version of the single-class MPM that bounds the best case probability that a pattern will fall inside the normal region. These two MPMs can then be used together to delimit the solution space. By using the hyperplane lying in the middle of this pair of MPMs, a better compromise between false positives (FPs) and false negatives (FNs), and between recall and precision can be obtained. Experiments on the real-world data sets show encouraging results.
- Published
- 2007
- Full Text
- View/download PDF
47. Using clinical information in goal-oriented learning.
- Author
-
Gaweda AE, Muezzinoglu MK, Aronoff GR, Jacobs AA, Zurada JM, and Brier ME
- Subjects
- Algorithms, Computer Simulation, Humans, Models, Biological, Anemia complications, Anemia drug therapy, Artificial Intelligence, Decision Support Systems, Clinical, Drug Therapy, Computer-Assisted methods, Erythropoietin administration & dosage, Kidney Failure, Chronic complications
- Abstract
We have proposed an extension to the Q-learning algorithm that incorporates existing clinical expertise into the trial-and-error process of acquiring an appropriate rHuEPO administration strategy for patients with anemia due to ESRD. The specific modification lies in multiple updates of the Q-values for several dose/response combinations during a single learning event. This in turn decreases the risk of administering doses that are inadequate in certain situations and thus increases the speed of the learning process. We have evaluated the proposed method using a simulation test-bed involving an "artificial patient" and compared the outcomes to those obtained by classical Q-learning and by a numerical implementation of a clinically used anemia management protocol (AMP). The outcomes of the simulated treatments demonstrate that the proposed method is a more effective tool than traditional Q-learning. Furthermore, we have observed that it has the potential to provide even more stable anemia management than the AMP.
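The "multiple updates per learning event" idea can be sketched as a standard temporal-difference update propagated, with reduced weight, to neighboring dose levels. The neighborhood width and the 0.5 weighting below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def update_q(Q, state, dose, reward, next_state, alpha=0.1, gamma=0.9, spread=1):
    """One learning event updates the taken dose and its neighbors.

    Q has shape (n_states, n_doses). Neighboring dose levels receive the
    same temporal-difference target with a reduced learning weight
    (0.5 here, a hypothetical choice), encoding the clinical intuition
    that similar doses produce similar responses."""
    target = reward + gamma * Q[next_state].max()
    n_doses = Q.shape[1]
    for d in range(max(0, dose - spread), min(n_doses, dose + spread + 1)):
        w = 1.0 if d == dose else 0.5
        Q[state, d] += alpha * w * (target - Q[state, d])
    return Q
```

A single observed response thus informs several dose/response combinations at once, which is what accelerates learning relative to the one-update-per-event classical rule.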
- Published
- 2007
- Full Text
- View/download PDF
48. Reliability analysis framework for computer-assisted medical decision systems.
- Author
-
Habas PA, Zurada JM, Elmaghraby AS, and Tourassi GD
- Subjects
- Humans, Image Enhancement methods, Reproducibility of Results, Sensitivity and Specificity, Software Validation, Algorithms, Breast Neoplasms diagnostic imaging, Decision Support Systems, Clinical, Image Interpretation, Computer-Assisted methods, Mammography methods, Quality Assurance, Health Care methods, Software
- Abstract
We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional training nor building multiple decision models to assess the case-specific CAD accuracy.
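The reliability estimate itself reduces to a local accuracy computation. The Euclidean k-nearest-neighbor selection below is a simple stand-in for the paper's input-dependent neighborhood analysis:

```python
import numpy as np

def local_reliability(query, X_known, correct, k=5):
    """Estimate case-specific reliability as the CAD system's accuracy on
    the k known cases nearest to the query in feature space.

    `correct` is a 0/1 array recording whether the system classified each
    known case correctly; the mean over the neighborhood is the local
    (query-specific) accuracy used as the reliability measure."""
    d = np.linalg.norm(X_known - query, axis=1)
    nearest = np.argsort(d)[:k]
    return correct[nearest].mean()
```

A low value flags a query landing in a region of feature space where the system has historically been wrong, which is when the user alert described above would fire.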
- Published
- 2007
- Full Text
- View/download PDF
49. Estimation of the dynamic spinal forces using a recurrent fuzzy neural network.
- Author
-
Hou Y, Zurada JM, Karwowski W, Marras WS, and Davis K
- Subjects
- Artificial Intelligence, Biomechanical Phenomena methods, Computer Simulation, Fuzzy Logic, Humans, Software, Stress, Mechanical, Systems Theory, Algorithms, Biomimetics methods, Movement physiology, Muscle Contraction physiology, Muscle, Skeletal physiology, Neural Networks, Computer, Spine physiology
- Abstract
Estimation of the dynamic spinal forces from kinematics data is very complicated because it involves handling the relationship between kinematic variables and electromyography (EMG) signals, as well as the relationship between EMG signals and the forces. A recurrent fuzzy neural network (RFNN) model is proposed to establish the kinematics-EMG-force relationship and model the dynamics of muscular activities. The EMG signals are used as an intermediate output and are fed back to the input layer. Since EMG is a direct reflection of muscular activities, the feedback of this model has a physical meaning. It expresses the dynamics of muscular activities in a straightforward way and takes advantage of the recurrent property. The trained model can then predict the forces directly from kinematic variables, bypassing the costly procedure of measuring EMG signals and avoiding the use of a biomechanical model. A learning algorithm is derived for the RFNN model.
- Published
- 2007
- Full Text
- View/download PDF
50. Topical treatments for hypertrophic scars.
- Author
-
Zurada JM, Kriegel D, and Davis IC
- Subjects
- Administration, Cutaneous, Aminoquinolines administration & dosage, Aminoquinolines adverse effects, Aminoquinolines therapeutic use, Bandages, Biological Dressings, Cicatrix, Hypertrophic drug therapy, Cicatrix, Hypertrophic prevention & control, Clinical Trials as Topic, Double-Blind Method, Humans, Imiquimod, Occlusive Dressings, Onions, Phytotherapy, Plant Extracts administration & dosage, Plant Extracts therapeutic use, Polyurethanes, Pressure, Randomized Controlled Trials as Topic, Silicone Gels administration & dosage, Silicone Gels therapeutic use, Single-Blind Method, Vitamin A administration & dosage, Vitamin A adverse effects, Vitamin A therapeutic use, Vitamin E administration & dosage, Vitamin E adverse effects, Vitamin E therapeutic use, Wound Healing drug effects, Cicatrix, Hypertrophic therapy
- Abstract
Hypertrophic scars represent an abnormal, exaggerated healing response after skin injury. In addition to cosmetic concern, scars may cause pain, pruritus, contractures, and other functional impairments. Therapeutic modalities include topical medications, intralesional corticosteroids, laser therapy, and cryosurgery. Topical therapies, in particular, have become increasingly popular because of their ease of use, comfort, noninvasiveness, and relatively low cost. This review will discuss the properties and effectiveness of these agents, including pressure therapy, silicone gel sheeting and ointment, polyurethane dressing, onion extract, imiquimod 5% cream, and vitamins A and E in the prevention and treatment of hypertrophic scars.
- Published
- 2006
- Full Text
- View/download PDF