359 results for "Paulo J G Lisboa"
Search Results
152. Non-negative matrix factorisation methods for the spectral decomposition of MRS data from human brain tumours.
- Author
- Sandra Ortega-Martorell, Paulo J. G. Lisboa, Alfredo Vellido, Margarida Julià-Sapé, and Carles Arús
- Published
- 2012
- Full Text
- View/download PDF
153. A Methodological Framework for Geographic Information Systems Development
- Author
- Emma Dean, Mark Taylor, Hulya Francis, M. Jones, Paulo J. G. Lisboa, and Debbie Appleton
- Subjects
Service (systems architecture), Information Systems and Management, Geographic information system, Geospatial analysis, Operations research, Computer science, Strategy and Management, Fire prevention, General Social Sciences, Functional design, Data science, Local information systems, GIS and public health, Enterprise GIS - Abstract
Geographic Information Systems (GIS) provide map-based spatial analyses of geo-coded data. In this paper we examine a methodological framework for geographic information systems development that was developed and refined over a six-year period, based upon a fire prevention support geographic information system for a UK fire and rescue service. The methodological framework involves a multi-methodology approach that incorporates social and organisational analysis, spatial modelling, and functional design.
- Published
- 2016
- Full Text
- View/download PDF
154. MRSI-based molecular imaging of therapy response to temozolomide in preclinical glioblastoma using source analysis
- Author
- Ana Paula Candiota, Francisco V. Fernández, Martí Pumarola, Ivan Olier, Carles Arús, Teresa Delgado-Goñi, Paulo J. G. Lisboa, Margarida Julià-Sapé, Sandra Ortega-Martorell, and Magdalena Ciezka
- Subjects
Pathology, Temozolomide, Dacarbazine, Magnetic resonance spectroscopic imaging, Magnetic resonance imaging, Therapy response, Molecular Medicine, Radiology, Nuclear Medicine and Imaging, Histopathology, Molecular imaging, Spectroscopy, Glioblastoma - Abstract
Characterization of glioblastoma (GB) response to treatment is a key factor for improving patients' survival and prognosis. MRI and magnetic resonance spectroscopic imaging (MRSI) provide morphologic and metabolic profiles of GB but usually fail to produce unequivocal biomarkers of response. The purpose of this work is to provide proof of concept of the ability of a semi-supervised signal source extraction methodology to produce images with robust recognition of response to temozolomide (TMZ) in a preclinical GB model. A total of 38 female C57BL/6 mice were used in this study. The semi-supervised methodology extracted the required sources from a training set consisting of MRSI grids from eight GL261 GBs treated with TMZ, and six control untreated GBs. Three different sources (normal brain parenchyma, actively proliferating GB and GB responding to treatment) were extracted and used for calculating nosologic maps representing the spatial response to treatment. These results were validated with an independent test set (7 control and 17 treated cases) and correlated with histopathology. Major differences between the responder and non-responder sources were mainly related to the resonances of mobile lipids (MLs) and polyunsaturated fatty acids in MLs (0.9, 1.3 and 2.8 ppm). Responding tumors showed significantly lower mitotic (3.3 ± 2.9 versus 14.1 ± 4.2 mitoses/field) and proliferation rates (29.8 ± 10.3 versus 57.8 ± 5.4%) than control untreated cases. The methodology described in this work is able to produce nosological images of response to TMZ in GL261 preclinical GBs and suitably correlates with the histopathological analysis of tumors. A similar strategy could be devised for monitoring response to treatment in patients. Copyright © 2016 John Wiley & Sons, Ltd.
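The source extraction described in this abstract rests on non-negative matrix factorisation. As an illustration only (plain Lee-Seung NMF on simulated spectra, not the authors' semi-supervised pipeline; all dimensions and data below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MRSI grid: 64 voxel spectra of 200 points, mixed from
# 3 non-negative "tissue" sources (e.g. normal, tumour, responding).
S_true = rng.random((3, 200))
A_true = rng.random((64, 3))
X = A_true @ S_true

# Plain NMF, X ~= W H, fitted with Lee-Seung multiplicative updates.
k = 3
W = rng.random((64, k)) + 0.1
H = rng.random((k, 200)) + 0.1
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

# Rows of H are recovered candidate sources; the columns of W, reshaped
# to the voxel grid, would give per-voxel weight ("nosologic") maps.
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print("relative reconstruction error:", round(float(rel_err), 4))
```

The updates preserve non-negativity, which is what makes the extracted rows of H interpretable as spectra-like sources.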
- Published
- 2016
- Full Text
- View/download PDF
155. Novel hybrid classified vector quantization using discrete cosine transform for image compression.
- Author
- Ali Al-Fayadh, Abir Jaafar Hussain, Paulo J. G. Lisboa, and Dhiya Al-Jumeily
- Published
- 2009
- Full Text
- View/download PDF
156. Dynamic Proteome Profiling of Protein Fractional and Molar Synthesis Rates in Human Muscle in vivo
- Author
- Samuel Bennett, Julien Louis, Connor A Stead, Jatin George Burniston, Graeme L. Close, Paulo J. G. Lisboa, and Jennifer Barrett
- Subjects
Molar, Proteome profiling, Human muscle, Biochemistry, Chemistry, In vivo, Genetics, Molecular Biology, Biotechnology - Published
- 2020
- Full Text
- View/download PDF
157. Reliability of Protein Abundance and Synthesis Measurements in Human Skeletal Muscle
- Author
- Matthew Cocks, Jatin G. Burniston, Kanchana Srisawat, Sam O. Shepherd, Ben Edwards, Juliette A. Strauss, Katie Hesketh, and Paulo J. G. Lisboa
- Subjects
Male, Proteomics, Novel technique, Coefficient of variation, Muscle Proteins, Biochemistry, Mass Spectrometry, Peptide mass, Humans, Deuterium Oxide, Skeletal muscle, Molecular Biology, Chromatography, Reproducibility of Results, Repeatability, Proteome profiling, Protein Biosynthesis, Protein abundance, Glycolysis, Liquid chromatography - Abstract
We investigated the repeatability of dynamic proteome profiling (DPP), a novel technique for measuring the relative abundance (ABD) and fractional synthesis rate (FSR) of proteins in humans. LC-MS analysis was performed on muscle samples taken from male participants (n = 4) who consumed 4 × 50 ml doses of deuterium oxide (²H₂O) per day for 14 d. ABD was measured by label-free quantitation, and FSR was calculated from time-dependent changes in peptide mass isotopomer abundances. One hundred and one proteins had at least 1 unique peptide and were used in the assessment of protein ABD. Fifty-four of these proteins met more stringent criteria and were used in the assessment of FSR data. The median (M), lower-quartile (Q1) and upper-quartile (Q3) values for protein FSR (%/d) were M = 1.63, Q1 = 1.07, Q3 = 3.24. The technical CV of ABD data had a median value of 3.6% (Q1 1.7%, Q3 6.7%), whereas the median CV of FSR data was 10.1% (Q1 3.5%, Q3 16.5%). These values compare favorably against other assessments of technical repeatability of proteomics data, which often set a CV of 20% as the upper bound of acceptability. This article is protected by copyright. All rights reserved.
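The repeatability figures quoted above are coefficient-of-variation calculations: SD divided by mean, times 100, per duplicate pair, summarised by the median. A minimal sketch with invented duplicate measurements (protein names and values are hypothetical):

```python
from statistics import mean, median, stdev

# Hypothetical duplicate abundance measurements (arbitrary units)
# for a few proteins, two technical replicates each.
replicates = {
    "ACTA1": (102.0, 98.5),
    "MYH7":  (55.2, 57.1),
    "CKM":   (310.0, 330.0),
}

def technical_cv(a: float, b: float) -> float:
    """Technical CV (%) of a duplicate pair: SD / mean * 100."""
    return stdev([a, b]) / mean([a, b]) * 100.0

cvs = {p: technical_cv(a, b) for p, (a, b) in replicates.items()}
print({p: round(cv, 1) for p, cv in cvs.items()})
print("median CV:", round(median(cvs.values()), 1))
```

A pair whose CV exceeds 20% would fail the acceptability bound mentioned in the abstract.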
- Published
- 2020
- Full Text
- View/download PDF
158. A novel multivariate approach for biomechanical profiling of stair negotiation
- Author
- Raul V. Casana-Eslava, Natasha C. Francksen, Mark A. Hollands, Thijs Ackermans, Thomas D. O'Brien, Carolyn Lees, Constantinos N. Maganaris, Vasilios Baltzopoulos, and Paulo J. G. Lisboa
- Subjects
Adult, Male, Multivariate statistics, Aging, Friction, Walking, Biochemistry, Young Adult, Endocrinology, Physical medicine and rehabilitation, Risk Factors, Genetics, Humans, Molecular Biology, Gait, Postural Balance, Aged, Foot, Cell Biology, Fall risk, Biomechanical Phenomena, Bonferroni correction, Stair descent, Multivariate Analysis, Predictive power, Wounds and Injuries, Accidental Falls, Female, Analysis of variance, Psychology, Cadence - Abstract
Stair falls, especially during stair descent, are a major problem for older people. Stair fall risk has typically been assessed by quantifying mean differences between subject groups (e.g. older vs. younger individuals) for a number of biomechanical parameters individually indicative of risk, e.g., a reduced foot clearance with respect to the stair edge, which increases the chances of a trip. This approach neglects that individuals within a particular group may also exhibit other concurrent conservative strategies that could reduce the overall risk for a fall, e.g. a decreased variance in foot clearance. The purpose of the present study was to establish a multivariate approach that characterises the overall stepping behaviour of an individual. Twenty-five younger adults (age: 24.5 ± 3.3 y) and 70 older adults (age: 71.1 ± 4.1 y) descended a custom-built instrumented seven-step staircase at their self-selected pace in a step-over-step manner without using the handrails. Measured biomechanical parameters included: 1) Maximal centre of mass angular acceleration, 2) Foot clearance, 3) Proportion of foot length in contact with stair, 4) Required coefficient of friction, 5) Cadence, 6) Variance of these parameters. As a conventional analysis, a one-way ANOVA followed by Bonferroni post-hoc testing was used to identify differences between younger adults, older fallers and non-fallers. To examine differences in overall biomechanical stair descent behaviours between individuals, k-means clustering was used. The conventional grouping approach showed an effect of age and fall history on several single risk factors. The multivariate approach identified four clusters. Three clusters differed from the overall mean by showing both risky and conservative strategies on the biomechanical outcome measures, whereas the fourth cluster did not display any particularly risky or conservative strategies. 
In contrast to the conventional approach, the multivariate approach showed that the identified stepping behaviours did not consist only of older adults or previous fallers. This highlights the limited predictive power for stair fall risk of approaches based on single-parameter comparisons between predetermined groups. Establishing the predictive power of the current approach for future stair falls in older people is imperative for its implementation as a falls prevention tool.
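The clustering step can be sketched with plain k-means on standardised features. This is a generic illustration on synthetic data, not the study's dataset; the feature set, group structure, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "stepping behaviour" profiles: 95 participants x 5 z-scored
# biomechanical features (clearance, cadence, friction, etc.); the four
# underlying groups are invented for illustration.
X = np.vstack([rng.normal(c, 0.5, size=(24, 5)) for c in (-2.0, 0.0, 2.0, 4.0)])[:95]
X = (X - X.mean(0)) / X.std(0)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; an empty cluster keeps its previous centre."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centres[j] for j in range(k)])
    return labels, centres

labels, centres = kmeans(X, k=4)
print("cluster sizes:", np.bincount(labels, minlength=4))
```

Standardising first matters here, exactly as in the study: otherwise the variable with the largest raw units dominates the distance.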
- Published
- 2018
159. Robust Interpretation of Genomic Data in Chronic Obstructive Pulmonary Disease (COPD)
- Author
- Paulo J. G. Lisboa, Carl Chalmers, Jade Hind, Abir Hussain, Casimiro Aday Curbelo Montañez, and Dhiya Al-Jumeily
- Subjects
COPD, Standardization, Computer science, Single-nucleotide polymorphism, Logistic regression, Machine learning, Support vector machine, Cohort, Predictive power, SNP, Artificial intelligence - Abstract
Within genomic studies, a considerable number of publications have reported SNP variants associated with COPD with little to no reproducibility. In this paper, we present a robust methodology that analyses a COPD cohort dataset using a genome-wide association study, followed by an investigation of the associated results using a variety of machine learning (ML) methods. We use a logistic regression model to provide preliminary results, and for further analysis we use the machine learning models RF, MLP, GLM and SVM. Within this study, indications of SNPs well established in previous publications occur in the preliminary results, but these fail to provide further indication of an associative relationship when ML methods are used for classification purposes. Results within this study show little to no predictive power after performing a robust methodology. These results indicate that a standardisation of practice should be implemented to ensure that the publication of false-positive results is reduced and deterred. Further investigation of associative features should be considered standard practice, given the resulting information that can be provided with its use.
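A per-variant association test of the kind a GWAS runs can be sketched as follows. This uses a simple allelic chi-square on an invented 2x2 table of allele counts, not the logistic-regression model the paper describes:

```python
import math

# Invented allele counts for one SNP
# (rows: cases / controls; columns: risk allele / other allele).
table = [[620, 380], [540, 460]]

def chi_square_2x2(t):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row = [sum(r) for r in t]
    col = [t[0][0] + t[1][0], t[0][1] + t[1][1]]
    n = sum(row)
    return sum((t[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(2) for j in range(2))

stat = chi_square_2x2(table)
# One-degree-of-freedom tail probability via the complementary error function.
p = math.erfc(math.sqrt(stat / 2.0))
print(f"chi2 = {stat:.2f}, p = {p:.1e}")
```

Run across hundreds of thousands of SNPs, such raw p-values are exactly what makes multiple-testing control and downstream replication, the paper's main concern, indispensable.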
- Published
- 2018
- Full Text
- View/download PDF
160. AI 2.0: Augmented Intelligence
- Author
- Paulo J. G. Lisboa
- Subjects
Intelligence amplification, Computer science, Artificial intelligence - Published
- 2018
- Full Text
- View/download PDF
161. Whole-body biomechanical load in running-based sports: The validity of estimating ground reaction forces from segmental accelerations
- Author
- Jasper Verheul, Mark A. Robinson, Warren Gregson, Jos Vanrenterghem, and Paulo J. G. Lisboa
- Subjects
Adult, Male, Full-body segmental accelerations, Deceleration, Acceleration, Physical Therapy, Sports Therapy and Rehabilitation, Kinematics, Impulse (physics), Loading characteristics, Motion capture, Biomechanical loads, Running, Training load monitoring, Young Adult, Humans, Orthopedics and Sports Medicine, Force platform, Ground reaction force, Mathematics, Segment reductions, Structural engineering, Biomechanical Phenomena, Athletes, Loading rate, Female, Whole body, Biomechanical load - Abstract
OBJECTIVES: Unlike physiological loads, the biomechanical loads of training in running-based sports are still largely unexplored. This study, therefore, aimed to assess the validity of estimating ground reaction forces (GRF), as a measure of external whole-body biomechanical loading, from segmental accelerations. METHODS: Fifteen team-sport athletes performed accelerations, decelerations, 90° cuts and straight running at different speeds including sprinting. Full-body kinematics and GRF were recorded with a three-dimensional motion capture system and a single force platform respectively. GRF profiles were estimated as the sum of the products of all fifteen segmental masses and accelerations, or of a reduced number of segments. RESULTS: Errors for GRF profiles estimated from fifteen segmental accelerations were low (1-2 N kg⁻¹) for low-speed running, moderate (2-3 N kg⁻¹) for accelerations, 90° cuts and moderate-speed running, but very high (>4 N kg⁻¹) for decelerations and high-speed running. Similarly, impulse (2.3-11.1%), impact peak (9.2-28.5%) and loading rate (20.1-42.8%) errors varied across tasks. Moreover, mean errors increased from 3.26 ± 1.72 N kg⁻¹ to 6.76 ± 3.62 N kg⁻¹ across tasks when the number of segments was reduced. CONCLUSIONS: Accuracy of estimated GRF profiles and loading characteristics was dependent on task, and errors substantially increased when the number of segments was reduced. Using a direct mechanical approach to estimate GRF from segmental accelerations is thus unlikely to be a valid method to assess whole-body biomechanical loading across different dynamic and high-intensity activities. Researchers and practitioners should, therefore, be very cautious when interpreting accelerations from one or several segments, as these are unlikely to accurately represent external whole-body biomechanical loads. Part of: Journal of Science and Medicine in Sport, vol. 22, issue 6, pp. 716-722.
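The estimation principle is Newton's second law applied to the whole body: vertical GRF(t) = sum over segments of m_i * (a_z,i(t) + g). A hedged sketch with invented masses, a reduced segment count (3 rather than the paper's 15), and a made-up acceleration profile:

```python
import numpy as np

g = 9.81                            # gravitational acceleration, m/s^2
t = np.linspace(0.0, 0.25, 251)     # one hypothetical 250 ms stance phase

# Invented lumped "segments" (kg): trunk+head, thighs, shanks+feet.
masses = np.array([50.0, 15.0, 10.0])

# Made-up vertical segment-CoM accelerations (m/s^2): half-sine
# impact-like profiles of different amplitude per segment.
acc_z = np.array([a * np.sin(np.pi * t / 0.25) for a in (12.0, 8.0, 5.0)])

# Whole-body Newton's second law in the vertical direction:
#   GRF_z(t) = sum_i m_i * (a_z,i(t) + g)
grf_z = (masses[:, None] * (acc_z + g)).sum(axis=0)

body_weight = masses.sum() * g
print("peak GRF (body weights):", round(float(grf_z.max() / body_weight), 2))
```

At zero segment acceleration the estimate reduces to body weight, which is the static sanity check; the paper's point is that errors in the measured a_z,i, and in dropping segments, propagate directly into this sum.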
- Published
- 2018
162. A Data Science Methodology Based on Machine Learning Algorithms for Flood Severity Prediction
- Author
- Mohammed Khalaf, Dhiya Al-Jumeily, Robert Keight, Paul Fergus, Thar Baker, Paulo J. G. Lisboa, Abir Hussain, and Ala S. Al Kafri
- Subjects
Flood myth, Computer science, Global warming, Machine learning, Random forest, Flood mitigation, Artificial intelligence, Precipitation, Natural disaster, Surface runoff, Algorithm - Abstract
In this paper, a novel application of machine learning algorithms, including a neural network architecture, is presented for the prediction of flood severity. Floods are natural disasters that cause wide-scale devastation to affected areas. Flooding is commonly caused by runoff from rivers and precipitation, specifically during periods of extremely high rainfall. Due to concerns surrounding global warming and extreme ecological effects, flooding is considered a serious problem with a negative impact on infrastructure and humankind. This paper attempts to address flood mitigation through the presentation of a new flood dataset, comprising 2000 annotated flood events, where the severity of the outcome is categorised into three target classes representing the respective severities of floods. The paper also presents various machine learning algorithms for predicting flood severity and classifying outcomes into the three classes: normal, abnormal, and high-risk floods. Extensive research indicates that artificial intelligence algorithms can produce enhancements when utilised for the pre-processing of flood data, and these approaches helped in acquiring better accuracy in the classification techniques. Neural network architectures generally produce good outcomes in many applications; however, our experimental results illustrate that the random forest classifier yields the optimal results in comparison with the benchmarked models.
- Published
- 2018
- Full Text
- View/download PDF
163. A Lifelogging Platform Towards Detecting Negative Emotions in Everyday Life using Wearable Devices
- Author
- Paulo J. G. Lisboa, Stephen H. Fairclough, Félix Fernando González Navarro, and Chelsea Dobbins
- Subjects
Computer science, Decision tree learning, Decision tree, Wearable computer, Lifelog, Anger, Linear discriminant analysis, Human–computer interaction, Everyday life, Wearable technology - Abstract
Repeated experiences of negative emotions, such as stress, anger or anxiety, can have long-term consequences for health. These episodes of negative emotion can be associated with inflammatory changes in the body, which are clinically relevant for the development of disease in the long term. However, the development of effective coping strategies can mediate this causal chain. The proliferation of ubiquitous and unobtrusive sensor technology supports an increased awareness of those physiological states associated with negative emotion and supports the development of effective coping strategies. Smartphone and wearable devices utilise multiple on-board sensors that are capable of capturing daily behaviours in a permanent and comprehensive manner, which can be used as the basis for self-reflection and insight. However, there are a number of inherent challenges in this application, including unobtrusive monitoring, data processing, and analysis. This paper posits a mobile lifelogging platform that utilises wearable technology to monitor and classify levels of stress. A pilot study has been undertaken with six participants, who completed up to ten days of data collection. During this time, they wore a wearable device on the wrist during waking hours to collect instances of heart rate (HR) and Galvanic Skin Resistance (GSR). Preliminary data analysis was undertaken using three supervised machine learning algorithms: Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA) and Decision Tree (DT). An accuracy of 70% was achieved using the Decision Tree algorithm.
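Of the three classifiers compared, LDA is simple enough to sketch from first principles. A minimal two-class LDA on synthetic HR/GSR features (all values here are invented; the study's actual features and labels differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented windowed features: [mean HR (bpm), mean GSR level] for
# "low stress" (class 0) and "high stress" (class 1) segments.
X0 = rng.normal([70.0, 2.0], [5.0, 0.5], size=(100, 2))
X1 = rng.normal([90.0, 4.0], [5.0, 0.5], size=(100, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100, int), np.ones(100, int)]

# Two-class LDA: project onto w = Sigma^-1 (mu1 - mu0) and threshold
# halfway between the projected class means.
mu0, mu1 = X0.mean(0), X1.mean(0)
pooled = (np.cov(X0.T) + np.cov(X1.T)) / 2.0
w = np.linalg.solve(pooled, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2.0
pred = (X @ w > threshold).astype(int)

accuracy = float((pred == y).mean())
print("training accuracy:", round(accuracy, 3))
```

QDA differs only in keeping a separate covariance per class, which makes the decision boundary quadratic rather than linear.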
- Published
- 2018
- Full Text
- View/download PDF
164. What impact did a Paediatric Early Warning system have on emergency admissions to the paediatric intensive care unit? An observational cohort study
- Author
- Colman McGrath, Steven Lane, Gerri Sefton, Paulo J. G. Lisboa, Enitan D. Carrol, and Lyvonne N Tume
- Subjects
Male, Pediatrics, Adolescent, Child Health Services, Psychological intervention, PIM2, Intensive Care Units, Pediatric, Critical Care Nursing, Severity of Illness Index, State Medicine, Cohort Studies, Paediatric intensive care unit, Patient Admission, Humans, Child, Emergency admission, Infant, Newborn, Length of Stay, Pediatric Nursing, Benchmarking, England, Child, Preschool, Emergency medicine, Early warning system, Female, Observational study, Emergencies, Emergency Service, Hospital, Cohort study - Abstract
Summary: The ideology underpinning Paediatric Early Warning systems (PEWs) is that earlier recognition of deteriorating in-patients would improve clinical outcomes.
Objective: To explore how the introduction of PEWs at a tertiary children's hospital affects emergency admissions to the Paediatric Intensive Care Unit (PICU), and the impact on service delivery; and to compare 'in-house' emergency admissions to PICU with 'external' admissions transferred from District General Hospitals (without PEWs).
Method: A before-and-after observational study, August 2005-July 2006 (pre) and August 2006-July 2007 (post) implementation of PEWs at the tertiary children's hospital.
Results: The median Paediatric Index of Mortality (PIM2) reduced: 0.44 vs 0.60 (p < 0.001). Fewer admissions required invasive ventilation, 62.7% vs 75.2% (p = 0.015), and for a shorter median duration, four days down to two. The median length of PICU stay reduced from five to three days (p = 0.002). There was a non-significant reduction in mortality (p = 0.47). There was no comparable improvement in outcome in external emergency admissions to PICU. A 39% reduction in emergency admission total bed days reduced cancellation of major elective surgical cases and refusal of external PICU referrals.
Conclusions: Following introduction of PEWs at a tertiary children's hospital, PIM2 was reduced, and patients required fewer PICU interventions and had a shorter length of stay. PICU service delivery improved.
- Published
- 2015
- Full Text
- View/download PDF
165. Hybrid Neural Network Predictive-Wavelet Image Compression System
- Author
- Dhiya Al-Jumeily, Abir Hussain, Naeem Radi, and Paulo J. G. Lisboa
- Subjects
Computer science, Cognitive Neuroscience, Speech coding, Tunstall coding, Pattern recognition, Computer Science Applications, Hybrid neural network, Artificial Intelligence, JPEG 2000, Computer vision, Context-adaptive binary arithmetic coding, Context-adaptive variable-length coding, Image compression, Data compression - Abstract
This paper considers a novel image compression technique called hybrid predictive wavelet coding. The proposed technique combines the properties of predictive coding and discrete wavelet coding. In contrast to JPEG2000, the image data values are pre-processed using predictive coding to remove inter-pixel redundancy. The error values, which are the differences between the original and the predicted values, are then discrete-wavelet transformed. In this case, a nonlinear neural network predictor is utilised in the predictive coding system. The simulation results indicated that the proposed technique can achieve good compressed images at high decomposition levels in comparison to JPEG2000.
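The predictive step removes inter-pixel redundancy so that the residual stream is cheaper to code. A sketch with a trivial previous-pixel predictor (the paper trains a nonlinear neural predictor instead) on an invented, highly correlated signal:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 8-bit "image" row with strong inter-pixel correlation:
# a smooth ramp plus small noise (all values invented).
row = np.clip(np.arange(256) + rng.integers(-3, 4, 256), 0, 255).astype(int)

# Previous-pixel predictor: residual e[n] = x[n] - x[n-1].
# The decoder reverses this exactly, so the step is lossless.
residual = np.diff(row, prepend=0)
assert np.array_equal(np.cumsum(residual), row)

def entropy_bits(values):
    """Empirical zeroth-order entropy of a symbol stream, bits/symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print("raw     :", round(entropy_bits(row), 2), "bits/symbol")
print("residual:", round(entropy_bits(residual), 2), "bits/symbol")
```

The residuals concentrate around zero, so their entropy is far lower than that of the raw pixels, which is the margin the wavelet-plus-entropy-coding stage then exploits.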
- Published
- 2015
- Full Text
- View/download PDF
166. Probabilistic Modeling in Machine Learning.
- Author
- Davide Bacciu, Paulo J. G. Lisboa, Alessandro Sperduti, and Thomas Villmann
- Published
- 2015
- Full Text
- View/download PDF
167. Detection of glyphosate in deionised water using machine learning techniques with microwave spectroscopy
- Author
- Andy Shaw, Alex Mason, Sean Cashman, Paulo J. G. Lisboa, and Olga Korostynska
- Subjects
Detection limit, Machine learning, Glyphosate, Rotational spectroscopy, Artificial intelligence, Ecotoxicity, Feature set, Mathematics - Abstract
Glyphosate is a commonly used herbicide which carries some risk of ecotoxicity and has been shown to be harmful to human beings at high levels of exposure. Existing methods of glyphosate detection often struggle to achieve the level of sensitivity required to meet regulatory requirements without the use of complicated analytical methods with multiple intermediary steps. We propose the use of microwave spectroscopy to determine the concentration of glyphosate in aqueous solutions, using machine learning methodology to identify a minimum feature set for our model. The resulting model had a limit of detection of roughly 10⁻³ mg/L; fitted values were significantly (Pearson's R = 0.8833, P
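A limit of detection of the kind quoted above is conventionally estimated from a calibration curve as 3.3 sigma/slope (the ICH-style rule). A hedged sketch; the concentrations and responses below are invented and unrelated to the paper's microwave feature model:

```python
import numpy as np

# Hypothetical calibration data: glyphosate concentration (mg/L) vs.
# a scalar spectral response (arbitrary units); all values invented.
conc = np.array([0.0, 0.001, 0.002, 0.005, 0.01, 0.02])
resp = np.array([0.102, 0.128, 0.151, 0.233, 0.361, 0.622])

# Least-squares calibration line: resp ~= slope * conc + intercept.
slope, intercept = np.polyfit(conc, resp, 1)
resid_sd = np.std(resp - (slope * conc + intercept), ddof=2)

# ICH-style limit of detection: 3.3 * sigma / slope.
lod = 3.3 * resid_sd / slope
print(f"slope = {slope:.1f}, LoD ~ {lod:.1e} mg/L")
```

The same arithmetic applies whatever the response variable is, which is why a tighter residual spread (better features) directly lowers the detection limit.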
- Published
- 2017
- Full Text
- View/download PDF
168. A robust method for the interpretation of genomic data
- Author
- Basma Abdulaimma, Abir Hussain, Casimiro Aday Curbelo Montañez, Paulo J. G. Lisboa, Dhiya Al-Jumeily, and Jade Hind
- Subjects
Multivariate statistics, Linkage disequilibrium, Training set, Artificial neural network, Computer science, Conditional probability, Sample size determination, Covariate, Predictive power, Biomarker (medicine), Sensitivity (control systems), Data mining - Abstract
This paper presents a robust methodology to find biomarkers that are predictive of any given clinical outcome by combining three critical steps: adjustment for correlated biomarkers through Linkage Disequilibrium pre-processing; False Discovery Rate (FDR) control with q-values; and multivariate predictive modelling with neural networks. The results show that neural network modelling with pre-processing using p-values can be misleading. In particular, the interpretation of the neural network through calculation of the conditional probabilities P(x|c), where x represents covariates and c the classes, has an important role in elucidating the predictive power (or lack of it) of the biomarkers. The methodology is generally applicable to p > n modelling, where the initial pool of potential predictive parameters p (e.g. biomarkers) is greater than the sample size n.
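The q-value step is false-discovery-rate control. A minimal Benjamini-Hochberg adjustment, the simplest q-value variant (Storey's estimator, which some pipelines use instead, additionally estimates the null proportion):

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values (a basic q-value variant)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # rank i (1-based) gets p_(i) * m / i ...
    ranked = p[order] * m / np.arange(1, m + 1)
    # ... then enforce monotonicity from the largest p-value downwards.
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.minimum(ranked, 1.0)
    return q

# Invented p-values for six candidate biomarkers.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
q = bh_qvalues(pvals)
print(np.round(q, 3))
```

Selecting biomarkers with q < 0.05 then bounds the expected fraction of false discoveries among them at 5%, which is the guarantee raw p-value thresholds do not give.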
- Published
- 2017
- Full Text
- View/download PDF
169. Conditional independence mapping of DIGE data reveals PDIA3 protein species as key nodes associated with muscle aerobic capacity
- Author
- Jatin G. Burniston, Ian H. Jarman, Yi-Wen Chen, Steven L. Britton, Donna Gray, Jenna Kenyani, James N. Cobley, Lauren G. Koch, Jonathan M. Wastling, Eleonora Guadagnin, Paulo J. G. Lisboa, and Daniel J. Cuthbertson
- Subjects
Leptin, Male, Proteomics, STAT3 Transcription Factor, Electrospray Ionization Mass Spectrometry, Proteome, Protein Disulfide-Isomerases, Biophysics, PDIA3, Biology, Biochemistry, Article, Oxidative Phosphorylation, Running, N-myc down-regulated gene 2, Sexual dimorphism, Sex Factors, Tandem Mass Spectrometry, Animals, Two-Dimensional Gel Electrophoresis, Animal Selection Model, Phosphorylation, Skeletal Muscle, Protein disulfide-isomerase, Gene, Genetics, Gel electrophoresis, Genetic Polymorphism, Signal transducer and activator of transcription 3, Mass spectrometry, Computational Biology, Phenotype, Rats, Blot, Bibliometric network analysis, Physical Endurance, Female, Signal Transduction - Abstract
Profiling of protein species is important because gene polymorphisms, splice variations and post-translational modifications may combine and give rise to multiple protein species that have different effects on cellular function. Two-dimensional gel electrophoresis is one of the most robust methods for differential analysis of protein species, but bioinformatic interrogation is challenging because the consequences of changes in the abundance of individual protein species on cell function are unknown and cannot be predicted. We conducted DIGE of soleus muscle from male and female rats artificially selected as either high- or low-capacity runners (HCR and LCR, respectively). In total 696 protein species were resolved and LC-MS/MS identified proteins in 337 spots. Forty protein species were differentially (P
- Published
- 2014
- Full Text
- View/download PDF
170. Understanding community fire risk—A spatial model for targeting fire prevention activities
- Author
- Paulo J. G. Lisboa, Emma Higgins, Mark Taylor, and M. Jones
- Subjects
Service (business), Geographic information system, Process management, Vulnerability index, Operations research, Information sharing, Fire prevention, Vulnerability, General Physics and Astronomy, General Chemistry, Customer insight, General Materials Science, Safety, Risk, Reliability and Quality, Bespoke - Abstract
This paper outlines recent research completed in partnership between Merseyside Fire and Rescue Service and Liverpool John Moores University. The aim of the research was to investigate ways to develop and implement a bespoke spatial model that could be used to target services based on risks and needs. This paper outlines the techniques used to develop the spatial model. In particular, the paper investigates two strands of customer insight developed for Merseyside Fire and Rescue Service. These are community profiles, based on a cluster analysis approach, to understand risks present within communities, and the vulnerability index, which identifies individuals most at risk from fire using data shared through information sharing agreements. Nationally recognised risk modelling toolkits, such as the Fire Service Emergency Cover toolkit, do not utilise local information or have the ability to identify risk at an individual level. There is a need for this intelligence to be able to proactively target services, such as the Home Fire Safety Check. This paper also discusses some of the key operational and strategic areas that benefit from this information and investigates some of the barriers and challenges for fire and rescue services within this area.
- Published
- 2013
- Full Text
- View/download PDF
171. A principled approach to network-based classification and data representation
- Author
- Ian H. Jarman, Héctor Ruiz, Terence A. Etchells, José D. Martín, and Paulo J. G. Lisboa
- Subjects
Cognitive Neuroscience, Fisher kernel, Pattern recognition, Probability density function, Conditional probability distribution, External Data Representation, Computer Science Applications, Weighting, Euclidean distance, Data point, Artificial Intelligence, Data mining, Fisher information, Mathematics - Abstract
Measures of similarity are fundamental in pattern recognition and data mining. Typically the Euclidean metric is used in this context, weighting all variables equally and therefore assuming equal relevance, which is very rare in real applications. In contrast, given an estimate of a conditional density function, the Fisher information calculated in primary data space implicitly measures the relevance of variables in a principled way by reference to auxiliary data such as class labels. This paper proposes a framework that uses a distance metric based on Fisher information to construct similarity networks that achieve a more informative and principled representation of data. The framework enables efficient retrieval of reference cases from which a weighted nearest neighbour classifier closely approximates the original density function. Importantly, the identification of nearby data points permits the retrieval of further information with potential relevance to the assessment of a new case. The practical application of the method is illustrated for six benchmark datasets.
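The Fisher-information metric in effect learns how relevant each variable is to the auxiliary labels. A crude stand-in for that idea (not the paper's method): weight each coordinate by squared class-mean separation over variance, then use the weighted distance for nearest-neighbour retrieval. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-class toy data: feature 0 is informative, feature 1 is pure noise
# on a much larger scale (so plain Euclidean distance is misled).
X0 = np.c_[rng.normal(0.0, 1.0, 200), rng.normal(0.0, 5.0, 200)]
X1 = np.c_[rng.normal(3.0, 1.0, 200), rng.normal(0.0, 5.0, 200)]
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200, int), np.ones(200, int)]

# Crude per-variable relevance: squared class-mean gap over variance
# (a stand-in for the Fisher-information weighting in the paper).
gap = (X[y == 1].mean(0) - X[y == 0].mean(0)) ** 2
weights = gap / X.var(0)

def weighted_1nn(X, y, query, w):
    """Label of the nearest neighbour under a diagonally weighted metric."""
    d2 = ((X - query) ** 2 * w).sum(axis=1)
    return int(y[np.argmin(d2)])

query = np.array([0.2, 4.0])
print("relevance weights:", np.round(weights, 3))
print("weighted 1-NN label:", weighted_1nn(X, y, query, weights))
```

The weighted metric all but ignores the noise coordinate, which is the qualitative behaviour the paper obtains in a principled way from the Fisher information of a fitted conditional density.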
- Published
- 2013
- Full Text
- View/download PDF
172. Transformation in the Pharmaceutical Industry: Transformation-Induced Quality Risks--A Survey
- Author
-
James L. Ford, Nader Shafiei, Charles W. Morecroft, Paulo J. G. Lisboa, Yusra Mouzughi, and Mark Taylor
- Subjects
Marketing ,Risk ,Drug Industry ,business.industry ,Chemistry, Pharmaceutical ,Research ,media_common.quotation_subject ,Risk identification ,Pharmaceutical Science ,Survey result ,Risk model ,Transformation (function) ,Product lifecycle ,Risk analysis (engineering) ,Public Opinion ,Surveys and Questionnaires ,Humans ,Technology, Pharmaceutical ,Quality (business) ,Economics, Pharmaceutical ,Product (category theory) ,Process engineering ,business ,media_common ,Pharmaceutical industry - Abstract
This paper is the fourth in a series that explores ongoing transformation in the pharmaceutical industry and its impact on pharmaceutical quality from the perspective of risk identification. The aim of this paper is to validate proposed quality risks through elicitation of expert opinion and define the resultant quality risk model. Expert opinion was obtained using a questionnaire-based survey with participants with recognized expertise in pharmaceutical regulation, product lifecycle, or technology. The results of the survey validate the theoretical and operational evidence in support of the four main pharmaceutical transformation triggers previously identified. The quality risk model resulting from the survey indicated a firm relationship between the pharmaceutical quality risks and regulatory compliance outcomes during the marketing approval and post-marketing phases of the product lifecycle and a weaker relationship during the pre-market evaluation phase. LAY ABSTRACT: In this paper, the quality risks proposed in an earlier part of the research are validated through an expert opinion survey, and the resultant quality risk model is defined. The survey results validate the theoretical and operational evidence previously identified. The quality risk model indicates that transformation-related risks have a larger regulatory compliance impact during product approval, manufacturing, distribution, and commercial use than during the development phase.
- Published
- 2013
- Full Text
- View/download PDF
173. Testing geographical information systems: a case study in a fire prevention support system
- Author
-
Emma Higgins, Mark Taylor, and Paulo J. G. Lisboa
- Subjects
Decision support system ,General Computer Science ,Operations research ,Computer science ,Process (engineering) ,Fire prevention ,Information system ,Support system ,Enterprise GIS ,Data science ,Information Systems ,Test (assessment) - Abstract
Purpose: The purpose of this paper is to describe the development and evaluation of a geographical information system (GIS) testing framework that was used to test a fire prevention support GIS. Design/methodology/approach: A year-long case study was undertaken concerning the testing of a fire prevention support GIS in a UK fire and rescue service. Findings: The GIS testing framework developed involved testing the different components of a GIS, testing their interactions, and then testing the system as a whole. Since GISs contain different components such as spatial analyses and map-based output, this supports the adoption of a different testing framework compared to existing types of information systems. Research limitations/implications: GISs will typically be used by organisations for decision making. Clearly, if the information presented by a GIS is inaccurate, unrepresentative, or unreliable, then the decision-making process can be undermined. Practical implications: This is particularly important with regard to GISs used by emergency services (such as the fire and rescue service studied), where lives could potentially be put at risk by erroneous information provided by such systems. Originality/value: Previous research had indicated that GISs may be inadequately tested. The framework developed for GIS testing provided a systematic testing approach, reducing the likelihood of errors in such systems.
- Published
- 2012
- Full Text
- View/download PDF
174. Flexible parametric modelling of the hazard function in breast cancer studies
- Author
-
Federico Ambrogi, Ilaria Ardoino, Patrizia Boracchi, Elia Biganzoli, Paulo J. G. Lisboa, and Chris Bajdik
- Subjects
Oncology ,Hazard (logic) ,Statistics and Probability ,medicine.medical_specialty ,Mathematical optimization ,Computer science ,media_common.quotation_subject ,Disease ,Accelerated failure time model ,Breast cancer ,Internal medicine ,medicine ,Parametric modelling ,Econometrics ,Gamma distribution ,Function (engineering) ,Additive model ,media_common ,Mathematics ,Proportional hazards model ,Nonparametric statistics ,Cancer ,medicine.disease ,Semiparametric model ,Spline (mathematics) ,Parametric model ,Statistics, Probability and Uncertainty - Abstract
In cancer research, study of the hazard function provides useful insights into disease dynamics, as it describes the way in which the (conditional) probability of death changes with time. The widely utilized Cox proportional hazards model uses a stepwise nonparametric estimator for the baseline hazard function, and therefore has limited utility. The use of parametric models and/or other approaches that enable direct estimation of the hazard function is often invoked. Recent work by Cox et al. [6] has stimulated the use of the flexible parametric model based on the Generalized Gamma (GG) distribution, supported by the development of optimization software. The GG distribution allows estimation of different hazard shapes in a single framework. We use the GG model to investigate the shape of the hazard function in early breast cancer patients. The flexible approach based on a piecewise exponential model and the nonparametric additive hazards model are also considered.
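The flexibility of the Generalized Gamma family can be illustrated numerically. The sketch below uses Stacy's parameterisation and obtains the survival function by trapezoidal integration rather than the fitting software referred to in the abstract; parameter values are illustrative only:

```python
import numpy as np
from math import gamma

def gg_pdf(t, a, d, p):
    """Generalized Gamma density (Stacy parameterisation):
    f(t) = p / (a**d * Gamma(d/p)) * t**(d-1) * exp(-(t/a)**p)."""
    return p / (a**d * gamma(d / p)) * t**(d - 1) * np.exp(-(t / a)**p)

def gg_hazard(tgrid, a, d, p):
    """Hazard h(t) = f(t) / S(t); the survival function S is estimated by
    trapezoidal integration of the density over a fine time grid."""
    f = gg_pdf(tgrid, a, d, p)
    F = np.concatenate([[0.0],
                        np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(tgrid))])
    return f / (1.0 - F)
```

Setting d = p = 1 recovers the exponential distribution (constant hazard), while d = p = 2 gives a Weibull with an increasing hazard, showing how a single family covers qualitatively different disease dynamics.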
- Published
- 2012
- Full Text
- View/download PDF
175. An exploration of causal factors in unintentional dwelling fires
- Author
-
Paulo J. G. Lisboa, Emma Higgins, Vince Kwasnica, and Mark Taylor
- Subjects
Economics and Econometrics ,business.industry ,Strategy and Management ,Environmental resource management ,Binge drinking ,Mental health ,Fire risk ,Risk forecasting ,Risk model ,Customer care ,Geography ,North west ,Environmental health ,Business and International Management ,business ,Finance ,Risk management - Abstract
We examine the causal factors involved in unintentional dwelling fire incidents within the Merseyside area of the North West region of the United Kingdom. The approach of all-subsets multiple linear regression was used to develop an unintentional dwelling fire risk model for the region. The risk model was based on data obtained from UK government agencies relating to causal factors identified by earlier published studies. In the region studied, mental health problems, disability and residents living alone were the most significant factors associated with unintentional dwelling fire fatalities. However, in a separate model of the incidence of unintentional dwelling fires within the region, binge drinking and smoking were additional statistically significant factors.
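The all-subsets multiple linear regression approach named here exhaustively scores every combination of candidate causal factors. A minimal sketch, using adjusted R² as the selection criterion (the abstract does not state which criterion the authors used):

```python
import itertools
import numpy as np

def all_subsets_regression(X, y):
    """Exhaustive best-subset OLS: fit every non-empty subset of the
    candidate predictors and keep the one with the highest adjusted R^2."""
    n, k = X.shape
    ss_tot = ((y - y.mean()) ** 2).sum()
    best_subset, best_adj = None, -np.inf
    for r in range(1, k + 1):
        for subset in itertools.combinations(range(k), r):
            A = np.column_stack([np.ones(n), X[:, list(subset)]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            ss_res = resid @ resid
            adj_r2 = 1 - (ss_res / (n - r - 1)) / (ss_tot / (n - 1))
            if adj_r2 > best_adj:
                best_subset, best_adj = subset, adj_r2
    return best_subset, best_adj
```

Because the search is exhaustive it is only feasible for a modest number of candidate factors (2^k − 1 fits), which matches the small pool of published causal factors used in the study.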
- Published
- 2012
- Full Text
- View/download PDF
176. Cohort-based kernel visualisation with scatter matrices
- Author
-
Paulo J. G. Lisboa, Tingting Mu, and Enrique Romero
- Subjects
business.industry ,Pattern recognition ,Linear discriminant analysis ,Kernel principal component analysis ,Data set ,ComputingMethodologies_PATTERNRECOGNITION ,Kernel method ,Artificial Intelligence ,Kernel embedding of distributions ,Kernel (statistics) ,Signal Processing ,Radial basis function kernel ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Kernel Fisher discriminant analysis ,business ,Software ,Mathematics - Abstract
Visualisation with good discrimination between data cohorts is important for exploratory data analysis and for decision support interfaces. This paper proposes a kernel extension of the cluster-based linear visualisation method described in Lisboa et al. [15]. A representation of the data in dual form permits the application of the kernel trick, so projecting the data onto the orthonormalised cohort means in the feature space. The only parameters of the method are those for the kernel function. The method is shown to obtain well-discriminating visualisations of non-linearly separable data with low computational cost. The linearity of the visualisation was tested using nearest neighbour and linear discriminant classifiers, achieving significant improvements in classification accuracy with respect to the original features, especially for high-dimensional data, where 93% accuracy was obtained for the Splice-junction Gene Sequences data set from the UCI repository.
- Published
- 2012
- Full Text
- View/download PDF
177. Area effects on health inequalities: The impact of neighbouring deprivation on mortality
- Author
-
Penny A. Cook, Ian H. Jarman, Paulo J. G. Lisboa, and Xin Zhang
- Subjects
Male ,Health (social science) ,Inequality ,media_common.quotation_subject ,Geography, Planning and Development ,Population ,Context (language use) ,Health outcomes ,medicine.disease_cause ,Poverty Areas ,Area effect ,medicine ,Humans ,Mortality ,Socioeconomics ,Relative deprivation ,education ,media_common ,education.field_of_study ,Public Health, Environmental and Occupational Health ,Health Status Disparities ,Models, Theoretical ,Geography ,England ,Female ,Demography - Abstract
The exact nature of the association between the context of the local area and local health outcomes is unknown. We investigated whether areas geographically close but divergent in terms of deprivation have greater inequality in health than those where deprivation is similar across neighbouring localities. In order to disaggregate the strong correlation between the deprivation of a target area and that of its surrounding areas, we used principal component analysis to create a measure of relative deprivation. Both deprivation (s=0.183, p
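The disaggregation step described, using principal component analysis to separate overall deprivation level from deprivation relative to neighbouring areas, can be sketched on synthetic data. The variables and coefficients below are illustrative, not the study's data:

```python
import numpy as np

# synthetic deprivation scores: a target area's own score and the mean
# score of its neighbours, deliberately made strongly correlated
rng = np.random.default_rng(1)
own = rng.normal(size=500)
neighbour = 0.8 * own + 0.6 * rng.normal(size=500)
Z = np.column_stack([own, neighbour])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

# principal components of the 2x2 covariance (eigh sorts ascending)
evals, evecs = np.linalg.eigh(np.cov(Z.T))
level = Z @ evecs[:, 1]      # dominant component: overall deprivation level
relative = Z @ evecs[:, 0]   # minor component: deprivation relative to neighbours
```

The two scores are uncorrelated by construction, so the "relative deprivation" measure can enter a mortality model alongside the overall level without the collinearity that plagues the raw variables.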
- Published
- 2011
- Full Text
- View/download PDF
178. Managing unintentional dwelling fire risk
- Author
-
Paulo J. G. Lisboa, Mark Taylor, Mike Francis, and Emma Higgins
- Subjects
Service (business) ,business.industry ,Strategy and Management ,Fire prevention ,Environmental resource management ,General Engineering ,General Social Sciences ,Computer security ,computer.software_genre ,Fire risk ,Social group ,Risk model ,Geography ,Information system ,Safety, Risk, Reliability and Quality ,business ,human activities ,computer - Abstract
In this paper, we examine the management of unintentional dwelling fire risk through the development of a geographical information system (GIS) for dwelling fire prevention support based upon an 18-month case study in a UK fire and rescue service. Previous research into causal factors in unintentional dwelling fire incidents was used to guide the development of a multiple linear regression risk model for dwelling fire incidents that was the basis of the GIS developed. The GIS provided a more detailed analysis of unintentional dwelling fire risk factors, and enabled more targeted fire prevention activities for the identified at-risk social groups.
- Published
- 2011
- Full Text
- View/download PDF
179. The four-variable modification of diet in renal disease formula underestimates glomerular filtration rate in obese type 2 diabetic individuals with chronic kidney disease
- Author
-
K. Hayden, S. Nair, B. Pandya, V. Mishra, Kevin J Hardy, John P.H. Wilding, S. Vinjamuri, and Paulo J. G. Lisboa
- Subjects
Male ,medicine.medical_specialty ,Body Surface Area ,Endocrinology, Diabetes and Metabolism ,Urology ,Renal function ,Comorbidity ,Type 2 diabetes ,Disease ,urologic and male genital diseases ,Body Mass Index ,Internal medicine ,Internal Medicine ,medicine ,Obese group ,Humans ,Obesity ,reproductive and urinary physiology ,Aged ,Food, Formulated ,Body surface area ,business.industry ,Type 2 Diabetes Mellitus ,Feeding Behavior ,Middle Aged ,medicine.disease ,female genital diseases and pregnancy complications ,Endocrinology ,Diabetes Mellitus, Type 2 ,Chronic Disease ,Female ,Kidney Diseases ,business ,Glomerular Filtration Rate ,Kidney disease - Abstract
GFR is commonly estimated using the four-variable Modification of Diet in Renal Disease (MDRD) formula and this forms the basis for classification of chronic kidney disease (CKD). We investigated the effect of obesity on the estimation of glomerular filtration rate in type 2 diabetic participants with CKD. We enrolled 111 patients with type 2 diabetes mellitus in different stages of CKD. GFR was measured using 51Cr-labelled EDTA plasma clearance and was estimated using the four-variable MDRD formula. The bias between estimated and measured GFR was −22.4 (−33.8 to −11.0) p
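For reference, the four-variable MDRD estimate discussed here is commonly written (IDMS-traceable form with the 175 coefficient) as eGFR = 175 × SCr^−1.154 × age^−0.203 × 0.742 (if female) × 1.212 (if African-American), with serum creatinine in mg/dL. A sketch of that textbook formula, not the study's exact implementation:

```python
def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    """Four-variable MDRD eGFR in mL/min/1.73 m^2 (IDMS-traceable
    175 coefficient); scr_mg_dl is serum creatinine in mg/dL."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```

Because the estimate is normalised to a standard 1.73 m² body surface area and ignores body composition, it can systematically misestimate measured GFR in obese individuals, which is the discrepancy the study quantifies.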
- Published
- 2011
- Full Text
- View/download PDF
180. The feasibility of predicting ground reaction forces during running from a trunk accelerometry driven mass-spring-damper model
- Author
-
Mark A. Robinson, Paulo J. G. Lisboa, Jos Vanrenterghem, Terence A. Etchells, Niels Jensby Nedergaard, Jasper Verheul, and Barry Drust
- Subjects
Anatomy and Physiology ,Mean squared error ,lcsh:Medicine ,Bioengineering ,ACCELERATION ,Mass spring damper ,Accelerometer ,General Biochemistry, Genetics and Molecular Biology ,QA76 ,RC1200 ,Training load monitoring ,Root mean square ,03 medical and health sciences ,Acceleration ,0302 clinical medicine ,Optimisation ,BODY ,VALIDITY ,Biomechanical loading ,Ground reaction force ,Mass-spring model ,Mathematics ,Science & Technology ,Body-worn accelerometer ,General Neuroscience ,lcsh:R ,STIFFNESS ,030229 sport sciences ,General Medicine ,Limiting ,Kinesiology ,Geodesy ,Trunk ,Multidisciplinary Sciences ,IMPACT FORCES ,RELIABILITY ,MECHANICS ,Science & Technology - Other Topics ,Tissue adaptations ,General Agricultural and Biological Sciences ,030217 neurology & neurosurgery - Abstract
Background: Monitoring the external ground reaction forces (GRF) acting on the human body during running could help to understand how external loads influence tissue adaptation over time. Although mass-spring-damper (MSD) models have the potential to simulate the complex multi-segmental mechanics of the human body and predict GRF, these models currently require input from measured GRF limiting their application in field settings. Based on the hypothesis that the acceleration of the MSD-model’s upper mass primarily represents the acceleration of the trunk segment, this paper explored the feasibility of using measured trunk accelerometry to estimate the MSD-model parameters required to predict resultant GRF during running. Methods: Twenty male athletes ran at approach speeds between 2–5 m s−1. Resultant trunk accelerometry was used as a surrogate of the MSD-model upper mass acceleration to estimate the MSD-model parameters (ACCparam) required to predict resultant GRF. A purpose-built gradient descent optimisation routine was used where the MSD-model’s upper mass acceleration was fitted to the measured trunk accelerometer signal. Root mean squared errors (RMSE) were calculated to evaluate the accuracy of the trunk accelerometry fitting and GRF predictions. In addition, MSD-model parameters were estimated from fitting measured resultant GRF (GRFparam), to explore the difference between ACCparam and GRFparam. Results: Despite a good match between the measured trunk accelerometry and the MSD-model’s upper mass acceleration (median RMSE between 0.16 and 0.22 g), poor GRF predictions (median RMSE between 6.68 and 12.77 N kg−1) were observed. In contrast, the MSD-model was able to replicate the measured GRF with high accuracy (median RMSE between 0.45 and 0.59 N kg−1) across running speeds from GRFparam. The ACCparam from measured trunk accelerometry under- or overestimated the GRFparam obtained from measured GRF, and generally demonstrated larger within-parameter variations.
Discussion: Despite the potential of obtaining a close fit between the MSD-model’s upper mass acceleration and the measured trunk accelerometry, the ACCparam estimated from this process were inadequate to predict resultant GRF waveforms during slow to moderate speed running. We therefore conclude that trunk-mounted accelerometry alone is inappropriate as input for the MSD-model to predict meaningful GRF waveforms. Further investigations are needed to continue to explore the feasibility of using body-worn micro sensor technology to drive simple human body models that would allow practitioners and researchers to estimate and monitor GRF waveforms in field settings.
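A minimal version of the mass-spring-damper idea can be simulated directly. The sketch below uses semi-implicit Euler integration and entirely illustrative masses, stiffnesses and touchdown velocity (not the paper's fitted parameters) to predict a resultant GRF from the compression of a ground-contact spring:

```python
import numpy as np

def msd_grf(m1=10.0, m2=65.0, k1=8e4, k2=1e4, c2=600.0,
            v0=-3.0, dt=1e-4, duration=0.3, g=9.81):
    """Two-mass MSD model: lower mass m1 contacts the ground through a
    stiff spring k1; upper mass m2 (a trunk surrogate) rides on a
    spring-damper (k2, c2). Returns the simulated vertical GRF series."""
    x1, v1 = 0.0, v0        # lower-mass position (ground at 0) and velocity
    x2, v2 = 1.0, v0        # upper mass starts 1 m higher, same touchdown speed
    L0 = 1.0                # natural length of the connecting spring
    grf = []
    for _ in range(int(duration / dt)):
        spring = k2 * (x2 - x1 - L0) + c2 * (v2 - v1)  # force on m1 from m2
        fg = -k1 * x1 if x1 < 0 else 0.0               # ground only pushes up
        a1 = (spring + fg) / m1 - g
        a2 = -spring / m2 - g
        v1 += a1 * dt; x1 += v1 * dt                   # semi-implicit Euler
        v2 += a2 * dt; x2 += v2 * dt
        grf.append(fg)
    return np.array(grf)
```

The paper's question is whether the upper-mass acceleration trace (here, a2 over time) carries enough information to recover the model parameters; the simulation above only illustrates the forward direction, from parameters to GRF.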
- Published
- 2018
- Full Text
- View/download PDF
181. Clustering of protein expression data: a benchmark of statistical and neural approaches
- Author
-
Davide Bacciu, Ian H. Jarman, Terence A. Etchells, Paulo J. G. Lisboa, Ian O. Ellis, and Jonathan M. Garibaldi
- Subjects
Clustering high-dimensional data ,Computer science ,Correlation clustering ,Conceptual clustering ,Machine learning ,computer.software_genre ,Clustering ,Theoretical Computer Science ,Biclustering ,Breast cancer ,CURE data clustering algorithm ,Consensus clustering ,Cluster analysis ,Artificial neural network ,business.industry ,Data set ,ComputingMethodologies_PATTERNRECOGNITION ,Clustering, Protein expression, Breast cancer, Neural networks ,Protein expression ,Geometry and Topology ,Artificial intelligence ,Data mining ,business ,computer ,Neural networks ,Software - Abstract
Clustering issues are fundamental to exploratory analysis of bioinformatics data. This process may follow algorithms that are reproducible but make assumptions about, for instance, the ability to estimate the global structure by successful local agglomeration or alternatively, they use pattern recognition methods that are sensitive to the initial conditions. This paper reviews two clustering methodologies and highlights the differences that result from the changes in data representation, applied to a protein expression data set for breast cancer (n = 1,076). The two clustering methodologies are a reproducible approach to model-free clustering and a probabilistic competitive neural network. The results from the two methods are compared with existing studies of the same data set, and the preferred clustering solutions are profiled for clinical interpretation.
- Published
- 2010
- Full Text
- View/download PDF
182. Utilising Deep Learning and Genome Wide Association Studies for Epistatic-Driven Preterm Birth Classification in African-American Women
- Author
-
Carl Chalmers, Basma Abdulaimma, Paulo J. G. Lisboa, Casimiro Aday Curbelo Montañez, Beth L. Pineles, and Paul Fergus
- Subjects
QA75 ,Computer science ,Genome-wide association study ,Computational biology ,Logistic regression ,Polymorphism, Single Nucleotide ,Deep Learning ,Pregnancy ,Genetics ,False positive paradox ,Humans ,Multifactor dimensionality reduction ,Applied Mathematics ,Infant, Newborn ,Computational Biology ,Epistasis, Genetic ,Autoencoder ,Random forest ,Black or African American ,Support vector machine ,Binary classification ,Premature Birth ,Female ,RG ,Algorithms ,Genome-Wide Association Study ,Biotechnology - Abstract
Genome-Wide Association Studies (GWAS) are used to identify statistically significant genetic variants in case-control studies. The main objective is to find single nucleotide polymorphisms (SNPs) that influence a particular phenotype (i.e., disease trait). GWAS typically use a p-value threshold of $5*10^{-8}$ to identify highly ranked SNPs. While this approach has proven useful for detecting disease-susceptible SNPs, evidence has shown that many of these are, in fact, false positives. Consequently, there is some ambiguity about the most suitable threshold for claiming genome-wide significance. Many believe that using lower p-values will allow us to investigate the joint epistatic interactions between SNPs and provide better insights into phenotype expression. One example that uses this approach is multifactor dimensionality reduction (MDR), which identifies combinations of SNPs that interact to influence a particular outcome. However, computational complexity increases exponentially as a function of higher-order combinations, making approaches like MDR difficult to implement. Even so, understanding epistatic interactions in complex diseases is a fundamental component of robust genotype-phenotype mapping. In this paper, we propose a novel framework that combines GWAS quality control and logistic regression with deep learning stacked autoencoders to abstract higher-order SNP interactions from large, complex genotyped data for case-control classification tasks in GWAS analysis. We focus on the challenging problem of classifying preterm births, which has a strong genetic component with unexplained heritability reportedly between 20-40 percent. A GWAS data set, obtained from dbGaP, is utilised, which contains predominantly urban low-income African-American women who had normal and preterm deliveries.
Epistatic interactions from original SNP sequences were extracted through a deep learning stacked autoencoder model and used to fine-tune a classifier for discriminating between term and preterm birth observations. All models are evaluated using standard binary classifier performance metrics. The findings show that important information pertaining to SNPs and epistasis can be extracted from 4,666 raw SNPs generated using logistic regression (p-value = $5*10^{-3}$) and used to fit a highly accurate classifier model. The following results (Sen = 0.9562, Spec = 0.8780, Gini = 0.9490, Logloss = 0.5901, AUC = 0.9745, and MSE = 0.2010) were obtained using 50 hidden nodes, and (Sen = 0.9289, Spec = 0.9591, Gini = 0.9651, Logloss = 0.3080, AUC = 0.9825, and MSE = 0.0942) using 500 hidden nodes. The results were compared with a Support Vector Machine (SVM), a Random Forest (RF), and a Fisher's Linear Discriminant Analysis classifier, all of which failed to improve on the deep learning approach.
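The first stage described, ranking individual SNPs by univariate logistic regression before the deep learning step, can be sketched as follows. This uses Newton-Raphson fitting and a Wald filter; the threshold and toy genotypes are illustrative, and the stacked autoencoder itself is omitted:

```python
import numpy as np

def snp_wald_z(g, y, iters=25):
    """Univariate logistic regression of case/control labels y on one
    SNP's allele dosage g (0/1/2); returns the Wald z of the slope."""
    X = np.column_stack([np.ones(len(g)), g])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])          # observed Fisher information
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.linalg.inv(H)[1, 1])
    return beta[1] / se

def filter_snps(G, y, z_crit=2.576):
    """Keep columns of genotype matrix G whose |Wald z| exceeds z_crit
    (roughly p < 0.01, two-sided); the paper's threshold differs."""
    z = np.array([snp_wald_z(G[:, j], y) for j in range(G.shape[1])])
    return np.flatnonzero(np.abs(z) >= z_crit), z
```

Only the SNPs surviving this filter would then be fed to the stacked autoencoder to learn compressed representations of their joint (epistatic) structure.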
- Published
- 2018
- Full Text
- View/download PDF
183. Determination of mode of ventilation using OSRE
- Author
-
Paulo J. G. Lisboa, D. Faulke, Terence A. Etchells, and Michael J. Harrison
- Subjects
Artificial ventilation ,Respiratory rate ,business.industry ,medicine.medical_treatment ,Health Informatics ,Respiration, Artificial ,Computer Science Applications ,Intermittent positive pressure ventilation ,Case-Control Studies ,Anesthesia ,Respiratory Physiological Phenomena ,Breathing ,Humans ,Medicine ,business - Abstract
This study classifies the mode of ventilation using respiratory rate and inhaled and exhaled carbon dioxide concentrations in anaesthetised patients. Thirty-seven patients were breathing spontaneously (SPONT) and 50 were on a ventilator (intermittent positive pressure ventilation, IPPV). A data-based methodology for rule inference from trained neural networks, orthogonal search-based rule extraction (OSRE), identified two sets of low-order Boolean rules for differential identification of the mode of ventilation. Combining both models produced three possible outcomes: IPPV, SPONT and 'Uncertain'. The true positive rates were approximately maintained at 96% for IPPV and 93% for SPONT, with false positive rates of 0.4% for each category and 4.3% 'Uncertain' inferences.
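The three-outcome decision logic described (IPPV, SPONT, 'Uncertain') follows from combining the two extracted rule sets. A sketch, with hypothetical Boolean rules standing in for the extracted thresholds, which the abstract does not give:

```python
def classify_ventilation(resp_rate, etco2, ippv_rule, spont_rule):
    """Combine two rule sets extracted from the trained network: a firm
    answer requires exactly one rule set to fire, otherwise 'Uncertain'."""
    is_ippv = ippv_rule(resp_rate, etco2)
    is_spont = spont_rule(resp_rate, etco2)
    if is_ippv and not is_spont:
        return "IPPV"
    if is_spont and not is_ippv:
        return "SPONT"
    return "Uncertain"

# hypothetical low-order rules, for illustration only
ippv = lambda rr, co2: rr <= 12 and co2 >= 4.5
spont = lambda rr, co2: rr > 12
```

Trading a small 'Uncertain' rate for near-zero false positives in each firm category is exactly the behaviour reported in the abstract.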
- Published
- 2009
- Full Text
- View/download PDF
184. An AI Walk from Pharmacokinetics to A Marketing.
- Author
-
José David Martín-Guerrero, Emilio Soria-Olivas, Paulo J. G. Lisboa, and Antonio J. Serrano-López
- Published
- 2009
185. Cluster-based visualisation with scatter matrices
- Author
-
Andrew R. Green, Ian O. Ellis, Federico Ambrogi, Paulo J. G. Lisboa, and M. B. Dias
- Subjects
education.field_of_study ,Trace (linear algebra) ,Basis (linear algebra) ,Population ,Covariance ,Space (mathematics) ,Combinatorics ,Projection (mathematics) ,Artificial Intelligence ,Scatter matrix ,Signal Processing ,Projective space ,Computer Vision and Pattern Recognition ,education ,Algorithm ,Software ,Mathematics - Abstract
The trace of the scatter matrix, which measures separation between population cohorts, is shown to be strictly preserved by sphering the data followed by a projection onto the space of population means. This result suggests using the space of means as a basis to calculate well-separating lower-dimensional projections of the data, derived from the scatter matrix in the projective space. In particular, it defines an approximation to the canonical decomposition of the scatter matrix that applies for singular covariance matrices. The method is illustrated with reference to k-means clusters in data sets from bioinformatics and marketing.
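The stated result, that the trace of the between-cohort scatter is preserved by sphering followed by projection onto the span of the cohort means, is easy to verify numerically. The construction below uses synthetic cohorts and is a sketch of the idea, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
# three cohorts in 10-D with increasing means
X = np.concatenate([rng.normal(m, 1, size=(50, 10)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 50)

# sphere the data: whiten with the total covariance
mean = X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov((X - mean).T))
Z = (X - mean) @ (evecs / np.sqrt(evals))

# between-cohort scatter in the full sphered space (grand mean of Z is 0)
mus = np.array([Z[y == c].mean(axis=0) for c in range(3)])
ns = np.array([(y == c).sum() for c in range(3)])
full_trace = np.trace((mus.T * ns) @ mus)

# project onto an orthonormal basis of the span of the cohort means
Q, _ = np.linalg.qr(mus.T)
P = Z @ Q
musP = np.array([P[y == c].mean(axis=0) for c in range(3)])
proj_trace = np.trace((musP.T * ns) @ musP)
```

Because each cohort mean lies inside the projection subspace, its norm (and hence the scatter trace) is unchanged, which is why the low-dimensional view loses none of the between-cohort separation.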
- Published
- 2008
- Full Text
- View/download PDF
186. Financial time series prediction using polynomial pipelined neural networks
- Author
-
Wael El-Deredy, Adam Knowles, Abir Hussain, and Paulo J. G. Lisboa
- Subjects
Polynomial ,Theoretical computer science ,Artificial neural network ,Computer science ,Time delay neural network ,General Engineering ,Computer Science Applications ,Set (abstract data type) ,Probabilistic neural network ,Nonlinear system ,Signal-to-noise ratio ,Artificial Intelligence ,Stochastic neural network ,Algorithm - Abstract
This paper proposes a novel type of higher-order pipelined neural network: the polynomial pipelined neural network. The proposed network is constructed from a number of higher-order neural networks concatenated with each other to predict highly nonlinear and nonstationary signals, based on the engineering concept of divide and conquer. The polynomial pipelined neural network is used to predict the exchange rate between the US dollar and three other currencies. In this application, two sets of experiments are carried out. In the first set, the input data are pre-processed to lie between 0 and 1 and passed to the neural networks as nonstationary data. In the second set of experiments, the nonstationary input signals are transformed into a one-step relative increase in price. The network demonstrates more accurate forecasting and an improvement in the signal-to-noise ratio over a number of benchmarked neural networks.
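The second experiment's preprocessing, replacing the raw nonstationary price signal with its one-step relative increase, is a standard returns transform; a small sketch:

```python
import numpy as np

def to_relative_increase(prices):
    """Transform a nonstationary price series into one-step relative
    increases r_t = (p_t - p_{t-1}) / p_{t-1}."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(prices) / prices[:-1]
```

The transformed series is approximately stationary, which typically makes it an easier target for neural network prediction than the raw exchange-rate level.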
- Published
- 2008
- Full Text
- View/download PDF
187. An approach based on the Adaptive Resonance Theory for analysing the viability of recommender systems in a citizen Web portal
- Author
-
Emili Balaguer, José D. Martín-Guerrero, Paulo J. G. Lisboa, Emilio Soria-Olivas, and Alberto Palomares
- Subjects
Information retrieval ,Artificial neural network ,Computer science ,General Engineering ,Recommender system ,computer.software_genre ,Computer Science Applications ,Data set ,Adaptive resonance theory ,Artificial Intelligence ,Collaborative filtering ,Data mining ,Cluster analysis ,computer - Abstract
This paper proposes a methodology to optimise the future accuracy of a collaborative recommender application in a citizen Web portal. There are four stages, namely user modelling, benchmarking of clustering algorithms, prediction analysis and recommendation. The first stage is to develop analytical models of common characteristics of Web-user data. These artificial data sets are then used to evaluate the performance of clustering algorithms, in particular benchmarking the ART2 neural network against K-means clustering. Afterwards, the predictive accuracy of the clusters is evaluated on a real-world data set derived from access logs to the citizen Web portal Infoville XXI (http://www.infoville.es). The results favour ART2 algorithms for cluster-based collaborative filtering on this Web portal. Finally, a recommender based on ART2 is developed. The follow-up of real recommendations will allow recommendations to be improved by including new behaviours that are observed when users interact with the recommender system.
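The final recommendation stage can be sketched independently of the clustering algorithm: once users are assigned to clusters (whether by ART2 or K-means), recommendations come from within-cluster popularity. The toy visit matrix in the test is illustrative:

```python
import numpy as np

def recommend(visits, labels, user, top_n=1):
    """Cluster-based collaborative filter: rank items by their visit
    frequency within the user's cluster and return the highest-ranked
    items that the user has not yet visited."""
    cluster = labels[user]
    popularity = visits[labels == cluster].sum(axis=0)
    popularity[visits[user] > 0] = -1      # exclude already-visited items
    return np.argsort(popularity)[::-1][:top_n]
```

Monitoring which recommendations users actually follow then feeds back into the cluster model, which is the improvement loop the abstract closes with.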
- Published
- 2007
- Full Text
- View/download PDF
188. Comparison of neural network and binary logistic regression methods in conceptual design of tall steel buildings
- Author
-
Hassan Al Nageim, Ravindra Nagar, and Paulo J. G. Lisboa
- Subjects
Engineering ,General Computer Science ,Artificial neural network ,business.industry ,Feed forward ,Binary logic ,Regression analysis ,Building and Construction ,Logistic regression ,computer.software_genre ,Machine learning ,McNemar's test ,Conceptual design ,Control and Systems Engineering ,Architecture ,Software design ,Artificial intelligence ,Data mining ,business ,computer ,Civil and Structural Engineering - Abstract
Purpose: To investigate the feasibility of using artificial neural networks for the conceptual design of bracing systems for tall steel buildings. Design/methodology/approach: A database of 234 design examples was developed using commercially available detailed design software. These examples represent buildings of up to 20 storeys. A feed-forward back-propagation neural network was trained on these examples. The results obtained from the artificial neural network were evaluated by re-substitution, hold-out and ten-fold cross-validation techniques. Findings: Results indicate that the artificial neural network gives a performance of 97.91 percent (ten-fold cross-validation). The performance of this system was benchmarked by developing a binary logistic regression model from the same data. The performance of the two models was compared using McNemar's test and receiver operating characteristic curves. The artificial neural network shows a better performance, and the difference was found to be statistically significant. Research limitations/implications: The developed model is applicable only to steel buildings of up to 20 storeys; the feasibility of using artificial neural networks for the conceptual design of bracing systems for taller buildings has not been investigated. Practical implications: Implementation of the broad methodology outlined for the use of neural networks can be accomplished by conducting short training courses. This will provide personnel with flexibility in addressing building-specific bracing conditions and limitations. Originality/value: In tall building design, much progress has been made in the development of software tools for the numerically intensive tasks of analysis, design and optimization; however, professional software tools are not available to help the designer choose an optimum building configuration at the conceptual design stage.
The presented research provides a methodology to investigate the feasibility of using artificial neural networks for the conceptual design of bracing systems for tall buildings. This approach to the selection of bracings in tall buildings is found to be a better and more cost-effective option than a database generated on the basis of expert opinion. It also correctly classifies and recommends the type of trussed bracing system.
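The benchmarking step used McNemar's test, which compares two classifiers evaluated on the same examples using only the discordant pairs. A sketch with the standard continuity correction (the counts in the test are made up, not the paper's):

```python
from math import erf, sqrt

def mcnemar(b, c):
    """McNemar's test with continuity correction. b and c count the
    discordant pairs: cases where exactly one of the two classifiers is
    correct. Returns the chi-square statistic (1 df) and its p-value,
    computed via the standard normal CDF."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    z = sqrt(chi2)
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return chi2, p
```

A small p-value indicates that the two models' error patterns differ significantly, which is the basis for the paper's claim that the neural network's advantage over logistic regression is statistically significant.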
- Published
- 2007
- Full Text
- View/download PDF
189. Gait quality assessment using self-organising artificial neural networks
- Author
-
Paulo J. G. Lisboa, Adrian Lees, Steve Attfield, and Gabor Barton
- Subjects
Self-organizing map ,Computer science ,Biophysics ,Machine learning ,computer.software_genre ,Pelvis ,Self organisation ,Reference Values ,Humans ,Orthopedics and Sports Medicine ,Sensitivity (control systems) ,Child ,Gait ,Gait Disorders, Neurologic ,Artificial neural network ,business.industry ,Quality assessment ,Rehabilitation ,Biomechanical Phenomena ,Weighting ,Lower Extremity ,Case-Control Studies ,Gait analysis ,Joints ,Neural Networks, Computer ,Artificial intelligence ,business ,computer - Abstract
In this study, the challenge of maximising the potential of gait analysis through advanced methods was addressed by using self-organising neural networks to quantify the deviation of patients' gait from normal. Data including three-dimensional joint angles, moments and powers of the two lower limbs and the pelvis were used to train Kohonen artificial neural networks to learn an abstract definition of normal gait. Subsequently, data from patients with gait problems were presented to the network, which quantified the quality of gait in the form of a single curve by calculating the quantisation error during the gait cycle. A sensitivity analysis involving the manipulation of gait variables' weighting was able to highlight specific causes of the deviation, including the anatomical location and the timing of abnormal gait patterns. Use of the quantisation error can be regarded as an extension of previously described gait indices because it measures the goodness of gait and additionally provides information related to the causes underlying gait deviations.
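The quantisation error underlying the gait index is simply the distance from a new gait vector to its best-matching unit in a map trained on normal gait only. A minimal sketch, using plain competitive learning in place of a full Kohonen network with a neighbourhood function, on toy 4-D "gait vectors" rather than real data:

```python
import numpy as np

def train_codebook(X, n_units=5, epochs=50, lr=0.5, seed=0):
    """Toy competitive learning (a SOM without the neighbourhood update),
    fitted to normal gait vectors only."""
    rng = np.random.default_rng(seed)
    units = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for epoch in range(epochs):
        eta = lr * (1 - epoch / epochs)          # decaying learning rate
        for x in X:
            bmu = np.argmin(((units - x) ** 2).sum(axis=1))
            units[bmu] += eta * (x - units[bmu]) # move winner towards sample
    return units

def quantisation_error(x, units):
    """Distance to the best-matching unit; large values flag gait
    patterns far from anything seen during (normal-gait) training."""
    return np.sqrt(((units - x) ** 2).sum(axis=1)).min()
```

Evaluating the quantisation error at each instant of the gait cycle yields the single deviation curve described in the abstract.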
- Published
- 2007
- Full Text
- View/download PDF
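The quantisation-error idea in the abstract above can be sketched with a minimal Kohonen map. Everything here is illustrative: random synthetic vectors stand in for the study's joint angles, moments and powers, and the map size and training schedule are arbitrary choices, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "normal gait" data: 200 frames x 6 gait variables (hypothetical).
normal = rng.normal(0.0, 1.0, size=(200, 6))

# Train a minimal Kohonen self-organising map (5x5 grid) on the normal data.
grid_h, grid_w, dim = 5, 5, normal.shape[1]
weights = rng.normal(0.0, 1.0, size=(grid_h * grid_w, dim))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], dtype=float)

n_iter = 2000
for t in range(n_iter):
    x = normal[rng.integers(len(normal))]
    lr = 0.5 * (1 - t / n_iter)            # decaying learning rate
    sigma = 2.0 * (1 - t / n_iter) + 0.3   # decaying neighbourhood radius
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian neighbourhood function
    weights += lr * h[:, None] * (x - weights)

def quantisation_error(x):
    """Distance from a single frame to its best-matching unit on the map."""
    return np.min(np.linalg.norm(weights - x, axis=1))

normal_qe = np.mean([quantisation_error(x) for x in normal])
deviant = rng.normal(3.0, 1.0, size=6)     # a frame far from the normal cloud
print(normal_qe, quantisation_error(deviant))
```

A frame lying far from the cloud of normal data finds no nearby best-matching unit, so its quantisation error rises — the single-curve deviation measure the abstract describes.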
190. Probabilistic Modeling in Machine Learning
- Author
-
Paulo J. G. Lisboa, Alessandro Sperduti, Thomas Villmann, and Davide Bacciu
- Subjects
Computational Intelligence, Probabilistic Models, Machine Learning ,Computer science ,business.industry ,Probabilistic logic ,Statistical relational learning ,Bayesian network ,Machine learning ,computer.software_genre ,Latent Dirichlet allocation ,Relevance vector machine ,Machine Learning ,symbols.namesake ,Probabilistic method ,Stochastic grammar ,symbols ,Computational Intelligence ,Artificial intelligence ,Graphical model ,business ,computer ,Probabilistic Models - Abstract
Probabilistic methods are at the heart of machine learning. This chapter shows links between core principles of information theory and probabilistic methods, with a short overview of historical and current examples of unsupervised and inferential models. Probabilistic models are introduced as a powerful idiom to describe the world, using random variables as building blocks held together by probabilistic relationships. The chapter discusses how such probabilistic interactions can be mapped to directed and undirected graph structures, the Bayesian and Markov networks. We show how these networks are subsumed by the broader class of probabilistic graphical models, a general framework that provides concepts and methodological tools to encode, manipulate and process probabilistic knowledge in a computationally efficient way. The chapter then introduces, in more detail, two topical methodologies that are central to probabilistic modeling in machine learning. First, it discusses latent variable models, a probabilistic approach to capturing complex relationships between a large number of observable and measurable events (data, in general), under the assumption that these are generated by an unknown, nonobservable process. We show how the parameters of a probabilistic model involving such nonobservable information can be efficiently estimated using the concepts underlying the expectation–maximization algorithms. Second, the chapter introduces a notable example of a latent variable model that is particularly relevant for representing the time evolution of sequence data: the hidden Markov model. The chapter ends with a discussion of advanced approaches for modeling complex data-generating processes comprising nontrivial probabilistic interactions between latent variables and observed information.
- Published
- 2015
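The hidden Markov model mentioned at the end of the chapter summary can be illustrated with the forward recursion, which marginalises over the latent state sequence in time linear in the sequence length. The two-state parameters below are invented for the sketch, not taken from the chapter.

```python
import numpy as np

# Hypothetical two-state, two-symbol HMM (illustrative parameters).
A = np.array([[0.7, 0.3],    # transition matrix P(state_t | state_{t-1})
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # emission matrix P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state distribution

def forward(obs):
    """Forward algorithm: P(observation sequence), summing out hidden states."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

seq = [0, 1, 1, 0]
print(forward(seq))
```

The recursion gives the same probability as brute-force summation over all 2^T state paths, but with cost O(T) instead of O(2^T).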
191. Handling outliers in brain tumour MRS data analysis through robust topographic mapping
- Author
-
Paulo J. G. Lisboa and Alfredo Vellido
- Subjects
Multivariate statistics ,Magnetic Resonance Spectroscopy ,Outliers, DRG ,Computer science ,Health Informatics ,Basis function ,computer.software_genre ,Multivariate data visualization ,Diagnosis, Differential ,Cluster Analysis ,Humans ,Diagnosis, Computer-Assisted ,Cluster analysis ,Mathematical Computing ,Brain Mapping ,Models, Statistical ,Brain Neoplasms ,Data Collection ,Uncertainty ,Brain ,Medical decision making ,Decision Support Systems, Clinical ,Prognosis ,Computer Science Applications ,Visualization ,Multivariate Analysis ,Outlier ,Generative topographic mapping ,Data mining ,computer - Abstract
Uncertainty is inherent in medical decision making and poses a challenge for intelligent technologies. This paper focuses on magnetic resonance spectra (MRS) for discrimination of brain tumour types and grades. Modelling of this type of high-dimensional data is commonly affected by uncertainty caused by the presence of outliers. Multivariate data clustering and visualization of MRS data is proposed using the GTM framework with basis functions comprising Student t-distributions in order to minimize the negative impact on the model from outliers. The effectiveness of this model on the MRS data is demonstrated empirically.
- Published
- 2006
- Full Text
- View/download PDF
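The robustness property that this t-distribution-based GTM exploits — heavy tails that stop outliers from dominating the likelihood — can be seen by comparing log-densities directly. This is only the underlying idea, not the GTM re-formulation itself; the degrees of freedom and scale below are arbitrary choices.

```python
import math

def gauss_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of a Gaussian at x."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

def student_t_logpdf(x, nu=3.0, mu=0.0, sigma=1.0):
    """Log-density of a location-scale Student t with nu degrees of freedom."""
    z = (x - mu) / sigma
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return c - (nu + 1) / 2 * math.log(1 + z * z / nu)

# At 8 standard deviations the Gaussian assigns a vanishing likelihood, so an
# outlier would dominate a Gaussian fit; the heavy-tailed t penalises it mildly.
for x in (1.0, 8.0):
    print(x, gauss_logpdf(x), student_t_logpdf(x))
```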
192. Visualisation of gait data with Kohonen self-organising neural maps
- Author
-
Steve Attfield, Paulo J. G. Lisboa, Adrian Lees, and Gabor Barton
- Subjects
Self-organizing map ,Databases, Factual ,Artificial neural network ,Relation (database) ,Computer science ,business.industry ,Dimensionality reduction ,Rehabilitation ,Biophysics ,Kinematics ,Gait ,Visualization ,Gait analysis ,Humans ,Orthopedics and Sports Medicine ,Computer vision ,Neural Networks, Computer ,Artificial intelligence ,business ,Software - Abstract
Self-organising artificial neural networks were used to reduce the complexity of joint kinematic and kinetic data, which form part of a typical instrumented gait assessment. Three-dimensional joint angles, moments and powers during the gait cycle were projected from the multi-dimensional data space onto a topological neural map, which thereby identified gait stem-patterns. Patients were positioned on the map in relation to each other, which enabled them to be compared on the basis of their gait patterns. The visualisation of large amounts of complex data in a two-dimensional map labelled with gait patterns is a step towards more objective analysis protocols which may enhance decision making.
- Published
- 2006
- Full Text
- View/download PDF
193. The use of artificial neural networks in decision support in cancer: A systematic review
- Author
-
Azzam Taktak and Paulo J. G. Lisboa
- Subjects
PubMed ,Decision support system ,Computer science ,Cognitive Neuroscience ,Decision Making ,Outcome assessment ,Medical Oncology ,Machine learning ,computer.software_genre ,Field (computer science) ,Artificial Intelligence ,Neoplasms ,Outcome Assessment, Health Care ,Health care ,medicine ,Animals ,Humans ,Computer Simulation ,Clinical Trials as Topic ,Artificial neural network ,business.industry ,Cancer ,medicine.disease ,Clinical trial ,Support vector machine ,Neural Networks, Computer ,Artificial intelligence ,business ,computer - Abstract
Artificial neural networks have featured in a wide range of medical journals, often with promising results. This paper reports on a systematic review that was conducted to assess the benefit of artificial neural networks (ANNs) as decision making tools in the field of cancer. The number of clinical trials (CTs) and randomised controlled trials (RCTs) involving the use of ANNs in diagnosis and prognosis increased from 1 to 38 in the last decade. However, out of 396 studies involving the use of ANNs in cancer, only 27 were either CTs or RCTs. Out of these trials, 21 showed an increase in benefit to healthcare provision and 6 did not. None of these studies however showed a decrease in benefit. This paper reviews the clinical fields where neural network methods figure most prominently, the main algorithms featured, methodologies for model selection and the need for rigorous evaluation of results.
- Published
- 2006
- Full Text
- View/download PDF
194. Robust analysis of MRS brain tumour data using t-GTM
- Author
-
Dolores Vicente, Paulo J. G. Lisboa, and Alfredo Vellido
- Subjects
Multivariate statistics ,Decision support system ,business.industry ,Computer science ,Cognitive Neuroscience ,Physics::Medical Physics ,Intelligent decision support system ,Machine learning ,computer.software_genre ,Missing data ,Computer Science Applications ,Visualization ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Outlier ,Artificial intelligence ,Data mining ,Imputation (statistics) ,business ,Cluster analysis ,computer - Abstract
This paper proposes a principled, self-organized, framework to manage two sources of uncertainty that are inherent in intelligent systems for medical decision support, namely outliers and missing data. The framework is applied to magnetic resonance spectra (MRS), which are indicators of the grade of malignancy in brain tumours. A model for multivariate data clustering and visualization, the generative topographic mapping (GTM), is re-formulated as a mixture of Student's t-distributions making it more robust to outliers while supporting the imputation of missing values. An important new development is the extension of the model to provide automatic feature relevance determination. Its effectiveness on the MRS data is demonstrated empirically.
- Published
- 2006
- Full Text
- View/download PDF
195. Probability distributions and leveraged trading strategies: an application of Gaussian mixture models to the Morgan Stanley Technology Index Tracking Fund
- Author
-
Andreas Lindemann, Paulo J. G. Lisboa, and Christian L. Dunis
- Subjects
Moving average ,Computer science ,Logit ,Statistics ,Econometrics ,Probability distribution ,Trading strategy ,Autoregressive–moving-average model ,Mixture model ,General Economics, Econometrics and Finance ,Finance ,Network model ,MACD - Abstract
The purpose of this paper is twofold. Firstly, to assess the merit of estimating probability density functions rather than level or direction forecasts for one-day-ahead forecasts of the Morgan Stanley Technology Index Tracking Fund (MTK). This is implemented using a Gaussian mixture model neural network, benchmarking the results against standard forecasting models, namely a naive model, a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA), a logistic regression model (LOGIT) and a multi-layer perceptron network (MLP). Secondly, to examine the possibilities of improving the trading performance of those models with confirmation filters and leverage. While the two network models outperform all of the benchmark models, the Gaussian mixture model does best: it is worth noting that it does well on a time series where the training period shows a strong uptrend while the out-of-sample period is characterized by a downtrend.
- Published
- 2005
- Full Text
- View/download PDF
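The density-forecasting idea behind the abstract above rests on fitting a Gaussian mixture to returns. The sketch below runs plain expectation–maximization on synthetic two-regime returns — not the MTK fund series, and not the paper's neural-network formulation, just the underlying mixture estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily returns drawn from a calm and a volatile regime (illustrative).
returns = np.concatenate([rng.normal(0.001, 0.01, 800),
                          rng.normal(-0.002, 0.03, 200)])

# EM for a two-component one-dimensional Gaussian mixture.
w = np.array([0.5, 0.5])          # mixing weights
mu = np.array([-0.01, 0.01])      # component means
var = np.array([1e-4, 1e-4])      # component variances

for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = (w / np.sqrt(2 * np.pi * var)
            * np.exp(-0.5 * (returns[:, None] - mu) ** 2 / var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities.
    nk = resp.sum(axis=0)
    w = nk / len(returns)
    mu = (resp * returns[:, None]).sum(axis=0) / nk
    var = (resp * (returns[:, None] - mu) ** 2).sum(axis=0) / nk

print(w, mu, np.sqrt(var))
```

The fitted mixture gives a full predictive density for the next return rather than a single point forecast, which is what allows probability-based trading filters of the kind the paper evaluates.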
196. Level estimation, classification and probability distribution architectures for trading the EUR/USD exchange rate
- Author
-
L. Dunis, Paulo J. G. Lisboa, and Andreas Lindemann
- Subjects
Leverage (finance) ,Computer science ,Sharpe ratio ,Logit ,Logistic regression ,Mixture model ,Perceptron ,Moving-average model ,Exchange rate ,Cross entropy ,Autoregressive model ,Derivative (finance) ,Artificial Intelligence ,Moving average ,Statistics ,Softmax function ,Econometrics ,Liberian dollar ,Probability distribution ,Trading strategy ,Autoregressive–moving-average model ,Software ,MACD - Abstract
Dunis and Williams (Derivatives: use, trading and regulation 8(3):211–239, 2002; Applied quantitative methods for trading and investment. Wiley, Chichester, 2003) have shown the superiority of a Multi-layer perceptron network (MLP), outperforming its benchmark models such as a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA) and a logistic regression model (LOGIT) on a Euro/Dollar (EUR/USD) time series. The motivation for this paper is to investigate the use of different neural network architectures. This is done by benchmarking three different neural network designs representing a level estimator, a classification model and a probability distribution predictor. More specifically, we present the Multi-layer perceptron network, the Softmax cross entropy model and the Gaussian mixture model and benchmark their respective performance on the Euro/Dollar (EUR/USD) time series as reported by Dunis and Williams. As it turns out, the Multi-layer perceptron does best when used without confirmation filters and leverage, while the Softmax cross entropy model and the Gaussian mixture model outperform the Multi-layer perceptron when using more sophisticated trading strategies and leverage. This might be due to the ability of both models using probability distributions to successfully identify trades with a high Sharpe ratio.
- Published
- 2005
- Full Text
- View/download PDF
197. Extending the variance ratio test to visualize structure in data: an application to the S&P 100 Index
- Author
-
Andreas Lindemann, Christian L. Dunis, and Paulo J. G. Lisboa
- Subjects
Economics and Econometrics ,Index (economics) ,Series (mathematics) ,Autocorrelation ,Statistics ,Structure (category theory) ,Random walk ,Finance ,Mathematics ,Variance ratio - Abstract
The aim of this paper is to present a method able to graphically describe the amount of structure in a time series. In the following, 'structure' is defined as the extent to which a time series is either trending or mean-reverting (that is, showing pockets of positive as well as negative autocorrelation). Whether a series is classed as trending or mean-reverting should be judged relative to the characteristics of a random walk. Testing most of the constituents of the Standard & Poor's 100 index for structure, and using a modified variance ratio that focuses on the whole ratio profile rather than an individual ratio, both trending and mean-reverting structure is detected over a time period of more than 10 years.
- Published
- 2005
- Full Text
- View/download PDF
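The variance ratio underlying the abstract above compares the variance of q-period returns with q times the variance of 1-period returns; the ratio is near 1 for a random walk, above 1 for trending series and below 1 for mean-reverting ones. The sketch below uses synthetic AR(1) return series, not S&P 100 constituents, and a basic ratio rather than the paper's modified profile statistic.

```python
import numpy as np

def variance_ratio(prices, q):
    """Variance of (overlapping) q-period log returns over q times the
    variance of 1-period log returns; ~1 for a random walk."""
    r1 = np.diff(np.log(prices))
    rq = np.log(prices[q:]) - np.log(prices[:-q])
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(0)
n = 20000
eps = rng.normal(0, 0.01, n)

def ar1_price_series(phi):
    """Price path whose log returns follow an AR(1) with coefficient phi."""
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = phi * r[t - 1] + eps[t]
    return np.exp(np.cumsum(r))

random_walk = np.exp(np.cumsum(eps))
trending = ar1_price_series(0.3)    # positively autocorrelated returns
mean_rev = ar1_price_series(-0.3)   # negatively autocorrelated returns

for q in (2, 5, 10):
    print(q, variance_ratio(random_walk, q),
             variance_ratio(trending, q),
             variance_ratio(mean_rev, q))
```

Computing the ratio across a range of q values gives the kind of whole-profile view of structure that the paper visualises.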
198. Probability distributions, trading strategies and leverage: an application of Gaussian mixture models
- Author
-
Paulo J. G. Lisboa, Christian L. Dunis, and Andreas Lindemann
- Subjects
Computer science ,Strategy and Management ,Sharpe ratio ,Management Science and Operations Research ,Mixture model ,Computer Science Applications ,Moving average ,Modeling and Simulation ,Econometrics ,Leverage (statistics) ,Probability distribution ,Autoregressive–moving-average model ,Trading strategy ,Statistics, Probability and Uncertainty ,MACD - Abstract
The purpose of this paper is twofold. Firstly, to assess the merit of estimating probability density functions rather than level or classification estimations on a one-day-ahead forecasting task of the EUR/USD time series. This is implemented using a Gaussian mixture model neural network, benchmarking the results against standard forecasting models, namely a naive model, a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA), a logistic regression model (LOGIT) and a multi-layer perceptron network (MLP). Secondly, to examine the possibilities of improving the trading performance of those models with confirmation filters and leverage. While the benchmark models perform best without confirmation filters and leverage, the Gaussian mixture model outperforms all of the benchmarks when taking advantage of the possibilities offered by a combination of more sophisticated trading strategies and leverage. This might be due to the ability of the Gaussian mixture model to successfully identify trades with a high Sharpe ratio. Copyright © 2004 John Wiley & Sons, Ltd.
- Published
- 2004
- Full Text
- View/download PDF
199. Minimal MLPs do not model the XOR logic
- Author
-
Terence A. Etchells, Dave C. Pountney, and Paulo J. G. Lisboa
- Subjects
Record locking ,Artificial neural network ,Artificial Intelligence ,Cognitive Neuroscience ,Xor problem ,Exclusive or ,Pruning (decision trees) ,Perceptron ,XOR gate ,Algorithm ,Subspace topology ,Computer Science Applications ,Mathematics - Abstract
Fitting the continuous valued logic of the exclusive OR (XOR) problem requires more than a minimal neural network configuration. This letter shows that the simplest logic to fit the XOR problem involves boundaries that cannot be mapped by a multi-layer perceptron with just two hidden nodes. This observation calls into question rule extraction methodologies based on pruning, since this practice can lock the network into a subspace of achievable continuous valued logic functions that prevents it from mapping the simplest logic to explain the data.
- Published
- 2002
- Full Text
- View/download PDF
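For context, the classic hand-set 2-2-1 threshold network below does compute the Boolean XOR truth table with only two hidden units; the letter's point is the stronger one that the boundaries of the simplest *continuous-valued* XOR logic cannot be mapped by such a minimal MLP. The weights here are the textbook construction, not taken from the letter.

```python
def step(z):
    """Hard threshold activation."""
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    """2-2-1 threshold network: XOR = OR and not AND."""
    h1 = step(x1 + x2 - 0.5)    # hidden unit firing for OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit firing for AND
    return step(h1 - h2 - 0.5)  # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```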
200. A Systematic Review and Meta-Analysis of Proteomics Literature on the Response of Human Skeletal Muscle to Obesity/Type 2 Diabetes Mellitus (T2DM) Versus Exercise Training
- Author
-
Kanchana Srisawat, Jatin G. Burniston, Sam O. Shepherd, and Paulo J. G. Lisboa
- Subjects
0301 basic medicine ,medicine.medical_specialty ,meta-analysis (MA) ,Clinical Biochemistry ,lcsh:QR1-502 ,high intensity exercise ,Context (language use) ,Review ,Type 2 diabetes ,Bioinformatics ,Biochemistry ,lcsh:Microbiology ,RC1200 ,Impaired glucose tolerance ,03 medical and health sciences ,Insulin resistance ,systematic review (SR) ,Structural Biology ,insulin resistance ,Diabetes mellitus ,Internal medicine ,Medicine ,Mitochondrial respiratory chain complex I ,Molecular Biology ,Preferred Reporting items for Systematic Review and Meta-analyses (PRISMA) ,mass spectrometry ,obese ,protein abundance ,business.industry ,Type 2 Diabetes Mellitus ,Skeletal muscle ,medicine.disease ,QP ,030104 developmental biology ,Endocrinology ,medicine.anatomical_structure ,type 2 diabetes ,business - Abstract
We performed a systematic review and meta-analysis of proteomics literature that reports human skeletal muscle responses in the context of either the pathological decline associated with obesity/T2DM or the physiological adaptations to exercise training. Literature was collected from PubMed and DOAJ databases following PRISMA guidelines using the search terms ‘proteom*’ and ‘skeletal muscle’ combined with either ‘obesity, insulin resistance, diabetes, impaired glucose tolerance’ or ‘exercise, training’. Eleven studies were included in the systematic review, and meta-analysis was performed on a sub-set (four studies) of the reviewed literature that reported the necessary primary data. The majority of proteins (n = 73) more abundant in the muscle of obese/T2DM individuals were unique to this group and not reported to be responsive to exercise training. The main response of skeletal muscle to exercise training was a greater abundance of proteins of the mitochondrial electron transport chain, tricarboxylic acid cycle and mitochondrial respiratory chain complex I assembly. In total, five proteins were less abundant in muscle of obese/T2DM individuals and were also reported to be more abundant in the muscle of endurance-trained individuals, suggesting that one of the major mechanisms of exercise-induced protection against the deleterious effects of obesity/T2DM occurs at complex I of the electron transport chain.
- Published
- 2017
- Full Text
- View/download PDF