49 results for "Hugo Hiden"
Search Results
2. Mobilise-D insights to estimate real-world walking speed in multiple conditions with a wearable device
- Author
-
Cameron Kirk, Arne Küderle, M. Encarna Micó-Amigo, Tecla Bonci, Anisoara Paraschiv-Ionescu, Martin Ullrich, Abolfazl Soltani, Eran Gazit, Francesca Salis, Lisa Alcock, Kamiar Aminian, Clemens Becker, Stefano Bertuletti, Philip Brown, Ellen Buckley, Alma Cantu, Anne-Elie Carsin, Marco Caruso, Brian Caulfield, Andrea Cereatti, Lorenzo Chiari, Ilaria D’Ascanio, Judith Garcia-Aymerich, Clint Hansen, Jeffrey M. Hausdorff, Hugo Hiden, Emily Hume, Alison Keogh, Felix Kluge, Sarah Koch, Walter Maetzler, Dimitrios Megaritis, Arne Mueller, Martijn Niessen, Luca Palmerini, Lars Schwickert, Kirsty Scott, Basil Sharrack, Henrik Sillén, David Singleton, Beatrix Vereijken, Ioannis Vogiatzis, Alison J. Yarnall, Lynn Rochester, Claudia Mazzà, Bjoern M. Eskofier, Silvia Del Din, and Mobilise-D consortium
- Subjects
Medicine, Science - Abstract
Abstract This study aimed to validate a wearable device’s walking speed estimation pipeline, considering complexity, speed, and walking bout duration. The goal was to provide recommendations on the use of wearable devices for real-world mobility analysis. Participants with Parkinson’s Disease, Multiple Sclerosis, Proximal Femoral Fracture, Chronic Obstructive Pulmonary Disease, Congestive Heart Failure, and healthy older adults (n = 97) were monitored in the laboratory and the real world (2.5 h), using a lower back wearable device. Two walking speed estimation pipelines were validated across 4408/1298 (2.5 h/laboratory) detected walking bouts, compared to 4620/1365 bouts detected by a multi-sensor reference system. In the laboratory, the mean absolute error (MAE) and mean relative error (MRE) for walking speed estimation ranged from 0.06 to 0.12 m/s and −2.1% to 14.4%, with ICCs (intraclass correlation coefficients) between good (0.79) and excellent (0.91). Real-world MAE ranged from 0.09 to 0.13 m/s and the mean absolute relative error (MARE) from 1.3% to 22.7%, with ICCs indicating moderate (0.57) to good (0.88) agreement. Lower errors were observed for cohorts without major gait impairments, less complex tasks, and longer walking bouts. The analytical pipelines demonstrated moderate to good accuracy in estimating walking speed. Accuracy depended on confounding factors, emphasizing the need for robust technical validation before clinical application. Trial registration: ISRCTN – 12246987.
- Published
- 2024
- Full Text
- View/download PDF
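The walking speed agreement metrics quoted in entry 2 (MAE, MRE, MARE) are straightforward to compute once device and reference bouts have been matched. A minimal Python sketch with illustrative values, not the Mobilise-D pipeline itself:

```python
import numpy as np

def speed_errors(estimated, reference):
    """Per-bout walking speed errors: MAE (m/s), signed MRE (%), unsigned MARE (%)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    mae = np.mean(np.abs(est - ref))                  # mean absolute error, m/s
    mre = np.mean((est - ref) / ref) * 100.0          # mean relative error, %
    mare = np.mean(np.abs(est - ref) / ref) * 100.0   # mean absolute relative error, %
    return mae, mre, mare

# Three matched bouts: device estimate vs reference system (m/s), made-up numbers.
print(speed_errors([1.02, 0.85, 1.20], [1.00, 0.90, 1.25]))
```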
3. Real-World Gait Detection Using a Wrist-Worn Inertial Sensor: Validation Study
- Author
-
Felix Kluge, Yonatan E Brand, M Encarna Micó-Amigo, Stefano Bertuletti, Ilaria D'Ascanio, Eran Gazit, Tecla Bonci, Cameron Kirk, Arne Küderle, Luca Palmerini, Anisoara Paraschiv-Ionescu, Francesca Salis, Abolfazl Soltani, Martin Ullrich, Lisa Alcock, Kamiar Aminian, Clemens Becker, Philip Brown, Joren Buekers, Anne-Elie Carsin, Marco Caruso, Brian Caulfield, Andrea Cereatti, Lorenzo Chiari, Carlos Echevarria, Bjoern Eskofier, Jordi Evers, Judith Garcia-Aymerich, Tilo Hache, Clint Hansen, Jeffrey M Hausdorff, Hugo Hiden, Emily Hume, Alison Keogh, Sarah Koch, Walter Maetzler, Dimitrios Megaritis, Martijn Niessen, Or Perlman, Lars Schwickert, Kirsty Scott, Basil Sharrack, David Singleton, Beatrix Vereijken, Ioannis Vogiatzis, Alison Yarnall, Lynn Rochester, Claudia Mazzà, Silvia Del Din, and Arne Mueller
- Subjects
Medicine - Abstract
Background: Wrist-worn inertial sensors are used in digital health for evaluating mobility in real-world environments. Preceding the estimation of spatiotemporal gait parameters within long-term recordings, gait detection is an important step to identify regions of interest where gait occurs, which requires robust algorithms due to the complexity of arm movements. While algorithms exist for other sensor positions, a comparative validation of algorithms applied to the wrist position on real-world data sets across different disease populations is missing. Furthermore, gait detection performance differences between the wrist and lower back position have not yet been explored but could yield valuable information regarding sensor position choice in clinical studies. Objective: The aim of this study was to validate gait sequence (GS) detection algorithms developed for the wrist position against reference data acquired in a real-world context. In addition, this study aimed to compare the performance of algorithms applied to the wrist position to those applied to lower back–worn inertial sensors. Methods: Participants with Parkinson disease, multiple sclerosis, proximal femoral fracture (hip fracture recovery), chronic obstructive pulmonary disease, and congestive heart failure and healthy older adults (N=83) were monitored for 2.5 hours in the real world using inertial sensors on the wrist, lower back, and feet, including pressure insoles and infrared distance sensors as reference. In total, 10 algorithms for wrist-based gait detection were validated against a multisensor reference system and compared to gait detection performance using lower back–worn inertial sensors. Results: The best-performing GS detection algorithm for the wrist showed a mean (per disease group) sensitivity ranging between 0.55 (SD 0.29) and 0.81 (SD 0.09) and a mean (per disease group) specificity ranging between 0.95 (SD 0.06) and 0.98 (SD 0.02). The mean relative absolute error of estimated walking time ranged between 8.9% (SD 7.1%) and 32.7% (SD 19.2%) per disease group for this algorithm as compared to the reference system. Gait detection performance from the best algorithm applied to the wrist inertial sensors was lower than for the best algorithms applied to the lower back, which yielded mean sensitivity between 0.71 (SD 0.12) and 0.91 (SD 0.04), mean specificity between 0.96 (SD 0.03) and 0.99 (SD 0.01), and a mean relative absolute error of estimated walking time between 6.3% (SD 5.4%) and 23.5% (SD 13%). Performance was lower in disease groups with major gait impairments (eg, patients recovering from hip fracture) and for patients using bilateral walking aids. Conclusions: Algorithms applied to the wrist position can detect GSs with high performance in real-world environments. Those periods of interest in real-world recordings can facilitate gait parameter extraction and allow the quantification of gait duration distribution in everyday life. Our findings allow taking informed decisions on alternative positions for gait recording in clinical studies and public health. Trial Registration: ISRCTN Registry 12246987; https://www.isrctn.com/ISRCTN12246987. International Registered Report Identifier (IRRID): RR2-10.1136/bmjopen-2021-050785
- Published
- 2024
- Full Text
- View/download PDF
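Entry 3 reports sample-level sensitivity and specificity plus a relative error on total walking time. Assuming both the algorithm output and the reference are reduced to binary per-sample gait masks, those scores can be sketched as follows (illustrative, not the study code):

```python
import numpy as np

def gait_detection_scores(pred, ref):
    """Sensitivity/specificity of a binary gait mask against a reference mask,
    plus the relative absolute error of total estimated walking time."""
    pred, ref = np.asarray(pred, dtype=bool), np.asarray(ref, dtype=bool)
    tp, fn = np.sum(pred & ref), np.sum(~pred & ref)
    tn, fp = np.sum(~pred & ~ref), np.sum(pred & ~ref)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    time_error_pct = abs(int(pred.sum()) - int(ref.sum())) / ref.sum() * 100
    return sensitivity, specificity, time_error_pct

# Toy 10-sample masks (1 = walking), purely illustrative.
print(gait_detection_scores([0,1,1,1,0,0,1,0,0,0], [0,1,1,1,1,0,0,0,0,0]))
```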
4. Assessing real-world gait with digital technology? Validation, insights and recommendations from the Mobilise-D consortium
- Author
-
M. Encarna Micó-Amigo, Tecla Bonci, Anisoara Paraschiv-Ionescu, Martin Ullrich, Cameron Kirk, Abolfazl Soltani, Arne Küderle, Eran Gazit, Francesca Salis, Lisa Alcock, Kamiar Aminian, Clemens Becker, Stefano Bertuletti, Philip Brown, Ellen Buckley, Alma Cantu, Anne-Elie Carsin, Marco Caruso, Brian Caulfield, Andrea Cereatti, Lorenzo Chiari, Ilaria D’Ascanio, Bjoern Eskofier, Sara Fernstad, Marcel Froehlich, Judith Garcia-Aymerich, Clint Hansen, Jeffrey M. Hausdorff, Hugo Hiden, Emily Hume, Alison Keogh, Felix Kluge, Sarah Koch, Walter Maetzler, Dimitrios Megaritis, Arne Mueller, Martijn Niessen, Luca Palmerini, Lars Schwickert, Kirsty Scott, Basil Sharrack, Henrik Sillén, David Singleton, Beatrix Vereijken, Ioannis Vogiatzis, Alison J. Yarnall, Lynn Rochester, Claudia Mazzà, Silvia Del Din, and for the Mobilise-D consortium
- Subjects
Real-world gait, Algorithms, DMOs, Validation, Wearable sensor, Walking, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Abstract Background: Although digital mobility outcomes (DMOs) can be readily calculated from real-world data collected with wearable devices and ad-hoc algorithms, technical validation is still required. The aim of this paper is to comparatively assess and validate DMOs estimated using real-world gait data from six different cohorts, focusing on gait sequence detection, foot initial contact detection (ICD), cadence (CAD) and stride length (SL) estimates. Methods: Twenty healthy older adults, 20 people with Parkinson’s disease, 20 with multiple sclerosis, 19 with proximal femoral fracture, 17 with chronic obstructive pulmonary disease and 12 with congestive heart failure were monitored for 2.5 h in the real world, using a single wearable device worn on the lower back. A reference system combining inertial modules with distance sensors and pressure insoles was used for comparison of DMOs from the single wearable device. We assessed and validated three algorithms for gait sequence detection, four for ICD, three for CAD and four for SL by concurrently comparing their performances (e.g., accuracy, specificity, sensitivity, absolute and relative errors). Additionally, the effects of walking bout (WB) speed and duration on algorithm performance were investigated. Results: We identified two cohort-specific top performing algorithms for gait sequence detection and CAD, and a single best for ICD and SL. Best gait sequence detection algorithms showed good performances (sensitivity > 0.73, positive predictive values > 0.75, specificity > 0.95, accuracy > 0.94). ICD and CAD algorithms presented excellent results, with sensitivity > 0.79, positive predictive values > 0.89 and relative errors […]
- Published
- 2023
- Full Text
- View/download PDF
5. Ecological validity of a deep learning algorithm to detect gait events from real-life walking bouts in mobility-limiting diseases
- Author
-
Robbin Romijnders, Francesca Salis, Clint Hansen, Arne Küderle, Anisoara Paraschiv-Ionescu, Andrea Cereatti, Lisa Alcock, Kamiar Aminian, Clemens Becker, Stefano Bertuletti, Tecla Bonci, Philip Brown, Ellen Buckley, Alma Cantu, Anne-Elie Carsin, Marco Caruso, Brian Caulfield, Lorenzo Chiari, Ilaria D'Ascanio, Silvia Del Din, Björn Eskofier, Sara Johansson Fernstad, Marceli Stanislaw Fröhlich, Judith Garcia Aymerich, Eran Gazit, Jeffrey M. Hausdorff, Hugo Hiden, Emily Hume, Alison Keogh, Cameron Kirk, Felix Kluge, Sarah Koch, Claudia Mazzà, Dimitrios Megaritis, Encarna Micó-Amigo, Arne Müller, Luca Palmerini, Lynn Rochester, Lars Schwickert, Kirsty Scott, Basil Sharrack, David Singleton, Abolfazl Soltani, Martin Ullrich, Beatrix Vereijken, Ioannis Vogiatzis, Alison Yarnall, Gerhard Schmidt, and Walter Maetzler
- Subjects
deep learning (artificial intelligence), free-living, gait analysis, gait events detection, inertial measurement unit (IMU), mobility, Neurology. Diseases of the nervous system, RC346-429 - Abstract
Introduction: The clinical assessment of mobility, and walking specifically, is still mainly based on functional tests that lack ecological validity. Thanks to inertial measurement units (IMUs), gait analysis is shifting to unsupervised monitoring in naturalistic and unconstrained settings. However, the extraction of clinically relevant gait parameters from IMU data often depends on heuristics-based algorithms that rely on empirically determined thresholds. These were mainly validated on small cohorts in supervised settings. Methods: Here, a deep learning (DL) algorithm was developed and validated for gait event detection in a heterogeneous population of different mobility-limiting disease cohorts and a cohort of healthy adults. Participants wore pressure insoles and IMUs on both feet for 2.5 h in their habitual environment. The raw accelerometer and gyroscope data from both feet were used as input to a deep convolutional neural network, while reference timings for gait events were based on the combined IMU and pressure insoles data. Results and discussion: The results showed a high detection performance for initial contacts (ICs) (recall: 98%, precision: 96%) and final contacts (FCs) (recall: 99%, precision: 94%) and a maximum median time error of −0.02 s for ICs and 0.03 s for FCs. Subsequently derived temporal gait parameters were in good agreement with a pressure insoles-based reference, with a maximum mean difference of 0.07, −0.07, and […]
- Published
- 2023
- Full Text
- View/download PDF
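Entry 5's model maps raw bilateral accelerometer and gyroscope streams to per-sample gait-event probabilities. The published architecture is not reproduced here; the PyTorch sketch below only illustrates the general shape of such a network (channel counts, kernel sizes and the two-channel IC/FC output head are assumptions):

```python
import torch
import torch.nn as nn

class GaitEventNet(nn.Module):
    """Toy 1-D CNN: 12 input channels (3-axis accel + gyro, both feet) to
    per-sample probabilities for initial contacts (IC) and final contacts (FC)."""
    def __init__(self, in_channels=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 2, kernel_size=1),  # channel 0: IC, channel 1: FC
        )

    def forward(self, x):  # x: (batch, channels, time)
        return torch.sigmoid(self.net(x))

probs = GaitEventNet()(torch.randn(1, 12, 1000))  # -> (1, 2, 1000)
```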
6. Mobility recorded by wearable devices and gold standards: the Mobilise-D procedure for data standardization
- Author
-
Luca Palmerini, Luca Reggi, Tecla Bonci, Silvia Del Din, M. Encarna Micó-Amigo, Francesca Salis, Stefano Bertuletti, Marco Caruso, Andrea Cereatti, Eran Gazit, Anisoara Paraschiv-Ionescu, Abolfazl Soltani, Felix Kluge, Arne Küderle, Martin Ullrich, Cameron Kirk, Hugo Hiden, Ilaria D’Ascanio, Clint Hansen, Lynn Rochester, Claudia Mazzà, and Lorenzo Chiari
- Subjects
Science - Abstract
Abstract Wearable devices are used in movement analysis and physical activity research to extract clinically relevant information about an individual’s mobility. Still, heterogeneity in protocols, sensor characteristics, data formats, and gold standards represents a barrier to data sharing, reproducibility, and external validation. In this study, we aim to provide an example of how movement data (from the real world and the laboratory) recorded from different wearables and gold standard technologies can be organized, integrated, and stored. We leveraged our experience from a large multi-centric study (Mobilise-D) to provide guidelines that can prove useful to access, understand, and re-use the data that will be made available from the study. These guidelines highlight the encountered challenges and the adopted solutions, with the final aim of supporting standardization and integration of data in other studies and, in turn, increasing and facilitating comparison of data recorded in the scientific community. We also provide samples of standardized data, so that both the structure of the data and the procedure can be easily understood and reproduced.
- Published
- 2023
- Full Text
- View/download PDF
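Entry 6 is about packaging heterogeneous wearable and gold-standard recordings in one standardized container. The dictionary below is purely illustrative (the field names are hypothetical, not the published Mobilise-D schema); the point is a single self-describing record per recording, carrying units and events alongside the signals:

```python
# Hypothetical record layout, not the published Mobilise-D standard.
recording = {
    "subject_id": "SUB001",
    "cohort": "PD",
    "context": "real-world",          # or "laboratory"
    "fs_hz": 100,                     # sampling frequency
    "devices": {
        "lower_back_imu": {
            "acc": [...], "gyr": [...],
            "units": {"acc": "g", "gyr": "deg/s"},
        },
        "reference": {"pressure_insoles": [...], "distance_sensors": [...]},
    },
    "events": {"walking_bouts": [{"start_s": 12.3, "end_s": 25.8}]},
}
```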
7. Technical validation of real-world monitoring of gait: a multicentric observational study
- Author
-
Sarah Koch, Clint Hansen, Walter Maetzler, Anne-Elie Carsin, Kristin Taraldsen, Kamiar Aminian, Clemens Becker, Lorenzo Chiari, Anisoara Paraschiv-Ionescu, Jorunn L Helbostad, Beatrix Vereijken, Lynn Rochester, Philip Brown, Judith Garcia Aymerich, David Singleton, Basil Sharrack, Brian Caulfield, Ellen Buckley, Claudia Mazzà, Nikolaos Chynkiamis, Felix Kluge, M Encarna Micó-Amigo, Francesca Salis, Lars Schwickert, Kirsty Scott, Ioannis Vogiatzis, Alison Yarnall, Alison Keogh, Silvia Del Din, Björn Eskofier, Lisa Alcock, Stefano Bertuletti, Tecla Bonci, Marina Brozgol, Marco Caruso, Andrea Cereatti, Fabio Ciravegna, Jordi Evers, Eran Gazit, Jeffrey M Hausdorff, Hugo Hiden, Emily Hume, Neil Ireson, Cameron Kirk, Arne Küderle, Vitaveska Lanfranchi, Arne Mueller, Isabel Neatrour, Martijn Niessen, Luca Palmerini, Lucas Pluimgraaff, Luca Reggi, Henrik Sillen, Abolfazl Soltani, Martin Ullrich, Linda Van Gelder, and Elke Warmerdam
- Subjects
Medicine - Published
- 2021
- Full Text
- View/download PDF
8. Objective sleep assessment in >80,000 UK mid-life adults: Associations with sociodemographic characteristics, physical activity and caffeine.
- Author
-
Gewei Zhu, Michael Catt, Sophie Cassidy, Mark Birch-Machin, Michael Trenell, Hugo Hiden, Simon Woodman, and Kirstie N Anderson
- Subjects
Medicine, Science - Abstract
Study objectives: Normal timing and duration of sleep is vital for physical and mental health. However, many sleep-related studies depend on self-reported sleep measurements, which have limitations. This study aims to investigate the association of physical activity and sociodemographic characteristics, including age, gender, coffee intake and social status, with objective sleep measurements. Methods: A cross-sectional analysis was carried out on 82,995 participants within the UK Biobank cohort. Sociodemographic and lifestyle information was collected through touch-screen questionnaires in 2007-2010. Sleep and physical activity parameters were later measured objectively using wrist-worn accelerometers in 2013-2015 (participants were aged 43-79 years and wore watches for 7 days). Participants were divided into 5 groups based on their objective sleep duration per night (<5, 5-6, 6-7, 7-8 and >8 hours). Binary logistic models were adjusted for age, gender and Townsend Deprivation Index. Results: Participants who slept 6-7 hours/night were the most frequent (33.5%). Females had longer objective sleep duration than males. Short objective sleep duration […] Conclusions: Objectively determined short sleep duration was associated with male gender, older age, low social status and high coffee intake. An inverse 'U-shaped' relationship between sleep duration and physical activity was also established. Optimal sleep duration for health in those over 60 may therefore be shorter than for younger groups.
- Published
- 2019
- Full Text
- View/download PDF
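Entry 8's analysis groups participants by accelerometer-derived sleep duration and fits covariate-adjusted binary logistic models. A minimal sketch of that design on synthetic data (column names, the outcome, and the synthetic values are all illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sleep_h": rng.normal(6.8, 1.1, 500),
    "age": rng.integers(43, 80, 500),
    "male": rng.integers(0, 2, 500),
    "townsend": rng.normal(0, 3, 500),
    "outcome": rng.integers(0, 2, 500),   # stand-in binary health outcome
})
df["sleep_group"] = pd.cut(df["sleep_h"], bins=[0, 5, 6, 7, 8, 24],
                           labels=["<5", "5-6", "6-7", "7-8", ">8"])

# Binary logistic model adjusted for age, gender and deprivation, as in the study.
model = smf.logit("outcome ~ C(sleep_group) + age + male + townsend",
                  data=df).fit(disp=0)
print(model.params)
```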
9. A Data Efficient Vision Transformer for Robust Human Activity Recognition from the Spectrograms of Wearable Sensor Data.
- Author
-
Jamie McQuire, Paul Watson 0001, Nick Wright, Hugo Hiden, and Michael Catt
- Published
- 2023
- Full Text
- View/download PDF
10. The e-Science Central Study Data Platform.
- Author
-
Paul Watson 0001 and Hugo Hiden
- Published
- 2022
- Full Text
- View/download PDF
11. Mobilise-D: Experiences of Processing Large Medical Data Sets Using Cloud Computing Resources.
- Author
-
Hugo Hiden and Paul Watson 0001
- Published
- 2023
- Full Text
- View/download PDF
12. Uneven and Irregular Surface Condition Prediction from Human Walking Data using both Centralized and Decentralized Machine Learning Approaches.
- Author
-
Jamie McQuire, Paul Watson 0001, Nick Wright, Hugo Hiden, and Michael Catt
- Published
- 2021
- Full Text
- View/download PDF
13. A Platform for the Analysis of Qualitative and Quantitative Data about the Built Environment and Its Users.
- Author
-
Mike Simpson, Simon Woodman, Hugo Hiden, Sebastian Stein 0003, Stephen Dowsland, Mark Turner 0007, Vicki L. Hanson, and Paul Watson 0001
- Published
- 2017
- Full Text
- View/download PDF
14. Prediction of workflow execution time using provenance traces: Practical applications in medical data processing.
- Author
-
Hugo Hiden, Simon Woodman, and Paul Watson 0001
- Published
- 2016
- Full Text
- View/download PDF
15. Accelerometer-based gait assessment: Pragmatic deployment on an international scale.
- Author
-
Silvia Del Din, Aodhán Hickey, Simon Woodman, Hugo Hiden, Rosie Morris, Paul Watson 0001, Kianoush Nazarpour, Michael Catt, Lynn Rochester, and Alan Godfrey
- Published
- 2016
- Full Text
- View/download PDF
16. Monitoring of Upper Limb Rehabilitation and Recovery after Stroke: An Architecture for a Cloud-Based Therapy Platform.
- Author
-
Simon Woodman, Hugo Hiden, Mark Turner 0007, Stephen Dowsland, and Paul Watson 0001
- Published
- 2015
- Full Text
- View/download PDF
17. Workflow provenance: an analysis of long term storage costs.
- Author
-
Simon Woodman, Hugo Hiden, and Paul Watson 0001
- Published
- 2015
- Full Text
- View/download PDF
18. A framework for dynamically generating predictive models of workflow execution.
- Author
-
Hugo Hiden, Simon Woodman, and Paul Watson 0001
- Published
- 2013
- Full Text
- View/download PDF
19. Exploration of Sleep as a Specific Risk Factor for Poor Metabolic and Mental Health: A UK Biobank Study of 84,404 Participants
- Author
-
Sophie Cassidy, Mark A. Birch-Machin, David A. Gunn, Hugo Hiden, Simon Woodman, Michael Catt, Michael I. Trenell, Gewei Zhu, and Kirstie N. Anderson
- Subjects
Gerontology, Longitudinal study, UK Biobank, Heart disease, Health, Disease, Mental health, Behavioral Neuroscience, Primary care, Nature and Science of Sleep, Medicine, Sleep onset, Sleep, Adverse effect, Applied Psychology, Depression (differential diagnoses), Original Research - Abstract
Gewei Zhu,1 Sophie Cassidy,2 Hugo Hiden,3 Simon Woodman,3 Michael Trenell,4 David A Gunn,5 Michael Catt,6 Mark Birch-Machin,1,6 Kirstie N Anderson7. 1Faculty of Medical Sciences, Translational and Clinical Research Institute, Newcastle University, Newcastle Upon Tyne, UK; 2Central Clinical School, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; 3National Innovation Centre for Data, School of Computing, The Catalyst, Newcastle Helix, Newcastle Upon Tyne, UK; 4NIHR Innovation Observatory, The Catalyst, Newcastle Helix, Newcastle upon Tyne, UK; 5Colworth Science Park, Sharnbrook, Bedfordshire, UK; 6National Innovation Centre for Ageing, The Catalyst, Newcastle Helix, Newcastle upon Tyne, UK; 7Department of Neurology, Royal Victoria Infirmary, Newcastle upon Tyne, UK. Correspondence: Kirstie N Anderson, Department of Neurology, Royal Victoria Infirmary, Newcastle upon Tyne, UK. Tel: +44 01912823833. Email: kirstieanderson@nhs.net. Purpose: Short and long sleep durations have adverse effects on physical and mental health. However, most studies are based on self-reported sleep duration and health status. Therefore, this longitudinal study aims to investigate objectively measured sleep duration and subsequent primary health care records in older adults to investigate the impact of sleep duration and fragmentation on physical and mental health. Methods: Data on objective sleep duration were measured using accelerometry. Primary care health records were then obtained from the UK Biobank (n=84,404). Participants (mean age, 62.4 years) were divided into five groups according to their sleep duration derived from the accelerometry data: <5 hours, 5–6 hours, 6–7 hours, 7–8 hours and >8 hours. ICD-10 codes were used for the analysis of primary care data. Wake after sleep onset, activity level during the least active 5 hours and episodes of movement during sleep were analysed as an indication of sleep fragmentation. Binary regression models were adjusted for age, gender and Townsend deprivation score. Results: A 'U-shaped' relationship was found between sleep duration and diseases including diabetes, hypertension, heart disease and depression. Short and long sleep durations and fragmented sleep were associated with increased odds of disease. Conclusion: Six to eight hours of sleep, as well as less fragmented sleep, predicted better long-term metabolic and mental health. Keywords: sleep, health, UK Biobank, primary care
- Published
- 2021
20. Achieving reproducibility by combining provenance with service and workflow versioning.
- Author
-
Simon Woodman, Hugo Hiden, Paul Watson 0001, and Paolo Missier
- Published
- 2011
- Full Text
- View/download PDF
21. Technical validation of real-world monitoring of gait: A multicentric observational study
- Author
-
Martin Ullrich, Silvia Del Din, Lars Schwickert, Vitaveska Lanfranchi, Eran Gazit, Alison Keogh, Tecla Bonci, S. Bertuletti, Claudia Mazzà, Felix Kluge, David Singleton, Lynn Rochester, Lucas Pluimgraaff, Francesca Salis, Luca Palmerini, Basil Sharrack, Nikolaos Chynkiamis, Jordi Evers, Sarah Koch, Elke Warmerdam, Clemens Becker, Philip M. Brown, M. Encarna Micó-Amigo, Neil Ireson, Judith Garcia Aymerich, Arne Küderle, Jeffrey M Hausdorff, Emily Hume, Lorenzo Chiari, Fabio Ciravegna, Luca Reggi, Anne-Elie Carsin, Isabel Neatrour, Linda Van Gelder, Cameron Kirk, Walter Maetzler, Andrea Cereatti, Abolfazl Soltani, Beatrix Vereijken, Björn M. Eskofier, Martijn Niessen, Arne Mueller, Jorunn L. Helbostad, Alison J. Yarnall, Ioannis Vogiatzis, Marina Brozgol, Hugo Hiden, Kristin Taraldsen, Kirsty Scott, Henrik Sillen, Lisa Alcock, M. Caruso, Anisoara Paraschiv-Ionescu, Kamiar Aminian, Clint Hansen, Brian Caulfield, and Ellen Buckley
- Subjects
hip, Population, heart failure, multiple sclerosis, Wearable Electronic Devices, Units of measurement, disability instrument, Humans, Medical physics, European union, Gait, Diagnostics, Wearable technology, Aged, Protocol (science), Research ethics, Parkinson Disease, speed, General Medicine, calibration, chronic airways disease, Medicine, Observational study, late-life function, Parkinson's disease, Research Design - Abstract
Introduction: Existing mobility endpoints based on functional performance, physical assessments and patient self-reporting are often affected by a lack of sensitivity, limiting their utility in clinical practice. Wearable devices including inertial measurement units (IMUs) can overcome these limitations by quantifying digital mobility outcomes (DMOs) both during supervised structured assessments and in real-world conditions. The validity of IMU-based methods in the real world, however, is still limited in patient populations. Rigorous validation procedures should cover the device metrological verification, the validation of the algorithms for the DMOs computation specifically for the population of interest and in daily life situations, and the users' perspective on the device. Methods and analysis: This protocol was designed to establish the technical validity and patient acceptability of the approach used to quantify digital mobility in the real world by Mobilise-D, a consortium funded by the European Union (EU) as part of the Innovative Medicine Initiative, aiming at fostering regulatory approval and clinical adoption of DMOs. After defining the procedures for the metrological verification of an IMU-based device, the experimental procedures for the validation of algorithms used to calculate the DMOs are presented. These include laboratory and real-world assessment in 120 participants from five groups: healthy older adults; chronic obstructive pulmonary disease, Parkinson's disease, multiple sclerosis, proximal femoral fracture and congestive heart failure. DMOs extracted from the monitoring device will be compared with those from different reference systems, chosen according to the contexts of observation. Questionnaires and interviews will evaluate the users' perspective on the deployed technology and the relevance of the mobility assessment. Ethics and dissemination: The study has been granted ethics approval by the centres' committees (London - Bloomsbury Research Ethics Committee; Helsinki Committee, Tel Aviv Sourasky Medical Centre; Medical Faculties of the University of Tübingen and of the University of Kiel). Data and algorithms will be made publicly available. Trial registration number: ISRCTN (12246987). We acknowledge support from the Spanish Ministry of Science and Innovation through the "Centro de Excelencia Severo Ochoa 2019-2023" Program (CEX2018-000806-S), and support from the Generalitat de Catalunya through the CERCA Program. SDD, AY and LRo are also supported by the Newcastle Biomedical Research Centre (BRC) based at Newcastle upon Tyne and Newcastle University. CM, BS, LVG and EB are also supported by the Sheffield Biomedical Research Centre (BRC) based at the Sheffield Teaching Hospital and the University of Sheffield. The work was also supported by the NIHR/Wellcome Trust Clinical Research Facility (CRF) infrastructure at Newcastle upon Tyne Hospitals NHS Foundation Trust and the CRF at the Sheffield Teaching Hospital. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care or the funders. This study was co-funded by the European Union's Horizon 2020 research and innovation programme and EFPIA via the Innovative Medicine Initiative 2 (Mobilise-D project, grant number IMI2 2017-13-7-820820). The views expressed are those of the authors and not necessarily those of the IMI, the European Union, the EFPIA, or any Associated Partners. We acknowledge the support of Grünenthal GmbH via the funding of a PhD scholarship directly dedicated to the technical validation protocol.
- Published
- 2021
22. Applications of provenance in performance prediction and data storage optimisation
- Author
-
Simon Woodman, Paul Watson, and Hugo Hiden
- Subjects
Graph database, Exploit, Database, Computer Networks and Communications, Computer science, Cloud computing, Data science, Workflow, Hardware and Architecture, Computer data storage, e-Science, Performance prediction, Software - Abstract
Accurate and comprehensive storage of provenance information is a basic requirement for modern scientific computing. A significant effort in recent years has developed robust theories and standards for the representation of these traces across a variety of execution platforms. Whilst these are necessary to enable repeatability, they do not exploit the captured information to its full potential. This data is increasingly being captured from applications hosted on Cloud Computing platforms, which offer large scale computing resources without significant up front costs. Medical applications, which generate large datasets, are also suited to cloud computing as the practicalities of storing and processing such data locally are becoming increasingly challenging. This paper shows how provenance can be captured from medical applications, stored using a graph database and then used to answer audit questions and enable repeatability. This static provenance will then be combined with performance data to predict future workloads, inform decision makers and reduce latency. Finally, cost models which are based on real world cloud computing costs will be used to determine optimum strategies for data retention over potentially extended periods of time.
- Published
- 2017
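Entry 22 stores provenance as a graph so that audit questions ("what contributed to this result?") become graph traversals. A toy illustration with networkx standing in for the graph database (node names invented):

```python
import networkx as nx

# Toy provenance graph: edges run from inputs to the artefacts derived from them.
g = nx.DiGraph()
g.add_edges_from([
    ("raw_accel.csv", "calibrate:run42"),
    ("calibrate:run42", "calibrated.csv"),
    ("calibrated.csv", "extract_gait:run43"),
    ("extract_gait:run43", "gait_metrics.csv"),
])

# Audit query: every data item and service invocation upstream of a result.
print(nx.ancestors(g, "gait_metrics.csv"))
```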
23. A Platform for the Analysis of Qualitative and Quantitative Data about the Built Environment and its Users
- Author
-
Vicki L. Hanson, Simon Woodman, Mike Simpson, Hugo Hiden, Sebastian Stein, Paul Watson, Stephen Dowsland, and Mark Turner
- Subjects
Computer science, Qualitative property, Cloud computing, Data science, Data type, Data warehouse, Upload, Data visualization, Human–computer interaction, Analytics, Built environment - Abstract
There are many scenarios in which it is necessary to collect data from multiple sources in order to evaluate a system, including the collection of both quantitative data (from sensors and smart devices) and qualitative data (such as observations and interview results). However, there are currently very few systems that enable both of these data types to be combined in such a way that they can be analysed side-by-side. This paper describes an end-to-end system for the collection, analysis, storage and visualisation of qualitative and quantitative data, developed using the e-Science Central cloud analytics platform. We describe the experience of developing the system, based on a case study that involved collecting data about the built environment and its users. In this case study, data is collected from older adults living in residential care. Sensors were placed throughout the care home and smart devices were issued to the residents. This sensor data is uploaded to the analytics platform and the processed results are stored in a data warehouse, where it is integrated with qualitative data collected by healthcare and architecture researchers. Visualisations are also presented which were intended to allow the data to be explored and for potential correlations between the quantitative and qualitative data to be investigated.
- Published
- 2017
24. Cloud computing for fast prediction of chemical activity
- Author
-
Simon Woodman, Hugo Hiden, Paul Watson, and Jacek Cala
- Subjects
Range (mathematics) ,Computer Networks and Communications ,Hardware and Architecture ,Computer science ,Process (engineering) ,business.industry ,Distributed computing ,Scalability ,Cloud computing ,business ,Software - Abstract
Quantitative Structure-Activity Relationships (QSAR) is a method for creating models that can predict certain properties of compounds. It is of growing importance in the design of new drugs. The quantity of data now available for building models is increasing rapidly, which has the advantage that more accurate models can be created, for a wider range of properties. However the disadvantage is that the amount of computation required for model building has also dramatically increased. Therefore, it became vital to find a way to accelerate this process. We have achieved this by exploiting parallelism in searching the QSAR model space for the best models. This paper shows how the cloud computing paradigm can be a good fit to this approach. It describes the design and implementation of a tool for exploring the model space that exploits our e-Science Central cloud platform. We report on the scalability achieved and the experiences gained when designing the solution. The acceleration and absolute performance achieved is much greater than for existing QSAR solutions, creating the potential for new, interesting research, and the exploitation of this approach to accelerate other types of applications.
- Published
- 2013
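Entry 24 accelerates QSAR modelling by searching the model space in parallel, one candidate model per worker. A skeletal version of that pattern using Python's multiprocessing (the fitness function is a placeholder for actually training and cross-validating a model):

```python
from itertools import product
from multiprocessing import Pool

def fit_and_score(params):
    """Stand-in for building one candidate QSAR model and scoring it."""
    n_descriptors, depth = params
    score = -abs(n_descriptors - 20) - abs(depth - 5)   # fake fitness surface
    return score, params

if __name__ == "__main__":
    grid = list(product(range(5, 50, 5), range(2, 10)))  # candidate model space
    with Pool() as pool:                                 # one candidate per worker
        best_score, best_params = max(pool.map(fit_and_score, grid))
    print(best_score, best_params)
```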
25. Provenance and data differencing for workflow reproducibility analysis
- Author
-
Paul Watson, Simon Woodman, Hugo Hiden, and Paolo Missier
- Subjects
Information retrieval, Computer Networks and Communications, Computer science, Cloud computing, Semantic data model, Field (computer science), Computer Science Applications, Theoretical Computer Science, Data differencing, Workflow, Computational Theory and Mathematics, Software, TRACE (psycholinguistics) - Abstract
One of the foundations of science is that researchers must publish the methodology used to achieve their results so that others can attempt to reproduce them. This has the added benefit of allowing methods to be adopted and adapted for other purposes. In the field of e-Science, services, often choreographed through workflows, process data to generate results. The reproduction of results is often not straightforward as the computational objects may not be made available or may have been updated since the results were generated. For example, services are often updated to fix bugs or improve algorithms. This paper addresses these problems in three ways. Firstly, it introduces a new framework to clarify the range of meanings of "reproducibility". Secondly, it describes a new algorithm, PDIFF, that uses a comparison of workflow provenance traces to determine whether an experiment has been reproduced; the main innovation is that if this is not the case then the specific point(s) of divergence are identified through graph analysis, assisting any researcher wishing to understand those differences. One key feature is support for user-defined, semantic data comparison operators. Finally, the paper describes an implementation of PDIFF that leverages the power of the e-Science Central platform, which enacts workflows in the cloud. As well as automatically generating a provenance trace for consumption by PDIFF, the platform supports the storage and re-use of old versions of workflows, data and services; the paper shows how this can be powerfully exploited in order to achieve reproduction and re-use.
- Published
- 2013
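The core idea behind PDIFF in entry 25 is comparing two provenance traces over the same workflow graph to locate where runs diverge. The published algorithm also supports user-defined semantic comparison operators; the sketch below reduces that to hash equality over a topological walk, so it is only a simplified illustration:

```python
import networkx as nx

def first_divergence(trace_a, trace_b, dag):
    """Return the first workflow step (in topological order) whose output
    differs between two runs; hashes stand in for semantic comparators."""
    for step in nx.topological_sort(dag):
        if trace_a.get(step) != trace_b.get(step):
            return step
    return None

dag = nx.DiGraph([("load", "clean"), ("clean", "model"), ("model", "report")])
run1 = {"load": "h1", "clean": "h2", "model": "h3", "report": "h4"}
run2 = {"load": "h1", "clean": "h2", "model": "h9", "report": "h8"}
print(first_divergence(run1, run2, dag))  # -> "model"
```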
26. Prediction of workflow execution time using provenance traces: Practical applications in medical data processing
- Author
-
Paul Watson, Simon Woodman, and Hugo Hiden
- Subjects
Data processing, Database, Process (engineering), Computer science, Cloud computing, Context (language use), Benchmarking, Data modeling, Set (abstract data type), Workflow, Data mining - Abstract
The use of cloud resources for processing and analysing medical data has the potential to revolutionise the treatment of a number of chronic conditions. For example, it has been shown that it is possible to manage conditions such as diabetes, obesity and cardiovascular disease by increasing the right forms of physical activity for the patient. Typically, movement data is collected for a patient over a period of several weeks using a wrist-worn accelerometer. This data, however, is large and its analysis can require significant computational resources. Cloud computing offers a convenient solution as it can be paid for as needed and is capable of scaling to store and process large numbers of data sets simultaneously. However, because the charging model for the cloud represents, to some extent, an unknown cost and therefore risk to project managers, it is important to have an estimate of the likely data processing and storage costs that will be required to analyse a set of data. This could take the form of data collected from a patient in clinic or of entire cohorts of data collected from large studies. If, however, an accurate model were available that could predict the compute and storage requirements associated with a piece of analysis code, decisions could be made as to the scale of resources required in order to obtain results within a known timescale. This paper makes use of provenance and performance data collected as part of routine e-Science Central workflow executions to examine the feasibility of automatically generating predictive models for workflow execution times based solely on observed characteristics such as data volumes processed, algorithm settings and execution durations. The utility of this approach will be demonstrated via a set of benchmarking examples before being used to model workflow executions performed as part of two large medical movement analysis studies.
- Published
- 2016
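Entry 26 builds regression models over provenance logs (data volumes, algorithm settings, observed durations) to predict workflow execution time. The same idea in miniature, with invented numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical provenance log: [input size in MB, parameter setting] -> seconds.
X = np.array([[10, 1], [50, 1], [100, 2], [200, 2], [400, 4]])
y = np.array([4.1, 19.5, 41.0, 80.3, 161.2])

model = LinearRegression().fit(X, y)
print(model.predict([[300, 2]]))  # predicted duration for a planned run
```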
27. Accelerometer-based gait assessment: Pragmatic deployment on an international scale
- Author
-
Kianoush Nazarpour, Silvia Del Din, Hugo Hiden, Lynn Rochester, Paul Watson, Aodhán Hickey, Simon Woodman, Alan Godfrey, Rosie Morris, and Michael Catt
- Subjects
Computer science, International scale, Wearable computer, Accelerometer, Gait (human), Workflow, Physical medicine and rehabilitation, Software deployment, Gait analysis, Artificial intelligence, Wearable technology - Abstract
Gait is emerging as a powerful tool to detect early disease and monitor progression across a number of pathologies. Typically, quantitative gait assessment has been limited to specialised laboratory facilities. However, measuring gait in home and community settings may provide a more accurate reflection of gait performance because: (1) it will not be confounded by attention, which may be heightened during formal testing; and (2) it allows performance to be captured over time. This work addresses the feasibility and challenges of measuring gait characteristics with a single accelerometer-based wearable device during free-living activity. Moreover, it describes the current methodological and statistical processes required to quantify those sensitive surrogate markers for ageing and pathology. A unified framework for large scale analysis is proposed. We present data and workflows from healthy older adults and those with Parkinson's disease (PD) while presenting current algorithms and scope within modern pervasive healthcare. Our findings suggested that free-living conditions heighten between-group differences, showing greater sensitivity to PD, and provided encouraging results to support the use of the suggested framework for large clinical application.
- Published
- 2016
28. e-Science Central for CARMEN: science as a service
- Author
-
Paul Watson, Hugo Hiden, and Simon Woodman
- Subjects
Computational Theory and Mathematics, Computer Networks and Communications, Software, Computer Science Applications, Theoretical Computer Science - Published
- 2010
29. Orchestration of Grid-Enabled Geospatial Web Services in Geoscientific Workflows
- Author
-
Gobe Hobona, Hugo Hiden, Philip James, and David Fairbairn
- Subjects
Distributed GIS, Computer science, WS-I Basic Profile, Web Feature Service, Services computing, World Wide Web, Business Process Execution Language, Control and Systems Engineering, Geospatial PDF, Web Coverage Service, Electrical and Electronic Engineering, Open Grid Services Architecture - Abstract
The need for computational resources capable of processing geospatial data has accelerated the uptake of geospatial web services. Several academic and commercial organizations now offer geospatial web services for data provision, coordinate transformation, geocoding and several other tasks. These web services adopt specifications developed by the Open Geospatial Consortium (OGC) - the leading standardization body for Geographic Information Systems. In parallel with efforts of the OGC, the Grid computing community has published specifications for developing Grid applications. The Open Grid Forum (OGF) is the main body that promotes interoperability between Grid computing systems. This study examines the integration of Grid services and geospatial web services into workflows for Geoscientific processing. An architecture is proposed that bridges web services based on the abstract geospatial architecture (ISO19119) and the Open Grid Services Architecture (OGSA). The paper presents a workflow management system, called SAW-GEO, that supports orchestration of Grid-enabled geospatial web services. An implementation of SAW-GEO is presented, based on both the Simple Conceptual Unified Flow Language (SCUFL) and the Business Process Execution Language for Web Services (WS-BPEL or BPEL for short).
- Published
- 2010
30. Workflow provenance
- Author
-
Simon Woodman, Hugo Hiden, and Paul Watson
- Subjects
Data processing, Workflow, Database, Computer science, Order (business), Transparency (human–computer interaction), Data processing system, TRACE (psycholinguistics), Term (time) - Abstract
The storage and retrieval of provenance is a critical piece of functionality for many data processing systems. There are numerous cases where, in order to satisfy regulatory requirements (such as drug development and medical data processing), accurately reproduce results (scientific research) or to maintain financial transparency (for example, to meet Sarbanes-Oxley regulations in the US), a full and accurate provenance trace is vital. Whilst it is always possible to meet these requirements by storing every piece of intermediate data generated by a sequence of calculations, the costs associated with retaining data that may have a low probability of future retrieval are significant. There is, however, an opportunity for a reduction in the cost of storage by opting not to store certain intermediate results that can be regenerated given a knowledge of the processing code and input data that generated them. This paper presents an approach which is able, via a collection of past performance and provenance data, to make decisions based on the underlying storage and computation costs as to which intermediate data to retain and which to regenerate on demand.
- Published
- 2015
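Entry 30's retain-or-regenerate decision weighs storage cost over a retention horizon against the expected cost of recomputation. One way to write that trade-off down (the unit prices and decision rule are illustrative, not the paper's cost model):

```python
def keep_or_regenerate(size_gb, recompute_s, p_access,
                       storage_gb_month=0.023, cpu_per_s=0.0001,
                       horizon_months=12):
    """Retain an intermediate result only if storing it is cheaper than the
    expected cost of regenerating it on demand (illustrative unit prices)."""
    store_cost = size_gb * storage_gb_month * horizon_months
    regen_cost = p_access * recompute_s * cpu_per_s
    return "store" if store_cost < regen_cost else "regenerate on demand"

# 5 GB intermediate, one hour to recompute, 10% chance anyone asks for it again.
print(keep_or_regenerate(size_gb=5.0, recompute_s=3600, p_access=0.1))
```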
31. Automated Production Support for the Bioprocess Industry
- Author
-
Gary Montague, Kathryn Kipling, Barry Lennox, Hugo Hiden, Jarka Glassey, and Mark J. Willis
- Subjects
Quality Control, Stochastic Processes, Computer science, Process (engineering), Industrial production, Expert Systems, Process variable, Industrial engineering, Automation, Expert system, Industrial Microbiology, Consistency (database systems), Fermentation, Multivariate Analysis, Production (economics), Bioprocess, Algorithms, Biotechnology - Abstract
This paper describes the application of Artificial Intelligence and Multivariate Statistical Techniques to two industrial fermentation systems. In the first example, an Expert System is shown to provide tighter control of an important process parameter. This is shown to lead to improved consistency of operation. In the second application, Principal Component Analysis is applied to a final stage fermentation production facility. The results presented indicate that the algorithm can provide concise indicators of process faults that can be presented to the operators to assist them in taking suitable corrective actions.
- Published
- 2002
32. Inferential quality assessment in breakfast cereal production
- Author
-
Hugo Hiden, Gary Montague, Steven M. Albert, Elaine Martin, A. K. Conlin, and A.J. Morris
- Subjects
Production line, Engineering, Process modeling, Operations research, Process (engineering), Breakfast cereal, Partial least squares regression, Production (economics), Quality (business), Product (category theory), Food Science - Abstract
This paper describes the development of inferential models for the provision of real-time, on-line estimates of the quality of a breakfast cereal for production line operators. Five quality variables were selected and on-line measurements reflective of the key process conditions were identified. Following process data logging, a number of linear and non-linear data-based modelling methods were applied to identify relationships between the on-line measurements and the product quality. Off-line verification of the models indicated that the prediction accuracy achieved was sufficient to offer the opportunity for quality control improvements. The models were subsequently implemented on-line to provide the process operators with frequent estimates of product quality. Performance assessment has indicated a reduction in the variability of all five quality parameters. In addition to details of the modelling, the decisions relating to the development strategy and justification for implementation are considered.
- Published
- 2001
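The inferential models in entry 32 map on-line process measurements to off-line quality variables. Partial least squares, one of the linear methods the record's keywords point to, can be sketched with scikit-learn on synthetic data (the real models were fitted to logged plant data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in: 200 snapshots of 8 on-line measurements -> 1 quality variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)

pls = PLSRegression(n_components=3).fit(X, y)
print(pls.predict(X[:3]))  # deployed on-line, these become real-time quality estimates
```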
33. Moving Window MSPC and Its Application to Batch Processes
- Author
-
Hugo Hiden, Georg KornfeId, Barry Lennox, and Gary Montague
- Subjects
Multivariate statistical process control, Engineering, Principal component analysis, Statistical model, Moving window, Data mining, Batch operation, Fermentation system, Fault detection and isolation - Abstract
Multivariate statistical process control (MSPC) has been demonstrated to provide assistance in the monitoring of fed-batch fermentation systems. The application of MSPC to batch systems is complicated by the non-linear profiles that the systems exhibit. These complications are typically overcome through what is termed 'unfolding' the data. This approach, however, poses two problems. The first is that the future profile of batch operation must be assumed in the analysis, and the second is that the length of each batch must be the same. In this paper an alternative approach to unfolding the data is presented. This approach, termed moving window MSPC, determines the statistical model on a moving window of data that is continuously updated. The approach is applied to an industrial fed-batch fermentation system and its benefits over traditional data unfolding techniques are presented.
- Published
- 2001
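Moving window MSPC, as described in entry 33, refits the statistical model on a sliding window instead of unfolding whole batches. A toy version using PCA and Hotelling's T² (window length, component count and the simulated fault are arbitrary choices, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import PCA

def moving_window_t2(data, window=50, n_components=2):
    """Refit PCA on a sliding window and score the newest sample with Hotelling's T^2."""
    t2 = []
    for t in range(window, len(data)):
        pca = PCA(n_components=n_components).fit(data[t - window:t])
        scores = pca.transform(data[t:t + 1])
        t2.append(float(np.sum(scores**2 / pca.explained_variance_)))
    return np.array(t2)

rng = np.random.default_rng(1)
batch = rng.normal(size=(300, 6))
batch[250:] += 3.0                    # simulated process deviation
print(moving_window_t2(batch)[-5:])   # T^2 rises sharply after the fault
```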
34. Application of multivariate statistical process control to batch operations
- Author
-
Gary Montague, Hugo Hiden, Peter R. Goulding, Barry Lennox, and G. Kornfeld
- Subjects
Engineering, Operations research, Process (engineering), General Chemical Engineering, Condition monitoring, Industrial engineering, Fermentation system, Fault detection and isolation, Computer Science Applications, Multivariate statistical process control, Quality (business), Product (category theory), Monitoring tool - Abstract
This paper summarises the results of a 2-year study focusing on the development of a condition monitoring system for a fed-batch fermentation system operated by Biochemie Ltd. in Austria. Consumer pressure has resulted in a greater emphasis in industry on product quality. As a direct consequence, the importance of accurate process monitoring has increased steadily in recent years. This paper demonstrates the application of multivariate statistical routines to provide process operators with a monitoring tool capable of detecting process abnormalities.
- Published
- 2000
35. Case study investigating multivariate statistical techniques for fermentation supervision
- Author
-
Gary Montague, Barry Lennox, Hugo Hiden, and G. Kornfeld
- Subjects
Supervisory systems, Engineering, Process (engineering), Rule-based system, Fault detection and isolation, Computer Science Applications, Multivariate statistical process control, Interfacing, Principal component analysis, Data mining, Multivariate statistics - Abstract
This paper describes a case study in which multivariate statistical procedures have been developed to assist in the supervision of an industrial fed-batch fermentation process operated by Biochemie in Austria. The procedures have been developed to enhance the monitoring capabilities of the current system by interfacing directly into the present G2 real-time knowledge based supervisory system. While the G2 rule based system is useful for detecting deviations in single variables, it has been found to be unable to detect some of the more subtle deviations caused by the complex interactions between the process variables. Multivariate statistical techniques have been utilised in this study to provide early indications of deviations from nominal batch behaviour. The cause of these deviations can subsequently be determined by interrogating the information produced by these algorithms. Although the multivariate statistical techniques adopted in this paper are not new, their integration within the industrial supervisory system and the on-line application to the industrial fermentation process is novel.
- Published
- 1999
36. Non-linear principal components analysis using genetic programming
- Author
-
Gary Montague, Hugo Hiden, Mark J. Willis, and M.T. Tham
- Subjects
Multivariate statistics, Process (engineering), Computer science, General Chemical Engineering, Genetic programming, Computer Science Applications, Nonlinear system, Fractionating column, Principal component analysis, Data mining, Limit (mathematics) - Abstract
Principal components analysis (PCA) is a standard statistical technique, which is frequently employed in the analysis of large highly correlated data sets. As it stands, PCA is a linear technique which can limit its relevance to the non-linear systems frequently encountered in the chemical process industries. Several attempts to extend linear PCA to cover non-linear data sets have been made, and will be briefly reviewed in this paper. We propose a symbolically oriented technique for non-linear PCA, which is based on the genetic programming (GP) paradigm. Its applicability will be demonstrated using two simple non-linear systems and data collected from an industrial distillation column.
- Published
- 1999
37. Multivariate Statistical Monitoring Procedures for Fermentation Supervision: An Industrial Case Study
- Author
-
Gary Montague, Hugo Hiden, and G. Kornfeld
- Subjects
Engineering, Rule-based system, Industrial fermentation, Industrial engineering, Fault detection and isolation, Knowledge-based systems, Supervisory control, Principal component analysis, Batch processing, Multivariate statistics, Process engineering - Abstract
This paper describes a case study in which multivariate statistical procedures have been developed to assist in the supervision of an industrial fed-batch fermentation process. Currently, supervisory control of the industrial fermentation is aided through use of the G2 real-time knowledge-based system. The rule-based system is complemented by a number of algorithmic methods. While rules are useful for detecting deviations in single variables, complex interactions between fermentation conditions during batch operation can lead to more subtle deviations. One approach that can be used in such circumstances is multi-way principal component analysis. This provides early indications of deviations from nominal batch process behaviour, and contribution plots can subsequently be utilised to assist in identifying the causes.
- Published
- 1998
38. Developing Inferential Estimation Algorithms using Genetic Programming
- Author
-
Mark J. Willis, Hugo Hiden, and Gary Montague
- Subjects
Structure (mathematical logic), Artificial neural network, Finite impulse response, Computer science, Feed forward, Experimental data, Genetic programming, Machine learning, Root mean square, Genetic algorithm, Artificial intelligence, Algorithm, Selection (genetic algorithm) - Abstract
In this contribution, Genetic Programming (GP) is used to develop inferential estimation models using experimental data. GP performs symbolic optimisation, automatically determining both the structure and the complexity of an empirical model. After a tutorial example, the usefulness of the technique is demonstrated by the development of an inferential estimation model of a plasticating extruder. A statistical analysis procedure is used as a guide in the selection of the final model structure. For the industrial case study, the inferential models obtained using the GP algorithm are compared to those obtained using a linear, finite impulse response model and a feedforward artificial neural network (FANN). For this application, the GP technique produces models with a significantly lower Root Mean Square (RMS) error than the other techniques.
- Published
- 1997
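Entry 38 evolves both the structure and the complexity of an empirical model with GP. The authors used their own GP implementation; as a stand-in, the third-party gplearn library expresses the same symbolic regression idea (the hyperparameters below are arbitrary):

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # third-party GP library

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1])   # hidden "process" the GP should rediscover

gp = SymbolicRegressor(population_size=500, generations=10,
                       function_set=("add", "sub", "mul", "sin"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)
print(gp._program)  # the evolved symbolic model structure
```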
39. Systems modelling using genetic programming
- Author
-
Geoffrey W. Barton, Ben McKay, Hugo Hiden, Mark Hinchliffe, and Mark J. Willis
- Subjects
Structure (mathematical logic), Vacuum distillation, General Chemical Engineering, Empirical modelling, Model development, Genetic programming, Symbolic regression, Process systems, Column (database), Algorithm, Computer Science Applications, Mathematics - Abstract
In this contribution, a Genetic Programming (GP) algorithm is used to develop empirical models of chemical process systems. GP performs symbolic regression, determining both the structure and the complexity of a model. Initially, steady-state model development using a GP algorithm is considered, next the methodology is extended to the development of dynamic input-output models. The usefulness of the technique is demonstrated by the development of inferential estimation models for two typical processes: a vacuum distillation column and a twin screw cooking extruder.
- Published
- 1997
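One step the extension above relies on is recasting a dynamic modelling problem as a static regression over lagged inputs and outputs. The sketch below builds such a design matrix for a toy plant; the lag orders and plant equation are assumptions, and the resulting matrix could be handed to any symbolic regression routine.

```python
# Build a lagged input-output design matrix for dynamic model identification.
import numpy as np

def lagged_design(u, y, nu=2, ny=2):
    """Rows are [y(t-1)..y(t-ny), u(t-1)..u(t-nu)]; the target is y(t)."""
    start = max(nu, ny)
    rows, target = [], []
    for t in range(start, len(y)):
        rows.append(np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]]))
        target.append(y[t])
    return np.asarray(rows), np.asarray(target)

rng = np.random.default_rng(2)
u = rng.uniform(size=300)                    # plant input, e.g. a feed rate
y = np.zeros(300)
for t in range(1, 300):                      # toy first-order nonlinear plant
    y[t] = 0.8 * y[t - 1] + 0.3 * u[t - 1] ** 2

X, target = lagged_design(u, y)
print(X.shape, target.shape)                 # (298, 4) (298,)
```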
40. A framework for dynamically generating predictive models of workflow execution
- Author
-
Paul Watson, Simon Woodman, and Hugo Hiden
- Subjects
Workflow ,business.industry ,Computer science ,Distributed computing ,Automatic identification and data capture ,Component-based software engineering ,Cloud computing ,business ,Workflow engine ,Predictive modelling ,Workflow management system ,Workflow technology - Abstract
The ability to accurately predict the performance of software components executing within a Cloud environment is an area of intense interest to many researchers. An accurate prediction of the time taken for a piece of code to execute would be beneficial for both planning and cost optimisation purposes. To that end, this paper proposes a performance data capture and modelling architecture that can be used to generate models of code execution time that are dynamically updated as additional performance data is collected. To demonstrate the utility of this approach, the workflow engine within the e-Science Central Cloud platform has been instrumented to capture execution data with a view to generating predictive models of workflow performance. Models have been generated for both simple and more complex workflow components operating on local hardware and within a virtualised Cloud environment, and the ability to generate accurate performance predictions, subject to a number of caveats, is demonstrated (a minimal sketch follows this entry).
- Published
- 2013
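A minimal sketch of the dynamic-updating idea described above: execution records accumulate and a per-component timing model is periodically refitted. The feature choice (input size), the linear model, and the refit-every-N policy are illustrative assumptions, not the paper's architecture.

```python
# Dynamically updated execution-time model for one workflow component.
import numpy as np
from sklearn.linear_model import LinearRegression

class TimingModel:
    def __init__(self, refit_every=10):
        self.sizes, self.durations = [], []
        self.model, self.refit_every = None, refit_every

    def record(self, input_size, seconds):
        """Store one observed execution and refit periodically."""
        self.sizes.append(input_size)
        self.durations.append(seconds)
        if len(self.sizes) % self.refit_every == 0:    # periodic dynamic update
            X = np.array(self.sizes).reshape(-1, 1)
            self.model = LinearRegression().fit(X, np.array(self.durations))

    def predict(self, input_size):
        if self.model is None:
            return None                                 # not enough data yet
        return float(self.model.predict([[input_size]])[0])

tm = TimingModel()
for size in range(10, 110, 10):                         # simulated workflow runs
    tm.record(size, 0.05 * size + 1.0)
print("predicted seconds for size 200:", tm.predict(200))
```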
41. Supporting NGS pipelines in the cloud
- Author
-
Goetz Brasche, Kenji Takeda, Andrés Tomás, Fabrizio Gagliardi, Jacek Cala, Simon Woodman, Dennis Gannon, Hakan Soncu, Ignacio Blanquer, and Hugo Hiden
- Subjects
Engineering ,Data curation ,business.industry ,Bioinformatics ,Data management ,Best practice ,Globe ,Cloud computing ,World Wide Web ,Pipeline transport ,medicine.anatomical_structure ,Mutation analysis ,NGS ,medicine ,Computer Science and Artificial Intelligence ,business ,Cloud ,Computer Languages and Systems - Abstract
Cloud4Science is a research activity funded by Microsoft that is developing a unique online platform providing cloud services, datasets, tools, documentation, tutorials and best practices to meet the needs of researchers across the globe in storing and managing datasets. Cloud4Science initially focuses on dedicated services for the bioinformatics community; its ultimate goal is to support a wide range of scientific communities as the natural first choice for scientific data curation and analysis. The authors thank Microsoft and the Cloud4Science project for funding this research activity.
- Published
- 2013
42. Developing cloud applications using the e-Science Central platform
- Author
-
Paul Watson, Hugo Hiden, Jacek Cala, and Simon Woodman
- Subjects
Representational state transfer ,Computer science ,computer.internet_protocol ,workflow ,General Mathematics ,Data management ,General Physics and Astronomy ,Cloud computing ,02 engineering and technology ,computer.software_genre ,Upload ,0202 electrical engineering, electronic engineering, information engineering ,Database ,Application programming interface ,business.industry ,Software as a service ,cloud computing ,General Engineering ,e-Science ,020206 networking & telecommunications ,Articles ,Microsoft Windows ,020201 artificial intelligence & image processing ,Data mining ,business ,computer ,Research Article - Abstract
This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction.
- Published
- 2012
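As a usage illustration only, the sketch below shows how an external application might drive a REST-style platform API of the kind the entry above describes. The endpoint paths, payload fields, and token are hypothetical; the actual e-Science Central API documentation should be consulted for real integrations.

```python
# Hypothetical REST client sketch: upload a file, then invoke a workflow on it.
import requests

BASE = "https://example.org/esc/api"        # placeholder host, not a real endpoint
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credential

# Upload a small data file (hypothetical /files route).
doc = requests.post(
    f"{BASE}/files", headers=HEADERS,
    files={"file": ("spectra.csv", b"wavelength,intensity\n500,0.12\n")}).json()

# Start a workflow over the uploaded file (hypothetical /workflows route).
run = requests.post(f"{BASE}/workflows/wf-123/invoke",
                    headers=HEADERS,
                    json={"inputFileId": doc["id"]}).json()
print("workflow run id:", run["id"])
```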
43. Achieving reproducibility by combining provenance with service and workflow versioning
- Author
-
Hugo Hiden, Paolo Missier, Simon Woodman, and Paul Watson
- Subjects
Service (systems architecture) ,Database ,Computer science ,business.industry ,Cloud computing ,computer.file_format ,computer.software_genre ,Upload ,Workflow ,e-Science ,Web application ,Executable ,business ,computer ,Software versioning - Abstract
Capturing and exploiting provenance information is considered to be important across a range of scientific, medical, commercial and Web applications, including recent trends towards publishing provenance-rich, executable papers. This article shows how the range of useful questions that provenance can answer is greatly increased when it is encapsulated into a system that can store and execute both current and old versions of workflows and services. e-Science Central provides a scalable, secure cloud platform for application developers. They can use it to upload data, for storage on the cloud, and services, which can be written in a variety of languages. These services can then be combined through workflows which are enacted in the cloud to compute over the data. When a workflow runs, a complete provenance trace is recorded. This paper shows how this provenance trace, used in conjunction with the ability to execute old versions of services and workflows (rather than just the latest versions), can provide useful information that would otherwise be unavailable, including the key ability to reproduce experiments and to compare the effects of old and new versions of services on computations (a data-structure sketch follows this entry).
- Published
- 2011
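A minimal data-structure sketch of the idea above: a provenance trace that pins exact service and workflow versions, so a later re-run can request those versions rather than the latest ones. All field names are assumptions for illustration.

```python
# Provenance trace with version pinning for reproducible re-execution.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ServiceInvocation:
    service_id: str
    version: int                 # pinned service version, not "latest"
    inputs: Dict[str, str]
    outputs: Dict[str, str]

@dataclass
class ProvenanceTrace:
    workflow_id: str
    workflow_version: int        # pinned workflow version
    invocations: List[ServiceInvocation] = field(default_factory=list)

    def rerun_plan(self):
        """What an engine would need to reproduce the original run exactly."""
        return [(i.service_id, i.version, i.inputs) for i in self.invocations]

trace = ProvenanceTrace("wf-qsar", workflow_version=3)
trace.invocations.append(
    ServiceInvocation("descriptor-calc", version=7,
                      inputs={"data": "file-42"}, outputs={"result": "file-43"}))
print(trace.rerun_plan())
```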
44. The panel of experts cloud pattern
- Author
-
David E. Leahy, Jacek Cala, Simon Woodman, Hugo Hiden, Paolo Missier, and Paul Watson
- Subjects
Set (abstract data type) ,Range (mathematics) ,Workflow ,business.industry ,Computer science ,Cloud computing ,Function (mathematics) ,Data mining ,business ,computer.software_genre ,computer ,Dual (category theory) - Abstract
In this paper we describe the Panel of Experts cloud pattern and give examples of its use. The Panel of Experts pattern can be applied when a range of algorithms "compete" to provide the best solution to a problem. It can therefore be viewed as the dual of map-reduce [2]: it applies a set of functions to one data item, whereas map-reduce applies one function to a set of data items. The pattern has been evaluated through a large chemical informatics application which can take days to run on tens of cloud nodes (a sketch follows this entry).
- Published
- 2011
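The pattern lends itself to a compact illustration. Below, a set of invented "expert" functions is applied concurrently to a single data item and the best-scoring answer is kept; the experts and scoring rule are placeholders, not the chemical informatics application from the paper.

```python
# Panel of Experts sketch: many competing functions, one data item.
from concurrent.futures import ThreadPoolExecutor

def expert_a(x):                  # each expert is one candidate algorithm
    return x * 2

def expert_b(x):
    return x ** 2

def expert_c(x):
    return x + 10

def panel_of_experts(item, experts, score):
    with ThreadPoolExecutor() as pool:        # experts run concurrently
        answers = list(pool.map(lambda e: e(item), experts))
    return max(answers, key=score)            # keep the best solution

# Note the duality with map-reduce: a set of functions over one item,
# rather than one function over a set of items.
best = panel_of_experts(7, [expert_a, expert_b, expert_c],
                        score=lambda ans: -abs(ans - 20))  # closest to target 20
print("winning answer:", best)
```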
45. AMUC: Associated Motion capture User Categories
- Author
-
Martyn Dade-Robertson, Dave Green, Patrick Olivier, Paul Dunphy, Paul Watson, Hugo Hiden, Sally Jane Norman, Jonathan Hook, Dan Jackson, Anita M.-A. Chan, and Sian E.M. Lawson
- Subjects
Data stream mining ,Computer science ,General Mathematics ,Movement ,General Engineering ,Process (computing) ,General Physics and Astronomy ,Information Storage and Retrieval ,Grid ,Motion capture ,Sketch ,Motion (physics) ,Identification (information) ,Motion ,User-Computer Interface ,Human–computer interaction ,Digital image processing ,Image Processing, Computer-Assisted - Abstract
The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, with time and position signals obtained from the sketch mapped to properties of the data streams stored in the motion capture repository (a retrieval sketch follows this entry). As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design, navigation and digital image processing underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing rich areas for further development to be identified.
- Published
- 2009
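A minimal sketch of the retrieval idea, assuming the tablet sketch and the archived motions can both be reduced to one-dimensional signals: resample, normalise, and rank by distance. The signal representation and distance measure are illustrative guesses, not the AMUC implementation.

```python
# Sketch-to-stream retrieval: rank archived motion channels by similarity.
import numpy as np

def resample(signal, n):
    """Linearly resample a 1-D signal to n points for comparison."""
    old = np.linspace(0, 1, len(signal))
    return np.interp(np.linspace(0, 1, n), old, signal)

def query(sketch, archive, n=64, k=3):
    q = resample(np.asarray(sketch, dtype=float), n)
    q = (q - q.mean()) / (q.std() + 1e-12)            # normalise shape, not scale
    scored = []
    for name, stream in archive.items():
        s = resample(np.asarray(stream, dtype=float), n)
        s = (s - s.mean()) / (s.std() + 1e-12)
        scored.append((float(np.linalg.norm(q - s)), name))
    return sorted(scored)[:k]                          # k nearest motions

rng = np.random.default_rng(3)
archive = {f"take-{i}": np.cumsum(rng.normal(size=200)) for i in range(10)}
# A downsampled copy of take-4 stands in for a user sketch of that motion.
print(query(sketch=archive["take-4"][::2], archive=archive))
```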
46. CARMEN: A Neuroscience Portal for Collaboration via Tools and Data sharing
- Author
-
Hugo Hiden, Mark Jessop, Jim Austin, Paul Watson, Colin D. Ingram, Leslie S. Smith, Phil Lord, Thomas Jackson, Martyn Fletcher, and Bojian Liang
- Subjects
Data sharing ,World Wide Web ,Computer science ,Biomedical Engineering ,Neuroscience (miscellaneous) ,Data science ,Computer Science Applications - Published
- 2009
47. A computer architecture to support the operation of virtual organisations for the chemical development lifecycle
- Author
-
A Conlin, Julian Morris, Hugo Hiden, AR Wright, R Smith, and Philip English
- Subjects
Process management ,Operations research ,Computer science ,business.industry ,Time to market ,New product development ,Key (cryptography) ,Context (language use) ,System lifecycle ,Architecture ,business ,Application lifecycle management ,Outsourcing - Abstract
Fine chemical and pharmaceutical manufacturers focus on new product development as a means of growth, with time to market as a key driver. Time-to-market improvements are, however, limited by a company's infrastructure: for example, in-house design and manufacturing capacity, or the ability to respond to a dynamic environment. In many marketplaces there is an increasing trend towards outsourcing and operating networks of specialist providers. Different specialist companies may be involved at all stages of the R&D lifecycle, providing services ranging from basic research or safety testing to industrial-scale manufacturing. This outsourcing concept can be generalised to support advanced notions of collaboration, such as loosely coupled but highly integrated networks of companies cooperating as a single enterprise. These networks can be described as dynamic Virtual Organisations (VOs) (Demchenko, 2004). This paper outlines a computer-based architecture developed to address the operational difficulties associated with the creation and operation of this type of VO, and presents it within the context of the chemical development lifecycle.
- Published
- 2005
48. Non-linear principal components analysis using genetic programming
- Author
-
Gary Montague, P. Turner, Mark J. Willis, Hugo Hiden, and M.T. Tham
- Subjects
Computer science ,business.industry ,Sparse PCA ,Genetic programming ,Pattern recognition ,Nonlinear system ,Fractionating column ,Principal component analysis ,Genetic algorithm ,Limit (mathematics) ,Artificial intelligence ,business ,Algorithm ,Matrix calculus - Abstract
Principal components analysis (PCA) is a standard statistical technique, frequently employed in the analysis of large, highly correlated data sets. As it stands, PCA is a linear technique, which can limit its relevance to the highly nonlinear systems frequently encountered in the chemical process industries. Several attempts to extend linear PCA to cover nonlinear data sets have been made and are briefly reviewed in this paper. We propose a symbolically oriented technique for nonlinear PCA, based on the genetic programming (GP) paradigm. Its applicability is demonstrated using two simple nonlinear systems and industrial data collected from a distillation column. It is suggested that the GP-based nonlinear PCA algorithm achieves the objectives of nonlinear PCA while giving a high degree of structural parsimony (a fitness-function sketch follows this entry).
- Published
- 1997
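One way to frame the fitness such a GP search needs is sketched below: a candidate symbolic score function is rewarded when the original variables can be reconstructed from a simple polynomial in its output. The fitness form and the two hand-written candidates are illustrative assumptions, not the paper's algorithm; a GP system would search over candidate expressions automatically.

```python
# Fitness sketch for GP-based nonlinear PCA: lower residual = better component.
import numpy as np

def fitness(score_fn, X, degree=3):
    """Residual after reconstructing every column of X from a polynomial
    in the candidate score t = score_fn(X). Smaller is better."""
    t = score_fn(X)
    T = np.column_stack([t ** d for d in range(degree + 1)])
    coef, *_ = np.linalg.lstsq(T, X, rcond=None)
    return float(((X - T @ coef) ** 2).sum())

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 500)
X = np.column_stack([x, x ** 2, x ** 3]) + 0.01 * rng.normal(size=(500, 3))

good = lambda X: X[:, 0]          # a candidate GP might evolve: t = x1
poor = lambda X: X[:, 1]          # loses the sign of x1, so odd terms fail
print("good candidate residual:", round(fitness(good, X), 3))
print("poor candidate residual:", round(fitness(poor, X), 3))
```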
49. Genetic programming: an introduction and survey of applications
- Author
-
Mark J. Willis, Gary Montague, Peter Marenbach, Ben McKay, and Hugo Hiden
- Subjects
Signal processing ,Operations research ,Computer science ,business.industry ,Genetic algorithm ,Computer Aided Design ,Genetic programming ,computer.software_genre ,Software engineering ,business ,computer ,Scheduling (computing) - Abstract
The aim of this paper is to provide an introduction to the rapidly developing field of genetic programming (GP). Particular emphasis is placed on the application of GP to engineering problem solving. First, the basic methodology is introduced. This is followed by a review of applications in the areas of systems modelling, control, optimisation and scheduling, design and signal processing. The paper concludes by suggesting potential avenues of research.
- Published
- 1997
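To make the basic methodology concrete, the sketch below shows the core GP representation: programs as randomly grown expression trees that can be evaluated on data. Selection, crossover, and mutation, which the survey covers, are omitted for brevity; the function and terminal sets are invented.

```python
# Minimal GP building block: random expression trees and their evaluation.
import operator, random

FUNCTIONS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]
TERMINALS = ['x', 1.0, 2.0]

def grow(depth):
    """Randomly grow an expression tree to a maximum depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    fn, sym = random.choice(FUNCTIONS)
    return (fn, sym, grow(depth - 1), grow(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    fn, _, left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def show(tree):
    if not isinstance(tree, tuple):
        return str(tree)
    return f"({show(tree[2])} {tree[1]} {show(tree[3])})"

random.seed(42)
individual = grow(depth=3)
print(show(individual), "at x=2 ->", evaluate(individual, 2.0))
```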