Search Results (32 results)
2. Should AI allocate livers for transplant? Public attitudes and ethical considerations.
- Author
-
Drezga-Kleiminger, Max, Demaree-Cotton, Joanna, Koplin, Julian, Savulescu, Julian, and Wilkinson, Dominic
- Subjects
PUBLIC opinion, MORAL attitudes, LIVER transplantation, ARTIFICIAL intelligence, PATIENT compliance
- Abstract
Background: Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remains unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. Methods: We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey including online responses from 172 UK laypeople, recruited through Prolific Academic. Findings: Most participants found AI in liver allocation acceptable (69.2%) and would not be less likely to donate their organs if AI was used in allocation (72.7%). Respondents thought AI was more likely to be consistent and less biased compared to humans, although they were concerned about the "dehumanisation of healthcare" and whether AI could consider important nuances in allocation decisions. Participants valued accuracy, impartiality, and consistency in a decision-maker more than interpretability and empathy. Respondents were split on whether AI should be trained on previous decisions or programmed with specific objectives. Whether allocation decisions were made by a transplant committee or by AI, participants valued consideration of urgency, survival likelihood, life years gained, age, future medication compliance, quality of life, future alcohol use and past alcohol use. On the other hand, the majority thought the following factors were not relevant to prioritisation: past crime, future crime, future societal contribution, social disadvantage, and gender. Conclusions: There are good reasons to use AI in liver allocation, and our sample of participants appeared to support its use. If confirmed, this support would give democratic legitimacy to the use of AI in this context and reduce the risk that donation rates could be affected negatively. Our findings on specific ethical concerns also identify potential expectations and reservations laypeople have regarding AI in this area, which can inform how AI in liver allocation could be best implemented. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. A survey on clinical natural language processing in the United Kingdom from 2007 to 2022.
- Author
-
Wu, Honghan, Wang, Minhong, Wu, Jinge, Francis, Farah, Chang, Yun-Hsuan, Shavick, Alex, Dong, Hang, Poon, Michael T. C., Fitzpatrick, Natalie, Levine, Adam P., Slater, Luke T., Handy, Alex, Karwath, Andreas, Gkoutos, Georgios V., Chelala, Claude, Shah, Anoop Dinesh, Stewart, Robert, Collier, Nigel, Alex, Beatrice, and Whiteley, William
- Subjects
COMPUTER software, COMPUTERS, NATURAL language processing, STAKEHOLDER analysis, MEDICAL care, HEALTH status indicators, TASK performance, MACHINE learning, DATABASE management, BUSINESS networks, NATIONAL health services, INFORMATION retrieval, INTERPROFESSIONAL relations, RESEARCH funding, BUDGET, ENDOWMENTS, ADVERSE health care events, ELECTRONIC health records, INFORMATION technology, PHENOTYPES, ALGORITHMS, EVALUATION
- Abstract
Much of the knowledge and information needed for enabling high-quality clinical research is stored in free-text format. Natural language processing (NLP) has been used to extract information from these sources at scale for several decades. This paper aims to present a comprehensive review of clinical NLP for the past 15 years in the UK to identify the community, depict its evolution, analyse methodologies and applications, and identify the main barriers. We collect a dataset of clinical NLP projects (n = 94; total funding £41.97m) funded by UK funders or the European Union's funding programmes. Additionally, we extract details on 9 funders, 137 organisations, 139 persons and 431 research papers. Networks are created from timestamped data interlinking all entities, and network analysis is subsequently applied to generate insights. 431 publications are identified as part of a literature review, of which 107 are eligible for final analysis. Results show, not surprisingly, that clinical NLP in the UK has increased substantially in the last 15 years: the total budget in the period of 2019–2022 was 80 times that of 2007–2010. However, effort is still required to deepen areas such as disease (sub-)phenotyping and broaden application domains. There is also a need to improve links between academia and industry and enable deployments in real-world settings for the realisation of clinical NLP's great potential in care delivery. The major barriers include research and development access to hospital data, lack of capable computational resources in the right places, the scarcity of labelled data and barriers to sharing of pretrained models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. GESLM algorithm for detecting causal SNPs in GWAS with multiple phenotypes.
- Author
-
Lyu, Ruiqi, Sun, Jianle, Xu, Dong, Jiang, Qianxue, Wei, Chaochun, and Zhang, Yue
- Subjects
GENOME-wide association studies, PHENOTYPES, DIRECTED acyclic graphs, BAYESIAN analysis, SEARCH algorithms, ALGORITHMS
- Abstract
With the development of genome-wide association studies, extracting information from large-scale data has become an issue of common concern, since traditional methods are not well suited to problems such as identifying locus-to-locus interactions (also known as epistasis). Previous epistatic studies mainly focused on local information with a single outcome (phenotype), while in this paper, we developed a two-stage global search algorithm, Greedy Equivalence Search with Local Modification (GESLM), to implement a global search of directed acyclic graphs in order to identify genome-wide epistatic interactions with multiple outcome variables (phenotypes) in a case–control design. GESLM integrates the advantages of score-based methods and constraint-based methods to learn the phenotype-related Bayesian network and is powerful and robust in finding interaction structures that display both genetic associations with phenotypes and gene interactions. We compared GESLM with some common phenotype-related loci detecting methods in simulation studies. The results showed that our method improved accuracy and efficiency compared with others, especially in an unbalanced case–control study. In addition, its application to the UK Biobank dataset suggested that our algorithm performs well when handling genome-wide association data with more than one phenotype. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
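The score-based stage of an algorithm in the GESLM family can be illustrated with a toy greedy search. The Python sketch below greedily selects parent SNPs for each phenotype node of a Bayesian network by maximising a BIC score on synthetic genotype data; it is a simplified stand-in, not the published GESLM, which additionally applies constraint-based pruning and local modification, and all variable names are illustrative.

```python
# Toy sketch of score-based Bayesian-network structure search: greedy
# forward selection of parent SNPs per phenotype node, scored by BIC
# on discrete data. Not the published GESLM implementation.
import numpy as np
import pandas as pd

def bic_score(df, child, parents):
    """BIC of P(child | parents) for discrete columns (larger is better)."""
    n = len(df)
    k_child = df[child].nunique()
    groups = df.groupby(parents) if parents else [(None, df)]
    loglik, n_configs = 0.0, 0
    for _, g in groups:
        n_configs += 1
        counts = g[child].value_counts().to_numpy()
        loglik += (counts * np.log(counts / len(g))).sum()
    return loglik - 0.5 * n_configs * (k_child - 1) * np.log(n)

def greedy_parents(df, child, candidates):
    """Add the parent that most improves BIC until no addition helps."""
    parents, best = [], bic_score(df, child, [])
    while True:
        scored = [(bic_score(df, child, parents + [c]), c)
                  for c in candidates if c not in parents]
        if not scored or max(scored)[0] <= best:
            return parents
        best, chosen = max(scored)
        parents.append(chosen)

rng = np.random.default_rng(0)
n = 2000
snp1, snp2 = rng.integers(0, 3, n), rng.integers(0, 3, n)  # genotypes 0/1/2
df = pd.DataFrame({
    "snp1": snp1, "snp2": snp2,
    # pheno1 carries an epistatic snp1 x snp2 signal; pheno2 only snp1.
    "pheno1": ((snp1 * snp2 + rng.normal(0, 1, n)) > 2).astype(int),
    "pheno2": ((snp1 + rng.normal(0, 1, n)) > 1).astype(int),
})
for ph in ("pheno1", "pheno2"):
    print(ph, "<-", greedy_parents(df, ph, ["snp1", "snp2"]))
```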
5. An Alternative Method for Traffic Accident Severity Prediction: Using Deep Forests Algorithm.
- Author
-
Gan, Jing, Li, Linheng, Zhang, Dapeng, Yi, Ziwei, and Xiang, Qiaojun
- Subjects
ALGORITHMS, FORECASTING, TRAFFIC accidents, SUSTAINABLE transportation, DECISION trees, TRAFFIC safety, MACHINE learning, ROAD safety measures
- Abstract
Traffic safety has always been an important issue in sustainable transportation development, and the prediction of traffic accident severity remains a crucial and challenging problem in the domain of traffic safety. A huge variety of forecasting models have been proposed to meet this challenge. These models gradually evolved from linear to nonlinear forms and from traditional statistical regression models to current popular machine learning models. Recently, a machine learning algorithm called Deep Forests, based on decision tree ensembles and first proposed by a research team at Nanjing University, has attracted widespread attention. This algorithm has been shown to be more accurate and robust than other machine learning algorithms. Motivated by this benefit, this study employs the UK road safety dataset to propose a novel method for predicting the severity of traffic accidents based on the Deep Forests algorithm. To verify the superiority of our proposed method, several other machine learning algorithm-based prediction models were implemented to predict traffic accident severity with the same dataset, and the prediction results show that the Deep Forests algorithm presents good stability, fewer hyper-parameters, and the highest accuracy under different levels of training data volume. It is expected that the findings from this study will be helpful for the establishment or improvement of an effective traffic safety system within a sustainable transportation system, which is of great significance in helping government managers establish timely proactive strategies for traffic accident prevention and effectively improve road traffic safety. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
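The cascade idea behind Deep Forests (gcForest) is simple to sketch: each layer of forests emits class probabilities that are appended to the original features and fed to the next layer. Below is a minimal Python stand-in built from scikit-learn forests; the real algorithm uses out-of-fold (cross-validated) probabilities and multi-grained scanning, both omitted here for brevity.

```python
# Minimal sketch of the Deep Forests / gcForest cascade: each layer's
# class-probability outputs augment the features of the next layer.
# The real algorithm uses out-of-fold probabilities to avoid the
# overfitting that in-sample predict_proba introduces here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def cascade_fit_predict(X_tr, y_tr, X_te, n_layers=3):
    aug_tr, aug_te = X_tr, X_te
    for _ in range(n_layers):
        probas_tr, probas_te = [], []
        for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
                    ExtraTreesClassifier(n_estimators=100, random_state=0)):
            clf.fit(aug_tr, y_tr)
            probas_tr.append(clf.predict_proba(aug_tr))
            probas_te.append(clf.predict_proba(aug_te))
        # Augment the original features with this layer's probabilities.
        aug_tr = np.hstack([X_tr] + probas_tr)
        aug_te = np.hstack([X_te] + probas_te)
    # Final prediction: average the last layer's probability estimates.
    return np.mean(probas_te, axis=0).argmax(axis=1)

pred = cascade_fit_predict(X_tr, y_tr, X_te)
print(f"cascade accuracy: {accuracy_score(y_te, pred):.3f}")
```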
6. Association of blood pressure with incident diabetic microvascular complications among diabetic patients: Longitudinal findings from the UK Biobank.
- Author
-
Cong Li, Honghua Yu, Zhuoting Zhu, Xianwen Shang, Yu Huang, Sabanayagam, Charumathi, Xiaohong Yang, and Lei Liu
- Subjects
DIABETES complications, HYPERTENSION, GLYCOSYLATED hemoglobin, CONFIDENCE intervals, MULTIVARIATE analysis, DISEASE incidence, MANN Whitney U Test, FISHER exact test, RISK assessment, T-test (Statistics), RESEARCH funding, BLOOD pressure measurement, DATA analysis software, DIABETIC angiopathies, PROPORTIONAL hazards models, ALGORITHMS, DOSE-response relationship in biochemistry, LONGITUDINAL method, DISEASE risk factors, DISEASE complications
- Abstract
Background Evidence suggests a correlation of blood pressure (BP) level with presence of diabetic microvascular complications (DMCs), but the effect of BP on DMCs incidence is not well-established. We aimed to explore the associations between BP and DMCs (diabetic retinopathy, diabetic kidney disease, and diabetic neuropathy) risk in participants with diabetes. Methods This study included 23030 participants, free of any DMCs at baseline, from the UK Biobank. We applied multivariable-adjusted Cox regression models to estimate BP-DMCs association and constructed BP genetic risk scores (GRSs) to test their association with DMCs phenotypes. Differences in incidences of DMCs were also compared between the 2017 ACC/AHA and JNC 7 guidelines (traditional criteria) of hypertension. Results Compared to systolic blood pressure (SBP)<120 mm Hg, participants with SBP≥160 mm Hg had a hazard ratio (HR) of 1.50 (95% confidence interval (CI)=1.09, 2.06) for DMCs. Similarly, DMCs risk increased by 9% for every 10 mm Hg of higher SBP at baseline (95% CI=1.04, 1.13). The highest tercile SBP GRS was associated with 32% higher DMCs risk (95% CI=1.11, 1.56) compared to the lowest tercile. We found no significant differences in DMCs incidence between JNC 7 and 2017 ACC/AHA guidelines. Conclusions Genetic and epidemiological evidence suggests participants with higher SBP had an increased risk of DMCs, but hypertension defined by 2017 ACC/AHA guidelines may not impact DMCs incidence compared with JNC 7 criteria, contributing to the care and prevention of DMCs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
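The analysis reported above (a hazard ratio per 10 mm Hg of SBP) is a standard Cox proportional hazards fit. A minimal sketch on synthetic data, assuming the lifelines package; the covariates and effect sizes are illustrative stand-ins, not the study's UK Biobank analysis or its full adjustment set.

```python
# Minimal Cox PH sketch: hazard of a diabetic microvascular complication
# as a function of baseline SBP, scaled per 10 mmHg as in the paper.
# Synthetic data; not the published multivariable-adjusted model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
sbp = rng.normal(135, 18, n)
age = rng.normal(60, 8, n)
# Simulate event times with higher hazard at higher SBP and age.
hazard = 0.01 * np.exp(0.009 * (sbp - 135) + 0.03 * (age - 60))
t = rng.exponential(1 / hazard)
df = pd.DataFrame({"sbp10": sbp / 10,          # per-10-mmHg scaling
                   "age": age,
                   "time": np.minimum(t, 10),  # censor at 10 years
                   "event": (t < 10).astype(int)})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # exp(coef) for sbp10 ~ HR per 10 mmHg of SBP
```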
7. Transatlantic transferability and replicability of machine-learning algorithms to predict mental health crises.
- Author
-
Guerreiro, João, Garriga, Roger, Lozano Bagén, Toni, Sharma, Brihat, Karnik, Niranjan S., and Matić, Aleksandar
- Subjects
COMPETENCY assessment (Law), PREDICTION models, RECEIVER operating characteristic curves, RESEARCH funding, DESCRIPTIVE statistics, ELECTRONIC health records, MACHINE learning, ALGORITHMS
- Abstract
Transferring and replicating predictive algorithms across healthcare systems constitutes a unique yet crucial challenge that needs to be addressed to enable the widespread adoption of machine learning in healthcare. In this study, we explored the impact of important differences across healthcare systems and the associated Electronic Health Records (EHRs) on machine-learning algorithms to predict mental health crises, up to 28 days in advance. We evaluated both the transferability and replicability of such machine learning models, and for this purpose, we trained six models using features and methods developed on EHR data from the Birmingham and Solihull Mental Health NHS Foundation Trust in the UK. These machine learning models were then used to predict the mental health crises of 2907 patients seen at the Rush University System for Health in the US between 2018 and 2020. The best one was trained on a combination of US-specific structured features and frequency features from anonymized patient notes and achieved an AUROC of 0.837. A model with comparable performance, originally trained using UK structured data, was transferred and then tuned using US data, achieving an AUROC of 0.826. Our findings establish the feasibility of transferring and replicating machine learning models to predict mental health crises across diverse hospital systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
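The transfer-then-tune pattern described above can be sketched with a warm-started linear classifier: train on the source system, measure AUROC on the target, then continue training on target data and measure again. This is a synthetic stand-in for the idea, not the study's feature pipeline or models.

```python
# Minimal sketch of transferring a model across "healthcare systems":
# fit on a source domain, then warm-start tuning on a shifted target
# domain, comparing AUROC before and after. Synthetic data throughout.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# "UK" source domain and a differently generated "US" target domain.
X_uk, y_uk = make_classification(n_samples=4000, n_features=30, random_state=0)
X_us, y_us = make_classification(n_samples=1200, n_features=30,
                                 random_state=1, flip_y=0.05)
X_tune, X_test, y_tune, y_test = train_test_split(X_us, y_us, random_state=0)

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_uk, y_uk, classes=np.array([0, 1]))   # train on source
auc_transfer = roc_auc_score(y_test, clf.decision_function(X_test))

for _ in range(5):                                      # tune on target
    clf.partial_fit(X_tune, y_tune)
auc_tuned = roc_auc_score(y_test, clf.decision_function(X_test))
print(f"transferred AUROC {auc_transfer:.3f} -> tuned AUROC {auc_tuned:.3f}")
```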
8. Contextualizing remote fall risk: Video data capture and implementing ethical AI.
- Author
-
Moore, Jason, McMeekin, Peter, Parkes, Thomas, Walker, Richard, Morris, Rosie, Stuart, Samuel, Hetherington, Victoria, and Godfrey, Alan
- Subjects
RISK assessment, COMPUTERS, PREDICTION models, RESEARCH funding, ARTIFICIAL intelligence, PRIVACY, PRODUCT design, PILOT projects, WEARABLE technology, DESCRIPTIVE statistics, DIAGNOSIS, GAIT in humans, HOME environment, TELEMEDICINE, SOUND recordings, NEUROLOGICAL disorders, ACQUISITION of data, ARTIFICIAL neural networks, MEDICAL needs assessment, DATA analysis software, COMPARATIVE studies, INDIVIDUALIZED medicine, ACCIDENTAL falls, VIDEO recording, MEDICAL ethics, SENSITIVITY & specificity (Statistics), ALGORITHMS, OPTICAL head-mounted displays, EVALUATION
- Abstract
Wearable inertial measurement units (IMUs) are being used to quantify gait characteristics that are associated with increased fall risk, but a current limitation is the lack of contextual information that would clarify IMU data. Use of wearable video-based cameras would provide a comprehensive understanding of an individual's habitual fall risk, adding context to clarify abnormal IMU data. Generally, there is a taboo around suggesting the use of wearable cameras to capture real-world video, with clinical and patient apprehension due to ethical and privacy concerns. This perspective proposes that routine use of wearable cameras could be realized within digital medicine through AI-based computer vision models that obfuscate/blur/shade sensitive information while preserving helpful contextual information for a comprehensive patient assessment. Specifically, no person sees the raw video data to understand context; rather, AI interprets the raw video data first to blur sensitive objects and uphold privacy. That may be more routinely achievable than one imagines, as contemporary resources exist. Here, to showcase the potential, an exemplar model is suggested via off-the-shelf methods to detect and blur sensitive objects (e.g., people) with an accuracy of 88%. The benefit of the proposed approach is a more comprehensive understanding of an individual's free-living fall risk (from free-living IMU-based gait) without compromising privacy. More generally, the video and AI approach could be used beyond fall risk to better inform habitual experiences and challenges across a range of clinical cohorts. Medicine is becoming more receptive to wearables as a helpful toolbox; camera-based devices should be plausible instruments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
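The privacy mechanism described (AI blurs sensitive objects before any human views the footage) can be approximated with entirely off-the-shelf tools, in line with the paper's point that "contemporary resources exist". A minimal OpenCV sketch using the stock HOG pedestrian detector; the file names are hypothetical and the authors' own model is not reproduced here.

```python
# Minimal sketch of privacy-preserving obfuscation: detect people in a
# frame and blur them before anyone views the video. Uses OpenCV's
# built-in HOG pedestrian detector as a stand-in for the paper's model.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")          # hypothetical wearable-camera frame
rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in rects:
    roi = frame[y:y + h, x:x + w]
    # Heavy Gaussian blur obfuscates the person while keeping scene context.
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("frame_blurred.jpg", frame)
```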
9. Persisting variation in testing and reporting Clostridium difficile cases.
- Author
-
Satta, Giovanni, Parekh, Sejal, Dabrowski, Hannah, and Petkar, Hawabibee
- Subjects
CLOSTRIDIUM diseases, ANTIBIOTICS, ACADEMIC medical centers, ALGORITHMS, CLOSTRIDIOIDES difficile, HEALTH policy, MICROBIAL sensitivity tests, QUESTIONNAIRES, DIAGNOSIS
- Abstract
Previous evidence suggested significant variation in the testing algorithms used across the United Kingdom for the diagnosis of Clostridium difficile infection (CDI), and new national guidelines were issued in 2012. The main aim of this paper was to explore whether such variation in testing and reporting is still present, to compare the management of CDI cases, and to investigate whether there is any significant variation in antibiotic policies among different hospitals. Using London hospitals as a sample, we show that there is still wide variation in the testing methods and reporting used, making comparisons difficult. It is likely that the overall variability in practices would be greater at a national and, even more so, at an international level. The relationship between broad-spectrum antibiotics and C. difficile incidence, and alternative approaches in antibiotic guidelines, may require further study. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
10. Efficient Matching under Distributional Constraints: Theory and Applications†.
- Author
-
Kamada, Yuichiro and Kojima, Fuhito
- Subjects
MATCHING theory, MEDICAL care, RESIDENTS (Medicine), HOSPITALS, ALGORITHMS, GRADUATE education
- Abstract
Many real matching markets are subject to distributional constraints. These constraints often take the form of restrictions on the numbers of agents on one side of the market matched to certain subsets on the other side. Real-life examples include restrictions on regions in medical matching, academic master's programs in graduate admission, and state-financed seats for college admission. Motivated by these markets, we study design of matching mechanisms under distributional constraints. We show that existing matching mechanisms suffer from inefficiency and instability, and propose a mechanism that is better in terms of efficiency, stability, and incentives while respecting the distributional constraints. (JEL C70, D61, D63) [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
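The baseline mechanism this literature modifies is deferred acceptance with hospital capacities. A minimal Python sketch of the doctor-proposing version follows; the regional (distributional) caps that are the paper's contribution are deliberately not implemented, and the preference lists are invented for illustration.

```python
# Minimal sketch of doctor-proposing deferred acceptance with hospital
# capacities -- the baseline mechanism that the paper extends to respect
# regional (distributional) caps, which are not implemented here.
def deferred_acceptance(doc_prefs, hosp_prefs, capacity):
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hosp_prefs.items()}
    matched = {h: [] for h in hosp_prefs}
    next_choice = {d: 0 for d in doc_prefs}
    free = list(doc_prefs)
    while free:
        d = free.pop()
        if next_choice[d] >= len(doc_prefs[d]):
            continue                        # d has exhausted their list
        h = doc_prefs[d][next_choice[d]]
        next_choice[d] += 1
        matched[h].append(d)
        matched[h].sort(key=lambda x: rank[h][x])
        if len(matched[h]) > capacity[h]:   # reject least-preferred doctor
            free.append(matched[h].pop())
    return matched

doc_prefs = {"d1": ["h1", "h2"], "d2": ["h1", "h2"], "d3": ["h2", "h1"]}
hosp_prefs = {"h1": ["d1", "d3", "d2"], "h2": ["d2", "d1", "d3"]}
print(deferred_acceptance(doc_prefs, hosp_prefs, {"h1": 1, "h2": 2}))
```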
11. Let the Algorithm Decide? Algorithms fail their first test to replace student exams.
- Author
-
Edwards, Chris
- Subjects
ALGORITHMS, STANDARDIZED tests, ACADEMIC achievement
- Abstract
The article discusses the use and limitations of algorithms in Great Britain and Europe for grading students as compared to teacher assessments of students. It identifies problems with the evidence base utilized by the British agency the Office of Qualifications and Examinations Regulation (Ofqual) in algorithms and flawed statistical analysis.
- Published
- 2021
- Full Text
- View/download PDF
12. "The algorithm will screw you": Blame, social actors and the 2020 A Level results algorithm on Twitter.
- Author
-
Heaton, Dan, Nichele, Elena, Clos, Jeremie, and Fischer, Joel E.
- Subjects
CRITICAL discourse analysis, CORPORA, ALGORITHMS, SENTIMENT analysis, PUBLIC officers
- Abstract
In August 2020, the UK government and regulation body Ofqual replaced school examinations with automatically computed A Level grades in England and Wales. This algorithm factored in school attainment in each subject over the previous three years. Government officials initially stated that the algorithm was used to combat grade inflation. After public outcry, teacher assessment grades were used instead. Views concerning who was to blame for this scandal were expressed on the social media website Twitter. While previous work used NLP-based opinion-mining computational linguistic tools to analyse this discourse, shortcomings included accuracy issues, difficulties in interpretation and limited conclusions on who authors blamed. Thus, we chose to complement this research by analysing 18,239 tweets relating to the A Level algorithm using Corpus Linguistics (CL) and Critical Discourse Analysis (CDA), underpinned by social actor representation. We examined how blame was attributed to different entities who were presented as social actors or as having social agency. Through analysing transitivity in this discourse, we found the algorithm itself, the UK government and Ofqual were all implicated as potentially responsible social actors through active agency, agency metaphor possession and instances of passive constructions. According to our results, students were found to have limited blame through the same analysis. We discuss how this builds upon existing research where the algorithm is implicated and how such a wide range of constructions obscures blame. Methodologically, we demonstrated that CL and CDA complement existing NLP-based computational linguistic tools in researching the 2020 A Level algorithm; however, there is further scope for how these approaches can be used in an iterative manner. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Usability Testing of the iPhone App to Improve Pain Assessment for Older Adults with Cognitive Impairment (Prehospital Setting): A Qualitative Study.
- Author
-
Docking, Rachael E., Lane, Matthew, and Schofield, Pat A.
- Subjects
MOBILE apps, MEDICAL students, ALGORITHMS, AMBULANCES, COGNITION disorders, DELPHI method, DEMENTIA, EMERGENCY medical technicians, EMERGENCY medicine, FOCUS groups, SURVEYS, QUALITATIVE research, PAIN measurement, THEMATIC analysis, USER-centered system design, SMARTPHONES, HEALTH literacy, OLD age
- Abstract
Objectives. Pain assessment in older adults with cognitive impairment is often challenging, and paramedics are not given sufficient tools/training to assess pain. The development of a mobile app may improve pain assessment and management in this vulnerable population. We conducted usability testing of a newly developed iPhone pain assessment application with potential users, in this case as a tool for clinical paramedic practice to improve pain assessment of older adults with cognitive impairment. Methods. We conducted usability testing with paramedic students and a Delphi panel of qualified paramedics. Participants studied the app and paper-based algorithm from which the app was developed. The potential use for the app was discussed. Usability testing focus groups were recorded, transcribed verbatim, and analyzed using a thematic approach. Proposed recommendations were disseminated to the Delphi panel that reviewed and confirmed them. Results. Twenty-four paramedic students from two UK ambulance services participated in the focus groups. Usability of the app and its potential were viewed positively. Four major themes were identified: 1) overall opinion of the app for use in paramedic services; 2) incorporating technological applications into the health care setting; 3) improving knowledge and governance; and 4) alternative uses for the app. Subthemes were identified and are presented. Discussion. Our results indicate that the pain assessment app constitutes a potentially useful tool in the prehospital setting. By providing access to a tool specifically developed to help identify/assess pain in a user-friendly format, paramedics are likely to have increased knowledge and confidence in assessing pain in patients with dementia. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Understanding Contrail Business Processes through Hierarchical Clustering: A Multi-Stage Framework.
- Author
-
Tariq, Zeeshan, Khan, Naveed, Charles, Darryl, McClean, Sally, McChesney, Ian, and Taylor, Paul
- Subjects
CONDENSATION trails, PROCESS mining, ALGORITHMS
- Abstract
Real-world business processes are dynamic, with event logs that are generally unstructured and contain heterogeneous business classes. Process mining techniques derive useful knowledge from such logs, but translating them into simplified and logical segments is crucial. Complexity increases when dealing with business processes with a large number of events and no outcome labels. Techniques such as trace clustering and event clustering tend to simplify complex business logs, but the resulting clusters are generally not understandable to business users, as the business aspects of the process are not considered while clustering the process log. In this paper, we provide a multi-stage hierarchical framework for business-logic-driven clustering of highly variable process logs with an extensively large number of events. Firstly, we introduce the term contrail processes to describe the characteristics of such complex real-world business processes and their logs, which present contrail-like models. Secondly, we propose an algorithm, Novel Hierarchical Clustering (NoHiC), to discover business-logic-driven clusters from these contrail processes. For clustering, the raw event log is initially decomposed into high-level business classes, and later feature engineering is performed exclusively on the business-context features, to support the discovery of meaningful business clusters. We used a hybrid approach which combines a rule-based mining technique with a novel form of agglomerative hierarchical clustering for the experiments. A case study of a CRM process of a renowned UK telecommunications firm is presented, and the quality of the proposed framework is verified through several measures, such as cluster segregation, classification accuracy, and fitness of the log. We compared the NoHiC technique with two trace clustering techniques using two real-world process logs. The clusters discovered through NoHiC are found to have improved fitness compared with the other techniques, and they also hold valuable information about the business context of the process log. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
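The agglomerative stage of a framework like NoHiC can be sketched with standard hierarchical clustering over business-context features. A minimal SciPy example on synthetic per-trace features; the feature names are illustrative assumptions, and this is not the published NoHiC algorithm, which also includes rule-based decomposition into business classes.

```python
# Minimal sketch of agglomerative hierarchical clustering of process
# traces using business-context features only. A stand-in for NoHiC;
# the per-trace features below are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Hypothetical per-trace features: number of events, distinct
# activities, duration (hours), escalation count.
traces = np.vstack([rng.normal([10, 4, 2, 0], 1.0, (50, 4)),    # routine
                    rng.normal([40, 12, 48, 3], 3.0, (50, 4))]) # complex

Z = linkage(traces, method="ward")           # agglomerative, Ward criterion
labels = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(labels)[1:])               # sizes of the two clusters
```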
15. A Method of Estimating Time-to-Recovery for a Disease Caused by a Contagious Pathogen Such as SARS-CoV-2 Using a Time Series of Aggregated Case Reports.
- Author
-
Koutsouris, Dimitrios-Dionysios, Pitoglou, Stavros, Anastasiou, Athanasios, and Koumpouros, Yiannis
- Subjects
DISEASE progression, COMPUTER software, COVID-19, CONFIDENCE intervals, TIME, CONVALESCENCE, WORLD health, EPIDEMICS, TIME series analysis, DESCRIPTIVE statistics, SENSITIVITY & specificity (Statistics), PREDICTION models, COVID-19 pandemic, ALGORITHMS
- Abstract
During the outbreak of a disease caused by a pathogen with unknown characteristics, the uncertainty of its progression parameters can be reduced by devising methods that, based on rational assumptions, exploit available information to provide actionable insights. In this study, performed a few (~6) weeks into the outbreak of COVID-19 (caused by SARS-CoV-2), one of the most important disease parameters, the average time-to-recovery, was calculated using data publicly available on the internet (daily reported cases of confirmed infections, deaths, and recoveries), and fed into an algorithm that matches confirmed cases with deaths and recoveries. Unmatched cases were adjusted based on the matched cases calculation. The mean time-to-recovery, calculated from all globally reported cases, was found to be 18.01 days (SD 3.31 days) for the matched cases and 18.29 days (SD 2.73 days) taking into consideration the adjusted unmatched cases as well. The proposed method used limited data and provided experimental results in the same region as clinical studies published several months later. This indicates that the proposed method, combined with expert knowledge and informed calculated assumptions, could provide a meaningful calculated average time-to-recovery figure, which can be used as an evidence-based estimation to support containment and mitigation policy decisions, even at the very early stages of an outbreak. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
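The intuition behind matching confirmed cases to later recoveries can be approximated by finding the lag that best aligns the two cumulative series. The sketch below is a simplified proxy for the paper's case-matching algorithm, run on synthetic counts with a known 18-day delay built in.

```python
# Simplified proxy for estimating time-to-recovery from aggregated
# case reports: find the lag that best aligns cumulative confirmations
# with cumulative recoveries. Not the paper's matching algorithm.
import numpy as np

rng = np.random.default_rng(0)
days = 120
new_cases = rng.poisson(np.linspace(5, 80, days))
true_delay = 18
# Recoveries echo confirmations after ~18 days with some jitter.
recoveries = np.zeros(days)
for day, cases in enumerate(new_cases):
    for _ in range(cases):
        r = day + max(1, int(rng.normal(true_delay, 3)))
        if r < days:
            recoveries[r] += 1

cum_cases, cum_rec = np.cumsum(new_cases), np.cumsum(recoveries)
# Mean squared misalignment between the shifted cumulative series.
errors = [np.mean((cum_cases[: days - lag] - cum_rec[lag:]) ** 2)
          for lag in range(1, 40)]
print(f"estimated time-to-recovery: {1 + int(np.argmin(errors))} days")
```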
16. BACE and BMA Variable Selection and Forecasting for UK Money Demand and Inflation with Gretl.
- Author
-
Błażejowski, Marcin, Kwiatkowski, Jacek, and Kufel, Paweł
- Subjects
DEMAND for money, ALGORITHMS, NUMERICAL calculations, PARALLEL processing
- Abstract
In this paper, we apply Bayesian averaging of classical estimates (BACE) and Bayesian model averaging (BMA) as automatic modeling procedures for two well-known macroeconometric models: UK demand for narrow money and long-term inflation. Empirical results verify the correctness of BACE and BMA selection and exhibit similar or better forecasting performance compared with a non-pooling approach. As a benchmark, we use Autometrics—an algorithm for automatic model selection. Our study is implemented in the easy-to-use gretl packages, which support parallel processing, automate numerical calculations, and allow for efficient computations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
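The paper runs BACE/BMA in gretl; the core idea, averaging over candidate regressor subsets with weights derived from each model's fit, can be sketched in a few lines of Python with statsmodels. BIC-based weights are used here as a common approximation to posterior model probabilities; the data and regressor count are invented.

```python
# Minimal sketch of BIC-based Bayesian model averaging over all subsets
# of candidate regressors -- the idea behind BACE/BMA, here as a small
# Python stand-in for the paper's gretl implementation.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                 # 4 candidate regressors
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(size=n)

results = []
for k in range(1, 5):
    for subset in itertools.combinations(range(4), k):
        Xs = sm.add_constant(X[:, list(subset)])
        results.append((subset, sm.OLS(y, Xs).fit().bic))

# Posterior model weights proportional to exp(-BIC / 2).
bics = np.array([b for _, b in results])
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()
# Posterior inclusion probability of each regressor.
pip = [sum(wi for (s, _), wi in zip(results, w) if j in s) for j in range(4)]
print(np.round(pip, 3))   # regressors 0 and 2 should dominate
```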
17. Longitudinal cardio-respiratory fitness prediction through wearables in free-living environments.
- Author
-
Spathis, Dimitris, Perez-Pozuelo, Ignacio, Gonzales, Tomas I., Wu, Yu, Brage, Soren, Wareham, Nicholas, and Mascolo, Cecilia
- Subjects
BIOMARKERS, DEEP learning, CONFIDENCE intervals, CARDIOPULMONARY fitness, OXYGEN consumption, RESEARCH methodology, WEARABLE technology, DESCRIPTIVE statistics, ELECTROCARDIOGRAPHY, PREDICTION models, ALGORITHMS, SECONDARY analysis
- Abstract
Cardiorespiratory fitness is an established predictor of metabolic disease and mortality. Fitness is directly measured as maximal oxygen consumption (VO2 max), or indirectly assessed using heart rate responses to standard exercise tests. However, such testing is costly and burdensome because it requires specialized equipment such as treadmills and oxygen masks, limiting its utility. Modern wearables capture dynamic real-world data which could improve fitness prediction. In this work, we design algorithms and models that convert raw wearable sensor data into cardiorespiratory fitness estimates. We validate these estimates' ability to capture fitness profiles in free-living conditions using the Fenland Study (N = 11,059), along with its longitudinal cohort (N = 2675), and a third external cohort using the UK Biobank Validation Study (N = 181) who underwent maximal VO2 max testing, the gold standard measurement of fitness. Our results show that the combination of wearables and other biomarkers as inputs to neural networks yields a strong correlation to ground truth in a holdout sample (r = 0.82, 95% CI 0.80–0.83), outperforming other approaches and models, and detects fitness change over time (e.g., after 7 years). We also show how the model's latent space can be used for fitness-aware patient subtyping, paving the way to scalable interventions and personalized trial recruitment. These results demonstrate the value of wearables for estimating fitness measures that today can be obtained only with laboratory tests. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
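The modeling pattern, a neural network regressing VO2 max on wearable-derived features and evaluated by correlation with ground truth, can be sketched with scikit-learn. Synthetic data; the features and effect sizes are illustrative assumptions, not the Fenland Study inputs or the authors' deep learning architecture.

```python
# Minimal sketch of a neural network mapping wearable-derived features
# to a VO2 max estimate, scored by holdout Pearson correlation.
# Synthetic features; not the study's sensor pipeline or model.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 3000
rhr = rng.normal(62, 8, n)           # resting heart rate (bpm)
activity = rng.normal(35, 10, n)     # daily activity minutes
age = rng.normal(50, 12, n)
vo2max = (55 - 0.25 * (rhr - 60) + 0.2 * activity
          - 0.3 * (age - 50) + rng.normal(0, 3, n))
X = np.column_stack([rhr, activity, age])

X_tr, X_te, y_tr, y_te = train_test_split(X, vo2max, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32),
                                   max_iter=1000, random_state=0))
model.fit(X_tr, y_tr)
r, _ = pearsonr(y_te, model.predict(X_te))
print(f"holdout correlation r = {r:.2f}")
```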
18. Untitled.
- Subjects
HEART disease diagnosis, ARTIFICIAL intelligence, MACHINE learning, CONFERENCES & conventions, DIAGNOSTIC imaging, ALGORITHMS
- Published
- 2022
19. Higher prevalence of non-skeletal comorbidity related to X-linked hypophosphataemia: a UK parallel cohort study using CPRD.
- Author
-
Hawley, Samuel, Shaw, Nick J, Delmestri, Antonella, Prieto-Alhambra, Daniel, Cooper, Cyrus, Pinedo-Villanueva, Rafael, and Javaid, M Kassim
- Subjects
STATISTICS, X-linked genetic disorders, CONFIDENCE intervals, RICKETS, AGE distribution, SEX distribution, HYPOPHOSPHATEMIA, DISEASE prevalence, MENTAL depression, ODDS ratio, DATA analysis, COMORBIDITY, LONGITUDINAL method, PHENOTYPES, ALGORITHMS
- Abstract
Objectives X-Linked hypophosphataemic rickets (XLH) is a rare multi-systemic disease of mineral homeostasis that has a prominent skeletal phenotype. The aim of this study was to describe additional comorbidities in XLH patients compared with general population controls. Methods The Clinical Practice Research Datalink (CPRD) GOLD was used to identify a cohort of XLH patients (1995–2016), along with a non-XLH cohort matched (1:4) on age, sex and GP practice. Using the CALIBER portal, phenotyping algorithms were used to identify the first diagnosis (and associated age) of 273 comorbid conditions during patient follow-up. Fifteen major disease categories were used and the proportion of patients having ≥1 diagnosis was compared between cohorts for each category and condition. Main analyses were repeated according to the Index of Multiple Deprivation (IMD). Results There were 64 and 256 patients in the XLH and non-XLH cohorts, respectively. There was increased prevalence of endocrine [OR 3.46 (95% CI: 1.44, 8.31)] and neurological [OR 3.01 (95% CI: 1.41, 6.44)] disorders among XLH patients. Across all specific comorbidities, four were at least twice as likely to be present in XLH cases, but only depression met the Bonferroni threshold: OR 2.95 (95% CI: 1.47, 5.92). Distribution of IMD among XLH cases indicated greater deprivation than the general population. Conclusion We describe a higher risk of mental illness in XLH patients compared with matched controls, and greater than expected deprivation. These findings may have implications for clinical practice guidelines and decisions around health and social care provision for these patients. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Deep learning for automated river-level monitoring through river-camera images: an approach based on water segmentation and transfer learning.
- Author
-
Vandaele, Remy, Dance, Sarah L., and Ojha, Varun
- Subjects
DEEP learning, WATER transfer, PEARSON correlation (Statistics), ALGORITHMS, WILDLIFE monitoring, IMAGE segmentation
- Abstract
River-level estimation is a critical task required for the understanding of flood events and is often complicated by the scarcity of available data. Recent studies have proposed to take advantage of large networks of river-camera images to estimate river levels but, currently, the utility of this approach remains limited as it requires a large amount of manual intervention (ground topographic surveys and water image annotation). We have developed an approach using an automated water semantic segmentation method to ease the process of river-level estimation from river-camera images. Our method is based on the application of a transfer learning methodology to deep semantic neural networks designed for water segmentation. Using datasets of image series extracted from four river cameras and manually annotated for the observation of a flood event on the rivers Severn and Avon, UK (21 November–5 December 2012), we show that this algorithm is able to automate the annotation process with an accuracy greater than 91%. Then, we apply our approach to year-long image series from the same cameras observing the rivers Severn and Avon (from 1 June 2019 to 31 May 2020) and compare the results with nearby river-gauge measurements. Given the high correlation (Pearson's correlation coefficient >0.94) between these results and the river-gauge measurements, it is clear that our approach to automation of the water segmentation on river-camera images could allow for straightforward, inexpensive observation of flood events, especially at ungauged locations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
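The transfer-learning setup described, fine-tuning a pretrained semantic-segmentation network for a two-class water/not-water task, might look as follows with torchvision's DeepLabV3. This is a stand-in for the authors' architecture and data pipeline, with random tensors standing in for annotated river-camera frames; the torchvision weights API shown assumes torchvision >= 0.13.

```python
# Minimal sketch of transfer learning for water segmentation: load a
# pretrained segmentation network, swap in a 2-class head, freeze the
# backbone, and take one training step. Random tensors stand in for
# annotated river-camera frames; not the authors' exact setup.
import torch
from torchvision.models.segmentation import (DeepLabV3_ResNet50_Weights,
                                             deeplabv3_resnet50)

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT)
# Replace the final classifier layer with a 2-class water/background head.
model.classifier[4] = torch.nn.Conv2d(256, 2, kernel_size=1)

# Freeze the backbone; train only the new head (typical transfer learning).
for p in model.backbone.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# One illustrative step on a random batch: images (3x512x512) and
# per-pixel masks (0 = background, 1 = water).
imgs = torch.rand(2, 3, 512, 512)
masks = torch.randint(0, 2, (2, 512, 512))
model.train()
out = model(imgs)["out"]              # logits: (batch, 2, H, W)
loss = loss_fn(out, masks)
loss.backward()
opt.step()
print(f"loss: {loss.item():.3f}")
```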
21. Validity of an algorithm to identify cardiovascular deaths from administrative health records: a multi-database population-based cohort study.
- Author
-
Lix, Lisa M., Sobhan, Shamsia, St-Jean, Audray, Daigle, Jean-Marc, Fisher, Anat, Yu, Oriana H. Y., Dell'Aniello, Sophie, Hu, Nianping, Bugden, Shawn C., Shah, Baiju R., Ronksley, Paul E., Alessi-Severini, Silvia, Douros, Antonios, Ernst, Pierre, and Filion, Kristian B.
- Subjects
CARDIOVASCULAR disease related mortality, MEDICAL records, ALGORITHMS, COHORT analysis, VITAL statistics
- Abstract
Background: Cardiovascular death is a common outcome in population-based studies about new healthcare interventions or treatments, such as new prescription medications. Vital statistics registration systems are often the preferred source of information about cause-specific mortality because they capture verified information about the deceased, but they may not always be accessible for linkage with other sources of population-based data. We assessed the validity of an algorithm applied to administrative health records for identifying cardiovascular deaths in population-based data. Methods: Administrative health records were from an existing multi-database cohort study about sodium-glucose cotransporter-2 (SGLT2) inhibitors, a new class of antidiabetic medications. Data were from 2013 to 2018 for five Canadian provinces (Alberta, British Columbia, Manitoba, Ontario, Quebec) and the United Kingdom (UK) Clinical Practice Research Datalink (CPRD). The cardiovascular mortality algorithm was based on in-hospital cardiovascular deaths identified from diagnosis codes and select out-of-hospital deaths. Sensitivity, specificity, and positive and negative predictive values (PPV, NPV) were calculated for the cardiovascular mortality algorithm using vital statistics registrations as the reference standard. Overall and stratified estimates and 95% confidence intervals (CIs) were computed; the latter were produced by site, location of death, sex, and age. Results: The cohort included 20,607 individuals (58.3% male; 77.2% ≥70 years). When compared to vital statistics registrations, the cardiovascular mortality algorithm had overall sensitivity of 64.8% (95% CI 63.6, 66.0); site-specific estimates ranged from 54.8 to 87.3%. Overall specificity was 74.9% (95% CI 74.1, 75.6) and overall PPV was 54.5% (95% CI 53.7, 55.3), while site-specific PPV ranged from 33.9 to 72.8%. The cardiovascular mortality algorithm had sensitivity of 57.1% (95% CI 55.4, 58.8) for in-hospital deaths and 72.3% (95% CI 70.8, 73.9) for out-of-hospital deaths; specificity was 88.8% (95% CI 88.1, 89.5) for in-hospital deaths and 58.5% (95% CI 57.3, 59.7) for out-of-hospital deaths. Conclusions: A cardiovascular mortality algorithm applied to administrative health records had moderate validity when compared to vital statistics data. Substantial variation existed across study sites representing different geographic locations and two healthcare systems. These variations may reflect different diagnostic coding practices and healthcare utilization patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
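The validation arithmetic used above reduces to a two-by-two comparison of the algorithm's labels against the vital statistics reference standard. A minimal sketch with illustrative simulated labels (the prevalence and agreement rates below are invented, not the study's):

```python
# Minimal sketch of the validation metrics: sensitivity, specificity,
# PPV and NPV of algorithm labels against a reference standard.
# Simulated labels; the counts are illustrative only.
import numpy as np

def diagnostic_metrics(algorithm, reference):
    tp = np.sum((algorithm == 1) & (reference == 1))
    fp = np.sum((algorithm == 1) & (reference == 0))
    fn = np.sum((algorithm == 0) & (reference == 1))
    tn = np.sum((algorithm == 0) & (reference == 0))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

rng = np.random.default_rng(0)
reference = rng.binomial(1, 0.4, 20607)   # vital statistics labels
# Simulated algorithm labels: ~70% agreement with the reference.
agree = rng.random(20607) < 0.7
algorithm = np.where(agree, reference, 1 - reference)
print({k: round(v, 3)
       for k, v in diagnostic_metrics(algorithm, reference).items()})
```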
22. A Neural Network Quality-Control Scheme for Improved Quantitative Precipitation Estimation Accuracy on the U.K. Weather Radar Network.
- Author
-
Husnoo, Nawal, Darlington, Timothy, Torres, Sebastián, and Warde, David
- Subjects
WEATHER radar networks, CLUTTER (Radar), RADAR signal processing, RADAR meteorology, QUALITY control, ALGORITHMS, WEATHER, RADAR
- Abstract
In this work, we present a new quantitative precipitation estimation (QPE) quality-control (QC) algorithm for the U.K. weather radar network. The real-time adaptive algorithm uses a neural network (NN) to select data from the lowest useable elevation scan to optimize the combined performance of two other radar data correction algorithms: ground-clutter mitigation [using Clutter Environment Analysis using Adaptive Processing (CLEAN-AP)] and vertical profile of reflectivity (VPR) correction. The NN is trained using 3D tiles of observed uncontaminated weather signals that are systematically combined with ground-clutter signals collected under dry weather conditions. This approach provides a way to simulate radar signals with a wide range of clutter contamination conditions and with realistic spatial structures while providing the uncontaminated "truth" with respect to which the performance of the QC algorithm can be measured. An evaluation of QPE products obtained with the proposed QC algorithm demonstrates superior performance as compared to those obtained with the QC algorithm currently used in operations. Similar improvements are also illustrated using radar observations from two periods of prolonged precipitation, showing a better balance between overestimation errors from using clutter-contaminated low-elevation radar data and VPR-induced errors from using high-elevation radar data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
23. Abundance of undiagnosed cardiometabolic risk within the population of a long-stay prison in the UK.
- Author
-
Gray, Benjamin J, Craddock, Christie, Couzens, Zoe, Bain, Evie, Dunseath, Gareth J, Shankar, Ananda Giri, Luzio, Stephen D, and Perrett, Stephanie E
- Subjects
CARDIOVASCULAR diseases risk factors, BIOMARKERS, OBESITY, HYPERTENSION, HDL cholesterol, CORRECTIONAL institutions, PRISONERS, ANTHROPOMETRY, CARDIOVASCULAR diseases, HYPERCHOLESTEREMIA, RISK assessment, TYPE 2 diabetes, ALGORITHMS
- Abstract
Background The health of people in prisons is a public health issue. It is well known that those in prison experience poorer health outcomes than those in the general community. One such example is the burden of non-communicable diseases, more specifically cardiovascular disease (CVD), stroke and type 2 diabetes (T2DM). However, there is limited research on the extent of cardiometabolic risk factors in the prison environment in Wales, the wider UK or globally. Methods Risk assessments were performed on a representative sample of 299 men at HMP Parc, Bridgend. The risk assessments were 30 min in duration, and men aged 25–84 years old and free from pre-existing CVD and T2DM were eligible. During the risk assessment, a number of demographic, anthropometric and clinical markers were obtained. The 10-year risk of CVD and T2DM was predicted using the QRISK2 algorithm and Diabetes UK Risk Score, respectively. Results The majority of the men were found to be either overweight (43.5%) or obese (37.5%) and/or demonstrated evidence of central obesity (40.1%). Cardiometabolic risk factors including systolic hypertension (25.1%), high cholesterol (29.8%), low HDL cholesterol (56.2%) and elevated total cholesterol:HDL ratios (23.1%) were observed in a considerable number of men. Ultimately, 15.4% were calculated to be at increased risk of CVD, and 31.8% were predicted to be at moderate or high risk of T2DM. Conclusions Overall, a substantial prevalence of previously undiagnosed cardiometabolic risk factors was observed, and men in prison are at elevated risk of cardiometabolic disease at a younger age than current screening guidelines account for. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. Evaluation of the performance of algorithms mapping EORTC QLQ-C30 onto the EQ-5D index in a metastatic colorectal cancer cost-effectiveness model.
- Author
-
Franken, Mira D., de Hond, Anne, Degeling, Koen, Punt, Cornelis J. A., Koopman, Miriam, Uyl-de Groot, Carin A., Versteegh, Matthijs M., and van Oijen, Martijn G. H.
- Subjects
COLORECTAL cancer, METASTASIS, RANDOM effects model, ALGORITHMS, DIRECT costing
- Abstract
Background: Cost-effectiveness models require quality of life utilities calculated from generic preference-based questionnaires, such as EQ-5D. We evaluated the performance of available algorithms for QLQ-C30 conversion into EQ-5D-3L based utilities in a metastatic colorectal cancer (mCRC) patient population and subsequently developed a mCRC specific algorithm. Influence of mapping on cost-effectiveness was evaluated. Methods: Three available algorithms were compared with observed utilities from the CAIRO3 study. Six models were developed using 5-fold cross-validation: predicting EQ-5D-3L tariffs from QLQ-C30 functional scale scores, continuous QLQ-C30 scores or dummy levels with a random effects model (RE), a most likely probability method on EQ-5D-3L functional scale scores, a beta regression model on QLQ-C30 functional scale scores and a separate equations subgroup approach on QLQ-C30 functional scale scores. Performance was assessed, and algorithms were tested on incomplete QLQ-C30 questionnaires. Influence of utility mapping on incremental cost/QALY gained (ICER) was evaluated in an existing Dutch mCRC cost-effectiveness model. Results: The available algorithms yielded mean utilities of 1: 0.87 ± sd:0.14, 2: 0.81 ± 0.15 (both Dutch tariff) and 3: 0.81 ± sd:0.19. Algorithms 1 and 3 were significantly different from the mean observed utility (0.83 ± 0.17 with Dutch tariff, 0.80 ± 0.20 with U.K. tariff). All new models yielded predicted utilities drawing close to observed utilities; differences were not statistically significant. The existing algorithms resulted in an ICER difference of €10,140 less and €1765 more compared to the observed EQ-5D-3L based ICER (€168,048). The preferred newly developed algorithm was €5094 higher than the observed EQ-5D-3L based ICER. Disparity was explained by minimal differences in incremental QALYs between models. Conclusion: Available mapping algorithms sufficiently accurately predict utilities. With the commonly used statistical methods, we did not succeed in developing an improved mapping algorithm. Importantly, cost-effectiveness outcomes in this study were comparable to the original model outcomes between different mapping algorithms. Therefore, mapping can be an adequate solution for cost-effectiveness studies using either a previously designed and validated algorithm or an algorithm developed in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. Development and validation of predictive models for QUiPP App v.2: tool for predicting preterm birth in asymptomatic high-risk women.
- Author
-
Watson, H. A., Seed, P. T., Carter, J., Hezelgrave, N. L., Kuhrt, K., Tribe, R. M., and Shennan, A. H.
- Subjects
PREMATURE labor, PREDICTION models, MULTIPLE pregnancy, MODEL validation, BIOMARKERS, FIBRONECTINS, RESEARCH, PREMATURE infants, PRENATAL diagnosis, PREDICTIVE tests, MOBILE apps, RESEARCH methodology, HIGH-risk pregnancy, GESTATIONAL age, PHARMACOKINETICS, MEDICAL cooperation, EVALUATION research, RISK assessment, COMPARATIVE studies, SYMPTOMS, RESEARCH funding, RECEIVER operating characteristic curves, LONGITUDINAL method, FETAL ultrasonic imaging, ALGORITHMS
- Abstract
Objectives: Accurate mid-pregnancy prediction of spontaneous preterm birth (sPTB) is essential to ensure appropriate surveillance of high-risk women. Advancing the QUiPP App prototype, QUiPP App v.2 aimed to provide individualized risk of delivery based on cervical length (CL), quantitative fetal fibronectin (qfFN) or both tests combined, taking into account further risk factors, such as multiple pregnancy. Here we report development of the QUiPP App v.2 predictive models for use in asymptomatic high-risk women, and validation using a distinct dataset in order to confirm the accuracy and transportability of the QUiPP App, overall and within specific clinically relevant time frames. Methods: This was a prospective secondary analysis of data of asymptomatic women at high risk of sPTB recruited in 13 UK preterm birth clinics. Women were offered longitudinal qfFN testing every 2-4 weeks and/or transvaginal ultrasound CL measurement between 18+0 and 36+6 weeks' gestation. A total of 1803 women (3878 visits) were included in the training set and 904 women (1400 visits) in the validation set. Prediction models were created based on the training set for use in three groups: patients with risk factors for sPTB and CL measurement alone, with risk factors for sPTB and qfFN measurement alone, and those with risk factors for sPTB and both CL and qfFN measurements. Survival analysis was used to identify the significant predictors of sPTB, and parametric structures for survival models were compared and the best selected. The estimated overall probability of delivery before six clinically important time points (< 30, < 34 and < 37 weeks' gestation and within 1, 2 and 4 weeks after testing) was calculated for each woman and analyzed as a predictive test for the actual occurrence of each event. This allowed receiver-operating-characteristic curves to be plotted, and areas under the curve (AUC) to be calculated. Calibration was performed to measure the agreement between expected and observed outcomes. Results: All three algorithms demonstrated high accuracy for the prediction of sPTB at < 30, < 34 and < 37 weeks' gestation and within 1, 2 and 4 weeks of testing, with AUCs between 0.75 and 0.90 for the use of qfFN and CL combined, between 0.68 and 0.90 for qfFN alone, and between 0.71 and 0.87 for CL alone. The differences between the three algorithms were not statistically significant. Calibration confirmed no significant differences between expected and observed rates of sPTB within 4 weeks and a slight overestimation of risk with the use of CL measurement between 22+0 and 25+6 weeks' gestation. Conclusions: The QUiPP App v.2 is a highly accurate prediction tool for sPTB that is based on a unique combination of biomarkers, symptoms and statistical algorithms. It can be used reliably in the context of communicating to patients the risk of sPTB. Whilst further work is required to determine its role in identifying women requiring prophylactic interventions, it is a reliable and convenient screening tool for planning follow-up or hospitalization for high-risk women. Copyright © 2019 ISUOG. Published by John Wiley & Sons Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Development and validation of predictive models for QUiPP App v.2: tool for predicting preterm birth in women with symptoms of threatened preterm labor.
- Author
-
Carter, J., Seed, P. T., Watson, H. A., David, A. L., Sandall, J., Shennan, A. H., and Tribe, R. M.
- Subjects
PREMATURE labor, PREDICTION models, AMNIOTIC liquid, AKAIKE information criterion, MODEL validation, FIBRONECTINS, RESEARCH, PRENATAL diagnosis, PREMATURE infants, PREDICTIVE tests, MOBILE apps, RESEARCH methodology, HIGH-risk pregnancy, GESTATIONAL age, PHARMACOKINETICS, MEDICAL cooperation, EVALUATION research, RISK assessment, COMPARATIVE studies, RESEARCH funding, RECEIVER operating characteristic curves, LONGITUDINAL method, FETAL ultrasonic imaging, PROBABILITY theory, ALGORITHMS
- Abstract
Copyright of Ultrasound in Obstetrics & Gynecology is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2020
- Full Text
- View/download PDF
27. Cardiovascular disease risk prediction using automated machine learning: A prospective study of 423,604 UK Biobank participants.
- Author
-
Alaa, Ahmed M., Bolton, Thomas, Di Angelantonio, Emanuele, Rudd, James H. F., and van der Schaar, Mihaela
- Subjects
ELECTION forecasting, RECEIVER operating characteristic curves, RISK perception, MACHINE learning, LONGITUDINAL method, SYSTOLIC blood pressure, CARDIOVASCULAR diseases
- Abstract
Background: Identifying people at risk of cardiovascular diseases (CVD) is a cornerstone of preventative cardiology. Risk prediction models currently recommended by clinical guidelines are typically based on a limited number of predictors with sub-optimal performance across all patient groups. Data-driven techniques based on machine learning (ML) might improve the performance of risk predictions by agnostically discovering novel risk predictors and learning the complex interactions between them. We tested (1) whether ML techniques based on a state-of-the-art automated ML framework (AutoPrognosis) could improve CVD risk prediction compared to traditional approaches, and (2) whether considering non-traditional variables could increase the accuracy of CVD risk predictions. Methods and findings: Using data on 423,604 participants without CVD at baseline in UK Biobank, we developed an ML-based model for predicting CVD risk based on 473 available variables. Our ML-based model was derived using AutoPrognosis, an algorithmic tool that automatically selects and tunes ensembles of ML modeling pipelines (comprising data imputation, feature processing, classification and calibration algorithms). We compared our model with a well-established risk prediction algorithm based on conventional CVD risk factors (Framingham score), a Cox proportional hazards (PH) model based on familiar risk factors (i.e., age, gender, smoking status, systolic blood pressure, history of diabetes, reception of treatments for hypertension and body mass index), and a Cox PH model based on all of the 473 available variables. Predictive performances were assessed using area under the receiver operating characteristic curve (AUC-ROC). Overall, our AutoPrognosis model improved risk prediction (AUC-ROC: 0.774, 95% CI: 0.768-0.780) compared to Framingham score (AUC-ROC: 0.724, 95% CI: 0.720-0.728, p < 0.001), Cox PH model with conventional risk factors (AUC-ROC: 0.734, 95% CI: 0.729-0.739, p < 0.001), and Cox PH model with all UK Biobank variables (AUC-ROC: 0.758, 95% CI: 0.753-0.763, p < 0.001). Out of 4,801 CVD cases recorded within 5 years of baseline, AutoPrognosis was able to correctly predict 368 more cases compared to the Framingham score. Our AutoPrognosis model included predictors that are not usually considered in existing risk prediction models, such as the individuals’ usual walking pace and their self-reported overall health rating. Furthermore, our model improved risk prediction in potentially relevant sub-populations, such as in individuals with a history of diabetes. We also highlight the relative benefits accrued from including more information into a predictive model (information gain) as compared to the benefits of using more complex models (modeling gain). Conclusions: Our AutoPrognosis model improves the accuracy of CVD risk prediction in the UK Biobank population. This approach performs well in traditionally poorly served patient subgroups. Additionally, AutoPrognosis uncovered novel predictors for CVD disease that may now be tested in prospective studies. We found that the “information gain” achieved by considering more risk factors in the predictive model was significantly higher than the “modeling gain” achieved by adopting complex predictive models. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Automatic parameter tuning for functional regionalization methods.
- Author
-
Casado-Díaz, José Manuel, Martínez-Bernabéu, Lucas, and Flórez-Revuelta, Francisco
- Subjects
LABOR market, MATHEMATICAL optimization, GENETIC algorithms, ALGORITHMS
- Abstract
Copyright of Papers in Regional Science is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2017
- Full Text
- View/download PDF
29. Development of an algorithm for determining smoking status and behaviour over the life course from UK electronic primary care records.
- Author
-
Atkinson, Mark D., Kennedy, Jonathan I., John, Ann, Lewis, Keir E., Lyons, Ronan A., Brophy, Sinead T., and DEMISTIFY Research Group
- Subjects
PRIMARY care, FAMILY medicine, MEDICAL care, SMOKING cessation, CIGARETTE smokers, ALGORITHMS, HEALTH behavior, MEDICAL record linkage, PRIMARY health care, RESEARCH funding, SMOKING, DISEASE prevalence
- Abstract
Background: Patients' smoking status is routinely collected by General Practitioners (GP) in UK primary health care. There is an abundance of Read codes pertaining to smoking, including those relating to smoking cessation therapy, prescription, and administration codes, in addition to the more regularly employed smoking status codes. Large databases of primary care data are increasingly used for epidemiological analysis; smoking status is an important covariate in many such analyses. However, the variable definition is rarely documented in the literature. Methods: The Secure Anonymised Information Linkage (SAIL) databank is a repository for a national collection of person-based anonymised health and socio-economic administrative data in Wales, UK. An exploration of GP smoking status data from the SAIL databank was carried out to explore the range of codes available and how they could be used in the identification of different categories of smokers, ex-smokers and never smokers. An algorithm was developed which addresses inconsistencies and changes in smoking status recording across the life course and compared with recorded smoking status as recorded in the Welsh Health Survey (WHS), 2013 and 2014 at individual level. However, the WHS could not be regarded as a "gold standard" for validation. Results: There were 6836 individuals in the linked dataset. Missing data were more common in GP records (6%) than in WHS (1.1%). Our algorithm assigns ex-smoker status to 34% of never-smokers, and detects 30% more smokers than are declared in the WHS data. When distinguishing between current smokers and non-smokers, the similarity between the WHS and GP data using the nearest date of comparison was κ = 0.78. When temporal conflicts had been accounted for, the similarity was κ = 0.64, showing the importance of addressing conflicts. Conclusions: We present an algorithm for the identification of a patient's smoking status using GP self-reported data. We have included sufficient details to allow others to replicate this work, thus increasing the standards of documentation within this research area and assessment of smoking status in routine data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
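The agreement figures reported above (κ = 0.78 nearest-date, κ = 0.64 after resolving temporal conflicts) are Cohen's kappa scores. A minimal sketch of that comparison on toy data follows; the status labels and records below are hypothetical stand-ins, not the SAIL or WHS data or the DEMISTIFY algorithm itself:

# Sketch: compare two smoking-status classifications with Cohen's kappa.
# Toy data only; the study links SAIL GP records to WHS survey responses.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-person status from GP records and from the survey:
gp_status  = ["smoker", "non-smoker", "smoker", "non-smoker", "non-smoker", "smoker"]
whs_status = ["smoker", "non-smoker", "non-smoker", "non-smoker", "non-smoker", "smoker"]

# Cohen's kappa corrects raw agreement for agreement expected by chance,
# which matters when one category (e.g. non-smoker) dominates.
kappa = cohen_kappa_score(gp_status, whs_status)
print(f"kappa = {kappa:.2f}")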
30. Object Detection at Level Crossing Using Deep Learning.
- Author
-
Fayyaz, Muhammad Asad Bilal and Johnson, Christopher
- Subjects
RAILROADS ,SYSTEM integration ,DEPENDENCY (Psychology) ,INTERSECTION numbers ,SAFETY standards ,PEDESTRIANS ,RADARSAT satellites - Abstract
Multiple projects within the rail industry across different regions have been initiated to address the issue of over-population. These expansion plans and technology upgrades increase the number of intersections, junctions, and level crossings. A level crossing is where a railway line is crossed by a road or right of way on the level, without the use of a tunnel or bridge. Level crossings still pose a significant risk to the public and often lead to serious accidents between rail, road, and footpath users; the risk is dependent on their unpredictable behavior. In Great Britain, there were three fatalities and 385 near misses at level crossings in 2015–2016. Furthermore, in its annual safety report, the Rail Safety and Standards Board (RSSB) highlighted the risk of incidents at level crossings during 2016/17, with a further six fatalities at level crossings including four pedestrians and two road vehicle users. The relevant authorities have suggested an upgrade of the existing sensing system and the integration of novel technology at level crossings. The present work addresses this key issue and discusses the current sensing systems along with the algorithms used for post-processing their information. The given information is adequate for a manual operator to make a decision or start an automated operational cycle. Traditional sensors have certain limitations and are often installed as a "single sensor"; a single sensor does not provide sufficient information, hence another sensor is required. The algorithms integrated with these sensing systems rely on the traditional approach of comparing background pixels with new pixels, which is not effective in a dynamic and complex environment. The proposed model integrates deep learning with the current vision system (e.g., CCTV) to detect and localize objects at a level crossing. The proposed sensing system should be able to detect and localize particular objects (e.g., pedestrians, bicycles, and vehicles) in level crossing areas. A radar system is also discussed for a "two out of two" logic interlocking system as a fail-safe mechanism. Different techniques to train a deep learning model are discussed along with their respective results. The model achieved an accuracy of about 88% with the MobileNet model for classification and a loss metric of 0.092 for object detection. Some related future work is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
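The classifier described above is MobileNet-based. A minimal Keras sketch of fine-tuning a pretrained MobileNet backbone for a handful of level-crossing classes follows; the class set, input size and training schedule are illustrative assumptions, not the authors' configuration:

# Sketch: fine-tune a MobileNet image classifier (illustrative setup only).
import tensorflow as tf

NUM_CLASSES = 3  # hypothetical classes: pedestrian, vehicle, clear crossing

# Pretrained MobileNet backbone without its ImageNet classification head.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets assumed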
31. Identity-by-descent detection across 487,409 British samples reveals fine scale population structure and ultra-rare variant associations.
- Author
-
Nait Saada, Juba, Kalantzis, Georgios, Shyr, Derek, Cooper, Fergus, Robinson, Martin, Gusev, Alexander, and Palamara, Pier Francesco
- Subjects
HEURISTIC ,ALGORITHMS ,GENEALOGY ,ANCESTORS - Abstract
Detection of Identical-By-Descent (IBD) segments provides a fundamental measure of genetic relatedness and plays a key role in a wide range of analyses. We develop FastSMC, an IBD detection algorithm that combines a fast heuristic search with accurate coalescent-based likelihood calculations. FastSMC enables biobank-scale detection and dating of IBD segments within several thousands of years in the past. We apply FastSMC to 487,409 UK Biobank samples and detect ~214 billion IBD segments transmitted by shared ancestors within the past 1500 years, obtaining a fine-grained picture of genetic relatedness in the UK. Sharing of common ancestors strongly correlates with geographic distance, enabling the use of genomic data to localize a sample's birth coordinates with a median error of 45 km. We seek evidence of recent positive selection by identifying loci with unusually strong shared ancestry and detect 12 genome-wide significant signals. We devise an IBD-based test for association between phenotype and ultra-rare loss-of-function variation, identifying 29 association signals in 7 blood-related traits. Accurately measuring genetic relatedness through IBD segments is challenging in biobank-scale genome data; the authors present the IBD detection method FastSMC, which, when applied to the UK Biobank, gives a detailed picture of genetic relatedness and evolutionary history in the UK over the past 2000 years. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
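FastSMC itself pairs a heuristic candidate search with coalescent-based likelihood calculations and is well beyond a short snippet, but a naive identity-by-state scan over two phased haplotypes illustrates the candidate-search idea: find long runs with no mismatching alleles. The arrays and length threshold below are toy assumptions, not the published algorithm:

# Sketch: naive scan for long identical stretches between two haplotypes.
# Illustrates candidate-segment detection only; FastSMC additionally scores
# candidates with coalescent-based likelihoods and dates the segments.
import numpy as np

def shared_segments(hap_a, hap_b, min_sites=100):
    """Return (start, end) index pairs where the haplotypes match
    at min_sites or more consecutive sites."""
    match = np.asarray(hap_a) == np.asarray(hap_b)
    segments, start = [], None
    for i, same in enumerate(match):
        if same and start is None:
            start = i
        elif not same and start is not None:
            if i - start >= min_sites:
                segments.append((start, i))
            start = None
    if start is not None and len(match) - start >= min_sites:
        segments.append((start, len(match)))
    return segments

# Toy haplotypes: identical except for two mismatching sites.
rng = np.random.default_rng(0)
hap = rng.integers(0, 2, size=500)
other = hap.copy()
other[120] ^= 1
other[380] ^= 1
print(shared_segments(hap, other))  # three candidate stretches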
32. An open-source tool to identify active travel from hip-worn accelerometer, GPS and GIS data.
- Author
-
Procter, Duncan S., Page, Angie S., Cooper, Ashley R., Nightingale, Claire M., Ram, Bina, Rudnicka, Alicja R., Whincup, Peter H., Clary, Christelle, Lewis, Daniel, Cummins, Steven, Ellaway, Anne, Giles-Corti, Billie, Cook, Derek G., and Owen, Christopher G.
- Subjects
ALGORITHMS ,AUTOMOBILES ,COMPUTER software ,CYCLING ,FORECASTING ,GEOGRAPHIC information systems ,GLOBAL Positioning System ,RESEARCH evaluation ,TIME ,TRANSPORTATION ,WALKING ,ACCELEROMETRY ,PHYSICAL activity ,DESCRIPTIVE statistics - Abstract
Background: Increases in physical activity through active travel have the potential to have large beneficial effects on populations, through both better health outcomes and reduced motorised traffic. However, accurately identifying travel mode in large datasets is problematic. Here we provide an open-source tool to quantify time spent stationary and in four travel modes (walking, cycling, train, motorised vehicle) from accelerometer-measured physical activity data combined with GPS and GIS data. Methods: The Examining Neighbourhood Activities in Built Living Environments in London (ENABLE London) study evaluates the effect of the built environment on health behaviours, including physical activity. Participants wore accelerometers and GPS receivers on the hip for 7 days. We time-matched accelerometer and GPS data, then extracted data from the commutes of 326 adult participants, using stated commute times and modes, which were manually checked to confirm stated travel mode. This yielded examples of five travel modes: walking, cycling, motorised vehicle, train and stationary. We used these example data to train a gradient boosted tree, a form of supervised machine learning algorithm, on each data point (131,537 points) rather than on journeys. Accuracy during training was assessed using five-fold cross-validation. We also manually identified the travel behaviour of 21 participants from ENABLE London (402,749 points) and 10 participants from a separate study (STAMP-2, 210,936 points) who were not included in the training data, and compared our predictions against this manual identification to further test accuracy and generalisability. Results: Applying the algorithm, we correctly identified travel mode 97.3% of the time in cross-validation (mean sensitivity 96.3%, mean active travel sensitivity 94.6%). We showed 96.0% agreement between manual identification and prediction of 21 individuals' travel modes (mean sensitivity 92.3%, mean active travel sensitivity 84.9%) and 96.5% agreement between the STAMP-2 study and predictions (mean sensitivity 85.5%, mean active travel sensitivity 78.9%). Conclusion: We present a generalisable tool that identifies time spent stationary and time spent walking with very high precision, time spent in trains or vehicles with good precision, and time spent cycling with moderate precision. In studies where both accelerometer and GPS data are available, this tool complements analyses of physical activity, showing whether differences in physical activity may be explained by differences in travel mode. All code necessary to replicate, fit and predict to other datasets is provided to facilitate use by other researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
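A minimal scikit-learn sketch of the per-point setup described above — a gradient boosted tree assessed with five-fold cross-validation. The features, class labels and synthetic data are illustrative assumptions, not the ENABLE London variables or the authors' published code:

# Sketch: per-point travel-mode classification with a gradient boosted tree,
# assessed by five-fold cross-validation (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical per-point features: accelerometer counts, GPS speed (km/h),
# and distance to the nearest railway line (m, derived from GIS).
X = np.column_stack([
    rng.gamma(2.0, 50.0, n),    # activity counts
    rng.gamma(2.0, 5.0, n),     # speed
    rng.gamma(2.0, 200.0, n),   # distance to rail
])
y = rng.integers(0, 5, n)  # 0=stationary, 1=walk, 2=cycle, 3=train, 4=vehicle

clf = GradientBoostingClassifier()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")  # ~chance here: labels are random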