121 results for "John McCall"
Search Results
2. Prevalence, management and efficacy of treatment in portal vein obstruction after paediatric liver transplantation: protocol of the retrospective international multicentre PORTAL registry
- Author
Barbara E Wildhaber, Martin de Santibañes, Victoria Ardiles, Ruben H J de Kleine, Raj Prasad, Bhargava Mullapudi, Jai Patel, Reinoud P H Bokkers, Ekkehard Sturm, Simon McGuirk, Girish Gupte, John McCall, Richard J Hendrickson, Winita Hardikar, Helen Evans, Khalid Sharif, Marumbo Mtegha, Amar Mukund, David Duncan, Emmanuel Gonzales, Marco Spada, Mureo Kasahara, Denise Aldrian, Bader A Alfares, Hubert P J van der Doef, Thomas Casswall, Greg Nowak, Martin Delle, Valeria Berchtold, Georg F Vogel, Piotr Kaliciński, Malgorzata Markiewicz-Kijewska, Adam Kolesnik, Jesús Q Bernabeu, María Mercadal Hally, Mauricio Larrarte K, Paolo Marra, Michela Bravi, Domenico Pinelli, Seisuke Sakamoto, Hajime Uchida, Vidyadhar Mali, Marion Aw, Stéphanie Franchi-Abella, Florent Guérin, Guillermo Cervio, Julia Minetto, Sergio Sierre, Jimmy Walker Uno, Steffen Hartleif, Cristina T Ferreira, Luiza S Nader, Marco Farina, Catalina Jaramillo, Manuel I Rodriguez-Davalos, Peter Feola, Amit A Shah, Phoebe M Wood, Michael R Acord, Ryan T Fischer, Rajeev Khanna, Viniyendra Pamecha, Gilda Porta, Tommaso Alterio, Giuseppe Maggiore, Marisa Beretta, and Rudi Dierckx
- Subjects
Medicine
- Abstract
Introduction: Portal vein obstruction (PVO) consists of anastomotic stenosis and thrombosis, which occurs due to a progression of the former. The aim of this large-scale international study is to assess the prevalence, current management practices and efficacy of treatment in patients with PVO. Methods and analysis: The Portal vein Obstruction Revascularisation Therapy After Liver transplantation registry will facilitate an international, retrospective, multicentre, observational study, with 25 centres around the world already actively involved. Paediatric patients (aged
- Published
- 2023
3. Persistent opioid use and opioid-related harm after hospital admissions for surgery and trauma in New Zealand: a population-based cohort study
- Author
Peter Jones, Matthew Moore, Amy Hai Yan Chan, Chris Frampton, Doug Campbell, Jiayi Gong, Alan Forbes Merry, Kebede A Beyene, and John McCall
- Subjects
Medicine
- Abstract
Introduction: Opioid use has increased globally for the management of chronic non-cancer-related pain. There are concerns regarding the misuse of opioids leading to persistent opioid use and subsequent hospitalisation and deaths in developed countries. Hospital admissions related to surgery or trauma have been identified as contributing to the increasing opioid use internationally. There are minimal data on persistent opioid use and opioid-related harm in New Zealand (NZ), and how hospital admission for surgery or trauma contributes to this. We aim to describe rates and identify predictors of persistent opioid use among opioid-naïve individuals following hospital discharge for surgery or trauma. Methods and analysis: This is a population-based, retrospective cohort study using linked data from national health administrative databases for opioid-naïve patients who have had surgery or trauma in NZ between January 2006 and December 2019. Linked data will be used to identify variables of interest including all types of hospital surgeries in NZ, all trauma hospital admissions, opioid dispensing, comorbidities and sociodemographic variables. The primary outcome of this study will be the prevalence of persistent opioid use. Secondary outcomes will include mortality, opioid-related harms and hospitalisation. We will compare the secondary outcomes between persistent and non-persistent opioid user groups. To compute rates, we will divide the total number of outcome events by total follow-up time. Multivariable logistic regression will be used to identify predictors of persistent opioid use. Multivariable Cox regression models will be used to estimate the risk of opioid-related harms and hospitalisation as well as all-cause mortality among the study cohort in the year following hospital discharge for surgery or trauma. Ethics and dissemination: This study has been approved by the Auckland Health Research Ethics Committee (AHREC-AH1159). Results will be reported in accordance with the Reporting of studies Conducted using Observational Routinely collected health data statement (RECORD).
- Published
- 2021
4. Δ133p53 isoform promotes tumour invasion and metastasis via interleukin-6 activation of JAK-STAT and RhoA-ROCK signalling
- Author
Hamish Campbell, Nicholas Fleming, Imogen Roth, Sunali Mehta, Anna Wiles, Gail Williams, Claire Vennin, Nikola Arsic, Ashleigh Parkin, Marina Pajic, Fran Munro, Les McNoe, Michael Black, John McCall, Tania L. Slatter, Paul Timpson, Roger Reddel, Pierre Roux, Cristin Print, Margaret A. Baird, and Antony W. Braithwaite
- Subjects
Science
- Abstract
Aberrant expression of the Δ133p53 isoform is linked to many cancers. Here, the authors utilise a model of the Δ133p53 isoform that is prone to tumours and inflammation, showing that Δ133p53 promotes tumour cell invasion by activation of the JAK-STAT and RhoA-ROCK pathways in an IL-6 dependent manner.
- Published
- 2018
5. On the Elusivity of Dynamic Optimisation Problems
- Author
Joan Alza, Mark Bartlett, Josu Ceberio, and John McCall
- Subjects
General Computer Science, Dynamic Optimization Problems, Elusivity, Adaptation Advantage, Online Solving, Restarting approach, Dynamic Benchmark Generators, Classification schemes, General Mathematics
- Abstract
This is a repository containing the code and results of the work "On the Elusivity of Dynamic Optimisation Problems". The repository divides the information into five folders: Code, Results, R_Files and Plots. Please see the Readme file for more information.
- Published
- 2022
6. Impact of Clinical Data Veracity on Cancer Genomic Research
- Author
Sunali Mehta, Deborah Wright, Michael A Black, Arend Merrie, Ahmad Anjomshoaa, Fran Munro, Anthony Reeve, John McCall, and Cristin Print
- Subjects
Cancer Research, Oncology, Neoplasms, Humans, Computer Simulation, Genomics
- Abstract
Genomic analysis of tumors is transforming our understanding of cancer. However, although a great deal of attention is paid to the accuracy of the cancer genomic data itself, less attention has been paid to the accuracy of the associated clinical information that renders the genomic data useful for research. In this brief communication, we suggest that omissions and errors in clinical annotations have a major impact on the interpretation of cancer genomic data. We describe our discovery of annotation omissions and errors when reviewing an already carefully annotated colorectal cancer gene expression dataset from our laboratory. The potential importance of clinical annotation omissions and errors was then explored using simulation analyses with an independent genomic dataset. We suggest that the completeness and veracity of clinical annotations accompanying cancer genomic data require renewed focus by the oncology research community, when planning new collections and when interpreting existing cancer genomic data.
- Published
- 2022
7. A comparative study of anomaly detection methods for gross error detection problems
- Author
Daniel Dobos, Tien Thanh Nguyen, Truong Dang, Allan Wilson, Helen Corbett, John McCall, and Phil Stockton
- Subjects
General Chemical Engineering, Computer Science Applications
- Published
- 2023
8. Expansion of Liver Transplantation Criteria for Hepatocellular Carcinoma from Milan to UCSF in Australia and New Zealand and Justification for Metroticket 2.0
- Author
Savio G. Barreto, Simone I. Strasser, Geoffrey W. McCaughan, Michael A. Fink, Robert Jones, John McCall, Stephen Munn, Graeme A. Macdonald, Peter Hodgkinson, Gary P. Jeffrey, Bryon Jaques, Michael Crawford, Mark E. Brooke-Smith, and John W. Chen
- Subjects
Cancer Research, Oncology, hepatitis, outcomes, survival, Metroticket 2.0, Milan, UCSF, education
- Abstract
Background: Expansion in liver transplantation (LT) criteria for HCC from Milan to UCSF has not adversely impacted overall survival, prompting further expansion towards Metroticket 2.0 (MT2). In this study, we compared patient survival post-transplant before and after 2007 and long-term outcomes for LT within Milan versus UCSF criteria (to determine the true benefit of the expansion of criteria) and retrospectively validated the MT2 criteria. Methods: Retrospective analysis of ANZLITR (including all patients transplanted for HCC since July 1997). The entire cohort was divided based on criteria used at the time of listing, namely, the Milan era (1997–2006) and the UCSF era (2007–July 2015). Results: The overall 5- and 10-year cumulative survival rates for the entire cohort of 691 patients were 78% and 69%, respectively. Patients transplanted in the UCSF era had significantly higher 5- and 10-year survival rates than in the Milan era (80% vs. 73% and 72% vs. 65%, respectively; p = 0.016). In the UCSF era, the 5-year survival rate for patients transplanted within Milan criteria was significantly better than for those transplanted outside Milan but within UCSF criteria (83% vs. 73%; p < 0.024). Patients transplanted within the MT2 criteria had significantly better 5- and 10-year survival rates as compared to those outside the criteria (81% vs. 64% and 73% vs. 50%, respectively; p = 0.001). Conclusion: Overall survival following LT for HCC has significantly improved over time despite expanding criteria from Milan to UCSF. Patients fulfilling the MT2 criteria have a survival comparable to the UCSF cohort. Thus, expansion of criteria to MT2 is justifiable.
- Published
- 2022
9. Analysing the Fitness Landscape Rotation for Combinatorial Optimisation
- Author
Joan Alza, Mark Bartlett, Josu Ceberio, and John McCall
- Subjects
Landscape Rotation, Combinatorial Optimisation, Group Theory, Local Search
- Abstract
Fitness landscape rotation has been widely used in the field of dynamic combinatorial optimisation to generate test problems for academic purposes. This method changes the mapping between solutions and objective values, but preserves the structure of the fitness landscape. In this work, the rotation of the landscape in the combinatorial domain is theoretically analysed using concepts of discrete mathematics. In particular, the preservation of the neighbourhood relationship between solutions and the alteration of the fitness of the landscape are studied in detail. Based on the theoretical insights obtained, landscape rotation is used to implement a strategy for escaping local optima when local search-based algorithms get stuck. The conducted experiments confirm the efficiency of landscape rotation applied to local search algorithms for escaping local optima on the linear ordering problem.
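The abstract's core claim, that rotation reshuffles which solution receives which objective value while leaving the neighbourhood structure intact, can be sketched concretely. Below is a minimal Python illustration using the common XOR formulation on a pseudo-Boolean (bitstring) landscape; the paper itself works with permutation spaces such as the linear ordering problem, so this binary version is an illustrative analogy, not the authors' implementation, and the function names are hypothetical:

```python
import random

def make_rotated(f, mask):
    """Rotate a pseudo-Boolean landscape: evaluate f at x XOR mask.

    Since (x ^ mask) differs from (y ^ mask) in exactly the bits where
    x differs from y, Hamming neighbours stay neighbours: the mapping
    from solutions to objective values changes, the structure does not.
    """
    return lambda x: f(x ^ mask)

def onemax(x):
    # Toy objective: number of 1-bits in the solution (an int bitstring).
    return bin(x).count("1")

n = 12
mask = random.getrandbits(n)
rotated = make_rotated(onemax, mask)

# The global optimum moves from all-ones to (all-ones XOR mask)...
assert rotated((2**n - 1) ^ mask) == n
# ...and the multiset of objective values over the space is unchanged,
# because x -> x ^ mask is a bijection on {0, ..., 2^n - 1}.
assert sorted(map(onemax, range(2**n))) == sorted(map(rotated, range(2**n)))
```

In the escape strategy the abstract describes, one would apply such a rotation when local search stalls: the incumbent keeps its position but its fitness ranking relative to its neighbours changes, which can free the search from a local optimum.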
- Published
- 2022
10. On the Elusivity of Dynamic Combinatorial Optimisation Problems
- Author
Joan Alza, Mark Bartlett, Josu Ceberio, and John McCall
- Subjects
Dynamic Optimization Problems, Elusivity, Adaptation Advantage, Online Solving, Restarting approach, Dynamic Benchmark Generators, Classification schemes
- Abstract
This is a repository containing the code and results of the work "On the Elusivity of Dynamic Combinatorial Optimisation Problems". The repository divides the information into five folders: Input, Code, Results, R_code and Plots. Please see the Readme file for more information.
- Published
- 2021
11. Abstract A033: The iCCARE Consortium for Prostate Cancer in Black men: Creating a survivorship care plan for Black prostate cancer survivors
- Author
Kimlin T. Ashing, Folakemi T. Odedina, Cassandra N. Moore, Che Ngufor, Getachew A. Dagne, Fornati Bedell, Diana Londoño, John McCall, Arnold Merriweather, JoAnne S. Oliver, and Rotimi Rotimi Oladapo
- Subjects
Oncology, Epidemiology
- Abstract
Background: Globally, Black men suffer the greatest prostate cancer (CaP) burden. Blacks are diagnosed more often at advanced stages of CaP, with greater morbidity and mortality and poorer survivorship outcomes. Blacks are exposed to adverse, discriminatory societal determinants. How these determinants impact CaP disparities, in particular survivorship and health-related quality of life (HRQOL) inequities, is understudied. Studies examining patient-reported outcomes of Black CaP survivors reveal heightened negative HRQOL sequelae, and depression, family, work and living situation instability due to cancer treatment and societal determinants of health (SDOH). SDOH is used mostly in Public Health to predict population health risk and outcomes, but rarely integrated into HRQOL research and practice. This project is novel in incorporating SDOH into the assessment and relief of HRQOL threats. Despite the Black CaP disparities in survivorship, there remains an unacceptable lack of focus and prioritization to provide comprehensive relief. Methods: In response to this noticeable scientific gap, our requisite multidisciplinary investigatory team, including Survivor-Advocate Investigators, joins forces to achieve the study goal. We are employing community-engaged research practice to create a CaP survivorship care plan (SCP) template targeted to Blacks. Our SCP will provide a comprehensive best-practice roadmap to document medical information with treatment history and status, along with relevant resources and health advisories, to provide relief for unfavorable sequelae due to cancer and its treatments as well as improve quality of life for CaP survivors. We are building upon the Science of Survivorship and the Contextual Socioecological and Behavioral Precision Medicine Models, with full survivor-advocate partnership to create the SCP. This project to develop an SCP template employs an informative consensus panel to inform the initial SCP.
Results: Based on advocate-survivor input and guidance, the SCP will include resources and tools focused on SDOH, treatment adherence, treatment side effects and symptom relief, co-occurring chronic conditions, cardio-protective strategies, and physical, emotional and social wellbeing, towards improving patient outcomes and HRQOL in Black CaP survivors. Data from the Engagement Core and Survivor-Advocate Community Advisory Board, and the preliminary SCP CaP for Black men, will be presented at the meeting. Conclusion: Our Consortium builds upon the work of the multidisciplinary PIs and provides a team science approach with robust scientific methods to better understand and address the HRQOL needs of Black CaP survivors. Importantly, this research will make available a patient-centered SCP for CaP survivors, focused on Black men. The iCCaRE Consortium will use this SCP_CaP to inform the development of Artificial Intelligence interventions addressing medical, follow-up care, surveillance, social and emotional support, SDOH and health advisories that will be deployed on popular mobile platforms for greater reach and on-demand use. Citation Format: Kimlin T. Ashing, Folakemi T. Odedina, Cassandra N. Moore, Che Ngufor, Getachew A. Dagne, Fornati Bedell, Diana Londoño, John McCall, Arnold Merriweather, JoAnne S. Oliver, Rotimi Rotimi Oladapo. The iCCARE Consortium for Prostate Cancer in Black men: Creating a survivorship care plan for Black prostate cancer survivors [abstract]. In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr A033.
- Published
- 2023
12. Abstract B028: A point of prostate cancer diagnosis (PPCD) Virtual Robot Assistant (ViRA) intervention for newly diagnosed Black men: An iCCaRE consortium for prostate cancer in Black men project
- Author
Folakemi Odedina, Che Ngufor, Arnold Merriweather, Deidre Pereira, Jennifer Crook, Fathi Parisa, Roxana Dronca, Ernest Kaninjing, Solomon Rotimi, Kimlin Ashing, Manisha Salinas, Sha’Reff Rashad, John McCall, Ebenezer Erefah, Ayinde Yahaya, and Wes Sholes
- Subjects
Oncology, Epidemiology
- Abstract
Background: To date, the disparate burden of prostate cancer (CaP) in Black men (BM) is still poorly understood. More disconcerting is the limited access to effective, culturally tailored behavioral interventions to support BM diagnosed with CaP, especially at the point of prostate cancer diagnosis (PPCD). From the PPCD, the transition to CaP survivorship can be mentally and physically trying, especially for those lacking emotional and financial support. It is important to provide psycho-oncology support, address social determinants of health (SDOH) and make emotional support available to ethnically diverse BM at the PPCD. We are addressing the CaP disparities experienced by BM at the PPCD through the Inclusive Cancer Care Research Equity (iCCaRE) for Black Men Consortium. Aim: As part of the iCCaRE for Black men Consortium, the specific aim of our project is to develop a PPCD-based Augmented Reality (AR) intervention program, the iCCaRE PPCD Virtual Robot Assistant (ViRA). The ViRA is based on the established efficacy of six CaP care and survivorship (CaPCaS) video interventions, and will support BM at the PPCD. Methodology: The efficacy of the six CaPCaS videos was established through formative research that included 17 BM in Florida. Based on a pre- and post-test design, data were collected from participants using a structured survey tailored to each CaPCaS intervention: Prevention, Detection, Diagnosis, Treatment, Survivorship and Advocacy form. Following the efficacy study, the iCCaRE PPCD ViRA was proposed as one of five iCCaRE Science of Survivorship (S.O.S) projects to improve the quality of life of BM. Results: Most of the participants in the efficacy study reported that they were US-born BM; married; college educated; earned less than $60,000; retired; had health insurance; had annual health examinations; had a regular doctor; and were screened within the last year.
The CaPCaS videos were found to be efficacious in improving attitude towards CaP screening, beliefs about CaP screening, perceived behavioral control and CaP knowledge. Participants also rated the quality of the videos high and expressed high satisfaction with the videos. We are currently in the development phase of the PPCD iCCaRE ViRA, which is based on behavioral science and health communications model. The ViRA will provide SDOH navigation services, psycho-oncology support and emotional support. The acceptance and usability of the iCCaRE ViRA will be established at urology clinics in Florida as part of the iCCaRE Consortium HEROICA Phase I study. We will test the central hypothesis that improving SDOH factors and CaPCaS-related factors will lead to an improvement in patient reported outcomes. Conclusion: We established the efficacy of the CaPCaS interventions in supporting BM across the CaP care continuum. The iCCaRE ViRA under development will target intervention at the PPCD and deliver a smart and connected personalized AR-enabled intervention system that will positively impact CaP diagnosis experience of BM. Citation Format: Folakemi Odedina, Che Ngufor, Arnold Merriweather, Deidre Pereira, Jennifer Crook, Fathi Parisa, Roxana Dronca, Ernest Kaninjing, Solomon Rotimi, Kimlin Ashing, Manisha Salinas, Sha’Reff Rashad, John McCall, Ebenezer Erefah, Ayinde Yahaya, Wes Sholes. A point of prostate cancer diagnosis (PPCD) Virtual Robot Assistant (ViRA) intervention for newly diagnosed Black men: An iCCaRE consortium for prostate cancer in Black men project [abstract]. In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr B028.
- Published
- 2023
13. Abstract A045: Investigating the biological determinants of poor mental health among ethnically diverse Black prostate cancer survivors: An iCCaRE consortium for prostate cancer in Black men project
- Author
Solomon O. Rotimi, Folakemi Odedina, Roxana Dronca, Kimlin Ashing, Ernest Kaninjing, Che Ngufor, Arnold Merriweather, Jennifer Crook, Manisha Salinas, Fathi Parisa, Sha’Reff Rashad, John McCall, Ebenezer Erefah, and Ayinde Yahaya
- Subjects
Oncology, Epidemiology
- Abstract
Background: Black men across the world continue to bear the burden of prostate cancer (PCa). Although several factors account for this, studies have demonstrated that the adaptive constitutive biological factors associated with African ancestry contribute to the risk of disease development, disease aggression, and poor disease outcomes in Black men. However, the limited inclusion of Blacks in cancer biology studies limits the understanding of the extent to which biology influences outcomes among Blacks. Furthermore, beyond the White-Black dichotomous disparity, there exist under-explored within-group geographical and ancestral disparities in the burden, clinical presentation, and outcomes of PCa among Black men globally. Aside from urological and sexual problems, Black PCa survivors experience poor mental health, with a concomitant reduction in quality of life and overall survival. Hence, improving the overall well-being of PCa survivors requires an understanding of the biological determinants of mental health among diverse groups of Black PCa survivors. Methods: Structured questionnaires are being used to assess fatigue, pain, depressive symptoms, health-related quality of life, and PCa-specific symptom burden within a cohort of ethnically diverse Black PCa survivors in Africa. These data are linked with salivary biomarkers of inflammation, hypothalamic-pituitary-adrenal axis activity, and the tryptophan-kynurenine pathway in our study population. Results: Data from fifty PCa survivors across different African ethnic groups (including Yoruba, Igbo and Hausa) will be presented. This study will measure biological variables (e.g., inflammatory markers, cortisol, tryptophan, kynurenine, and serotonin) and survivor self-reported measures (e.g., health-related quality of life and psychoneurological symptoms) to better examine the interconnectedness between cancer-associated biomarkers and poor mental health.
Conclusion: This study will lay an essential foundation for understanding the extent to which cancer-associated biological determinants contribute to the poor mental health of PCa survivors, unravel the within-group disparities in the mental health of Black PCa survivors, establish the contribution of genetics to poor mental health in Black PCa survivors and provide the opportunity for pharmacogenomic interventions that will improve quality of life. Citation Format: Solomon O. Rotimi, Folakemi Odedina, Roxana Dronca, Kimlin Ashing, Ernest Kaninjing, Che Ngufor, Arnold Merriweather, Jennifer Crook, Manisha Salinas, Fathi Parisa, Sha’Reff Rashad, John McCall, Ebenezer Erefah, Ayinde Yahaya. Investigating the biological determinants of poor mental health among ethnically diverse Black prostate cancer survivors: An iCCaRE consortium for prostate cancer in Black men project [abstract]. In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr A045.
- Published
- 2023
14. Abstract A042: Addressing prostate cancer disparities through the Inclusive Cancer Care Research Equity (iCCaRE) for Black men consortium
- Author
Folakemi Odedina, Roxana Dronca, Kimlin Ashing, Ernest Kaninjing, Solomon Rotimi, Che Ngufor, Arnold Merriweather, Jennifer Crook, Manisha Salinas, Fathi Parisa, Sha’Reff Rashad, John McCall, Ebenezer Erefah, and Ayinde Yahaya
- Subjects
Oncology, Epidemiology
- Abstract
Background: The prostate cancer (CaP) disparities experienced by Black men (BM) in the US are a microcosm of the burden of CaP seen in BM globally. The complexity of CaP disparities and the need for a unique approach to better understand and address a complex chronic disease, such as CaP, underscore the need for consortium research that is multilevel, collaborative, translational, and global. In response to this need, we brought together CaP survivors, advocates, scientists and clinicians to form the Inclusive Cancer Care Research Equity (iCCaRE) for Black Men Consortium. Aim: Our overall goal is to optimize CaP diagnosis experiences, treatment and survivorship based on the Science of Survivorship (S.O.S). Our primary aim was to develop a consortium co-led by CaP scientists and survivors with multiple S.O.S research projects supported by research cores/services across multiple institutions. Methodology: The consortium was developed to comprise multiple individuals with complementary expertise and resources working collaboratively to achieve the common purpose of eliminating CaP disparities globally. Building on the expertise, resources, and relationships of collaborating CaP investigators, survivors and advocates, the iCCaRE Consortium was proposed to “advance health equity and reduce disparities in CaP” by studying the within-group differences among ethnically diverse BM and comparing US-born BM to their ancestral populations, including West African immigrant men in the US and indigenous West African men. Results: Through the Department of Defense Health Equity Research and Outcomes Improvement Consortium (HEROIC) Award, the iCCaRE Consortium was launched in 2022. There are five pilot projects (PPs), each co-led by a scientist PI and community PI: PP1 will plan and develop a Virtual Robot Assistant (ViRA) that will provide SDOH navigation services, psycho-oncology support and emotional support for BM newly diagnosed with CaP.
PP2 will develop and analyze the impact of a patient-centered home cancer care system on health-related Quality of Life (HRQOL), access to healthcare, and PROs. PP3 will plan and develop a ViRA to improve HRQOL in Black CaP survivors. PP4 will employ the social determinants of migrant health framework to understand and address the needs of sub-Saharan African immigrant CaP survivors. PP5 will explore how biological factors induced by cancer cells contribute to poor mental health of CaP survivors. The PPs are supported by an Administrative Core, Translational Research & Clinical Intervention Service, Data Management and Analytics Services, Partnership Engagement Services, Pathology Resource & Biospecimen Core, Methodology and Measures Services and Digital Health & Human Services. Conclusion: The iCCaRE Consortium will “improve HRQOL to enhance outcomes and overall health and wellness for those impacted by CaP” by developing and implementing Artificial Intelligence interventions that address social determinants of health, psycho-oncology support and emotional support for newly diagnosed CaP patients. Citation Format: Folakemi Odedina, Roxana Dronca, Kimlin Ashing, Ernest Kaninjing, Solomon Rotimi, Che Ngufor, Arnold Merriweather, Jennifer Crook, Manisha Salinas, Fathi Parisa, Sha’Reff Rashad, John McCall, Ebenezer Erefah, Ayinde Yahaya. Addressing prostate cancer disparities through the Inclusive Cancer Care Research Equity (iCCaRE) for Black men consortium [abstract]. In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr A042.
- Published
- 2023
15. Abstract B046: Feasibility of patient-centered home care (PCHC) to reduce disparities in Black men (BM) with advanced prostate cancer (CaP): An iCCaRE Consortium for prostate cancer in Black men project
- Author
Roxana Dronca, Rohit Rao, Michael Maniaci, Folakemi Odedina, Ernest Kaninjing, Kimlin Ashing, Solomon Rotimi, Manisha Salinas, Sha’Reff Rashad, Arnold Merriweather, John McCall, Ebenezer Erefah, and Ayinde Yahaya
- Subjects
Oncology, Epidemiology
- Abstract
Background: A recent report from the AACR has found that Black men (BM) have prostate cancer (CaP) death rates that are more than 2 times those for men of any other race or ethnicity. Yet, BM are less likely to receive cutting-edge, or even standard-of-care, treatment compared to their white counterparts. Limited access to care, transportation, patient perception and medical mistrust, as well as the cost of care are major contributing factors. Meeting patients where they are and offering treatment in or closer to their homes should help reduce psychological distress and increase access to care and treatment compliance. As part of the iCCaRE for Black men Consortium, we propose a PCHC model which leverages the Mayo Clinic Advanced Care at Home (ACH) program and provides a package of care to support the administration of cancer therapy and/or supportive care/symptom management to CaP patients in their homes by specialist healthcare professionals. The objectives of our pilot study are to understand patients’ preferences in choosing the place of treatment and to evaluate the feasibility and impact of PCHC on clinical outcomes, as well as patient-reported outcomes (PRO) and health-related quality of life (HRQOL). Methodology: For Aim 1, structured questionnaires are used to assess patient preference for location of therapy, at the infusion center or in the home, as well as perceived difficulties and advantages; follow-up qualitative data will be collected through semi-structured interviews to capture patients’ thoughts, feelings, attitudes, and questions towards cancer treatments being administered at home versus in a hospital setting. Aim 2 is an observational study of BM with advanced CaP who are participating in a pragmatic, practice-based randomized clinical trial to track the acceptability and impact of home administration of cancer-directed or supportive therapy.
Results: For Aim 1, data is used to identify themes regarding perceived advantages and concerns of PCHC and inform our understanding of the proportion of patients who are willing to receive and would benefit from this level of care at home. For Aim 2, the following outcomes are evaluated: (a) time to first ER visit and to first hospitalization; (b) HRQoL and quality-adjusted survival; (c) patient satisfaction and compliance, including home time, adherence with symptom self-reporting and compliance with treatment, level of comfort interacting with the care team by phone or tablet, ability to reach a team member for questions or concerns; (d) cost, including out of pocket cost for patients. Conclusion: Our project will provide data on patient understanding and acceptability of cancer care at home and strategies for overcoming cancer care delivery disparities and barriers of access to care for underserved communities. The expected outcome of our project is that PCHC intervention will positively impact CaP treatment, patients’ access and experience with healthcare by developing a new concept of closer-to home cancer care delivery that will reach more patients in underserved communities. Citation Format: Roxana Dronca, Rohit Rao, Michael Maniaci, Folakemi Odedina, Ernest Kaninjing, Kimlin Ashing, Solomon Rotimi, Manisha Salinas, Sha’Reff Rashad, Arnold Merriweather, John McCall, Ebenezer Erefah, Ayinde Yahaya. Feasibility of patient-centered home care (PCHC) to reduce disparities in Black men (BM) with advanced prostate cancer (CaP): An iCCaRE Consortium for prostate cancer in Black men project [abstract]. In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr B046.
- Published
- 2023
- Full Text
- View/download PDF
16. Abstract B050: The iCCARE consortium for prostate cancer in Black men: Grounded theory study of the social determinant of health factors among African immigrant men diagnosed with prostate cancer
- Author
-
Ernie Kaninjing, Gladys Asiedu, Mary Ellen Young, Ebenezer Erefah, Folakemi Odedina, Roxana Dronca, Kimlin Ashing, Solomon Rotimi, Che Ngufor, Arnold Merriweather, Jennifer Crook, Manisha Salinas, Parisa Fathi, Sha’Reff Rashad, John McCall, and Ayinde Yahaya
- Subjects
Oncology ,Epidemiology - Abstract
Background: Sub-Saharan African immigrants (SSAI) in the United States (US) constitute one of the fastest-growing segments of the immigrant population. Between 2010 and 2018, the SSAI population in the US increased by 53%, significantly outpacing the 12% growth rate for the overall foreign-born population in the US during that time frame. For the purposes of this study, we define SSAI as the 2 million individuals living in the US originally from the region of the African continent located south of the Saharan desert. Despite their growing numbers, little is published about the extent to which SSAI adapt to health behaviors more common in the US or remain immersed in the values, beliefs and practices reflective of their country of origin. Importantly, no study has comprehensively examined the social determinants of health factors among this population and their influence on health-seeking behaviors and decision-making regarding prostate cancer care and treatment options. This study employs the social determinant of migrant health framework to examine the impact of immigration on the health-seeking behaviors of SSAI, including informed decision-making, psychosocial effects and coping mechanisms. This study will fill the void of research studies that link social determinants of health and immigration among SSAI. Methods: Grounded theory will guide the study design, which includes in-depth interviews to document the experiences of SSAI relative to health-seeking behaviors, care, and treatment. We explore differences and similarities among participants based on country or region of origin in Africa. We employ the social determinants of migrant health framework to better understand the needs of SSAI prostate cancer survivors in the United States and how to effectively support these survivors. Results: We will present findings based on preliminary interviews conducted with SSAI prostate cancer survivors recruited from Florida or Minnesota. 
Additionally, we will report on the development of an SSAI social determinant of migration model that explains the individual and structural factors that impact the health-seeking behaviors and coping mechanisms of this population. We will provide new understanding of specific social determinant of health factors that can be addressed in intervention studies among SSAI in the United States. Conclusion: This study examines the full range of racial, social, environmental, behavioral and structural factors that SSAI prostate cancer survivors experience. It will illuminate the decision-making process and overall quality of care among this understudied population. Citation Format: Ernie Kaninjing, Gladys Asiedu, Mary Ellen Young, Ebenezer Erefah, Folakemi Odedina, Roxana Dronca, Kimlin Ashing, Solomon Rotimi, Che Ngufor, Arnold Merriweather, Jennifer Crook, Manisha Salinas, Parisa Fathi, Sha’Reff Rashad, John McCall, Ayinde Yahaya. The iCCARE consortium for prostate cancer in Black men: Grounded theory study of the social determinant of health factors among African immigrant men diagnosed with prostate cancer [abstract]. In: Proceedings of the 15th AACR Conference on the Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved; 2022 Sep 16-19; Philadelphia, PA. Philadelphia (PA): AACR; Cancer Epidemiol Biomarkers Prev 2022;31(1 Suppl):Abstract nr B050.
- Published
- 2023
- Full Text
- View/download PDF
17. Multi-label classification via incremental clustering on an evolving data stream
- Author
-
Tiancai Liang, Manh Truong Dang, John McCall, Alan Wee-Chung Liew, Anh Vu Luong, and Tien Thanh Nguyen
- Subjects
Data stream ,Multi-label classification ,Concept drift ,Data stream mining ,Computer science ,Sample (statistics) ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Artificial Intelligence ,0103 physical sciences ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Data analysis ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Data mining ,010306 general physics ,Cluster analysis ,computer ,Software ,Hoeffding's inequality - Abstract
With the advancement of storage and processing technology, an enormous amount of data is collected on a daily basis in many applications. Nowadays, advanced data analytics are used to mine the collected data for useful information and make predictions, contributing to the competitive advantages of companies. The increasing data volume, however, has posed many problems to classical batch learning systems, such as the need to retrain the model completely with newly arrived samples, or the impracticality of storing and accessing a large volume of data. This has prompted interest in incremental learning that operates on data streams. In this study, we develop an incremental online multi-label classification (OMLC) method based on a weighted clustering model. The model adapts to changes in the data via a decay mechanism in which each sample's weight dwindles over time; the clustering model therefore always focuses more on newly arrived samples. In the classification process, only clusters whose weights are greater than a threshold (called mature clusters) are employed to assign labels to the samples. In our method, not only is the clustering model incrementally maintained with the revealed ground truth labels of the arrived samples, but the number of predicted labels for a sample is also adjusted based on the Hoeffding inequality and the label cardinality. The experimental results show that our method is competitive with several well-known benchmark algorithms on six performance measures in both the stationary and the concept drift settings.
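The decay and maturity mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the decay factor, maturity threshold and assignment radius are assumed values:

```python
import math

DECAY = 0.9      # per-step decay factor (assumed value)
MATURITY = 0.5   # weight threshold for a "mature" cluster (assumed)
RADIUS = 1.5     # assignment radius (assumed)

class MicroCluster:
    def __init__(self, centre, labels):
        self.centre = list(centre)
        self.labels = set(labels)   # label set observed in this cluster
        self.weight = 1.0
        self.n = 1

    def absorb(self, x, labels):
        # incremental mean update of the centre, accumulate labels and weight
        self.n += 1
        self.centre = [c + (xi - c) / self.n for c, xi in zip(self.centre, x)]
        self.labels |= set(labels)
        self.weight += 1.0

class OnlineMultiLabel:
    def __init__(self):
        self.clusters = []

    def predict(self, x):
        # only mature clusters take part in label assignment
        mature = [c for c in self.clusters if c.weight > MATURITY]
        if not mature:
            return set()
        nearest = min(mature, key=lambda c: math.dist(c.centre, x))
        return nearest.labels

    def learn(self, x, labels):
        # decay all weights so old samples fade, then absorb or spawn
        for c in self.clusters:
            c.weight *= DECAY
        near = [c for c in self.clusters if math.dist(c.centre, x) <= RADIUS]
        if near:
            min(near, key=lambda c: math.dist(c.centre, x)).absorb(x, labels)
        else:
            self.clusters.append(MicroCluster(x, labels))
        # drop clusters whose weight has faded away
        self.clusters = [c for c in self.clusters if c.weight > 0.01]
```

The Hoeffding-based adjustment of the number of predicted labels is omitted; the sketch only shows the weighted-cluster maintenance.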
- Published
- 2019
- Full Text
- View/download PDF
18. Outcomes of Liver Transplantation in Small Infants
- Author
-
Hidekazu Yamamoto, John McCall, Hector Vilca-Melendez, Anil Dhawan, Mohamed Rela, Shirin Elizabeth Khorsandi, Nigel Heaton, Yoichi Kawano, and Miriam Cortes-Cerisuelo
- Subjects
Male ,medicine.medical_specialty ,medicine.medical_treatment ,Renal function ,030230 surgery ,Liver transplantation ,Pediatrics ,03 medical and health sciences ,Postoperative Complications ,0302 clinical medicine ,Humans ,Medicine ,Prospective Studies ,Prospective cohort study ,Survival rate ,Transplantation ,Hepatology ,business.industry ,Incidence ,Incidence (epidemiology) ,Graft Survival ,Age Factors ,Liver failure ,Infant ,Original Articles ,Liver Failure, Acute ,medicine.disease ,Liver Transplantation ,Surgery ,Portal vein thrombosis ,Survival Rate ,Treatment Outcome ,Original Article ,Female ,030211 gastroenterology & hepatology ,Graft survival ,business - Abstract
Liver transplantation (LT) for small infants remains challenging because of the demands related to graft selection, surgical technique, and perioperative management. The aim of this study was to evaluate the short‐term and long‐term outcomes of LT regarding vascular/biliary complications, renal function, growth, and patient/graft survival in infants ≤3 months compared with those aged >3 to 6 months at a single transplant center. A total of 64 infants ≤6 months underwent LT and were divided into 2 groups according to age at LT: those of age ≤3 months (range, 6‐118 days; XS group, n = 37) and those of age >3 to ≤6 months (range, 124‐179 days; S group, n = 27) between 1989 and 2014. Acute liver failure was the main indication for LT in the XS group (n = 31, 84%) versus S (n = 7, 26%). The overall incidence of hepatic artery thrombosis and portal vein thrombosis/stricture were 5.4% and 10.8% in the XS group and 7.4% and 11.1% in the S group, respectively (not significant). The overall incidence of biliary stricture and leakage were 5.4% and 2.7% in the XS group and 3.7% and 3.7% in the S group, respectively (not significant). There was no significant difference between the 2 groups in terms of renal function. No significant difference was found between the 2 groups for each year after LT in terms of height and weight z score. The 1‐, 5‐, and 10‐year patient survival rates were 70.3%, 70.3%, and 70.3% in the XS group compared with 92.6%, 88.9%, and 88.9% in the S group, respectively (not significant). In conclusion, LT for smaller infants has acceptable outcomes despite the challenges of surgical technique, including vascular reconstruction and graft preparation, and perioperative management.
- Published
- 2019
- Full Text
- View/download PDF
19. Optimising the introduction of connected and autonomous vehicles in a public transport system using macro-level mobility simulations and evolutionary algorithms
- Author
-
Alexandru-Ciprian Zavoianu, Kate Han, John McCall, and Lee A. Christie
- Subjects
Geographic information system ,business.industry ,Computer science ,Distributed computing ,media_common.quotation_subject ,Transport network ,Evolutionary algorithm ,Metropolitan area ,Open data ,Public transport ,Quality (business) ,business ,Dijkstra's algorithm ,media_common - Abstract
The past five years have seen rapid development of plans and test pilots aimed at introducing connected and autonomous vehicles (CAVs) in public transport systems around the world. Using a real-world scenario from the Leeds Metropolitan Area as a case study, we demonstrate an effective way to combine macro-level mobility simulations based on open data (i.e., geographic information system (GIS) data and transit timetables) with evolutionary optimisation techniques to discover realistic optimised integration routes for CAVs. The macro-level mobility simulations are used to assess the quality (i.e., fitness) of a potential CAV route by quantifying geographic accessibility improvements using an extended version of Dijkstra's algorithm on an abstract multi-modal transport network.
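The fitness evaluation described above — quantifying accessibility gains with shortest-path queries — can be sketched as follows. The toy network and the single candidate CAV link are assumed for illustration and have nothing to do with the Leeds data:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbour, travel_minutes), ...]}
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def mean_access_time(graph, hub):
    # average shortest-path time from a hub: a crude accessibility proxy
    d = dijkstra(graph, hub)
    return sum(d.values()) / len(d)

# toy transit network (assumed data) plus one candidate CAV link
network = {"A": [("B", 10)], "B": [("C", 15)], "C": []}
with_cav = {k: list(v) for k, v in network.items()}
with_cav["A"].append(("C", 12))   # candidate CAV route A -> C

# fitness of the candidate route = accessibility improvement it brings
fitness_gain = mean_access_time(network, "A") - mean_access_time(with_cav, "A")
```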
- Published
- 2021
- Full Text
- View/download PDF
20. Towards the landscape rotation as a perturbation strategy on the quadratic assignment problem
- Author
-
John McCall, Joan Alza, Mark Bartlett, and Josu Ceberio
- Subjects
Mathematical optimization ,Permutation ,Local optimum ,Quadratic assignment problem ,Computer science ,Heuristic (computer science) ,Fitness landscape ,Perturbation (astronomy) ,Rotation (mathematics) ,Local search (constraint satisfaction) - Abstract
Recent work in combinatorial optimisation has demonstrated that neighbouring solutions of a local optimum may belong to more favourable attraction basins. In this sense, the perturbation strategy plays a critical role in local-search-based algorithms by kicking the search into more promising areas of the space. In this paper, we investigate landscape rotation as a perturbation strategy to redirect the search of a stuck algorithm. This technique rearranges the mapping of solutions to objective values without altering important properties of the problem's landscape, such as the number and quality of optima. In particular, we investigate two rotation-based perturbation strategies: (i) a profoundness rotation method and (ii) a broadness rotation method. These methods are applied to the stochastic hill-climbing heuristic and are tested and compared on different instances of the quadratic assignment problem against other algorithm versions. The experiments reveal that landscape rotation is an efficient perturbation strategy for shifting the search in a controlled way. Nevertheless, an empirical investigation of landscape rotation demonstrates that it needs to be manipulated cautiously in the permutation space, since a small rotation does not necessarily mean a small disturbance in the fitness landscape.
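A landscape rotation of the kind described — relabelling solutions without altering the number or quality of optima — can be illustrated on a toy QAP instance by composing each candidate permutation with a fixed permutation before evaluation. This is a sketch of the general idea, not the specific profoundness/broadness operators studied in the paper:

```python
from itertools import permutations

def qap_cost(perm, dist, flow):
    # standard QAP objective: sum of flow * distance under the assignment
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def rotate(f, rho):
    # landscape rotation: evaluate each solution after composing it with a
    # fixed permutation rho; this permutes the landscape but preserves the
    # multiset of objective values (hence the number and quality of optima)
    def rotated(perm):
        return f([perm[rho[i]] for i in range(len(perm))])
    return rotated

# toy 3x3 instance (assumed data)
dist = [[0, 2, 3], [2, 0, 1], [3, 1, 0]]
flow = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
f = lambda p: qap_cost(p, dist, flow)
g = rotate(f, [2, 0, 1])

costs_original = sorted(f(list(p)) for p in permutations(range(3)))
costs_rotated = sorted(g(list(p)) for p in permutations(range(3)))
```

Individual solutions change value under `g`, but the sorted cost profiles of the two landscapes coincide.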
- Published
- 2021
- Full Text
- View/download PDF
21. Weighted Ensemble of Deep Learning Models based on Comprehensive Learning Particle Swarm Optimization for Medical Image Segmentation
- Author
-
Carlos Francisco Moreno-García, Eyad Elyan, Tien Thanh Nguyen, Truong Dang, and John McCall
- Subjects
Statistical classification ,Sørensen–Dice coefficient ,business.industry ,Computer science ,Deep learning ,Classifier (linguistics) ,Particle swarm optimization ,Segmentation ,Pattern recognition ,Image segmentation ,Artificial intelligence ,business ,Swarm intelligence - Abstract
In recent years, deep learning has rapidly become a method of choice for the segmentation of medical images. Deep neural architectures such as UNet and FPN have achieved high performance on many medical datasets. However, medical image analysis algorithms are required to be reliable, robust, and accurate for clinical applications, which can be difficult to achieve with a single deep learning method. In this study, we introduce an ensemble of classifiers for the semantic segmentation of medical images. The ensemble of classifiers here is a set of various deep learning-based classifiers, aiming to achieve better performance than any single classifier. We propose a weighted ensemble method in which the weighted sum of the classifiers' segmentation outputs is used to produce the final segmentation decision. We use a swarm intelligence algorithm, namely Comprehensive Learning Particle Swarm Optimization, to optimise the combining weights. The Dice coefficient, a popular performance metric for image segmentation, is used as the fitness criterion. Experiments conducted on medical datasets from the CAMUS competition on cardiographic image segmentation show that our method achieves better results than both the constituent segmentation models and the reported model of the CAMUS competition.
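The weighted combination and its Dice fitness can be sketched as follows. This is a minimal illustration with made-up per-pixel probability maps; in the actual method the weights are optimised with CLPSO rather than fixed by hand:

```python
def dice(pred, truth):
    # Dice coefficient between two binary masks (flat lists of 0/1)
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2.0 * inter / ((sum(pred) + sum(truth)) or 1)

def weighted_fusion(prob_maps, weights, thresh=0.5):
    # weighted sum of each model's per-pixel foreground probability,
    # thresholded into the final binary segmentation decision
    fused = []
    for pixel in zip(*prob_maps):
        s = sum(w * p for w, p in zip(weights, pixel)) / sum(weights)
        fused.append(1 if s >= thresh else 0)
    return fused
```

With `prob_maps` holding one probability map per segmentation model, `dice(weighted_fusion(prob_maps, weights), ground_truth)` is the fitness a weight optimiser would maximise.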
- Published
- 2021
- Full Text
- View/download PDF
22. Integration of Stress–Strain Maps in Mineral Systems Targeting for IOCG Mineralisation within the Mt. Woods Inlier, Gawler Craton, South Australia
- Author
-
Jonathan Nicholas Gloyn-Jones, Ian James Basson, Ben Stoch, Corné Koegelenberg, and Michael-John McCall
- Subjects
structural control ,fluid flow ,IOCG mineralisation ,finite element analysis ,numerical modelling ,exploration ,target generation ,Mt. Woods Inlier ,Gawler Craton ,Geology ,Geotechnical Engineering and Engineering Geology - Abstract
The application of finite element analysis is used to simulate the relative distribution and magnitude of stress–strain conditions during a geologically brief, NNW-SSE-oriented, extensional event (1595 Ma to 1590 Ma), co-incident with IOCG-hydrothermal fluid flow and mineralisation across the Mt. Woods Inlier, Gawler Craton, South Australia. Differential stress and shear strain maps across the modelled terrane highlight regions that were predisposed to strain localization, extensional failure and fluid throughput during the simulated mineralisation event. These maps are integrated with other datasets and interpretation layers, one of which is a proposed structural–geometrical relationship apparent in many world-class IOCG deposits, including Prominent Hill, Olympic Dam, Sossego, Salobo, Cristalino and Candelaria. These deposits occur at steeply plunging, pipe-like intersections of conjugate extensional systems of faults, shears and/or contacts, wherein the obtuse angle may have been bisected by the maximum principal extensional axis (viz., σ3) during mineralisation. Several other layers are also used for the generation of targets, such as distance from major shear zones, favourable host lithologies, and proximity to tectonostratigraphic contacts of markedly contrasting competency. The result is an integrated target index or heat map for IOCG prospectivity across the Mt. Woods Inlier.
- Published
- 2022
- Full Text
- View/download PDF
23. Unsupervised Change Detection in Hyperspectral Images using Principal Components Space Data Clustering
- Author
-
Yinhe Li, Jinchang Ren, Yijun Yan, Qiaoyuan Liu, Andrei Petrovski, and John McCall
- Subjects
History ,Computer Science Applications ,Education - Abstract
Change detection in hyperspectral images is an important subject in the field of remote sensing. Due to the large number of bands and the high correlation between adjacent bands in the hyperspectral image cube, information redundancy is a significant problem, which increases computational complexity and degrades detection performance. To address this problem, principal component analysis (PCA) has been widely used for dimensionality reduction. It projects the original multi-dimensional hyperspectral data into a new eigenvector space, allowing it to extract compact but representative information. The difference image of the PCA components is obtained by subtracting the two dimensionality-reduced images, on which change detection is treated as a binary classification problem. The first several principal components of each pixel are taken as a feature vector for data classification using k-means clustering with k=2, where the two classes are changed pixels and unchanged pixels, respectively. The centroids of the two clusters are determined by iteratively minimising the Euclidean distance between pixels’ feature vectors and the centroids. Experiments on two publicly available datasets have been carried out and evaluated by overall accuracy. The results validate the efficacy and efficiency of the proposed approach.
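A stripped-down version of this pipeline — clustering per-pixel difference magnitudes with k = 2 and calling the higher-centroid cluster "changed" — might look like the following. This is a one-band toy sketch; the paper clusters vectors of leading principal components, not raw magnitudes:

```python
def two_means(values, iters=50):
    # 1-D k-means with k=2; returns (labels, (centroid_0, centroid_1))
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    labels = [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]
    return labels, (c0, c1)

def change_map(img_t1, img_t2):
    # per-pixel magnitude of the difference image stands in for the
    # PCA feature vector of the full difference cube (simplification)
    diffs = [abs(a - b) for a, b in zip(img_t1, img_t2)]
    labels, (c0, c1) = two_means(diffs)
    changed = 1 if c1 > c0 else 0   # larger-difference cluster = changed
    return [1 if l == changed else 0 for l in labels]
```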
- Published
- 2022
- Full Text
- View/download PDF
24. VEGAS: A Variable Length-Based Genetic Algorithm for Ensemble Selection in Deep Ensemble Learning
- Author
-
Trung Hieu Vu, Truong Dang, Kate Han, John McCall, Tien Thanh Nguyen, and Tien Pham
- Subjects
Ensemble forecasting ,business.industry ,Computer science ,Deep learning ,Crossover ,02 engineering and technology ,Perceptron ,Ensemble learning ,Random forest ,ComputingMethodologies_PATTERNRECOGNITION ,Fitness proportionate selection ,020204 information systems ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm - Abstract
In this study, we introduce an ensemble selection method for deep ensemble systems, called VEGAS. Deep ensemble models comprise multiple layers of ensembles of classifiers (EoC). At each layer, we train the EoC and generate training data for the next layer by concatenating the predictions for the training observations with the original training data. The predictions of the classifiers in the last layer are combined by a combining method to obtain the final collaborated prediction. We further improve the prediction accuracy of a deep ensemble model by searching for its optimal configuration, i.e., the optimal set of classifiers in each layer. The optimal configuration is obtained using the Variable-Length Genetic Algorithm (VLGA) to maximize the prediction accuracy of the deep ensemble model on the validation set. We developed three operators for VLGA: roulette wheel selection for breeding, a chunk-based crossover based on the number of classifiers to generate new offspring, and multiple random-point mutation on each offspring. Experiments on 20 datasets show that VEGAS outperforms selected benchmark algorithms, including two well-known ensemble methods (Random Forest and XGBoost) and three deep learning methods (Multi-Layer Perceptron, gcForest, and MULES).
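A chunk-based crossover for variable-length configurations can be sketched as below: the tails of two parents are swapped at independent cut points, so offspring may have a different number of layers than either parent. The layer contents and cut-point policy here are illustrative placeholders:

```python
import random

def chunk_crossover(parent_a, parent_b, rng):
    # swap tails of two variable-length configurations at independent
    # cut points, so offspring lengths can differ from the parents'
    cut_a = rng.randrange(1, len(parent_a))
    cut_b = rng.randrange(1, len(parent_b))
    child_1 = parent_a[:cut_a] + parent_b[cut_b:]
    child_2 = parent_b[:cut_b] + parent_a[cut_a:]
    return child_1, child_2

rng = random.Random(42)
a = [["svm"], ["rf", "knn"], ["mlp"]]   # hypothetical 3-layer configuration
b = [["xgb", "rf"], ["knn"]]            # hypothetical 2-layer configuration
c1, c2 = chunk_crossover(a, b, rng)
```

Whatever the cut points, the two children jointly contain exactly the layers of the two parents, just redistributed.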
- Published
- 2021
- Full Text
- View/download PDF
25. Evolved ensemble of detectors for gross error detection
- Author
-
Helen Corbett, Allan Wilson, John McCall, Tien Thanh Nguyen, Phil Stockton, and Laud Charles Ochei
- Subjects
Training set ,Physics::Instrumentation and Detectors ,Computer science ,Detector ,Particle swarm optimization ,Sample (statistics) ,0102 computer and information sciences ,02 engineering and technology ,Function (mathematics) ,01 natural sciences ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,High Energy Physics::Experiment ,020201 artificial intelligence & image processing ,Fisher's method ,Error detection and correction ,Algorithm ,Selection (genetic algorithm) - Abstract
In this study, we evolve an ensemble of detectors to check for the presence of gross systematic errors in measurement data. We use Fisher's method to combine the outputs of the different detectors and then test the hypothesis of the presence of gross errors based on the combined value. We further develop a detector selection approach in which a subset of detectors is selected for each sample. The selection is conducted by comparing the output of each detector to its associated selection threshold. The thresholds are obtained by minimizing the 0-1 loss function on training data using the Particle Swarm Optimization method. Experiments conducted on a simulated system confirm the advantages of the ensemble and evolved-ensemble approaches.
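Fisher's method for combining the detectors' p-values has a closed form for the chi-square tail when the degrees of freedom are even (2k for k detectors), which a sketch can exploit without any statistics library. The significance level is an assumed example value:

```python
import math

def fisher_combine(p_values):
    # Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k d.o.f.
    x = -2.0 * sum(math.log(p) for p in p_values)
    k = len(p_values)
    # chi-square survival function with even d.o.f. 2k:
    #   P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, acc = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        acc += term
    return math.exp(-half) * acc

def gross_error(p_values, alpha=0.05):
    # flag a gross error when the combined p-value is significant
    return fisher_combine(p_values) < alpha
```

With a single detector, Fisher's method reduces to the detector's own p-value, which gives a quick sanity check.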
- Published
- 2020
- Full Text
- View/download PDF
26. Racing Strategy for the Dynamic-Customer Location-Allocation Problem
- Author
-
Gilbert Owusu, Andrew Hardwick, Reginald Ankrah, Anthony Conway, Benjamin Lacroix, and John McCall
- Subjects
Mathematical optimization ,education.field_of_study ,Model selection ,Population ,02 engineering and technology ,Evaluation function ,Cost savings ,03 medical and health sciences ,0302 clinical medicine ,Friedman test ,Robustness (computer science) ,Incremental learning ,030221 ophthalmology & optometry ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Location-allocation ,education - Abstract
In previous work, we proposed and studied a new dynamic formulation of the Location-allocation (LA) problem called the Dynamic-Customer Location-allocation (DC-LA) problem. DC-LA is based on the idea of changes in customer distribution over a defined period, and these changes have to be taken into account when establishing facilities to service changing customer distributions. This necessitated a dynamic stochastic evaluation function, which came with a high computational cost due to the large number of simulations required in the evaluation process. In this paper, we investigate the use of racing, an approach used in model selection, to reduce this high computational cost by employing the minimum number of simulations for solution selection. Our adaptation of racing uses the Friedman test to compare solutions statistically. Racing allows simulations to be performed iteratively, ensuring that the minimum number of simulations is performed to detect a statistical difference. We present experiments using Population-Based Incremental Learning (PBIL) to explore the savings achievable from using racing in this way. Our results show that racing achieves improved cost savings over the dynamic stochastic evaluation function. We also observed that, on average, the computational cost of racing was about 4.5 times lower than the computational cost of the full dynamic stochastic evaluation.
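The core of racing is the Friedman statistic computed over the simulation replicates gathered so far: replicates are added until the statistic signals a significant difference between candidate solutions. A sketch of the statistic (ignoring tied ranks) is:

```python
def friedman_stat(results):
    # results[b][s]: cost of candidate solution s on simulation replicate b
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for block in results:
        order = sorted(range(k), key=lambda s: block[s])
        ranks = [0.0] * k
        for r, s in enumerate(order, start=1):   # rank 1 = lowest cost
            ranks[s] = r                         # tie-averaging omitted
        for s in range(k):
            rank_sums[s] += ranks[s]
    # Friedman chi-square statistic over n blocks and k candidates
    return (12.0 / (n * k * (k + 1))) * sum(R * R for R in rank_sums) \
        - 3.0 * n * (k + 1)
```

In a racing loop one would compare this statistic against a chi-square critical value with k-1 degrees of freedom after each new replicate, and stop simulating once it is exceeded.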
- Published
- 2020
- Full Text
- View/download PDF
27. WEC: Weighted Ensemble of Text Classifiers
- Author
-
Tien Thanh Nguyen, Ashish Upadhyay, Stewart Massie, and John McCall
- Subjects
Training set ,Computer science ,business.industry ,Deep learning ,Feature extraction ,Particle swarm optimization ,02 engineering and technology ,010501 environmental sciences ,Machine learning ,computer.software_genre ,01 natural sciences ,ComputingMethodologies_PATTERNRECOGNITION ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,F1 score ,business ,computer ,0105 earth and related environmental sciences - Abstract
Text classification is one of the most important tasks in the field of Natural Language Processing. Many approaches focus on two main aspects: generating an effective representation, and selecting and refining algorithms to build the classification model. Traditional machine learning methods represent documents in vector space using features such as term frequencies, which have limitations in handling the order and semantics of words. Meanwhile, although achieving many successes, deep learning classifiers require substantial resources in terms of labelled data and computational complexity. In this work, a weighted ensemble of classifiers (WEC) is introduced to address the text classification problem. Instead of using majority vote as the combining method, we propose to associate each classifier’s prediction with a different weight when combining classifiers. The optimal weights are obtained by minimising a loss function on the training data with the Particle Swarm Optimisation algorithm. We conducted experiments on 5 popular datasets and report classification performance in terms of accuracy and macro F1 score. WEC was run with several different combinations of traditional machine learning and deep learning classifiers to show its flexibility and robustness. Experimental results confirm the advantage of WEC, especially on smaller datasets.
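The weight-optimisation step — PSO minimising a loss over the combining weights — can be sketched generically. The inertia and acceleration coefficients are assumed textbook values, and the sphere function stands in for the ensemble's training loss:

```python
import random

def pso_minimise(loss, dim, n_particles=20, iters=60, seed=1):
    # minimal global-best PSO; inertia 0.7, cognitive/social 1.5 (assumed)
    rng = random.Random(seed)
    X = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [list(x) for x in X]                 # personal bests
    p_best = [loss(x) for x in X]
    g = min(range(n_particles), key=lambda i: p_best[i])
    G, g_best = list(P[g]), p_best[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = loss(X[i])
            if f < p_best[i]:
                p_best[i], P[i] = f, list(X[i])
                if f < g_best:
                    g_best, G = f, list(X[i])
    return G, g_best
```

In WEC's setting, `loss` would evaluate the weighted-vote ensemble on the training data and `dim` would equal the number of classifiers.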
- Published
- 2020
- Full Text
- View/download PDF
28. Multi-layer heterogeneous ensemble with classifier and feature selection
- Author
-
Nang Van Pham, Manh Truong Dang, Anh Vu Luong, Tien Thanh Nguyen, Alan Wee-Chung Liew, and John McCall
- Subjects
Ensemble forecasting ,Computer science ,business.industry ,Deep learning ,Evolutionary algorithm ,Feature selection ,0102 computer and information sciences ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,Ensemble learning ,ComputingMethodologies_PATTERNRECOGNITION ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Multi layer ,Classifier (UML) ,computer - Abstract
Deep Neural Networks have achieved many successes when applied to visual, text, and speech information in various domains. The crucial reasons behind these successes are the multi-layer architecture and the in-model feature transformation of deep learning models. These design principles have inspired other sub-fields of machine learning, including ensemble learning. In recent years, several deep homogeneous ensemble models have been introduced, with a large number of classifiers in each layer; these models therefore incur a high computational cost at classification time. Moreover, the existing deep ensemble models use all classifiers, including unnecessary ones, which can reduce the predictive accuracy of the ensemble. In this study, we propose a multi-layer ensemble learning framework called MUlti-Layer heterogeneous Ensemble System (MULES) to solve the classification problem. The proposed system works with a small number of heterogeneous classifiers to obtain ensemble diversity, and is therefore efficient in resource usage. We also propose an Evolutionary Algorithm-based selection method to select the subset of suitable classifiers and features at each layer to enhance the predictive performance of MULES. The selection method uses the NSGA-II algorithm to optimize two objectives concerning classification accuracy and ensemble diversity. Experiments on 33 datasets confirm that MULES is better than a number of well-known benchmark algorithms.
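At the heart of NSGA-II-based selection is non-dominated sorting over the two objectives (here accuracy and diversity, both maximised). Extracting the first Pareto front can be sketched as follows; the candidate scores are made-up values:

```python
def non_dominated(points):
    # first Pareto front of (accuracy, diversity) pairs, both maximised:
    # keep p unless some other q is >= p in every objective and > in one
    front = []
    for p in points:
        dominated = any(
            q != p
            and all(qv >= pv for qv, pv in zip(q, p))
            and any(qv > pv for qv, pv in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

NSGA-II repeats this peeling to rank the whole population; this sketch only extracts the first front, which is what the final classifier/feature subset is drawn from.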
- Published
- 2020
- Full Text
- View/download PDF
29. Confidence in Prediction: An Approach for Dynamic Weighted Ensemble
- Author
-
Duc Thuan Do, Tien Thanh Nguyen, The Trung Nguyen, Anh Vu Luong, Alan Wee-Chung Liew, and John McCall
- Subjects
Training set ,Computer science ,business.industry ,Supervised learning ,Pattern recognition ,02 engineering and technology ,ENCODE ,Ensemble learning ,Confidence interval ,ComputingMethodologies_PATTERNRECOGNITION ,020204 information systems ,Credibility ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Gradient descent ,business - Abstract
Combining classifiers in an ensemble is beneficial in achieving better prediction than using a single classifier. Furthermore, each classifier can be associated with a weight in the aggregation to boost the performance of the ensemble system. In this work, we propose a novel dynamic weighted ensemble method. Based on the observation that each classifier provides a different level of confidence in its prediction, we propose to encode the confidence level of a classifier by associating each classifier with a credibility threshold, computed from the entire training set by minimizing the entropy loss function with the mini-batch gradient descent method. On each test sample, we measure the confidence of each classifier’s output and then compare it to the credibility threshold to determine whether the classifier should take part in the aggregation. If the condition is satisfied, the confidence level and credibility threshold are used to compute the weight of the classifier’s contribution to the aggregation. In this way, we consider not only the presence but also the contribution of each classifier based on the confidence in its prediction on each test sample. Experiments conducted on a number of datasets show that the proposed method is better than several benchmark algorithms, including a non-weighted ensemble method, two dynamic ensemble selection methods, and two Boosting methods.
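The gating idea — a classifier only votes when its confidence clears its credibility threshold — can be sketched as follows. The weight formula (confidence minus threshold) is a placeholder of my own; the paper derives the weight from both quantities, and learns the thresholds by gradient descent:

```python
def dynamic_weighted_predict(prob_outputs, thresholds):
    # prob_outputs[i]: class-probability vector of classifier i on one sample
    # thresholds[i]:  learned credibility threshold of classifier i
    n_classes = len(prob_outputs[0])
    combined = [0.0] * n_classes
    for probs, theta in zip(prob_outputs, thresholds):
        conf = max(probs)          # confidence = highest class probability
        if conf >= theta:          # classifier joins the vote only if credible
            w = conf - theta       # hypothetical weighting scheme
            for c in range(n_classes):
                combined[c] += w * probs[c]
    if not any(combined):          # no classifier qualified: plain averaging
        combined = [sum(p[c] for p in prob_outputs) for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: combined[c])
```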
- Published
- 2020
- Full Text
- View/download PDF
30. Ensemble-Based Relationship Discovery in Relational Databases
- Author
-
Mathias Kern, John McCall, Benjamin Lacroix, Gilbert Owusu, David Corsar, and Akinola Ogunsemi
- Subjects
Soundex ,Similarity (network science) ,Relational database ,Computer science ,Metric (mathematics) ,Cosine similarity ,Data mining ,computer.software_genre ,computer ,Ensemble learning ,Weighting ,Hierarchical clustering - Abstract
We investigated how several data relationship discovery algorithms can be combined to improve performance. We examined eight relationship discovery algorithms, such as Cosine similarity, Soundex similarity, Name similarity and Value range similarity, which identify potential links between database tables in different ways using different categories of database information. We proposed voting-system and hierarchical clustering ensemble methods to reduce the generalization error of each algorithm. The voting scheme uses a given weighting metric to combine the predictions of each algorithm. Hierarchical clustering groups predictions into clusters based on similarities and then combines a member from each cluster. We ran experiments to validate the performance of each algorithm and compared it with our ensemble methods and the state-of-the-art algorithms (FaskFK, Randomness and HoPF) using Precision, Recall and F-Measure evaluation metrics on the TPCH and AdvWork datasets. Results show that the performance of each algorithm is limited, indicating the importance of combining them to consolidate their strengths.
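The voting-system combiner can be sketched as follows. The link identifiers, weights and acceptance threshold are illustrative; the paper's specific weighting metrics and its hierarchical-clustering combiner are not shown:

```python
def weighted_vote(predictions, weights, threshold=0.5):
    # predictions[a]: set of candidate table links proposed by algorithm a
    # a link is accepted when its normalised weighted support >= threshold
    scores = {}
    total = sum(weights)
    for links, w in zip(predictions, weights):
        for link in links:
            scores[link] = scores.get(link, 0.0) + w / total
    return {link for link, s in scores.items() if s >= threshold}
```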
- Published
- 2020
- Full Text
- View/download PDF
31. Comparative Run-Time Performance of Evolutionary Algorithms on Multi-objective Interpolated Continuous Optimisation Problems
- Author
-
Benjamin Lacroix, Alexandru-Ciprian Zavoianu, and John McCall
- Subjects
050101 languages & linguistics ,Mathematical optimization ,Computer science ,Fitness landscape ,05 social sciences ,Pareto principle ,Evolutionary algorithm ,02 engineering and technology ,Weighting ,Set (abstract data type) ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Interpolation - Abstract
We propose a new class of multi-objective benchmark problems on which we analyse the performance of four well established multi-objective evolutionary algorithms (MOEAs) – each implementing a different search paradigm – by comparing run-time convergence behaviour over a set of 1200 problem instances. The new benchmarks are created by fusing previously proposed single-objective interpolated continuous optimisation problems (ICOPs) via a common set of Pareto non-dominated seeds. They thus inherit the ICOP property of having tunable fitness landscape features. The benchmarks are of intrinsic interest as they derive from interpolation methods and so can approximate general problem instances. This property is revealed to be of particular importance as our extensive set of numerical experiments indicates that choices pertaining to (i) the weighting of the inverse distance interpolation function and (ii) the problem dimension can be used to construct problems that are challenging to all tested multi-objective search paradigms. This in turn means that the new multi-objective ICOPs (MO-ICOPs) can be used to construct well-balanced benchmark sets that discriminate well between the run-time convergence behaviour of different solvers.
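The inverse-distance interpolation that underpins ICOP fitness landscapes can be sketched as below. The seed points, the power parameter `p` and the 2-D domain are illustrative assumptions, not the paper's construction.

```python
import math

# Minimal sketch of an inverse-distance-weighted (IDW) interpolated objective,
# the mechanism ICOPs build fitness landscapes from.

def idw_fitness(x, seeds, p=2.0):
    """seeds: list of (point, value) pairs; x: query point (tuple of floats)."""
    num, den = 0.0, 0.0
    for point, value in seeds:
        d = math.dist(x, point)
        if d == 0.0:
            return value          # exact hit on a seed point
        w = 1.0 / d ** p          # p tunes the ruggedness of the landscape
        num += w * value
        den += w
    return num / den

seeds = [((0.0, 0.0), 1.0), ((2.0, 0.0), 3.0)]
value = idw_fitness((1.0, 0.0), seeds)  # query point midway between the two seeds
```

Larger `p` makes each seed dominate its neighbourhood more strongly, which is one way the tunable landscape features mentioned above can arise.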
- Published
- 2020
- Full Text
- View/download PDF
32. A Homogeneous-Heterogeneous Ensemble of Classifiers
- Author
-
Phuong Minh Nguyen, Trung Hieu Vu, Alan Wee-Chung Liew, John McCall, Nang Van Pham, Anh Vu Luong, and Tien Thanh Nguyen
- Subjects
Flexibility (engineering) ,Training set ,Computer science ,Random projection ,020206 networking & telecommunications ,02 engineering and technology ,Construct (python library) ,Base (topology) ,computer.software_genre ,Ensemble learning ,Set (abstract data type) ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Data mining ,computer - Abstract
In this study, we introduce an ensemble system that combines a homogeneous ensemble and a heterogeneous ensemble in a single framework. Based on the observation that, after applying random projections, the projected data is significantly different from the original data as well as from each other, we construct the homogeneous module by applying random projections to the training data to obtain new training sets. In the heterogeneous module, several learning algorithms train on the new training sets to generate the base classifiers. We propose four combining algorithms based on the Sum Rule and the Majority Vote Rule for the proposed ensemble. Experiments on some popular datasets confirm that the proposed ensemble method is better than several well-known benchmark algorithms. The proposed framework has great flexibility when applied to real-world applications: any technique that generates rich training data can be used in the homogeneous module, and any set of learning algorithms can be used in the heterogeneous module.
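The homogeneous module's random projections can be sketched as follows; the matrix dimensions, Gaussian entries and toy data are assumptions for illustration.

```python
import random

# Sketch of the homogeneous module: each random projection maps the original
# features into a new space, yielding a distinct training set from the same
# data, on which the heterogeneous module's learners would then be trained.

def random_projection(X, out_dim, seed):
    """Project each row of X (list of lists) to `out_dim` dimensions."""
    rng = random.Random(seed)
    in_dim = len(X[0])
    R = [[rng.gauss(0, 1) for _ in range(out_dim)] for _ in range(in_dim)]
    return [[sum(x[i] * R[i][j] for i in range(in_dim))
             for j in range(out_dim)] for x in X]

X = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
views = [random_projection(X, 2, seed) for seed in range(3)]  # 3 new training sets
```

Each seed yields a different projection, so the base classifiers trained on the views see genuinely different representations of the same samples.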
- Published
- 2020
- Full Text
- View/download PDF
33. Toward an Ensemble of Object Detectors
- Author
-
John McCall, Truong Dang, and Tien Thanh Nguyen
- Subjects
Physics::Instrumentation and Detectors ,Computer science ,business.industry ,Detector ,Particle swarm optimization ,Pattern recognition ,02 engineering and technology ,010501 environmental sciences ,Object (computer science) ,01 natural sciences ,Ensemble learning ,Field (computer science) ,Evolutionary computation ,Object detection ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,High Energy Physics::Experiment ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,0105 earth and related environmental sciences - Abstract
The field of object detection has witnessed great strides in recent years. With the wave of deep neural networks (DNNs), many breakthroughs have been achieved for problems of object detection that were previously thought to be difficult. However, DNN-based approaches have a limitation: some architectures are only suitable for particular types of object. It would therefore be desirable to combine the strengths of different methods to handle objects in different contexts. In this study, we propose an ensemble of object detectors in which individual detectors are adaptively combined for a collaborative decision. The combination is conducted on the outputs of the detectors, including the predicted label and location of each object. We propose a detector selection method to select the suitable detectors and a weight-based combining method to combine the predicted locations of the selected detectors. The parameters of these methods are optimised using Particle Swarm Optimization to maximise the mean Average Precision (mAP) metric. Experiments conducted on the VOC2007 dataset with six object detectors show that our ensemble method outperforms each single detector.
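The weight-based location combination can be sketched as a weighted average of the boxes that the selected detectors predict for one object. The box coordinates, the `(x1, y1, x2, y2)` format and the weights (which PSO would tune in the paper) are assumptions.

```python
# Hedged sketch of combining predicted bounding boxes from selected detectors:
# a per-coordinate weighted average, where the weights stand in for the
# PSO-optimised parameters described above.

def combine_boxes(boxes, weights):
    """boxes: list of (x1, y1, x2, y2) tuples; weights: matching list of floats."""
    total = sum(weights)
    return tuple(sum(w * b[k] for b, w in zip(boxes, weights)) / total
                 for k in range(4))

boxes = [(10, 10, 50, 50), (12, 8, 54, 52)]
fused = combine_boxes(boxes, [0.75, 0.25])  # leans toward the first detector
```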
- Published
- 2020
- Full Text
- View/download PDF
34. Programming Heterogeneous Parallel Machines Using Refactoring and Monte–Carlo Tree Search
- Author
-
Vladimir Janjic, Christopher Brown, Mehdi Goli, John McCall, European Commission, EPSRC, University of St Andrews. School of Computer Science, and University of St Andrews. Centre for Interdisciplinary Research in Computational Algebra
- Subjects
QA75 ,Monte-Carlo tree search ,Computer science ,Heterogenous parallel computing ,QA75 Electronic computers. Computer science ,T-NDAS ,Monte Carlo tree search ,Monte Carlo method ,02 engineering and technology ,Parallel computing ,computer.software_genre ,Theoretical Computer Science ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Algorithmic skeleton ,020207 software engineering ,Tree (data structure) ,Code refactoring ,Theory of computation ,Scalability ,BDC ,Optimisations ,computer ,Software ,Information Systems - Abstract
Funding: This work was supported by the EU Horizon 2020 project TeamPlay (grant number 779882) and the UK EPSRC Discovery grant (EP/P020631/1). This paper presents a new technique for introducing and tuning parallelism for heterogeneous shared-memory systems (comprising a mixture of CPUs and GPUs), using a combination of algorithmic skeletons (such as farms and pipelines), Monte–Carlo tree search for deriving mappings of tasks to available hardware resources, and refactoring tool support for applying the patterns and mappings in an easy and effective way. Using our approach, we demonstrate easily obtainable, significant and scalable speedups on a number of case studies, with speedups of up to 41× over the sequential code on a 24-core machine with one GPU. We also demonstrate that the speedups obtained from mappings derived by the MCTS algorithm are within 5–15% of the best manual parallelisation.
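As a hedged illustration of the search component, the sketch below runs a toy UCT-style Monte–Carlo tree search that maps each task to a CPU or a GPU to minimise makespan. The task costs, single GPU speedup factor and reward model are assumptions, not the paper's cost model or tooling.

```python
import math, random

# Toy UCT sketch of the mapping search: assign each task to "cpu" or "gpu",
# reward = negative makespan of the complete mapping.

TASKS = [4.0, 3.0, 2.0, 2.0]   # per-task cost on the CPU (assumed)
GPU_SPEEDUP = 2.0              # GPU runs a task twice as fast (assumed)

def makespan(mapping):
    cpu = sum(c for c, d in zip(TASKS, mapping) if d == "cpu")
    gpu = sum(c / GPU_SPEEDUP for c, d in zip(TASKS, mapping) if d == "gpu")
    return max(cpu, gpu)

def uct_search(iters=2000, c=1.4, seed=0):
    rng = random.Random(seed)
    stats = {}                          # partial mapping -> (visits, total reward)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        node, path = (), [()]
        while len(node) < len(TASKS):   # descend, expanding unvisited children first
            children = [node + (d,) for d in ("cpu", "gpu")]
            unvisited = [ch for ch in children if ch not in stats]
            if unvisited:
                node = rng.choice(unvisited)
            else:
                parent_visits = stats.get(node, (1, 0.0))[0]
                node = max(children, key=lambda ch:
                           stats[ch][1] / stats[ch][0]
                           + c * math.sqrt(math.log(parent_visits + 1) / stats[ch][0]))
            path.append(node)
        cost = makespan(node)
        if cost < best_cost:
            best, best_cost = node, cost
        for p in path:                  # back-propagate the reward
            v, t = stats.get(p, (0, 0.0))
            stats[p] = (v + 1, t - cost)
    return best

best = uct_search()
```

With only four binary decisions the tree is tiny, but the same selection/expansion/back-propagation loop scales to the much larger mapping spaces the paper targets.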
- Published
- 2020
- Full Text
- View/download PDF
35. Simultaneous meta-data and meta-classifier selection in multiple classifier system
- Author
-
Thi Minh Van Nguyen, Anh Vu Luong, Tien Thanh Nguyen, John McCall, Trong Sy Ha, and Alan Wee-Chung Liew
- Subjects
Training set ,Computer science ,business.industry ,Ant colony optimization algorithms ,Feature selection ,Pattern recognition ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Multiple classifier ,Cross-validation ,Metadata ,ComputingMethodologies_PATTERNRECOGNITION ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Classifier (UML) - Abstract
In ensemble systems, the predictions of base classifiers are aggregated by a combining algorithm (meta-classifier) to achieve better classification accuracy than a single classifier. Experiments show that the performance of ensembles depends significantly on the choice of meta-classifier. Classifier selection methods applied to an ensemble usually remove all the predictions of a classifier if that classifier is not selected for the final ensemble. Here we present an idea to remove only a subset of each classifier's predictions, thereby introducing a simultaneous meta-data and meta-classifier selection method for ensemble systems. Our approach uses cross-validation on the training set to generate meta-data as the predictions of the base classifiers. We then use Ant Colony Optimization to search for the optimal subset of meta-data and the meta-classifier for the data. Considering each column of meta-data, we construct a configuration comprising a subset of these columns and a meta-classifier. Specifically, the columns are selected according to their corresponding pheromones, and the meta-classifier is chosen at random. The classification accuracy of each configuration is computed based on cross-validation on the meta-data. Experiments on UCI datasets show the advantage of the proposed method compared to several classifier selection and feature selection methods for ensemble systems.
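A minimal sketch of the pheromone-driven column choice follows (only the meta-data column part; the random meta-classifier choice is omitted). The initial pheromone, evaporation rate and toy scoring function are assumptions.

```python
import random

# Sketch of pheromone-based meta-data column selection: each column carries a
# pheromone value used as its selection probability; pheromones of columns in
# the best configuration so far are reinforced, all others evaporate.

def aco_select(n_cols, evaluate, iters=50, rho=0.1, seed=1):
    rng = random.Random(seed)
    pheromone = [0.5] * n_cols                 # initial selection probability
    best = list(range(n_cols))
    best_score = evaluate(best)
    for _ in range(iters):
        cols = [i for i in range(n_cols) if rng.random() < pheromone[i]]
        if not cols:
            cols = [rng.randrange(n_cols)]
        if evaluate(cols) > best_score:
            best, best_score = cols, evaluate(cols)
        for i in range(n_cols):                # evaporate, then reinforce best
            pheromone[i] = (1 - rho) * pheromone[i] + (rho if i in best else 0.0)
    return best, best_score

# Toy stand-in for cross-validated accuracy: columns 0 and 2 are informative,
# every other selected column adds a small penalty.
score = lambda cols: len({0, 2} & set(cols)) - 0.1 * len(set(cols) - {0, 2})
best, s = aco_select(6, score)
```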
- Published
- 2019
- Full Text
- View/download PDF
36. On the definition of dynamic permutation problems under landscape rotation
- Author
-
Joan Alza, John McCall, Mark Bartlett, and Josu Ceberio
- Subjects
Permutation ,010201 computation theory & mathematics ,Computer science ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0102 computer and information sciences ,02 engineering and technology ,Flow shop scheduling ,01 natural sciences ,Algorithm ,Evolutionary computation - Abstract
Dynamic optimisation problems (DOPs) are optimisation problems that change over time. Typically, DOPs have been defined as a sequence of static problems, and the dynamism has been inserted into existing static problems using different techniques. In the case of dynamic permutation problems, this has usually been done by rotation of the landscape. This technique modifies the encoding of the problem while maintaining its structure over time. Commonly, the changes are performed based on the previous state, recreating a concatenated changing problem. However, despite its simplicity, our intuition is that, in general, landscape rotation may induce severe changes that lead to problems whose resemblance to the previous state is limited, if not null. In that case, the problem should not be classified as a DOP, but as a sequence of unrelated problems. To test this, we consider the flow shop scheduling problem (FSSP) as a case study together with the rotation technique that relabels the encoding of the problem according to a permutation. In a wide experimental study, we compare the performance of two versions of the state-of-the-art algorithm for this problem: an adaptive version that benefits from previous knowledge and a restarting version. The conducted experiments confirm our intuition and reveal that, surprisingly, it is preferable to restart the search when the problem changes, even for some slight rotations. Consequently, this work calls into question the use of the rotation technique to recreate dynamic permutation problems.
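The relabelling form of rotation discussed above can be sketched in a few lines; the 4-job example and the particular relabelling permutation are illustrative assumptions.

```python
# Sketch of landscape rotation by relabelling: at a change point, a fixed
# permutation `relabel` is applied to the encoding, so every candidate
# solution is mapped to a new point while the problem structure is preserved.

def rotate(solution, relabel):
    """Apply the relabelling permutation to a candidate solution."""
    return [relabel[x] for x in solution]

relabel = [2, 0, 3, 1]            # the "rotation" applied at a change point
solution = [0, 1, 2, 3]
print(rotate(solution, relabel))  # [2, 0, 3, 1]
```

Even this small example shows why a rotation can be drastic: a solution and its rotated image may share no positions at all, which is exactly the limited resemblance the abstract argues about.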
- Published
- 2019
- Full Text
- View/download PDF
37. Limitations of benchmark sets and landscape features for algorithm selection and performance prediction
- Author
-
John McCall and Benjamin Lacroix
- Subjects
business.industry ,Computer science ,Feature vector ,0102 computer and information sciences ,02 engineering and technology ,Space (commercial competition) ,Machine learning ,computer.software_genre ,01 natural sciences ,Set (abstract data type) ,010201 computation theory & mathematics ,Differential evolution ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,Feature (machine learning) ,Performance prediction ,020201 artificial intelligence & image processing ,Case-based reasoning ,Artificial intelligence ,business ,computer - Abstract
Benchmark sets and landscape features are used to test algorithms and to train models to perform algorithm selection or configuration. These approaches are based on the assumption that algorithms have similar performances on problems with similar feature sets. In this paper, we test different configurations of differential evolution (DE) against the BBOB set. We then use the landscape features of those problems and a case-based reasoning approach for DE configuration selection. We show that, although this method obtains good results for BBOB problems, it fails to select the best configurations when facing a new set of optimisation problems with a distinct array of landscape features. This demonstrates the limitations of the BBOB set for algorithm selection. Moreover, by examining the relationship between features and algorithm performance, we show that there is no correlation between the feature space and the performance space. We conclude by identifying some important open questions raised by this work.
- Published
- 2019
- Full Text
- View/download PDF
38. Introducing the Dynamic Customer Location-Allocation Problem
- Author
-
Anthony Conway, John McCall, Reginald Ankrah, Benjamin Lacroix, and Andrew Hardwick
- Subjects
Service (business) ,Mathematical optimization ,021103 operations research ,Stochastic process ,Search algorithm ,Computer science ,0211 other engineering and technologies ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Resource management ,Location-allocation ,02 engineering and technology ,Replication (computing) - Abstract
In this paper, we introduce a new stochastic Location-Allocation Problem which assumes the movement of customers over time. We call this new problem the Dynamic Customer Location-Allocation Problem (DC-LAP). The problem is based on the idea that customers will change locations over a defined horizon, and these changes have to be taken into account when establishing facilities to service customer demands. We generate 1440 problem instances by varying the movement rate, which determines the possible number of times a customer will change locations over the defined period, the number of facilities and the number of customers. We analyse the characteristics of the generated instances by testing a search algorithm using a stochastic dynamic evaluation (based on the replication of customer movement scenarios) and a deterministic static evaluation (based on the assumption that customers will not move over time). We show that the dynamic approach obtains globally better results, but its performance is highly related to the parameters of the problem. Moreover, the dynamic approach incurs a significantly higher computational overhead.
- Published
- 2019
- Full Text
- View/download PDF
39. Truck and trailer scheduling in a real world, dynamic and heterogeneous context
- Author
-
John McCall, Steven Anderson, Olivier Regnier-Coudert, and Mayowa Ayodele
- Subjects
Truck ,050210 logistics & transportation ,Engineering ,021103 operations research ,Operations research ,business.industry ,Reliability (computer networking) ,05 social sciences ,Trailer ,0211 other engineering and technologies ,Transportation ,Context (language use) ,02 engineering and technology ,Solver ,Automotive engineering ,Scheduling (computing) ,Dynamic simulation ,0502 economics and business ,Vehicle routing problem ,Business and International Management ,business ,Civil and Structural Engineering - Abstract
We present a new variant of the Vehicle Routing Problem based on a real industrial scenario. This VRP is dynamic and heavily constrained and uses time-windows, a heterogeneous vehicle fleet and multiple types of job. A constructive solver is developed and tested using dynamic simulation of real-world data from a leading Scottish haulier. Our experiments establish the efficiency and reliability of the method for this problem. Additionally, a methodology for evaluating policy changes through simulation is presented, showing that our technique supports operations and management. We establish that fleet size can be reduced or more jobs handled by the company.
- Published
- 2016
- Full Text
- View/download PDF
40. Evolving interval-based representation for multiple classifier fusion
- Author
-
Vimal Anand Baghel, John McCall, Anh Vu Luong, Tien Thanh Nguyen, Manh Truong Dang, and Alan Wee-Chung Liew
- Subjects
Information Systems and Management ,Artificial neural network ,business.industry ,Computer science ,Particle swarm optimization ,Pattern recognition ,02 engineering and technology ,Interval (mathematics) ,Base (topology) ,Class (biology) ,Management Information Systems ,Support vector machine ,Set (abstract data type) ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Representation (mathematics) ,business ,Software - Abstract
Designing an ensemble of classifiers is one of the popular research topics in machine learning, since it can give better results than using each constituent member. Furthermore, the performance of an ensemble can be improved using selection or adaptation. In the former, the optimal set of base classifiers, meta-classifier, original features, or meta-data is selected to obtain a better ensemble than using all the classifiers and features. In the latter, the base classifiers or the combining algorithms working on the outputs of the base classifiers are made to adapt to a particular problem. Adaptation here means that the parameters of these algorithms are trained to be optimal for each problem. In this study, we propose a novel evolving combining algorithm using the adaptation approach for ensemble systems. Instead of using a single numerical value when computing the representation for each class, we propose an interval-based representation for the class. The optimal values of the representation are found through Particle Swarm Optimization. During classification, a test instance is assigned to the class whose interval-based representation is closest to the base classifiers' predictions. Experiments conducted on a number of popular datasets confirmed that the proposed method is better than the well-known ensemble systems using Decision Template and Sum Rule as combiners, L2-loss Linear Support Vector Machine, Multiple Layer Neural Network, and the ensemble selection methods based on GA-Meta-data, META-DES, and ACO.
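The interval-based class representation can be sketched as follows. The intervals and meta-data are toy values; in the paper the interval bounds are the parameters that PSO tunes.

```python
# Hedged sketch of interval-based class representation: each class stores one
# interval per meta-feature, and a test sample is assigned to the class whose
# intervals its meta-data (the base classifiers' outputs) is closest to.

def interval_distance(meta, intervals):
    d = 0.0
    for v, (lo, hi) in zip(meta, intervals):
        if v < lo:
            d += lo - v
        elif v > hi:
            d += v - hi   # zero contribution when v lies inside the interval
    return d

class_reps = {
    "A": [(0.6, 1.0), (0.0, 0.4)],   # base classifiers confident in class A
    "B": [(0.0, 0.4), (0.6, 1.0)],
}
meta = [0.8, 0.3]                     # outputs of two base classifiers
label = min(class_reps, key=lambda c: interval_distance(meta, class_reps[c]))
print(label)  # "A"
```

Using an interval instead of a point value lets a class representation absorb the output spread that skewed or imbalanced data induces in the base classifiers.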
- Published
- 2020
- Full Text
- View/download PDF
41. Evolving an Optimal Decision Template for Combining Classifiers
- Author
-
Anh Vu Luong, Tien Thanh Nguyen, Manh Truong Dang, Thi Thu Thuy Nguyen, Alan Wee-Chung Liew, Lan Phuong Dao, and John McCall
- Subjects
0209 industrial biotechnology ,Training set ,Computer science ,business.industry ,02 engineering and technology ,Machine learning ,computer.software_genre ,Ensemble learning ,Cross-validation ,Artificial bee colony algorithm ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Point estimation ,business ,computer ,Classifier (UML) ,Optimal decision - Abstract
In this paper, we aim to develop an effective combining algorithm for ensemble learning systems. The Decision Template method, one of the most popular combining algorithms for ensemble systems, does not perform well on certain datasets, such as those with imbalanced data. Moreover, point estimation by computing the average value of the outputs of the base classifiers in the Decision Template method is sometimes not a good representation, especially for skewed datasets. Here we propose to search for an optimal decision template in the combining algorithm for a heterogeneous ensemble. To do this, we first generate the base classifiers by training the pre-selected learning algorithms on the given training set. The meta-data of the training set is then generated via cross-validation. Using the Artificial Bee Colony algorithm, we search for the optimal template that minimises the empirical 0–1 loss function on the training set. The class label is assigned to an unlabeled sample based on the maximum similarity between the optimal decision template and the sample's meta-data. Experiments conducted on the UCI datasets demonstrated the superiority of the proposed method over several benchmark algorithms.
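The template-matching step can be sketched as below. The template matrices and meta-data are toy values, and squared distance stands in for the similarity measure; in the paper the templates themselves are what the Artificial Bee Colony search optimises.

```python
# Sketch of Decision Template classification: each class keeps a template
# matrix (one row per base classifier, one column per class); a sample goes to
# the class whose template is most similar (here: lowest squared distance) to
# the sample's meta-data.

def squared_distance(template, meta):
    return sum((t - m) ** 2
               for trow, mrow in zip(template, meta)
               for t, m in zip(trow, mrow))

templates = {  # 2 base classifiers x 2 classes (toy values)
    "A": [[0.9, 0.1], [0.8, 0.2]],
    "B": [[0.2, 0.8], [0.1, 0.9]],
}
meta = [[0.7, 0.3], [0.9, 0.1]]   # base classifiers' class probabilities
label = min(templates, key=lambda c: squared_distance(templates[c], meta))
print(label)  # "A"
```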
- Published
- 2019
- Full Text
- View/download PDF
42. Ensemble Selection based on Classifier Prediction Confidence
- Author
-
Alan Wee-Chung Liew, John McCall, Manh Truong Dang, Anh Vu Luong, and Tien Thanh Nguyen
- Subjects
Ensemble selection ,Computer science ,business.industry ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,Ensemble learning ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,0103 physical sciences ,Signal Processing ,Credibility ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,010306 general physics ,business ,Classifier (UML) ,computer ,Software - Abstract
Ensemble selection is one of the most studied topics in ensemble learning because a selected subset of base classifiers may perform better than the whole ensemble system. In recent years, a great many ensemble selection methods have been introduced. However, many of these lack flexibility: either a fixed subset of classifiers is pre-selected for all test samples (static approach), or the selection of classifiers depends upon the performance of techniques that define the region of competence (dynamic approach). In this paper, we propose an ensemble selection method that takes into account each base classifier's confidence during classification and the overall credibility of the base classifier in the ensemble. In other words, a base classifier is selected to predict for a test sample if the confidence in its prediction is higher than its credibility threshold. The credibility thresholds of the base classifiers are found by minimizing the empirical 0–1 loss on the entire training observations. In this way, our approach integrates both the static and dynamic aspects of ensemble selection. Experiments on 62 datasets demonstrate that the proposed method achieves much better performance in comparison to some ensemble methods.
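The selection rule above can be sketched in a few lines: a base classifier contributes to a test sample only when its prediction confidence exceeds its own credibility threshold (which the paper learns by minimising the empirical 0–1 loss). The confidences, thresholds and fallback rule here are illustrative assumptions.

```python
# Hedged sketch of confidence-vs-credibility ensemble selection followed by a
# simple majority vote over the selected base classifiers.

def select_and_vote(predictions, thresholds):
    """predictions: list of (label, confidence), one per base classifier;
    thresholds: the matching per-classifier credibility thresholds."""
    votes = {}
    for (label, conf), thr in zip(predictions, thresholds):
        if conf >= thr:                       # classifier is selected
            votes[label] = votes.get(label, 0) + 1
    if not votes:                             # fall back to the most confident
        label, _ = max(predictions, key=lambda p: p[1])
        return label
    return max(votes, key=votes.get)

preds = [("cat", 0.92), ("dog", 0.55), ("cat", 0.71)]
thresholds = [0.8, 0.7, 0.6]                  # per-classifier credibility
print(select_and_vote(preds, thresholds))     # "cat"
```

Because the thresholds are fixed per classifier but the selection depends on each test sample's confidences, the rule combines the static and dynamic aspects described in the abstract.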
- Published
- 2020
- Full Text
- View/download PDF
43. Hepatitis B virus-related hepatocellular carcinoma presenting at an advanced stage: is it preventable?
- Author
-
Thomas, Mules, Ed, Gane, Oonagh, Lithgow, Adam, Bartlett, and John, McCall
- Subjects
Adult ,Aged, 80 and over ,Male ,Carcinoma, Hepatocellular ,Delayed Diagnosis ,Hepatitis B Surface Antigens ,Incidence ,Liver Neoplasms ,Racial Groups ,Middle Aged ,Hepatitis B ,Young Adult ,Humans ,Female ,Chemoembolization, Therapeutic ,Aged ,New Zealand ,Retrospective Studies - Abstract
Earlier diagnosis of hepatitis B virus (HBV) related hepatocellular carcinoma (HCC) increases treatment options and survival. The aim of this study is to evaluate which factors are associated with late presentation of HBV-related HCC.This is a retrospective review of all cases of HBV-related HCC diagnosed with late-stage/incurable HCC in New Zealand between 2003 and 2017. Cases were defined as patients with a positive hepatitis B surface antigen (HBsAg), and advanced (not amenable to potentially curable treatments) HCC at initial diagnosis. Patients were categorised into four groups according to potential reasons for late presentation: no previous diagnosis of HBV infection (Group A); known HBV diagnosis but not receiving HCC surveillance (Group B); known HBV diagnosis and receiving suboptimal HCC surveillance (Group C); and known HBV diagnosis and receiving optimised HCC surveillance (Group D).A total of 368 patients were reviewed. The average age at death was 59 years, and the majority of patients were Māori (39%), Pacific (34%) or Asian (20%). The incidence of patients presenting with HBV-related advanced HCC increased from 4.5 cases to 6.3 cases per million people over the review period. Of the cases, 40% were categorised into Group A, 26% into Group B, 12% into Group C and 23% into Group D. Overall, the median survival was 138 days, and this did not change during the study period. Patients receiving optimised surveillance (Group D) survived longer (mean 469 days) than patients in Group A (90 days), Group B (145 days) or Group C (152 days) (p<0.05). Patients in Group D were more likely to be treated with transarterial chemoembolisation than patients in other groups (40% vs 15%, p<0.05).This study has highlighted the need for improved rates of HBV diagnosis, better follow-up of those infected and the importance of optimal HCC surveillance.
In New Zealand, HBV-related HCC disproportionately affects minority ethnic groups, and given the increasing incidence, provides a potential domain to reduce health inequities.
- Published
- 2018
44. An Analysis of Indirect Optimisation Strategies for Scheduling
- Author
-
Charles Neau, Olivier Regnier-Coudert, and John McCall
- Subjects
Schedule ,education.field_of_study ,Mathematical optimization ,021103 operations research ,Job shop scheduling ,business.industry ,Computer science ,Population ,0211 other engineering and technologies ,02 engineering and technology ,Scheduling (computing) ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Local search (optimization) ,Greedy algorithm ,business ,education ,Hill climbing ,Decoding methods - Abstract
By incorporating domain knowledge, simple greedy procedures can be defined to generate reasonably good solutions to many optimisation problems. However, such solutions are unlikely to be optimal, and their quality often depends on the way the decision variables are input to the greedy method. Indirect optimisation uses meta-heuristics to optimise the input of the greedy decoders. As the performance and the runtime differ across greedy methods and meta-heuristics, deciding how to split the computational effort between the two sides of the optimisation is not trivial and can significantly impact the search. In this paper, an artificial scheduling problem is presented along with five greedy procedures using varying levels of domain information. A methodology to compare different indirect optimisation strategies is presented using a simple Hill Climber, a Genetic Algorithm and a population-based Local Search. By assessing all combinations of meta-heuristics and greedy procedures on a range of problem instances with different properties, experiments show that encapsulating problem knowledge within greedy decoders may not always prove successful and that simpler methods can lead to results comparable to advanced ones when combined with meta-heuristics adapted to the problem. However, the use of efficient greedy procedures reduces the relative difference between meta-heuristics.
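The indirect scheme can be sketched as a hill climber that searches over the *input order* fed to a greedy decoder rather than over schedules directly. The two-machine list-scheduling decoder and the job durations are illustrative assumptions, not the paper's artificial problem.

```python
import random

# Sketch of indirect optimisation: the meta-heuristic (a simple hill climber)
# permutes the job order; the greedy decoder turns any order into a schedule.

def greedy_decode(order, durations, machines=2):
    loads = [0.0] * machines
    for job in order:                      # assign each job to the lightest machine
        loads[loads.index(min(loads))] += durations[job]
    return max(loads)                      # makespan of the decoded schedule

def hill_climb(durations, iters=200, seed=3):
    rng = random.Random(seed)
    order = list(range(len(durations)))
    best = greedy_decode(order, durations)
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # try a swap
        cost = greedy_decode(order, durations)
        if cost <= best:
            best = cost
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return order, best

durations = [3.0, 1.0, 4.0, 1.0, 5.0, 2.0]
order, best_cost = hill_climb(durations)
```

All domain knowledge lives in `greedy_decode`; swapping in a smarter decoder or a stronger meta-heuristic changes the effort split the abstract discusses without touching the other side.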
- Published
- 2018
- Full Text
- View/download PDF
45. Iterated Racing Algorithm for Simulation-Optimisation of Maintenance Planning
- Author
-
Benjamin Lacroix, Jerome Lonchampt, and John McCall
- Subjects
education.field_of_study ,021103 operations research ,Population size ,Population ,0211 other engineering and technologies ,02 engineering and technology ,Maintenance engineering ,Iterated function ,Spare part ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Unavailability ,education ,Algorithm ,Statistical hypothesis testing - Abstract
The purpose of this paper is twofold. First, we present a set of benchmark problems for maintenance optimisation called VMELight. This model allows the user to define the number of components in the system to maintain and a number of customisable parameters, such as the failure distribution of the components, the spare-part stock level and all the costs associated with preventive and corrective maintenance, unavailability and spare parts. From this model, we create a benchmark of 175 optimisation problems across different dimensions. This benchmark allows us to test the idea of using an iterated racing algorithm called IRACE, based on the Friedman statistical test, to reduce the number of simulations needed to compare solutions in the population. We assess different population sizes and truncation rates to show that these parameters can have a strong influence on the performance of the algorithm.
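A toy racing loop in the spirit of the above can be sketched as follows. Note the simplification: real iterated racing (IRACE) eliminates candidates with a Friedman test, whereas this sketch uses a fixed mean-gap margin; the candidate costs and noise model are simulated assumptions.

```python
import random, statistics

# Sketch of racing: candidates are evaluated instance by instance, and a
# candidate is dropped once its running mean cost trails the current leader
# by more than `margin`, saving simulations on clearly inferior candidates.

def race(candidates, simulate, n_instances=30, margin=0.5, seed=7):
    rng = random.Random(seed)
    results = {c: [] for c in candidates}
    for instance in range(n_instances):
        for c in results:
            results[c].append(simulate(c, instance, rng))
        means = {c: statistics.mean(v) for c, v in results.items()}
        leader = min(means.values())
        results = {c: v for c, v in results.items()
                   if means[c] <= leader + margin}   # the leader always survives
    return min(results, key=lambda c: statistics.mean(results[c]))

# Simulated maintenance cost per instance: candidate "b" is best on average.
costs = {"a": 3.0, "b": 2.0, "c": 4.0}
simulate = lambda c, i, rng: costs[c] + rng.gauss(0, 0.1)
winner = race(list(costs), simulate)
```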
- Published
- 2018
- Full Text
- View/download PDF
46. Performance Analysis of GA and PBIL Variants for Real-World Location-Allocation Problems
- Author
-
Olivier Regnier-Coudert, Anthony Conway, Andrew Hardwick, Reginald Ankrah, and John McCall
- Subjects
0209 industrial biotechnology ,education.field_of_study ,Mathematical optimization ,Heuristic (computer science) ,Population ,Crossover ,02 engineering and technology ,Tournament selection ,Set (abstract data type) ,020901 industrial engineering & automation ,Mutation (genetic algorithm) ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Location-allocation ,education - Abstract
The Uncapacitated Location-Allocation Problem (ULAP) is a major optimisation problem concerning the determination of the optimal location of facilities and the allocation of demand to them. In this paper, we present two novel problem variants of non-linear ULAP motivated by a real-world problem from the telecommunication industry: the Uncapacitated Location-Allocation Resilience Problem (ULARP) and the Uncapacitated Location-Allocation Resilience Problem with Restrictions (ULARPR). Problem sizes ranging from 16 to 100 facilities by 50 to 10000 demand points are considered. To solve the problems, we explore the components and configurations of four Genetic Algorithms [1]–[3] and [4] selected from the ULAP literature. We aim to understand the contribution each choice makes to GA performance and so hope to design an optimal GA configuration for the novel problems. We also conduct comparative experiments with the Population-Based Incremental Learning (PBIL) algorithm on ULAP. We show the effectiveness of PBIL and of a GA with the parameter set: random and heuristic initialisation, tournament and fine-grained tournament selection, uniform crossover and bit-flip mutation, in solving the proposed problems.
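PBIL's core update can be sketched in a few lines. OneMax is used here as a stand-in objective, and the population size, learning rate and generation count are assumptions; the paper applies PBIL to location-allocation encodings instead.

```python
import random

# Minimal PBIL sketch: a probability vector over bits is sampled to create a
# population, then shifted toward the best sample of each generation.

def pbil(n_bits, fitness, pop=20, lr=0.1, gens=100, seed=5):
    rng = random.Random(seed)
    prob = [0.5] * n_bits                          # one probability per bit
    best, best_fit = None, float("-inf")
    for _ in range(gens):
        samples = [[1 if rng.random() < p else 0 for p in prob]
                   for _ in range(pop)]
        elite = max(samples, key=fitness)
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
        # move the probability vector toward the elite sample
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, elite)]
    return best

best = pbil(12, sum)   # OneMax: maximise the number of ones
```

Unlike a GA, PBIL carries no explicit population between generations; the probability vector is the entire model, which is what makes it a natural baseline against the GA configurations above.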
- Published
- 2018
- Full Text
- View/download PDF
47. Tactical Plan Optimisation for Large Multi-Skilled Workforces Using a Bi-Level Model
- Author
-
Gilbert Owusu, Sid Shakya, Russell Ainslie, and John McCall
- Subjects
Matching (statistics) ,Linear programming ,Operations research ,Computer science ,business.industry ,Process (engineering) ,media_common.quotation_subject ,05 social sciences ,050301 education ,Tactical planning ,02 engineering and technology ,Automation ,Capacity planning ,Service (economics) ,0202 electrical engineering, electronic engineering, information engineering ,Revenue ,020201 artificial intelligence & image processing ,business ,0503 education ,Tertiary sector of the economy ,media_common - Abstract
The service chain planning process is a critical component in the operations of companies in the service industry, such as logistics, telecoms or utilities. This process involves looking ahead over various timescales to ensure that available capacity matches the required demand whilst maximizing revenues and minimizing costs. This problem is particularly complex for companies with large, multi-skilled workforces as matching these resources to the required demand can be done in a vast number of combinations. The vastness of the problem space combined with the criticality to the business is leading to an increasing move towards automation of the process in recent years. In this paper we focus on the tactical plan where planning is occurring daily for the coming weeks, matching the available capacity to demand, using capacity levers to flex capacity to keep backlogs within target levels whilst maintaining target levels for provision of new revenues. First we describe the tactical planning problem before defining a bi-level model to search for optimal solutions to it. We show, by comparing the model results to actual planners on real world examples, that the bi-level model produces good results that replicate the planners' process whilst keeping the backlogs closer to target levels, thus providing a strong case for its use in the automation of the tactical planning process.
- Published
- 2018
- Full Text
- View/download PDF
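The bi-level structure the abstract describes can be sketched as a nested search: an upper level chooses capacity-lever settings, and a lower level simulates how the resulting capacity absorbs demand and carries backlog. This is a minimal illustrative sketch, not the paper's model; the lever set, demand figures, and backlog-deviation score are all invented for the example.

```python
# Bi-level tactical planning sketch (illustrative; all numbers are invented).

def lower_level(capacity, demand, backlog):
    """Roll capacity against demand day by day, carrying unmet work forward
    as backlog. Returns the backlog level at the end of each day."""
    backlogs = []
    for cap, dem in zip(capacity, demand):
        backlog = max(0.0, backlog + dem - cap)
        backlogs.append(backlog)
    return backlogs

def upper_level(base_capacity, demand, levers, target_backlog):
    """Enumerate capacity-lever settings (here, a single 'extra capacity'
    lever such as overtime) and keep the plan whose daily backlogs stay
    closest to the target level."""
    best = None
    for extra in levers:
        capacity = [c + extra for c in base_capacity]
        backlogs = lower_level(capacity, demand, backlog=target_backlog)
        score = sum(abs(b - target_backlog) for b in backlogs)
        if best is None or score < best[0]:
            best = (score, extra, backlogs)
    return best

score, extra, backlogs = upper_level(
    base_capacity=[100, 100, 100, 100, 100],
    demand=[110, 120, 95, 105, 115],
    levers=[0, 5, 10, 15, 20],
    target_backlog=10.0,
)
print(extra, [round(b, 1) for b in backlogs])  # best lever and daily backlogs
```

Note the design point the abstract makes: the objective is not to drive backlog to zero but to hold it near a target level, which is why the score penalises deviation in both directions.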
48. A Holistic Metric Approach to Solving the Dynamic Location-Allocation Problem
- Author
-
Andrew Hardwick, John McCall, Benjamin Lacroix, Anthony Conway, and Reginald Ankrah
- Subjects
Mathematical optimization, Computer science, Stochastic modelling, Genetic algorithm, Location-allocation, Metric (mathematics)
In this paper, we introduce a dynamic variant of the location-allocation problem: the Dynamic Location-Allocation Problem (DULAP). DULAP involves locating facilities to service a set of customer demands over a defined horizon. To evaluate a solution to DULAP, we propose two holistic metric approaches: a static approach and a dynamic approach. In the static approach, a solution is evaluated under the assumption that customer locations and demand remain constant over the defined horizon. In the dynamic approach, the assumption is that customer demand and demographic patterns may change over the defined horizon. We introduce a stochastic model to simulate customer population and distribution over time. We use a Genetic Algorithm and a Population-Based Incremental Learning algorithm from previous work to find robust and satisfactory solutions to DULAP. Results show that the dynamic evaluation approach finds good and robust solutions.
- Published
- 2018
- Full Text
- View/download PDF
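Population-Based Incremental Learning, one of the two algorithms the abstract mentions, can be sketched on a toy static location-allocation instance: a probability vector over open/closed decisions is sampled to create a population, and the vector is nudged towards the best sample each generation. This is a minimal sketch under invented data; the site coordinates, customer locations, opening cost, and learning rate are all made up for illustration and are not from the paper.

```python
# PBIL sketch for a toy facility-location choice (illustrative; data invented).
import random

random.seed(0)

SITES = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]     # candidate facility sites
CUSTOMERS = [(1.0, 1.0), (4.0, 6.0), (9.0, 1.0)]  # customer locations
OPEN_COST = 3.0                                    # cost per open facility

def cost(mask):
    """Total cost: opening cost for each open site plus each customer's
    Euclidean distance to its nearest open site."""
    open_sites = [s for s, bit in zip(SITES, mask) if bit]
    if not open_sites:
        return float("inf")  # no facility: infeasible
    dist = sum(
        min(((cx - sx) ** 2 + (cy - sy) ** 2) ** 0.5 for sx, sy in open_sites)
        for cx, cy in CUSTOMERS
    )
    return OPEN_COST * len(open_sites) + dist

prob = [0.5] * len(SITES)  # PBIL probability vector over open/closed bits
for _ in range(50):
    # sample a population of binary solutions from the probability vector
    pop = [[1 if random.random() < p else 0 for p in prob] for _ in range(20)]
    best = min(pop, key=cost)
    # nudge the probability vector towards the best sample (learning rate 0.1)
    prob = [p + 0.1 * (b - p) for p, b in zip(prob, best)]

solution = [1 if p > 0.5 else 0 for p in prob]
print(solution, round(cost(solution), 2))
```

The dynamic variant the paper proposes would re-evaluate `cost` against simulated future customer populations rather than this fixed snapshot, favouring solutions that stay good as demand drifts.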
49. Albendazole and antibiotics synergize to deliver short-course anti-Wolbachia curative treatments in preclinical models of filariasis
- Author
-
Joseph D. Turner, Raman Sharma, Ghaith Al Jayoussi, Hayley E. Tyrer, Joanne Gamble, Laura Hayward, Richard S. Priestley, Emma A. Murphy, Jill Davies, David Waterhouse, Darren A. N. Cook, Rachel H. Clare, Andrew Cassidy, Andrew Steven, Kelly L. Johnston, John McCall, Louise Ford, Janet Hemingway, Stephen A. Ward, and Mark J. Taylor
- Subjects
Pharmacology, Mice (inbred BALB/c), Albendazole, Drug synergism, Minocycline, Rifampin, Anti-bacterial agents, Filariasis, Combination therapy, Macrofilaricide, Brugia malayi, Wolbachia
Significance: Filarial nematode infections, caused by Wuchereria bancrofti, Brugia malayi (elephantiasis), and Onchocerca volvulus (river blindness), infect 150 million of the world's poorest populations and cause profound disability. Standard treatments require repetitive, long-term mass drug administrations and have failed to interrupt transmission in certain sub-Saharan African regions. A drug cure using doxycycline, which targets the essential filarial endosymbiont Wolbachia, is clinically effective but programmatically challenging to implement due to long treatment durations and contraindications. Here we provide proof-of-concept of a radical improvement in targeting Wolbachia via identification of drug synergy between the anthelmintic albendazole and antibiotics. This synergy enables the shortening of treatment duration of macrofilaricidal anti-Wolbachia-based treatments from 4 wk to 7 d with registered drugs ready for clinical testing.
Elimination of filariasis requires a macrofilaricide treatment that can be delivered within a 7-day period. Here we have identified a synergy between the anthelmintic albendazole (ABZ) and drugs depleting the filarial endosymbiont Wolbachia, a proven macrofilaricide target, which reduces treatment from several weeks to 7 days in preclinical models. ABZ had negligible effects on Wolbachia but synergized with minocycline or rifampicin (RIF) to deplete symbionts, block embryogenesis, and stop microfilariae production. Greater than 99% Wolbachia depletion following a 7-day combination of RIF+ABZ also led to accelerated macrofilaricidal activity. Thus, we provide preclinical proof-of-concept of treatment shortening using antibiotic+ABZ combinations to deliver anti-Wolbachia sterilizing and macrofilaricidal effects. Our data are of immediate public health importance, as RIF and ABZ are registered drugs and thus immediately implementable to deliver a 1-wk macrofilaricide. They also suggest that novel, more potent anti-Wolbachia drugs under development may be capable of delivering further treatment shortening, to days rather than weeks, if combined with benzimidazoles.
- Published
- 2017
50. Albendazole and antibiotics synergize to deliver short-course anti-Wolbachia curative treatments in preclinical models of filariasis
- Author
-
Mark J. Taylor, Rachel H. Clare, Louise Ford, Andrew Steven, Richard S. Priestley, Emma A Murphy, Joseph D. Turner, Janet Hemingway, Jill Davies, John McCall, David Waterhouse, Stephen A. Ward, Andrew Cassidy, Hayley E. Tyrer, Kelly L. Johnston, Raman Sharma, Darren A. N. Cook, Laura Hayward, Joanne Gamble, and Ghaith Al Jayoussi
- Subjects
Combination therapy, Antibiotics, Pharmacology, Albendazole, Anthelmintic, Filariasis, Macrofilaricide, Synergy, Wolbachia
Significance: Filarial nematode infections, caused by Wuchereria bancrofti, Brugia malayi (elephantiasis), and Onchocerca volvulus (river blindness), infect 150 million of the world's poorest populations and cause profound disability. Standard treatments require repetitive, long-term mass drug administrations and have failed to interrupt transmission in certain sub-Saharan African regions. A drug cure using doxycycline, which targets the essential filarial endosymbiont Wolbachia, is clinically effective but programmatically challenging to implement due to long treatment durations and contraindications. Here we provide proof-of-concept of a radical improvement in targeting Wolbachia via identification of drug synergy between the anthelmintic albendazole and antibiotics. This synergy enables the shortening of treatment duration of macrofilaricidal anti-Wolbachia-based treatments from 4 wk to 7 d with registered drugs ready for clinical testing.
- Published
- 2017
- Full Text
- View/download PDF