1,305 results
Search Results
2. From advancements to ethics: Assessing ChatGPT's role in writing research paper.
- Author
-
Gupta, Vasu, Anamika, FNU, Parikh, Kinna, Patel, Meet A., Jain, Rahul, and Jain, Rohit
- Subjects
CHATGPT, ARTIFICIAL intelligence, BENCHMARKING (Management) - Abstract
Artificial intelligence (AI), with its infinite capabilities, has ushered in an era of transformation in the twenty-first century. ChatGPT (Generative Pre-trained Transformer), an AI language model, has lately been in the spotlight, and there is an increasing partnership between research authors and ChatGPT. Using ChatGPT, authors can set new benchmarks in paper writing in terms of speed, accuracy, consistency, and adaptability. ChatGPT has turned out to be an invaluable tool for manuscript writing, editing, and reference management. While it has numerous advantages, it has been criticised for ethical quandaries, inaccuracies in scientific data and facts, and, most importantly, a lack of critical thinking skills. These disadvantages limit its use in medical publications, since such articles guide the future management of many diseases. While AI can fix issues, it lacks the ability to think like humans and thus cannot substitute for human authors. To better comprehend the future of this technology in research, we discuss the advantages, drawbacks, and ethical dilemmas of using ChatGPT in paper writing by reviewing existing literature on PubMed and Google Scholar and by using ChatGPT itself to understand its prompt responses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. 2022 BenchCouncil International Symposium on benchmarking, measuring and optimizing (Bench 2022) call for papers.
- Author
-
Chunjie Luo and Wanling Gao
- Subjects
BENCHMARKING (Management), DATA management, HARDWARE, COMPUTER software, DATA - Published
- 2022
- Full Text
- View/download PDF
4. Critical Appraisal of a Machine Learning Paper: A Guide for the Neurologist.
- Author
-
Vinny, Pulikottil W., Garg, Rahul, Srivastava, M. V. Padma, Lal, Vivek, and Vishnu, Venugoapalan Y.
- Subjects
*DEEP learning, *NEUROLOGISTS, *EVIDENCE-based medicine, *MACHINE learning, *BENCHMARKING (Management), *TERMS & phrases, *ARTIFICIAL neural networks, *PREDICTION models, *ALGORITHMS - Abstract
Machine learning (ML), a form of artificial intelligence (AI), is being increasingly employed in neurology. Reported performance metrics often match or exceed the efficiency of average clinicians. The neurologist is easily baffled by the underlying concepts and terminologies associated with ML studies. The superlative performance metrics of ML algorithms often hide the opaque nature of their inner workings. Questions regarding an ML model's interpretability and the reproducibility of its results in real-world scenarios need emphasis. Given an abundance of time and information, the expert clinician should be able to deliver predictions comparable to ML models, a useful benchmark when evaluating their performance. Predictive performance metrics of ML models should not be confused with causal inference between input and output. ML and clinical gestalt should compete in a randomized controlled trial before they can complement each other for screening, triaging, providing second opinions and modifying treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. Validation of the INDDEX24 mobile app v. a pen-and-paper 24-hour dietary recall using the weighed food record as a benchmark in Burkina Faso.
- Author
-
Rogers, Beatrice, Somé, Jérome W., Bakun, Peter, Adams, Katherine P., Bell, Winnie, Carroll II, David Alexander, Wafa, Sarah, and Coates, Jennie
- Subjects
NUTRITIONAL assessment, MOBILE apps, RURAL conditions, CROSS-sectional method, FOOD diaries, WOMEN, NUTRITIONAL requirements, INTERVIEWING, SOFTWARE architecture, BENCHMARKING (Management), COMPARATIVE studies, COST effectiveness, DESCRIPTIVE statistics, WRITTEN communication - Abstract
Effective nutrition policies require timely, accurate individual dietary consumption data; collection of such information has been hampered by the cost and complexity of dietary surveys and the lag in producing results. The objective of this work was to assess the accuracy and cost-effectiveness of a streamlined, tablet-based dietary data collection platform for 24-hour individual dietary recalls (24HR) administered using the INDDEX24 platform v. a pen-and-paper interview (PAPI) questionnaire, with a weighed food record (WFR) as a benchmark. This cross-sectional comparative study included women 18–49 years old from rural Burkina Faso (n 116 INDDEX24; n 115 PAPI). A WFR was conducted; the following day, a 24HR was administered by different interviewers. Food consumption data were converted into nutrient intakes. Validity of 24HR estimates of nutrient and food group consumption was based on comparison with the WFR using equivalence tests (group level) and percentages of participants within ranges of percentage error (individual level). Both modalities performed comparably in estimating consumption of macro- and micronutrients, food groups and quantities (the modalities' divergence from the WFR was not significantly different). Accuracy of both modalities was acceptable (equivalence to WFR significant at P < 0·05) at group level for macronutrients, less so for micronutrients and individual-level consumption (percentage within ±20 % of WFR: 17–45 % for macronutrients, 5–17 % for micronutrients). INDDEX24 was more cost-effective than PAPI based on superior accuracy of a composite nutrient intake measure (but not gram amount or item count) owing to lower time and personnel costs. INDDEX24 for 24HR dietary surveys linked to dietary reference data shows comparable accuracy to PAPI at lower cost. [ABSTRACT FROM AUTHOR]
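The group-level equivalence testing this abstract describes can be illustrated with a minimal two-one-sided-tests (TOST) sketch. All numbers below are synthetic stand-ins, not the study's data, and the ±20 % equivalence margin is an illustrative assumption borrowed from the individual-level error range mentioned above.

```python
# TOST equivalence sketch: is the mean difference between a recall-based
# estimate and the weighed-food-record (WFR) benchmark within +/-20% of
# the benchmark mean? Synthetic intakes; n = 115 mirrors the PAPI arm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wfr = rng.normal(60, 10, size=115)           # benchmark intakes, grams
recall = wfr + rng.normal(0, 5, size=115)    # 24HR estimates with noise
diff = recall - wfr

low, high = -0.2 * wfr.mean(), 0.2 * wfr.mean()   # equivalence bounds
# Two one-sided tests: mean(diff) > low AND mean(diff) < high.
p_lower = stats.ttest_1samp(diff, low, alternative="greater").pvalue
p_upper = stats.ttest_1samp(diff, high, alternative="less").pvalue
p_tost = max(p_lower, p_upper)               # TOST p-value
print(f"TOST p = {p_tost:.3g}")              # p < 0.05 -> equivalent
```

With the simulated near-zero bias, both one-sided tests reject and the modalities are declared equivalent within the margin.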
- Published
- 2022
- Full Text
- View/download PDF
6. Scientometric Study of Periodical Literature with Journals "Language Sciences" and "Linguistics and Education".
- Author
-
Mohanty, Barada Kanta, Maharana, Bulu, and Sethi, Bipin Bihari
- Subjects
PERIODICALS on language & languages, SCIENTOMETRICS, LINGUISTICS periodicals, PERIODICALS, BENCHMARKING (Management) - Abstract
This paper seeks to analyze publications indexed in the database of Science Direct Top 25 Hottest Papers in Arts and Humanities to understand the international perspective of research publication dynamics in two core journals: (1st) "Language Sciences" (LS) and (2nd) "Linguistics and Education" (L&E), respectively. This is a comprehensive survey using bibliographic records derived from the Science Direct top 25 hottest papers database during 2005-2014, aiming to give a complete view of the evaluation of research outcomes. Findings of the study revealed that of the 1800 papers examined, 50 percent came from each journal. The top 15 authors of the 1st journal contributed 349 papers (38.77 %) and those of the 2nd journal 281 papers (31.22 %), which amounts to more than one third of the whole contribution. In both journals a major share of papers, 78 and 76 percent respectively, were produced by single authors, while collaborative papers accounted for only 22 and 24 percent. Considering institutional affiliation, the contributing authors were affiliated with 153 and 152 unique institutions spread over a wide range of global geographical regions. The geographical analysis supports cross-national comparison of research practices and serves as a benchmark. The most productive geographical contributor, the USA, added 139 (15.44 %) and 220 (24.44 %) papers to the two journals respectively, maintaining its status of prolificacy in the arena of global research. [ABSTRACT FROM AUTHOR]
- Published
- 2016
7. Position paper: Benchmarking the performance of global and emerging knowledge cities.
- Author
-
Yigitcanlar, Tan
- Subjects
*PERFORMANCE evaluation, *BENCHMARKING (Management), *URBANIZATION, *URBAN growth, *COMPARATIVE studies - Abstract
Highlights: [•] Investigates benchmarked performance of global and emerging knowledge cities. [•] Introduces a knowledge-based urban development performance assessment model. [•] Applies the assessment model into an international comparative study. [•] Reveals insights on scrutinizing the development perspectives of knowledge cities. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
8. From wearable sensor data to digital biomarker development: ten lessons learned and a framework proposal.
- Author
-
Daniore, Paola, Nittas, Vasileios, Haag, Christina, Bernard, Jürgen, Gonzenbach, Roman, and von Wyl, Viktor
- Subjects
DIGITAL technology, CURRICULUM, DATABASE management, MULTIPLE sclerosis, DISEASE management, BENCHMARKING (Management), WEARABLE technology, CHRONIC diseases, CONCEPTUAL structures, BIOMARKERS, ACTIVITIES of daily living - Abstract
Wearable sensor technologies are becoming increasingly relevant in health research, particularly in the context of chronic disease management. They generate real-time health data that can be translated into digital biomarkers, which can provide insights into our health and well-being. Scientific methods to collect, interpret, analyze, and translate health data from wearables to digital biomarkers vary, and systematic approaches to guide these processes are currently lacking. This paper is based on an observational, longitudinal cohort study, BarKA-MS, which collected wearable sensor data on the physical rehabilitation of people living with multiple sclerosis (MS). Based on our experience with BarKA-MS, we provide and discuss ten lessons we learned in relation to digital biomarker development across key study phases. We then summarize these lessons into a guiding framework (DACIA) that aims to inform the use of wearable sensor data for digital biomarker development and chronic disease management in future research and teaching. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Standards for Instrument Migration When Implementing Paper Patient-Reported Outcome Instruments Electronically: Recommendations from a Qualitative Synthesis of Cognitive Interview and Usability Studies.
- Author
-
Muehlhausen, Willie, Byrom, Bill, Skerritt, Barbara, McCarthy, Marie, McDowell, Bryan, and Sohn, Jeremy
- Subjects
*COGNITIVE interviewing, *MEDICAL electronics, *MEDICAL equipment, *BENCHMARKING (Management), *COGNITION, *DECISION making, *INDUSTRIES, *INTERVIEWING, *QUALITATIVE research - Abstract
Objectives: To synthesize the findings of cognitive interview and usability studies performed to assess the measurement equivalence of patient-reported outcome (PRO) instruments migrated from paper to electronic formats (ePRO), and make recommendations regarding future migration validation requirements and ePRO design best practice. Methods: We synthesized findings from all cognitive interview and usability studies performed by a contract research organization between 2012 and 2015: 53 studies comprising 68 unique instruments and 101 instrument evaluations. We summarized study findings to make recommendations for best practice and future validation requirements. Results: Five studies (9%) identified minor findings during cognitive interview that may possibly affect instrument measurement properties. All findings could be addressed by application of ePRO best practice, such as eliminating scrolling, ensuring appropriate font size, ensuring suitable thickness of visual analogue scale lines, and providing suitable instructions. Similarly, regarding solution usability, 49 of the 53 studies (92%) recommended no changes in display clarity, navigation, operation, and completion without help. Reported usability findings could be eliminated by following good product design such as the size, location, and responsiveness of navigation buttons. Conclusions: With the benefit of accumulating evidence, it is possible to relax the need to routinely conduct cognitive interview and usability studies when implementing minor changes during instrument migration. Application of design best practice and selecting vendor solutions with good user interface and user experience properties that have been assessed in a representative group may enable many instrument migrations to be accepted without formal validation studies by instead conducting a structured expert screen review. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. The System Dynamics of Engineer-to-Order Construction Projects: Past, Present, and Future.
- Author
-
Zhou, Yuxuan, Wang, Xun, Gosling, Jonathan, and Naim, Mohamed M.
- Subjects
CONSTRUCTION projects, SYSTEM dynamics, PRODUCTION planning, BENCHMARKING (Management), PRODUCTION control, SHIPBUILDING - Abstract
System dynamics (SD) applications in high-volume production operations are widely used, helping to define decision rules to reduce costs associated with the variance in planning orders and inventory. The exploitation of SD in engineer-to-order (ETO) project-oriented supply chains—e.g., in construction, shipbuilding, and capital goods—is less well established. Hence, this research reviews the literature that takes a systematic ETO perspective in modeling construction projects, exploiting SD approaches. To comprehensively identify and filter previously published works, we used a keyword searching method using Web of Science and Scopus databases. After applying relevant exclusion criteria, 143 papers were selected. Although previous reviews of ETO literature, more generally, have been done, this work contributes to the body of knowledge by specifically reviewing SD applications in ETO industries and providing insights by creating a categorization system by which to determine existing gaps. Articles are categorized into the classic four phases of a project: aggregated planning, preproject planning, project execution, and postdelivery. Analyses of the methods, attributes, and applications of SD were undertaken for each phase. Findings indicate that SD research covers the range of ETO industries, of which construction is the most dominant, demonstrating SD's high applicability. The wealth of case-orientated research in the construction field provides a solid foundation for further SD studies in the ETO field. 
Further research should focus on (1) developing a general ETO archetype for performance benchmarking and strategy development in construction projects; (2) introducing analytical tools, such as control theoretic approaches as found in manufacturing production planning and control design, to improve understanding of the ETO systems' dynamic behaviors; (3) developing cross-phase, cross-project, design–production integrated, aggregated planning models via hybrid techniques modeling, which can improve understanding of an ETO system's performance; and (4) improving model fidelity. We also provide a research agenda for each phase of the ETO production. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. DIAMetrics: Benchmarking Query Engines at Scale.
- Author
-
Deep, Shaleen, Gruenheid, Anja, Nagaraj, Kruthi, Naito, Hiro, Naughton, Jeff, and Viglas, Stratis
- Subjects
BENCHMARKING (Management), SEARCH engines, SOFTWARE measurement, WEBOMETRICS, PROGRAM transformation - Abstract
This paper introduces DIAMetrics: a novel framework for end-to-end benchmarking and performance monitoring of query engines. DIAMetrics consists of a number of components supporting tasks such as automated workload summarization, data anonymization, benchmark execution, monitoring, regression identification, and alerting. The architecture of DIAMetrics is highly modular and supports multiple systems by abstracting their implementation details and relying on common canonical formats and pluggable software drivers. The end result is a powerful unified framework that is capable of supporting every aspect of benchmarking production systems and workloads. DIAMetrics has been developed in Google and is being used to benchmark various internal query engines. In this paper, we give an overview of DIAMetrics and discuss its design and implementation. Furthermore, we provide details about its deployment and example use cases. Given the variety of supported systems and use cases within Google, we argue that its core concepts can be used more widely to enable comparative end-to-end benchmarking in other industrial environments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Methods and Practices for Institutional Benchmarking based on Research Impact and Competitiveness: A Case Study of ShanghaiTech University.
- Author
-
Chang, Jiang and Liu, Jianhua
- Subjects
BENCHMARKING (Management), CASE studies - Abstract
To develop and test a mission-oriented and multi-dimensional benchmarking method for a small-scale university aiming for internationally first-class basic research. An individualized evidence-based assessment scheme was employed to benchmark ShanghaiTech University against selected top research institutions, focusing on research impact and competitiveness at the institutional and disciplinary levels. Topic maps opposing ShanghaiTech and corresponding top institutions were produced for the main research disciplines of ShanghaiTech. This provides opportunities for further exploration of strengths and weaknesses. This study establishes a preliminary framework for assessing the mission of the university. It further provides assessment principles, assessment questions, and indicators. Analytical methods and data sources were tested and proved to be applicable and efficient. To better fit the selective research focuses of this university, its schema of research disciplines needs to be re-organized, and benchmarking targets should include disciplinary top institutions and not necessarily the universities leading overall rankings. Current reliance on research articles and certain databases may neglect important research output types. This study provides a working framework and practical methods for mission-oriented, individual, and multi-dimensional benchmarking that ShanghaiTech decided to use for periodical assessments. It also offers a working reference for other institutions to adapt. Further needs are identified so that ShanghaiTech can tackle them in future benchmarking. This is an effort to develop a mission-oriented, individually designed, systematically structured, and multi-dimensional assessment methodology, which differs from the often-used composite indices. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Electricity price modeling from the perspective of start-up costs: incorporating renewable resources in non-convex markets.
- Author
-
Wesseh Jr., Presley K., Jiaying Chen, and Boqiang Lin
- Subjects
RENEWABLE natural resources, ELECTRIC utility costs, CLEAN energy, SOLAR energy, BENCHMARKING (Management) - Abstract
This paper constructs a comprehensive electricity market model in the context of China, highlighting the deviation caused by neglecting start-up costs from an engineering perspective. The model allows for the abandonment of excess wind and solar power generation, contributing to the achievement of research objectives in scenarios with a high proportion of renewable energy. Our method innovatively integrates fuel and carbon prices, clean energy expansion, and power system marginal prices according to the carbon trading rules of the Chinese power industry, providing a more accurate representation of market dynamics. Findings reveal that neglecting start-up costs can lead to significant biases in electricity prices. We demonstrate that the marginal price sometimes deviates from the fluctuation of the real value. While fuel and CO2 prices can be transmitted downstream, the value of new energy must be transmitted through its impact on the marginal unit. This insight is crucial for understanding the "missing money" problem in electricity markets. Based on these findings, we propose policy recommendations. We suggest considering fixed and average costs as pricing benchmarks and utilizing capacity utilization as a signal for demand response to adjust power pricing. Furthermore, we recommend trading different energy types separately in the spot market with different pricing benchmarks to ensure the homogeneity of marginal units. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Environment Aware Friction Observer with Applications to Force Control Benchmarking.
- Author
-
Dimo, Eldison and Calanca, Andrea
- Subjects
FRICTION, BENCHMARKING (Management), STATIC friction, COULOMB friction, SCIENTIFIC community - Abstract
The benchmarking of force control algorithms has been investigated extensively in recent years. High-fidelity experimental benchmarking may require high-end electronics and mechanical systems so as not to compromise the algorithm's evaluation. However, affordability is highly desirable if benchmarking tools are to spread within the research community. Mechanical inaccuracies due to affordability can lead to undesired friction effects, which in this paper are tackled by exploiting a novel friction compensation technique based on an environment-aware friction observer (EA-FOB). The friction compensation capabilities of the proposed EA-FOB are assessed through simulation and experimental comparisons with a widely used static friction model: Coulomb friction combined with viscous friction. Moreover, a comprehensive stability comparison with state-of-the-art disturbance observers (DOBs) is conducted. Results show higher stability margins for the EA-FOB with respect to traditional DOBs. The research is carried out within the Forecast project, which aims to provide tools and metrics to benchmark force control algorithms relying on low-cost electronics and affordable hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Energy Transition Readiness: A Practical Guide.
- Author
-
Sabidussi, Anna and Maria Wasser, Jack Martinus
- Subjects
RENEWABLE energy transition (Government policy), READINESS for school, ORGANIZATIONAL learning, TAXONOMY, BENCHMARKING (Management) - Abstract
This paper addresses the challenge of assessing Energy Transition Readiness levels in businesses. It identifies gaps in existing literature concerning energy transitions and readiness levels. To tackle this, the paper employs organizational learning theory as its foundational model and introduces a new taxonomy of key energy readiness indicators. This taxonomy offers a practical guide for business professionals to implement targeted interventions effectively. Additionally, it enables benchmarking and comparison of sector and industry actors. In summary, the integration of organizational learning theory expands the discourse on strategically managing global challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Disentangling the value equation: a step forward in value-based healthcare.
- Author
-
García-Lorenzo, Borja, Alayo, Itxaso, Arrospide, Arantzazu, Gorostiza, Ania, Fullaondo, Ane, and Group, VOICE Study
- Subjects
SECONDARY analysis, RESEARCH funding, VALUE-based healthcare, BREAST tumors, BENCHMARKING (Management), CANCER patients, EMOTIONS, FUNCTIONAL status, DESCRIPTIVE statistics, PATIENT-centered care, LUNG tumors, PAIN, QUALITY of life, HEALTH outcome assessment, SOCIODEMOGRAPHIC factors, REGRESSION analysis - Abstract
Background The value equation of value-based healthcare (VBHC) as a single figure remains ambiguous, closer to a theoretical framework than a useful tool for decision making. The challenge lies in the way patient-centred outcomes (PCOs) might be combined to produce a single value for the numerator. This paper aims to estimate the weights of PCOs to provide a single figure in the numerator, which ultimately will allow a VBHC figure to be reached. Methods A cohort of patients diagnosed with breast cancer (n = 690) with a 6-month follow-up, recruited in 2019–20 across six European hospitals, was used. Patient-reported outcomes (PROs), clinical-related outcomes (CROs), and clinical and socio-demographic variables were collected. The numerator was defined as a composite indicator of the PCOs (CI-PCO), and regression analysis was applied to estimate their weights and consequently arrive at a single figure. Results Pain showed the highest weight, followed by physical functioning, emotional functioning, and ability to work, and then by a symptom, either arm or breast. PCO weights were robust to sensitivity analysis. The CI-PCO value was found to be more informative than the health-related quality of life (HRQoL) value. Conclusions To the best of our knowledge, this is the first research to combine the PCOs proposed by ICHOM to provide a single figure in the numerator of the value equation. This figure is a step forward in VBHC towards holistic benchmarking across healthcare centres and value-based payment. This research might also be applied to other medical conditions as a methodological pathway. [ABSTRACT FROM AUTHOR]
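The regression-based weighting of patient-centred outcomes described in this abstract can be sketched in a few lines. Everything below is synthetic and illustrative: the four PCO names, the reference outcome, and the weight values are assumptions for the sketch, not the VOICE study's actual specification.

```python
# Sketch: estimate PCO weights by regressing a reference outcome on PCO
# scores, normalize the coefficients into weights, then collapse the
# PCOs into a single composite figure (the numerator of the equation).
import numpy as np

rng = np.random.default_rng(3)
n = 690                                     # cohort size, as in the study
pcos = rng.normal(size=(n, 4))              # pain, physical, emotional, work
true_w = np.array([0.4, 0.3, 0.2, 0.1])     # illustrative "ground truth"
outcome = pcos @ true_w + rng.normal(scale=0.1, size=n)

coef, *_ = np.linalg.lstsq(pcos, outcome, rcond=None)
weights = coef / coef.sum()                 # normalized PCO weights
composite = pcos @ weights                  # single-figure CI-PCO per patient
print(weights.round(2))
```

With low noise the recovered weights track the illustrative ground truth, with the first column (standing in for pain) receiving the largest weight.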
- Published
- 2024
- Full Text
- View/download PDF
17. Dyport: dynamic importance-based biomedical hypothesis generation benchmarking technique.
- Author
-
Tyagin, Ilya and Safro, Ilya
- Subjects
KNOWLEDGE graphs, BENCHMARKING (Management), NATURAL language processing, HYPOTHESIS, SCIENTIFIC discoveries, SEMANTICS - Abstract
Background: Automated hypothesis generation (HG) focuses on uncovering hidden connections within the extensive information that is publicly available. This domain has become increasingly popular thanks to modern machine learning algorithms. However, the automated evaluation of HG systems is still an open problem, especially on a larger scale. Results: This paper presents Dyport, a novel benchmarking framework for evaluating biomedical hypothesis generation systems. Utilizing curated datasets, our approach tests these systems under realistic conditions, enhancing the relevance of our evaluations. We integrate knowledge from the curated databases into a dynamic graph, accompanied by a method to quantify discovery importance. This assesses not only hypothesis accuracy but also potential impact in biomedical research, which significantly extends traditional link prediction benchmarks. The applicability of our benchmarking process is demonstrated on several link prediction systems applied to biomedical semantic knowledge graphs. Being flexible, our benchmarking system is designed for broad application in hypothesis generation quality verification, aiming to expand the scope of scientific discovery within the biomedical research community. Conclusions: Dyport is an open-source benchmarking framework for evaluating biomedical hypothesis generation systems, which takes into account knowledge dynamics, semantics and impact. All code and datasets are available at: https://github.com/IlyaTyagin/Dyport. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Improving monthly precipitation prediction accuracy using machine learning models: a multi-view stacking learning technique.
- Author
-
El Hafyani, Mounia, El Himdi, Khalid, and El Adlouni, Salah-Eddine
- Subjects
MACHINE learning, METEOROLOGICAL precipitation, BENCHMARKING (Management), RANDOM forest algorithms, TIME series analysis - Abstract
This research paper explores the implementation of machine learning (ML) techniques in weather and climate forecasting, with a specific focus on predicting monthly precipitation. The study analyzes the efficacy of six multivariate machine learning models: Decision Tree, Random Forest, K-Nearest Neighbors (KNN), AdaBoost, XGBoost, and Long Short-Term Memory (LSTM). Multivariate time series models incorporating lagged meteorological variables were employed to capture the dynamics of monthly rainfall in Rabat, Morocco, from 1993 to 2018. The models were evaluated on various metrics, including root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2). XGBoost showed the highest performance among the six individual models, with an RMSE of 40.8 mm. In contrast, Decision Tree, AdaBoost, Random Forest, LSTM, and KNN showed relatively lower performance, with RMSEs ranging from 47.5 mm to 51 mm. A novel multi-view stacking learning approach is introduced, offering a new perspective on various ML strategies. This integrated algorithm is designed to leverage the strengths of each individual model, aiming to substantially improve the precision of precipitation forecasts. The best results were achieved by combining Decision Tree, KNN, and LSTM to build the meta-base while using XGBoost as the second-level learner. This approach yielded an RMSE of 17.5 mm. The results show the potential of the proposed multi-view stacking learning algorithm to refine predictive results and improve the accuracy of monthly precipitation forecasts, setting a benchmark for future research in this field. [ABSTRACT FROM AUTHOR]
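The two-level stacking this abstract describes can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' pipeline: the LSTM base learner is omitted and scikit-learn's GradientBoostingRegressor stands in for XGBoost so the sketch stays self-contained.

```python
# Stacking sketch: out-of-fold predictions from base learners form the
# "meta-base"; a second-level learner is then fit on those predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))               # lagged meteorological features
y = 3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [DecisionTreeRegressor(max_depth=4, random_state=0),
        KNeighborsRegressor(n_neighbors=7)]

# Out-of-fold predictions avoid leaking training targets to level 2.
meta_tr = np.column_stack(
    [cross_val_predict(m, X_tr, y_tr, cv=5) for m in base])
for m in base:
    m.fit(X_tr, y_tr)                       # refit on the full training set
meta_te = np.column_stack([m.predict(X_te) for m in base])

level2 = GradientBoostingRegressor(random_state=0).fit(meta_tr, y_tr)
rmse = mean_squared_error(y_te, level2.predict(meta_te)) ** 0.5
print(f"stacked RMSE: {rmse:.2f}")
```

Using cross-validated predictions as meta-features is the standard guard against the second-level learner overfitting to the base learners' in-sample errors.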
- Published
- 2024
- Full Text
- View/download PDF
19. Unraveling the internal drivers of pharmaceutical company performance in Europe: A DEMATEL analysis.
- Author
-
Asad, Arif Ibne, Popesko, Boris, and Godman, Brian
- Subjects
LITERATURE reviews, ORGANIZATIONAL performance, PHARMACEUTICAL industry, SUPPLY chain management, BENCHMARKING (Management) - Abstract
Research background: Internal business factors are vital to how a company achieves its goals. The present study of the internal drivers of pharmaceutical company performance is insightful: it has the potential to boost competitiveness, may give health authority personnel guidelines for strategic decisions, and can inspire investor confidence, support regulatory compliance and performance benchmarking, and aid talent acquisition and retention. In addition, it can identify the important internal factors that should receive more priority. Purpose of the article: The European pharmaceutical industry is currently facing multiple challenges. This paper aims to map the relative relationships among the internal factors that influence the business performance of pharmaceutical companies in Europe by using the DEMATEL approach. Method: The present study has two phases: an extensive literature review and the application of the decision-making trial and evaluation laboratory (DEMATEL) technique. To identify the key internal drivers and their cause-and-effect relationships with pharmaceutical company performance in Europe, data were obtained from experts using the predesigned DEMATEL questionnaire. Findings & value added: The extensive literature review of the Web of Science and Scopus databases found seven internal factors to be highly relevant to European pharmaceutical business performance. The elements with the highest impact on pharmaceutical business performance in Europe are human resources competencies, the information system, technological competitiveness, and the patent system. Financial profitability, research and development competencies, alliances with other companies, and supply chain management are the factors more affected by the others.
The study is the first attempt to characterize the internal business performance of the pharmaceutical sector in Europe by drawing on pragmatic and perceptive judgments from pharmaceutical stakeholders in Europe. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. The logistics and sustainability in the European Union.
- Author
-
Loucanova, Erika, Kaputa, Vladislav, Nosalova, Martina, and Olsiakova, Miriam
- Subjects
BUSINESS logistics ,SUSTAINABLE development ,INDUSTRIAL management ,BENCHMARKING (Management) ,KEY performance indicators (Management) - Abstract
The paper focuses on business logistics performance and the sustainability of EU countries, given their constantly growing importance in the social, economic, and environmental fields. We assume a significant dependence between these quantities. To research the relationship between business logistics performance and sustainability, we used data on the business logistics performance index and the sustainability index across the EU countries. The importance of the selected indices lies in their ability to identify possible opportunities and challenges of business logistics as a benchmarking tool to increase its performance. To assess the relationship between these parameters, we applied the correlation coefficient together with cluster and geographic analysis to identify relatively homogeneous groups of EU countries: clusters that are internally homogeneous yet maximally different from one another. The results proved a statistically significant dependence between business logistics performance and sustainability in EU member states. From a geographic perspective, we identified a tendency to form geographically close groupings of EU countries within the examined parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
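The correlation step of the analysis above can be illustrated with the Pearson coefficient on hypothetical index values (not the study's data):

```python
import numpy as np

# Hypothetical index values for five EU countries (illustrative only):
lpi = np.array([4.1, 3.9, 3.3, 3.0, 2.8])       # logistics performance index
sdg = np.array([81.5, 80.2, 77.9, 74.0, 72.5])  # sustainability index

# Pearson correlation, the dependence measure the study applies
# before clustering countries into homogeneous groups.
r = np.corrcoef(lpi, sdg)[0, 1]
print(round(float(r), 3))
```

A coefficient near 1 would correspond to the strong positive dependence the abstract reports between the two indices.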
21. Constrained Device Performance Benchmarking with the Implementation of Post-Quantum Cryptography.
- Author
-
Fitzgibbon, Gregory and Ottaviani, Carlo
- Subjects
BENCHMARKING (Management) ,QUANTUM cryptography ,PUBLIC key cryptography ,RSA algorithm ,POLYNOMIAL time algorithms ,CRYPTOGRAPHY ,QUANTUM computers ,RASPBERRY Pi - Abstract
Advances in quantum computers may pose a significant threat to existing public-key encryption methods, which are crucial to the current infrastructure of cyber security. Both RSA and ECDSA, the two most widely used security algorithms today, may be (in principle) solved by the Shor algorithm in polynomial time due to its ability to efficiently solve the discrete logarithm problem, potentially making present infrastructures insecure against a quantum attack. The National Institute of Standards and Technology (NIST) reacted with the post-quantum cryptography (PQC) standardization process to develop and optimize a series of post-quantum algorithms (PQAs) based on difficult mathematical problems that are not susceptible to being solved by Shor's algorithm. Whilst high-powered computers can run these PQAs efficiently, further work is needed to investigate and benchmark the performance of these algorithms on lower-powered (constrained) devices and the ease with which they may be integrated into existing protocols such as TLS. This paper provides quantitative benchmark and handshake performance data for the most recently selected PQAs from NIST, tested on a Raspberry Pi 4 device to simulate today's IoT (Internet of Things) devices, and provides quantitative comparisons with previous benchmarking data on a range of constrained systems. CRYSTALS-Kyber and CRYSTALS-Dilithium are shown to be the most efficient PQAs in the key encapsulation and signature algorithms, respectively, with Falcon providing the optimal TLS handshake size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. A Computationally Efficient Approach to Fully Bayesian Benchmarking.
- Author
-
Okonek, Taylor and Wakefield, Jon
- Subjects
SMALL area statistics ,BENCHMARKING (Management) ,MIDDLE-income countries - Abstract
In small area estimation, it is sometimes necessary to use model-based methods to produce estimates in areas with little or no data. In official statistics, we often require that aggregates of small area estimates agree with national estimates for internal consistency purposes. Enforcing this agreement is referred to as benchmarking, and while methods currently exist to perform benchmarking, few are ideal for applications with non-normal outcomes and benchmarks with uncertainty. Fully Bayesian benchmarking is a theoretically appealing approach insofar as we can obtain posterior distributions conditional on a benchmarking constraint. However, existing implementations may be computationally prohibitive. In this paper, we critically review benchmarking methods in the context of small area estimation in low- and middle-income countries with binary outcomes and uncertain benchmarks, and propose a novel approach in which posterior samples of small area characteristics from an unbenchmarked model can be combined with a rejection sampler or Metropolis-Hastings algorithm to produce benchmarked posterior distributions in a computationally efficient way. To illustrate the flexibility and efficiency of our approach, we provide comparisons to an existing benchmarking approach in a simulation, and applications to HIV prevalence and under-5 mortality estimation. Code implementing our methodology is available in the R package stbench. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
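The rejection-sampling idea described in the abstract above can be illustrated generically: keep each unbenchmarked posterior draw with probability proportional to the benchmark likelihood evaluated at the draw's population-weighted aggregate. All numbers below are hypothetical, not the paper's data or its exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 unbenchmarked posterior draws of prevalence
# in 4 small areas, with known population weights.
draws = rng.beta(2, 18, size=(1000, 4))      # unbenchmarked posterior samples
weights = np.array([0.4, 0.3, 0.2, 0.1])     # area population shares
benchmark, benchmark_sd = 0.10, 0.01         # uncertain national estimate

# Rejection step: accept each draw with probability proportional to the
# (Gaussian) benchmark likelihood at the draw's weighted aggregate.
agg = draws @ weights
like = np.exp(-0.5 * ((agg - benchmark) / benchmark_sd) ** 2)
accept = rng.uniform(size=like.size) < like / like.max()
benchmarked = draws[accept]
print(benchmarked.shape[0], "accepted draws")
```

The accepted draws form a benchmarked posterior whose aggregate concentrates around the national estimate, without refitting the underlying small-area model.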
23. HEP Benchmark Suite: The centralized future of WLCG benchmarking.
- Author
-
Menéndez Borge, Gonzalo
- Subjects
PARTICLE physics ,GRID computing ,WORKLOAD of computers ,BENCHMARKING (Management) ,INDUSTRIAL productivity - Abstract
The HEPiX Benchmarking Working Group has devised a new HEP-specific benchmark, HEPscore23. This benchmark is an instance of the underlying benchmarking tool, HEPscore, also created by the Working Group during this endeavor. HEPscore can be set up to run different HEP workloads: autonomous production applications sourced from various HEP experiments. Through study and analysis, a subset of those workloads was selected, the minimal set that best represents HEP applications overall. To streamline the benchmarking process, this framework includes the HEP Benchmark Suite, which facilitates the execution of HEPscore and other benchmarks, such as HEP-SPEC06, SPEC CPU 2006, and DB12. This paper elucidates the rationale behind the benchmark and the framework created to develop it, delineates the key design considerations, and dives into the motivations behind it and the flexibility and advantages that it offers as a replacement for HS06 in the WLCG and HEP communities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. The CMS Orbit Builder for the HL-LHC at CERN.
- Author
-
Amoiridis, Vassileios, Behrens, Ulf, Bocci, Andrea, Branson, James, Brummer, Philipp, Cano, Eric, Cittolin, Sergio, Da Silva Almeida Da Quintanilha, Joao, Darlea, Georgiana-Lavinia, Deldicque, Christian, Dobson, Marc, Dvorak, Antonin, Gigi, Dominique, Glege, Frank, Gomez-Ceballos, Guillelmo, Gorniak, Patrycja, Gutić, Neven, Hegeman, Jeroen, Izquierdo Moreno, Guillermo, and James, Thomas Owen
- Subjects
COMPACT muon solenoid experiment ,DATA acquisition systems ,LUMINOSITY ,OPTICAL properties ,BENCHMARKING (Management) - Abstract
The Compact Muon Solenoid (CMS) experiment at CERN incorporates one of the highest-throughput data acquisition systems in the world and is expected to increase its throughput by more than a factor of ten for the High-Luminosity phase of the Large Hadron Collider (HL-LHC). To achieve this goal, most components of the system will be upgraded. Among them, the event builder software, in charge of assembling all the data read out from the different sub-detectors, is planned to be changed from a single event builder to an orbit builder that assembles multiple events at the same time. The throughput of the event builder will be increased from the current 1.6 Tb/s to 51 Tb/s for the HL-LHC orbit builder. This paper presents preliminary network transfer studies in preparation for the upgrade. The key conceptual characteristics are discussed, concerning differences between the CMS event builder in Run 3 and the CMS Orbit Builder for the HL-LHC. For the feasibility studies, a pipestream benchmark mimicking event-builder-like traffic has been developed. Preliminary performance tests and results are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Towards Green Automated Machine Learning: Status Quo and Future Directions.
- Author
-
Tornede, Tanja, Tornede, Alexander, Hanselle, Jonas, Mohr, Felix, Wever, Marcel, and Hüllermeier, Eyke
- Subjects
MACHINE learning ,AUTOMATION ,GREEN technology ,BENCHMARKING (Management) ,TASK performance - Abstract
Automated machine learning (AutoML) strives for the automatic configuration of machine learning algorithms and their composition into an overall (software) solution -- a machine learning pipeline -- tailored to the learning task (dataset) at hand. Over the last decade, AutoML has developed into an independent research field with hundreds of contributions. At the same time, AutoML is being criticized for its high resource consumption, as many approaches rely on the (costly) evaluation of many machine learning pipelines, as well as on expensive large-scale experiments across many datasets and approaches. In the spirit of recent work on Green AI, this paper proposes Green AutoML, a paradigm to make the whole AutoML process more environmentally friendly. To this end, we first elaborate on how to quantify the environmental footprint of an AutoML tool. Afterward, different strategies for designing and benchmarking an AutoML tool with respect to its "greenness", i.e., sustainability, are summarized. Finally, we elaborate on how to be transparent about the environmental footprint and what kind of research incentives could direct the community in a more sustainable AutoML research direction. As part of this, we propose a sustainability checklist to be attached to every AutoML paper, featuring all core aspects of Green AutoML. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Investigation of Android Malware with Machine Learning Classifiers using Enhanced PCA Algorithm.
- Author
-
Raymond, V. Joseph and Raj, R. Jeberson Retna
- Subjects
MALWARE ,MACHINE learning ,MULTIPLE correspondence analysis (Statistics) ,BENCHMARKING (Management) ,SUPERVISED learning - Abstract
Android devices are widely available in the commercial market at different price levels for various customer segments. The Android stack is more vulnerable than other platforms because of its open-source nature. Many Android malware detection techniques exploit the source code and find associated components at execution time. To obtain better results, we create a hybrid technique merging static and dynamic analysis. In the first part of this paper, we propose a technique that checks for correlation between features and classifies them using a supervised learning approach, thereby avoiding the multicollinearity problem, one of the drawbacks of existing systems. In the proposed work, a novel PCA (Principal Component Analysis)-based feature reduction technique is implemented with conditionally dependent features gathered from the functionalities of the application, which adds novelty to the approach. Android sensitive permissions are a major consideration when detecting malware. We select vulnerable columns based on features such as sensitive permissions, application program interface calls, services requested through the kernel, and the relationships between variables, and then build the model using machine learning classifiers to identify whether a given application is malicious or benign. The final goal of this paper is to evaluate benchmark datasets collected from repositories such as VirusShare, GitHub, and the Canadian Institute for Cybersecurity, and to compare models, ensuring that zero-day exploits can be monitored and detected with a better accuracy rate. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
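The PCA-based feature reduction described above can be sketched generically: correlated columns collapse onto shared principal components, which removes multicollinearity before a supervised classifier is trained. The feature matrix below is synthetic, with one deliberately duplicated column to mimic correlated permission features; it is not the paper's dataset or pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature matrix: 200 apps x 10 binary permission/API features.
X = rng.integers(0, 2, size=(200, 10)).astype(float)
X[:, 5] = X[:, 0]  # duplicated feature, to mimic multicollinearity

# PCA via SVD on centred data: components are orthogonal, so the
# reduced features fed to a classifier are free of multicollinearity.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1  # keep 95% variance
Z = Xc @ Vt[:k].T  # reduced features for any supervised classifier
print(k, "components retained of", X.shape[1])
```

Because one column is an exact duplicate, the centred matrix is rank-deficient and fewer than ten components suffice, illustrating the reduction.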
27. 2015 Best Paper Award.
- Subjects
- *
PUBLISHED articles , *BENCHMARKING (Management) , *AWARDS - Abstract
The article announces that the 2015 Best Paper Award was given to Jing Du, Rui Liu, and Raja R. A. Issa's article "BIM Cloud Score: Benchmarking BIM Performance," which appeared in the November 2014 issue of the "Journal of Construction Engineering and Management."
- Published
- 2016
- Full Text
- View/download PDF
28. Reply: Correspondence on NanoVar's performance outlined by Jiang T. et al. in 'Long-read sequencing settings for efficient structural variation detection based on comprehensive evaluation'.
- Author
-
Jiang, Tao, Liu, Shiqi, and Guo, Hongzhe
- Subjects
SCIENTIFIC community ,BENCHMARKING (Management) - Abstract
We published a paper in BMC Bioinformatics comprehensively evaluating the performance of structural variation (SV) calling with long-read SV detection methods, based on simulated error-prone long-read data under various sequencing settings. Recently, C.Y.T. et al. wrote a correspondence claiming that the performance of NanoVar was underestimated in our benchmarking and listing some errors in our previous manuscripts. To clarify these matters, we reproduced our previous benchmarking results and carried out a series of parallel experiments on both newly generated simulated datasets and the ones provided by C.Y.T. et al. The robust benchmark results indicate that NanoVar has unstable performance on simulated data produced by different versions of VISOR, while other tools do not exhibit this phenomenon. Furthermore, the errors raised by C.Y.T. et al. arose because they used other versions of VISOR and Sniffles, whose usage and results differ substantially from the versions applied in our previous work. We hope this commentary demonstrates the validity of our previous publication and clarifies and eliminates misunderstandings about the commands and results in our benchmarking. We also welcome more experts and scholars in the scientific community to pay attention to our research and help us further optimize these valuable works. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. BENCHMARK VERCORS 2022: BLIND PREDICTION OF TIME-DEPENDENT BEHAVIOR OF CONCRETE CONTAINMENT BUILDING WITH LOW AND HIGH-FIDELITY MODELS.
- Author
-
KRÁTKÝ, ŠTĚPÁN and HAVLÁSEK, PETR
- Subjects
CONCRETE testing ,CONCRETE construction ,CREEP (Materials) ,CALIBRATION ,BENCHMARKING (Management) - Abstract
The VERCORS program aims to acquire an extensive experimental dataset that provides a solid basis for numerical modeling of concrete containment buildings (CCBs). In order to cover the entire life-span of a real containment, the measurements are done on a 3× smaller mock-up, which leads to a 9-fold acceleration of all drying-related processes. The goal for the participants of the third VERCORS benchmark was to predict the behaviour of the CCB based on standard laboratory measurements on VERCORS concrete. The previous paper [1] presented the calibration procedure for material models of moisture transport and time-dependent behavior of concrete and summarized the results obtained with a computationally efficient low-fidelity model (LFM). The present paper compares the responses of the LFM and a high-fidelity model (HFM) with detailed geometry of the entire containment, and presents a comparison with the experimental data collected over the last 8 years on the VERCORS mock-up. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Evaluating the UN Global Compact Communication on Progress as a CSR Benchmarking Tool.
- Author
-
Ribeiro, Lucas, Branco, Manuel Castelo, and Chaves, Cristina
- Subjects
SOCIAL responsibility of business ,INTERNATIONAL communication ,BENCHMARKING (Management) ,SUSTAINABILITY ,SUSTAINABLE development ,ECOLOGY - Abstract
Corporate social responsibility (CSR) extends beyond mere profit-seeking to encompass the ethical behavior of a company toward society, mitigating negative and generating positive impacts on the environment, consumers, employees, communities, and all stakeholders. The UN Global Compact (UNGC) is the world's largest voluntary CSR initiative, and its Communication on Progress (CoP) requirement is a key reporting mechanism that allows participating companies to transparently showcase their progress and efforts regarding CSR. As more and more companies are reporting CSR practices, it is crucial to establish a global, standardized, trusted, accessible, and useful database that can be used by different stakeholders, including the companies themselves in the benchmarking process. This paper examines whether the UNGC CoP can be used as a sustainability benchmarking tool, based on well-established criteria, and compares it with other existing reporting frameworks. Results indicate that the UNGC CoP can be considered a benchmarking tool, being applicable to nearly all phases of the benchmarking process. The study also shows that the CoP stands out regarding other frameworks due to ample coverage of the sustainable development goals (SDGs), number of reporting companies, accessibility to all stakeholders, and consolidation of the information into one platform. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. A Novel Optimization Approach for Energy-Efficient Multiple Workflow Scheduling in Cloud Environment.
- Author
-
Aggarwal, Ambika, Kumar, Sunil, Bhansali, Ashok, Alsekait, Deema Mohammed, and AbdElminaam, Diaa Salama
- Subjects
WORKFLOW ,ENERGY consumption ,CLOUD computing ,ELECTRICAL energy ,BENCHMARKING (Management) - Abstract
Existing multiple-workflow scheduling techniques focus on traditional Quality of Service (QoS) parameters such as cost, deadline, and makespan to find optimal solutions while consuming a large amount of electrical energy. Higher energy consumption decreases system efficiency, increases operational cost, and generates a larger carbon footprint, which can lead to economic strain, environmental degradation, resource depletion, energy dependence, health impacts, and other problems. In a cloud computing environment, scheduling multiple workflows is critical in developing an energy-optimization strategy, which is an NP-hard problem. This paper proposes a novel bi-phase Energy-Efficient Fruit Fly-based Optimization (EFFO) algorithm for optimizing the energy consumption of multiple-workflow scheduling. In the first phase, the proposed EFFO algorithm uses first-come-first-served and priority scheduling together with a Genetic Algorithm to generate the initial workflow search space. In the second phase, energy consumption is optimized by the proposed EFFO algorithm. Eight NAS benchmarks and five NAS classes (A, B, C, S & W) are employed as a case study. Simulations are carried out on the WorkflowSim 1.0 platform to test the efficacy of the proposed EFFO algorithm. The experimental results are compared against the energy-aware workflow scheduling and virtual machine consolidation (EASVMC), Power-Efficient Scheduling for Virtual Machine Systems (PESVMS), Energy Efficiency Scheduler (EES), and heterogeneous earliest finish time (HEFT) algorithms, outperforming them by 10.518%, 16.302%, 26.154%, and 28.982%, respectively, in average energy consumption on five scientific workflows comprising Montage, CyberShake, Laser Interferometer Gravitational-Wave Observatory (LIGO), Scripps Institution of Oceanography High-Throughput (SIPHT), and Epigenomics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Review and comparison of methods and benchmarks for automatic modal identification based on stabilization diagram.
- Author
-
Min He, Peng Liang, Jiuxian Liu, and Zhiqiang Liang
- Subjects
BRIDGE design & construction ,CIVIL engineering ,STRUCTURAL health monitoring ,BENCHMARKING (Management) ,HIERARCHICAL clustering (Cluster analysis) - Abstract
Automatic modal identification via automatic interpretation of the stabilization diagram is a key technique in bridge structural health monitoring. This paper reviews progress in automatic modal identification based on interpreting the stabilization diagram. The identification process is divided into four steps, from establishing the stabilization diagram to removing outliers from the identification results. The criteria and algorithms used in each step in existing studies are carefully summarized and classified. Comparisons between typical methods for cleaning and interpreting the stabilization diagram are also conducted, and the real-structure benchmarks used in existing studies to validate proposed automatic modal identification methods are summarized. Based on the review and comparison, the specific ratio method for cleaning the stabilization diagram, the hierarchical clustering method for interpreting it, and the adjusted boxplot for removing outliers from the identification results are the most suitable methods for each step. The key point of automatic modal identification based on interpreting the stabilization diagram is also discussed, and it is recommended to pay more attention to cleaning the stabilization diagram. Future studies should address automatic modal identification in situations with very few deployed sensors. This review aims to help researchers and practitioners implement existing automatic modal identification algorithms effectively and develop more suitable and practical methods for civil engineering structures in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
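Interpreting a stabilization diagram amounts to grouping poles that recur with stable parameters as the model order increases. A toy frequency-only sketch with hypothetical poles, ignoring the damping-ratio and mode-shape criteria the review covers:

```python
import numpy as np

# Hypothetical identified frequencies (Hz) at increasing model orders;
# a physical mode recurs with small scatter, spurious poles do not.
poles = {
    4:  [1.02, 3.40],
    6:  [1.01, 2.10, 3.41],
    8:  [1.00, 2.75, 3.39],
    10: [1.01, 1.66, 3.40],
}

def stable_modes(poles, tol=0.02, min_hits=3):
    """Group poles whose frequencies agree within a relative tolerance
    across model orders; keep groups seen at >= min_hits orders -- a
    minimal stand-in for interpreting a stabilization diagram."""
    freqs = sorted(f for fs in poles.values() for f in fs)
    groups, current = [], [freqs[0]]
    for f in freqs[1:]:
        if abs(f - current[-1]) / current[-1] <= tol:
            current.append(f)
        else:
            groups.append(current)
            current = [f]
    groups.append(current)
    return [float(np.mean(g)) for g in groups if len(g) >= min_hits]

print(stable_modes(poles))
```

Here the recurring poles near 1.01 Hz and 3.40 Hz survive as stable modes, while the isolated spurious poles are discarded; hierarchical clustering, which the review recommends, generalizes this one-dimensional grouping.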
33. Evaluation and optimization of pipeline pricing strategies on oil product logistics in China.
- Author
-
Rui Qiu, Renfu Tu, Xuemei Wei, Hao Zhang, Mengda Gao, Qi Liao, and Yongtu Liang
- Subjects
PETROLEUM products ,PETROLEUM pipelines ,LOGISTICS ,MATHEMATICAL programming ,BENCHMARKING (Management) - Abstract
In the early stage of pipeline network reform in China, it is still controversial how to formulate an appropriate pipeline freight pricing strategy. Focusing on this issue, this paper puts forward an integrated framework to analyze the impact of different pipeline pricing strategies on the economic and environmental benefits of China's oil product logistics. A basic mathematical programming model is developed to simulate the planning of nationwide oil product logistics at the tactical level. On this basis, five pipeline pricing strategies are customized for comparative analysis: pricing as usual (PAU), pricing by benchmarking railway (PBR), pricing by discounting on excess (PDE), tiered pricing by mileage (TPM), and tiered pricing by volume (TPV). The basic logistics optimization model is then upgraded accordingly. A detailed real-world case study of China in 2019 demonstrates that (i) except for TPM, the other pricing strategies achieve coordination between oil shippers and pipeline carriers compared with PAU; (ii) ranked by economic performance, PDE > PBR > TPV > PAU > TPM; and (iii) PDE also helps to reduce carbon emissions by 0.5% annually. The proposed method can serve as a theoretical guide for oil and gas logistics managers and decision-makers within and beyond China. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. REACTOR CRITICALS AND SPENT FUEL SYSTEMS SIMILARITIES.
- Author
-
ŠIKL, MATEJ
- Subjects
FUEL storage ,DATABASES ,BENCHMARKING (Management) ,ISOTOPE exchange reactions ,METROLOGY - Abstract
Many conservative assumptions have to be made in current subcriticality assessment calculations for spent fuel storage systems. This is due to the fact that there is no available database of benchmarks containing spent fuel or materials with an isotopic composition like that of spent fuel. However, one potential source of spent fuel system experiments is being omitted: there are hundreds of commercial reactors around the world containing spent fuel at some level of burnup. These reactors have to be tested for safety reasons at the beginning of each cycle, and the parameters of these test states are well known. In this paper, the possibility of using commercial reactor critical states for code validation and for subcriticality assessments of storage systems is discussed, and a potential approach for criticality safety analysis is given. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Financial Distress Analysis of Technology Companies Using Grover Model.
- Author
-
Kah Fai Liew, Weng Siew Lam, and Weng Hoe Lam
- Subjects
HIGH technology industries ,DECISION making ,WORKING capital ,FINANCIAL performance ,BENCHMARKING (Management) - Abstract
The decision-making process is of utmost importance, as it dictates what will be chosen, and good decision making may lead to the ideal result that decision makers wish to achieve. It is therefore an essential consideration for organizations and investors: proper and thorough planning can help investors make good decisions and, hence, gain profits. As a result, it is important to conduct a financial distress analysis of companies in order to understand their financial condition. In this study, the financial performance of technology companies is assessed using the Grover model. Financial ratios such as working capital to total assets, earnings before interest and taxes to total assets, and net income to total assets are analyzed with the Grover model. Each company obtains a G-score based on its financial performance, and the Grover model categorizes companies into safe, grey, or distress zones. The findings of this paper show that 28 companies performed well, in terms of financial performance, during the period of study. This provides insights for investors seeking to identify companies with good financial performance for investment. Furthermore, the companies identified in the safe zone can serve as a benchmarking reference for other companies. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
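The Grover model referenced above is usually cited in the financial-distress literature as a linear score over the three ratios the abstract lists. The coefficients and zone cut-offs below are the commonly reported ones, not taken from this paper, and the input ratios are hypothetical:

```python
def grover_g_score(wc_ta, ebit_ta, ni_ta):
    """Grover G-score as commonly cited in the literature:
    G = 1.650*X1 + 3.404*X3 - 0.016*ROA + 0.057, with
    X1 = working capital / total assets, X3 = EBIT / total assets,
    ROA = net income / total assets."""
    return 1.650 * wc_ta + 3.404 * ebit_ta - 0.016 * ni_ta + 0.057

def zone(g):
    # Cut-offs commonly reported with the model:
    # distress at or below -0.02, safe at or above 0.01, grey between.
    if g <= -0.02:
        return "distress"
    if g >= 0.01:
        return "safe"
    return "grey"

g = grover_g_score(wc_ta=0.20, ebit_ta=0.08, ni_ta=0.06)
print(round(g, 4), zone(g))
```

A company with healthy working capital and earnings ratios, as in this made-up example, lands in the safe zone, the group the abstract suggests as a benchmarking reference.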
36. The 'Art' of Automatic Benchmark Extraction.
- Author
-
Boncz, Peter
- Subjects
BENCHMARKING (Management) ,WEBOMETRICS - Abstract
An introduction is presented to a paper on benchmarking database systems and automatic benchmark extraction, specifically the framework from search engine Google, "DIAMetrics."
- Published
- 2022
- Full Text
- View/download PDF
37. Real-Time Performance Benchmarking of RISC-V Architecture: Implementation and Verification on an EtherCAT-Based Robotic Control System.
- Author
-
Yoo, Taeho and Choi, Byoung Wook
- Subjects
BENCHMARKING (Management) ,INSTRUCTION set architecture ,INDUSTRIAL robots ,RASPBERRY Pi ,ROBOTICS - Abstract
RISC-V offers a modular technical approach combined with an open, royalty-free instruction set architecture (ISA). However, despite its advantages as a fundamental building block for many embedded systems, the escalating complexity and functional demands of real-time applications have made adhering to response-time deadlines challenging. For real-time applications of RISC-V, real-time performance analysis is required across various ISAs. In this paper, we analyze the real-time performance of RISC-V through two real-time approaches based on processor architectures. For real-time operating system (RTOS) applications, we adopted FreeRTOS and evaluated its performance on the HiFive1 Rev B (RISC-V) and STM3240G-EVAL (ARM M). For real-time Linux, we utilized Linux with the Preempt-RT patch and tested its performance on the VisionFive 2 (RISC-V), MIO5272 (x86-64), and Raspberry Pi 4 B (ARM A). Through these experiments, we examined the response times of each operating system's real-time mechanisms. Additionally, in the Preempt-RT experiments, scheduling latencies were evaluated by means of cyclictest. These are very important parameters for implementing real-time applications comprising multiple tasks. Finally, to demonstrate the real-time capabilities of RISC-V in practice, we implemented motion control of a six-axis collaborative robot on the VisionFive 2. This implementation provided a comparison of RISC-V's performance against the x86-64 architecture. Ultimately, the results indicate that RISC-V is feasible for real-time applications. A notable achievement of this research is the first implementation of an EtherCAT master on RISC-V designed for real-time applications; its success demonstrates real-time capabilities for a wide range of real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. European value-based healthcare benchmarking: moving from theory to practice.
- Author
-
García-Lorenzo, Borja, Gorostiza, Ania, Alayo, Itxaso, Zas, Susana Castelo, Baena, Patricia Cobos, Camiña, Inés Gallego, Narbaiza, Begoña Izaguirre, Mallabiabarrena, Gaizka, Ustarroz-Aguirre, Iker, Rigabert, Alina, Balzi, William, Maltoni, Roberta, Massa, Ilaria, López, Isabel Álvarez, Lobera, Sara Arévalo, Esteban, Mónica, Calleja, Marta Fernández, Mediavilla, Jenifer Gómez, Fernández, Manuela, and Hitar, Manuel del Oro
- Subjects
RESEARCH ,STATISTICAL significance ,HEALTH facilities ,HUMAN research subjects ,KEY performance indicators (Management) ,LUNG tumors ,HEALTH outcome assessment ,MEDICAL care costs ,REGRESSION analysis ,VALUE-based healthcare ,BENCHMARKING (Management) ,INFORMED consent (Medical law) ,QUESTIONNAIRES ,CLINICAL medicine ,DESCRIPTIVE statistics ,RESEARCH funding ,SOCIODEMOGRAPHIC factors ,ELECTRONIC health records ,CLUSTER analysis (Statistics) ,BREAST tumors ,LONGITUDINAL method ,DELPHI method - Abstract
Background Value-based healthcare (VBHC) is a conceptual framework to improve the value of healthcare by health, care-process and economic outcomes. Benchmarking should provide useful information to identify best practices and is therefore a good instrument to improve quality across healthcare organizations. This paper aims to provide a proof-of-concept of the feasibility of international VBHC benchmarking in breast cancer, with the ultimate aim of being used to share best practices with a data-driven approach among healthcare organizations from different health systems. Methods In the VOICE community—a European healthcare centre cluster intending to address VBHC from theory to practice—information on patient-reported, clinical-related, care-process-related and economic-related outcomes was collected. Patient archetypes were identified using clustering techniques and an indicator set following a modified Delphi was defined. Benchmarking was performed using regression models controlling for patient archetypes and socio-demographic characteristics. Results Six hundred and ninety patients from six healthcare centres were included. A set of 50 health, care-process and economic indicators was distilled for benchmarking. Statistically significant differences across sites have been found in most health outcomes, half of the care-process indicators, and all economic indicators, allowing for identifying the best and worst performers. Conclusions To the best of our knowledge, this is the first international experience providing evidence to be used with VBHC benchmarking intention. Differences in indicators across healthcare centres should be used to identify best practices and improve healthcare quality following further research. Applied methods might help to move forward with VBHC benchmarking in other medical conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Leakage Benchmarking for Universal Gate Sets.
- Author
-
Wu, Bujiao, Wang, Xiaoyang, Yuan, Xiao, Huang, Cupjin, and Chen, Jianxin
- Subjects
QUANTUM gates ,LEAKAGE ,QUANTUM computing ,QUBITS ,BENCHMARKING (Management) ,HILBERT space - Abstract
Errors are common issues in quantum computing platforms, among which leakage is one of the most challenging to address. This is because leakage, i.e., the loss of information stored in the computational subspace to undesired subspaces in a larger Hilbert space, is more difficult to detect and correct than errors that preserve the computational subspace. As a result, leakage presents a significant obstacle to the development of fault-tolerant quantum computation. In this paper, we propose an efficient and accurate benchmarking framework, called leakage randomized benchmarking (LRB), for measuring leakage rates on multi-qubit quantum systems. Our approach is less sensitive to state preparation and measurement (SPAM) noise than existing leakage benchmarking protocols, requires fewer assumptions about the gate set itself, and can be used to benchmark multi-qubit leakage, which has not been achieved previously. We also extend the LRB protocol to an interleaved variant, called interleaved LRB (iLRB), which can benchmark the average leakage rate of generic n-site quantum gates under reasonable noise assumptions. We demonstrate the iLRB protocol by benchmarking generic two-qubit gates realized using flux tuning and analyze the behavior of iLRB under the corresponding leakage models. Our numerical experiments show good agreement with the theoretical estimates, indicating the feasibility of both the LRB and iLRB protocols. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Measuring health equity in the ASEAN region: conceptual framework and assessment of data availability.
- Author
-
Barcellona, Capucine, Mariñas, Bryanna Yzabel, Tan, Si Ying, Lee, Gabriel, Ko, Khin Chaw, Chham, Savina, Chhorvann, Chhea, Leerapan, Borwornsom, Pham Tien, Nam, and Lim, Jeremy
- Subjects
EVALUATION of medical care ,KEY performance indicators (Management) ,HEALTH services accessibility ,WORLD health ,HEALTH outcome assessment ,CONCEPTUAL structures ,DOCUMENTATION ,DATABASE management ,BENCHMARKING (Management) ,CLINICAL medicine ,GOVERNMENT policy ,RESEARCH funding ,FINANCIAL management ,POPULATION health ,INSURANCE - Abstract
Background: Existing research on health equity falls short of identifying a comprehensive set of indicators for measurement across health systems. Health systems in the ASEAN region, in particular, lack a standardised framework to assess health equity. This paper proposes a comprehensive framework to measure health equity in the ASEAN region and highlights current gaps in data availability across its indicator components. Methods: A comprehensive literature review was undertaken to map out a core set of indicators for evaluating health equity at the health system level. Secondary data collection was subsequently conducted to assess current data availability for ASEAN states in key global health databases, national health accounts, and policy documents. Results: A robust framework to measure health equity was developed, comprising 195 indicators across Health System Inputs and Processes, Outputs, Outcomes, and Contextual Factors. Total indicator data availability was 72.97% (1423/1950). Across the ASEAN region, the Inputs and Processes sub-component of Health Financing had complete data availability for all indicators (160/160, 100%), while Access to Essential Medicine had the least data available (6/30, 20%). Under Outputs and Outcomes, Coverage of Selected Interventions (161/270, 59.63%) and Population Health (350/350, 100%) respectively had the most data available, while other indicator sub-components had little to none (≤ 38%). For Contextual Factors, 72.45% (384/530) of the data was available. Out of the 10 ASEAN countries, the Philippines had the highest overall data availability at 77.44% (151/195), while Brunei Darussalam and Vietnam had the lowest at 67.18% (131/195). Conclusions: The data availability gaps highlighted in this study underscore the need for a standardised framework to guide data collection and benchmarking of health equity in ASEAN. There is a need to prioritise regular data collection for overlooked indicator areas and in countries with low levels of data availability. The application of this indicator framework and the resulting data availability analysis could be extended beyond ASEAN to enable cross-regional benchmarking of health equity. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. Benchmarking ChatGPT for prototyping theories: Experimental studies using the technology acceptance model.
- Author
-
Goh, Tiong-Thye, Xin Dai, and Yanwu Yang
- Subjects
BENCHMARKING (Management) ,CHATGPT ,RAPID prototyping ,TECHNOLOGY Acceptance Model ,HYPOTHESIS - Abstract
This paper explores the paradigm of leveraging ChatGPT as a benchmark tool for theory prototyping in conceptual research. Specifically, we conducted two experimental studies using the classical technology acceptance model (TAM) to demonstrate and evaluate ChatGPT's capability to comprehend theoretical concepts, discriminate between constructs, and generate meaningful responses. Results of the two studies indicate that ChatGPT can generate responses aligned with TAM theory and constructs. Key metrics, including the factor loadings, internal consistency reliability, and convergent validity of the measurement model, surpass the minimum thresholds, confirming the validity of the TAM constructs. Moreover, supported hypotheses provide evidence for the nomological validity of the TAM constructs. However, both studies show a high Heterotrait-Monotrait ratio of correlations (HTMT) among TAM constructs, raising a concern about discriminant validity. Furthermore, high duplicated-response rates were identified, and potential biases regarding gender, usage experience, perceived usefulness, and behavioural intention were revealed in ChatGPT-generated samples. This calls for additional efforts in LLM research to address performance issues related to duplicated responses, the strength of discriminant validity, the impact of prompt design, and the generalizability of findings across contexts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
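The discriminant-validity check flagged in the abstract above, the Heterotrait-Monotrait ratio (HTMT), has a simple closed form: the mean between-construct item correlation divided by the geometric mean of the two within-construct mean correlations. A minimal sketch follows; the item layout and the simulated responses are invented for illustration, not taken from the study.

```python
import numpy as np

def htmt(data, items_a, items_b):
    """HTMT ratio between two constructs, given an (n_obs, n_items)
    response matrix and the column indices of each construct's items."""
    R = np.corrcoef(data, rowvar=False)

    def mean_offdiag(idx):
        sub = R[np.ix_(idx, idx)]
        return sub[~np.eye(len(idx), dtype=bool)].mean()

    hetero = R[np.ix_(items_a, items_b)].mean()      # between constructs
    return hetero / np.sqrt(mean_offdiag(items_a) * mean_offdiag(items_b))

# Two latent factors with correlation ~0.3, four noisy items each
rng = np.random.default_rng(1)
f1 = rng.normal(size=2000)
f2 = 0.3 * f1 + np.sqrt(1 - 0.3**2) * rng.normal(size=2000)
items = [f1 + 0.5 * rng.normal(size=2000) for _ in range(4)]
items += [f2 + 0.5 * rng.normal(size=2000) for _ in range(4)]
data = np.column_stack(items)
print(round(htmt(data, [0, 1, 2, 3], [4, 5, 6, 7]), 2))  # well below the 0.85 cutoff
```

A ratio approaching 1 (common cutoffs are 0.85 or 0.90) indicates that the two constructs are statistically indistinguishable, which is the concern the study reports for ChatGPT-generated TAM responses.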
42. IMPACT OF JPY LIBOR RATE CHANGES ON REGULATIONS AND DEVELOPMENT OF ALTERNATIVE BENCHMARKS IN JAPAN.
- Author
-
Kubacki, Dominik
- Subjects
BENCHMARKING (Management) ,LIBOR ,COMPARATIVE studies ,INTERBANK market ,BANKING industry - Abstract
Copyright of Journal of Finance & Financial Law / Finanse i Prawo Finansowe is the property of Wydawnictwo Uniwersytetu Lodzkiego and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
43. Lessons learned from coreflood experiments with surfactant-polymer and alkali-surfactant-polymer for enhanced oil recovery.
- Author
-
Kurnia, Ivan, Fatchurrozi, Muhammad, Anwary, Riyaz Ghulam, and Zhang, Guoyin
- Subjects
ENHANCED oil recovery ,SURFACE active agents ,POLYMERS ,BENCHMARKING (Management) ,PARAMETER estimation - Abstract
A review of coreflood experiments for chemically enhanced oil recovery (EOR) is presented in this paper, focusing on surfactant-polymer (SP) and alkali-surfactant-polymer (ASP) processes. The objective of this review is to gain a general outlook and insight from coreflood experiments injecting SP or ASP slugs as tertiary recovery. The discussion is separated into sections based on relevant core and fluid properties as well as surfactant selection and SP/ASP slug design and their impact on incremental recovery. Most studies in this review have been published within the last twenty years, but a few older coreflood works have been included for benchmarking. Parameters in each reviewed study have been summarized in tables to help readers make detailed observations. Lessons learned from these past experiments should help other chemical EOR practitioners or students of the field in benchmarking or improving the outcomes of their future SP/ASP experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation.
- Author
-
Dlougach, Eugenia and Kichik, Margarita
- Subjects
BENCHMARKING (Management) ,COMPUTER software ,TOKAMAKS ,GRAPHICAL user interfaces ,COMPUTER software testing - Abstract
The BTR code (originally "Beam Transmission and Re-ionization", 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include beam interaction with plasmas and direct beam losses in the tokamak. For many years, BTR has been widely used in various NBI designs for efficient heating and current drive in nuclear fusion devices, for plasma scenario control and diagnostics. BTR analysis is especially important for 'beam-driven' fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input to non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses, accounting for ionized beam fractions and background electromagnetic fields; these results are used for overall NBI performance analysis. The BTR code is open for public use; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, its numerical methods, the graphical user interface, and examples of BTR application. The code is still evolving; basic support is available to all BTR users. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Benchmarking Study of Existing Possibilities for the Development of Social Farming in the Czech Republic.
- Author
-
Rajchlová, Jaroslava and Svatošová, Veronika
- Subjects
AGRICULTURE ,BENCHMARKING (Management) ,PUBLIC administration ,AGRICULTURAL processing ,FARMS - Abstract
This short communication deals with the phenomenon of social farming, a form of involving disadvantaged people in the integration process through agricultural activities. Based on documentary analysis and the benchmarking method, we present experiences from other European countries. In the Czech Republic, the concept is not widespread, is not anchored in legislation, and is not supported by the public administration. Our proposals aim at using existing legislative possibilities, not at changes to legal standards or to the focus of financial support in the form of subsidies. We propose the use of certain tools, namely a social business model or cooperation between a social service provider and a farmer. Furthermore, social farming is a suitable form of business for public-benefit entities, namely associations and especially institutes, and offers suitable avenues for tax optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. Mapping and contesting peer selection in digitalized public sector benchmarking.
- Author
-
Chua, Wai Fong, Graaf, Johan, and Kraus, Kalle
- Subjects
PUBLIC sector ,BENCHMARKING (Management) ,KEY performance indicators (Management) ,PEERS ,CARTOGRAPHY - Abstract
This paper investigates the influence of digitalization on different modes of peer selection in public sector benchmarking. We do so in the context of a field study of the impact of "Kolada"—a digital database and benchmarking device comparing the performance of Swedish municipalities. We find that the municipal quality controllers often used algorithmically selected peer groups to identify "pure" performance gaps for a range of performance indicators. Politicians, departmental managers, and the citizenry, however, continued to prefer benchmarking against neighboring municipalities. Drawing on Gieryn's concept of cultural cartography, differences in peer selection are characterized as a form of credibility contest between digitally generated and local maps. Our paper contributes to the literature in three main ways. First, we demonstrate how peer selection involves a mutual interplay between new digitally generated, abstract maps of performance and local cartographic legacies sustained by complex social attachments. Second, our paper illustrates the importance of often overlooked social ties informing processes of peer selection, highlighting the importance of professional ties, neighborly familiarity, and affective relations. Third, our paper characterizes the power of "native truths." More generally, our paper indicates the epistemic authority of digital "truths" is contestable and may be resisted. Ultimately, the coexistence of "old" and new epistemic maps confers choice, which contributes to the legitimacy of new technologies enabling digitalized benchmarking to persist in shifting and locally meaningful ways. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
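Algorithmic peer selection of the kind described in the abstract above can be illustrated with a nearest-neighbour sketch on standardized structural indicators. This is a plausible reconstruction for illustration only; Kolada's actual similarity algorithm is not described in the abstract, and the indicator values below are invented.

```python
import numpy as np

def nearest_peers(indicators, target, k=5):
    """Select the k municipalities most similar to `target` by
    Euclidean distance on z-score-standardized indicators."""
    X = np.asarray(indicators, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # put indicators on one scale
    d = np.linalg.norm(X - X[target], axis=1)    # distance to the target
    order = np.argsort(d)
    return [int(i) for i in order if i != target][:k]

# Toy data: rows are municipalities, columns are structural indicators
indicators = [[1.0, 1.0], [1.1, 1.0], [0.9, 1.1], [10.0, 10.0], [11.0, 9.0]]
print(nearest_peers(indicators, target=0, k=2))
```

Standardizing first keeps any single large-valued indicator (say, population) from dominating the distance, which is one reason such algorithmic peer groups can diverge from the neighbouring-municipality comparisons that politicians and managers preferred.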
47. Benchmarking of the e-Learning Quality Assurance in Vocational Education and Training: Project Results.
- Author
-
Lengyelová, Kristína and Dimopoulou, Nefeli
- Subjects
VOCATIONAL education ,QUALITY assurance ,DIGITAL learning ,ONLINE education ,BENCHMARKING (Management) - Abstract
Purpose: The paper aims to present project No. 2020-1-SK01-KA226-VET-094266 BEQUEL and partial results of its benchmarking questionnaire. Methodology/Approach: The starting point for the project was an analysis of the current state of e-learning quality assurance in vocational education and training in the project's partner countries (Slovakia, Greece, Spain, and Italy), together with an overview of laws and regulations valid at the European level and worldwide. A benchmarking survey was conducted to determine the level of e-learning quality assurance in these countries compared to good practices in the European Union. The Benchmarking Badge, published monthly on the www.bequal.info portal, tracks changes over time. Findings: After the pandemic, the average levels for VET (Vocational Education and Training) providers across the four participating countries were: (1) strategy and policy for e-learning, 71.6%; (2) support for trainers and trainees, 70.4%; (3) infrastructure support for e-learning, 74.9%; (4) programme/course design, development and approval for e-learning provision, 75.8%; and (5) e-learning training programme evaluation procedures, 67.9%. Research Limitation/Implication: On the one hand, the project was limited by Covid-19 pandemic measures, during which face-to-face meetings and training were not allowed; on the other hand, the VET providers recognised their weaknesses, strengths, and readiness for fully online education. Originality/Value of paper: Examples of good practice and video presentations inspire improvements in the quality of e-learning in VET. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. Benchmarking machine learning robustness in Covid-19 genome sequence classification.
- Author
-
Ali, Sarwan, Sahoo, Bikram, Zelikovsky, Alexander, Chen, Pin-Yu, and Patterson, Murray
- Subjects
MACHINE learning ,COVID-19 pandemic ,COVID-19 ,SARS-CoV-2 ,BENCHMARKING (Management) - Abstract
The rapid spread of the COVID-19 pandemic has resulted in an unprecedented amount of sequence data of the SARS-CoV-2 genome: millions of sequences and counting. This amount of data, while orders of magnitude beyond the capacity of traditional approaches to understanding the diversity, dynamics, and evolution of viruses, is nonetheless a rich resource for machine learning (ML) approaches as alternative means of extracting such important information. It is hence of utmost importance to design a framework for testing and benchmarking the robustness of these ML models. This paper makes the first effort (to our knowledge) to benchmark the robustness of ML models by simulating biological sequences with errors. We introduce several ways to perturb SARS-CoV-2 genome sequences to mimic the error profiles of common sequencing platforms such as Illumina and PacBio. Experiments on a wide array of ML models show that, for specific embedding methods and certain noise simulations on the input sequences, some approaches are more robust (and accurate) than others across different perturbation budgets. Our benchmarking framework may assist researchers in properly assessing different ML models and help them understand the behavior of the SARS-CoV-2 virus or avoid possible future pandemics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
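The perturbation idea in the abstract above, injecting substitutions and indels into genome sequences at rates mimicking a sequencing platform's error profile, can be sketched as below. The rates shown are illustrative placeholders, not the calibrated Illumina/PacBio profiles the paper uses.

```python
import random

def perturb(seq, sub_rate=0.01, indel_rate=0.001, rng=None):
    """Return a copy of `seq` with substitutions, insertions, and
    deletions applied independently at each position."""
    rng = rng or random.Random(0)
    bases = "ACGT"
    out = []
    for b in seq:
        if rng.random() < indel_rate:       # deletion: drop this base
            continue
        if rng.random() < indel_rate:       # insertion before this base
            out.append(rng.choice(bases))
        if rng.random() < sub_rate:         # substitution: different base
            out.append(rng.choice([x for x in bases if x != b]))
        else:
            out.append(b)
    return "".join(out)

# Substitutions only, so length is preserved and mismatches are countable
original = "ACGT" * 1000
noisy = perturb(original, sub_rate=0.05, indel_rate=0.0)
mismatches = sum(a != b for a, b in zip(original, noisy))
print(mismatches)   # roughly 5% of 4000 positions
```

Varying `sub_rate` and `indel_rate` plays the role of the paper's perturbation budget: a model is evaluated on progressively noisier copies of the same sequences to see how quickly its accuracy degrades.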
49. Multicriteria Model for Organizational Green Information Technology Maturity Assessment and Benchmarking: Defining a Class Structure †.
- Author
-
de Carvalho, Victor Diogho Heuer, Poleto, Thiago, Verde, Salvatore, and Nepomuceno, Thyago Celso Cavalcante
- Subjects
INFORMATION technology ,MULTIPLE criteria decision making ,NUMERICAL analysis ,BENCHMARKING (Management) ,STRATEGIC planning - Abstract
Assessing Green Information Technology (IT) maturity in organizations is a relevant process to measure the progress of sustainable IT initiatives and to support new actions to improve them. Knowledge about the organizational maturity level in Green IT and comparing this level with those of other companies are necessary for self-assessment to strengthen organizations' general sustainability strategy. The main objective of this paper is to communicate a Green IT maturity assessment model with its class structure. This model can also provide benchmarking regarding organizations' maturity since its fundamental premise is a pairwise comparison between companies to obtain their classification. Based on a literature search to identify the existing maturity models, the CMMI model was selected since it is the most recurrent in the literature on managing organizational Green IT actions. The classification process using CMMI maturity levels as classes is based on the ELECTRE IV multicriteria decision support method, which was developed to work specifically with classification problems. The results include the companies' allocation into the most appropriate classes, considering well-defined criteria set with their weights, the class boundaries according to numerical parameters such as lower and upper limits for each of them, and data collected on companies under consideration for the assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
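For intuition about classifying companies into CMMI-style maturity levels, class assignment against fixed boundaries can be sketched with a weighted aggregate. Note this is a deliberately simplified stand-in: the paper's model uses the ELECTRE IV outranking method, which classifies by pairwise comparison between companies rather than by a single weighted score, and the criteria, weights, and thresholds below are all invented.

```python
def maturity_level(scores, weights, lower_bounds):
    """Assign a maturity level (1-5) from criterion scores in [0, 1]:
    the weighted average is compared against ascending lower bounds
    for levels 2 through 5."""
    agg = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    level = 1
    for bound in lower_bounds:   # each bound passed raises the level by one
        if agg >= bound:
            level += 1
    return level

# Example: two Green IT criteria (e.g. energy policy, e-waste handling),
# equal weights, level boundaries at 0.2 / 0.4 / 0.6 / 0.8
print(maturity_level([0.8, 0.6], [1, 1], [0.2, 0.4, 0.6, 0.8]))  # prints 4
```

An outranking method such as ELECTRE IV avoids this kind of full aggregation precisely because a very poor score on one criterion can then no longer be compensated by strength on another, which is often desirable in maturity assessment.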
50. Artificial Intelligence for Text-Based Vehicle Search, Recognition, and Continuous Localization in Traffic Videos.
- Author
-
Panetta, Karen, Kezebou, Landry, Oludare, Victor, Intriligator, James, and Agaian, Sos
- Subjects
ARTIFICIAL intelligence ,INTELLIGENT transportation systems ,OBJECT tracking (Computer vision) ,TRAFFIC engineering ,BENCHMARKING (Management) - Abstract
The concept of searching for and localizing vehicles in live traffic videos based on descriptive textual input has yet to be explored in the scholarly literature. Endowing Intelligent Transportation Systems (ITS) with such a capability could help solve crimes on roadways. One major impediment to the advancement of fine-grain vehicle recognition models is the lack of video testbench datasets with annotated ground truth data. Additionally, to the best of our knowledge, no metrics currently exist for evaluating the robustness and performance efficiency of a vehicle recognition model on live videos, and even less so for vehicle search and localization models. In this paper, we address these challenges by proposing V-Localize, a novel artificial intelligence framework for vehicle search and continuous localization in live traffic videos based on input textual descriptions. An efficient hashgraph algorithm is introduced to compute valid target information from textual input. This work further introduces two novel datasets to advance AI research in these challenging areas: (a) the most diverse and large-scale Vehicle Color Recognition (VCoR) dataset, with 15 color classes, twice as many as in the largest existing such dataset, to facilitate finer-grain recognition with color information; and (b) the Vehicle Recognition in Video (VRiV) dataset, a first-of-its-kind video testbench dataset for evaluating the performance of vehicle recognition models on live videos rather than still image data. The VRiV dataset will open new avenues for AI researchers to investigate innovative approaches that were previously intractable due to the lack of an annotated traffic vehicle recognition video testbench dataset. Finally, to address this gap in the field, five novel metrics are introduced for adequately assessing the performance of vehicle recognition models on live videos. Ultimately, the proposed metrics could also prove effective for quantitative model evaluation in other video recognition applications. One major advantage of the proposed vehicle search and continuous localization framework is that it could be integrated into ITS software solutions to aid law enforcement, especially in critical cases such as Amber Alerts or hit-and-run incidents. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library