Search Results (694 results)
2. 60.1: Invited Paper: 3D Model-Based Camera Tracking Technology for Augmented Reality.
- Author
- Makita, Koji
- Subjects
- AUGMENTED reality, THREE-dimensional imaging, IMAGE processing, BENCHMARKING (Management), VIRTUAL reality, HISTOGRAMS
- Abstract
This paper presents a study of a 3D model-based camera tracking method for indoor augmented reality applications, and its benchmarking. Our proposed tracking method is based on registered frame data of virtualized reality models, which are photos with known photo-shoot positions and orientations, together with depth data. Tracking results of the method are evaluated with two types of datasets, created from real camera images and from generated images of virtualized reality models. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
3. Special issue on Performance modeling, benchmarking, and simulation of high performance computing systems.
- Subjects
- LATTICE quantum chromodynamics, BENCHMARKING (Management), HIGH performance computing
- Abstract
Scientific computing and numerical simulation are now indispensable tools in many areas of science and engineering. This special issue of Concurrency and Computation: Practice and Experience contains six extended papers selected from the 10th and 11th International Workshops on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS 2019 and 2020), both held as part of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC). [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
4. What metrics of harm are being captured in clinical trials involving talking treatments for young people? A systematic review of registered studies on the ISRCTN.
- Author
- Hayes, Daniel and Za'ba, Nur
- Subjects
- RISK-taking behavior, SUICIDE, CLINICAL trials, SYSTEMATIC reviews, MENTAL health, BENCHMARKING (Management), ADVERSE health care events, PSYCHOTHERAPY, SELF-mutilation, CHILDREN, ADULTS
- Abstract
Objective: The recording of harm and adverse events in psychological trials is essential, yet the types of harm being captured in trials for talking treatments involving children and young people have not been systematically investigated. The aim of this review was to determine how often harm and adverse events are recorded in talking treatments for children and young people, as well as the metrics that are being collected. Method: The ISRCTN was searched for trials involving talking therapies and young people. Of 355 entries, 69 met inclusion criteria. The authors of these records were contacted for further information, and additional searches were conducted of protocols and papers. Results: Findings show that around half of all records mentioned harm or adverse events in at least one piece of study documentation. Overall, metrics commonly collected are as follows: suicide, suicidal ideation and intent, self‐harm, changes to clinical symptomology, and the need for further or additional care. Conclusions: Similar to the wider field of psychological interventions for mental health, the recording of harm and adverse events in children and young people tends to rely on a few key metrics, many of which are borrowed from drug trials. Examples of best practice have been highlighted, as well as recommendations for the progression of this research area. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. Benchmarking, Temporal Disaggregation, and Reconciliation of Systems of Time Series.
- Author
- Chen, Baoline, Di Fonzo, Tommaso, and Mushkudiani, Nino
- Subjects
- BENCHMARKING (Management), STATISTICS
- Abstract
An introduction is presented in which the editor discusses articles in the issue on topics including benchmarking; data consistency; and statistics.
- Published
- 2018
- Full Text
- View/download PDF
6. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.
- Author
- Xiao, Ruiyang, Adolfsson‐Erici, Margaretha, Åkerman, Gun, McLachlan, Michael S., and MacLeod, Matthew
- Subjects
- GASTROINTESTINAL system, FISH physiology, DIPHENYL, BENCHMARKING (Management), PAPER chemicals, ENVIRONMENTAL sciences
- Abstract
Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2′,5,6′-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. Environ Toxicol Chem 2013;32:2695-2700. © 2013 SETAC [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
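The internal benchmarking idea in the abstract above (normalize each measurement by the recovery of the corresponding benchmark chemical before computing absorption efficiency) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual equations: the function name, the expression of measurements as fractions of the ingested dose, and the final combination of the two corrected fractions are all assumed for illustration.

```python
# Minimal sketch of internal chemical benchmarking (illustrative only;
# the paper's actual equations may differ).

def benchmarked_absorption_efficiency(chem_fish, chem_feces,
                                      pcb53_fish_recovery,
                                      dbdpe_feces_recovery):
    """Estimate gross dietary absorption efficiency of a test chemical.

    chem_fish, chem_feces: fractions of the ingested dose of the test
    chemical recovered in fish tissue and in feces (0-1).
    pcb53_fish_recovery: fraction of the absorbable benchmark (PCB53)
    recovered in fish; corrects for incomplete extraction from fish.
    dbdpe_feces_recovery: fraction of the nonabsorbable benchmark
    (decabromodiphenyl ethane) recovered in feces; corrects for
    incomplete feces collection.
    """
    # Benchmark each measurement to the matching benchmark recovery.
    fish_corrected = chem_fish / pcb53_fish_recovery
    feces_corrected = chem_feces / dbdpe_feces_recovery
    # Absorption efficiency as the benchmarked fraction retained in fish
    # relative to total benchmarked recovery (a simplifying assumption).
    return fish_corrected / (fish_corrected + feces_corrected)

# Example: equal corrected recoveries in fish and feces give 50%.
print(benchmarked_absorption_efficiency(0.3, 0.3, 0.75, 0.75))  # prints 0.5
```

The point of the correction step is that dividing by the benchmark recovery cancels losses common to the test chemical and the benchmark, which is how benchmarking can reduce variability across repeated tests.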
7. Mapping and contesting peer selection in digitalized public sector benchmarking.
- Author
- Chua, Wai Fong, Graaf, Johan, and Kraus, Kalle
- Subjects
- PUBLIC sector, BENCHMARKING (Management), KEY performance indicators (Management), PEERS, CARTOGRAPHY
- Abstract
This paper investigates the influence of digitalization on different modes of peer selection in public sector benchmarking. We do so in the context of a field study of the impact of "Kolada"—a digital database and benchmarking device comparing the performance of Swedish municipalities. We find that the municipal quality controllers often used algorithmically selected peer groups to identify "pure" performance gaps for a range of performance indicators. Politicians, departmental managers, and the citizenry, however, continued to prefer benchmarking against neighboring municipalities. Drawing on Gieryn's concept of cultural cartography, differences in peer selection are characterized as a form of credibility contest between digitally generated and local maps. Our paper contributes to the literature in three main ways. First, we demonstrate how peer selection involves a mutual interplay between new digitally generated, abstract maps of performance and local cartographic legacies sustained by complex social attachments. Second, our paper illustrates the importance of often overlooked social ties informing processes of peer selection, highlighting the importance of professional ties, neighborly familiarity, and affective relations. Third, our paper characterizes the power of "native truths." More generally, our paper indicates the epistemic authority of digital "truths" is contestable and may be resisted. Ultimately, the coexistence of "old" and new epistemic maps confers choice, which contributes to the legitimacy of new technologies enabling digitalized benchmarking to persist in shifting and locally meaningful ways. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. A novel unified interference management scheme for multicellular MIMO communication with instantaneous relay.
- Author
- Menon U, Vivek and Selvaprabhu, Poongundran
- Subjects
- WIRELESS communications, TELECOMMUNICATION, BENCHMARKING (Management), DEGREES of freedom, IMAC (Computer)
- Abstract
Summary: In the world of emerging wireless networks, interference poses a significant challenge to reliable wireless communication. Additionally, these networks are prone to path loss and blockages, which can be addressed by utilizing the advanced technology of multihop communication with instantaneous relay (IR). However, scenarios involving IR‐assisted networks are considered instances of multihop communications that face potential obstacles caused by interference. As a result, multiple interference management approaches exist to tackle this interference issue, among which aligned interference neutralization (AIN) is a state‐of‐the‐art technology that seamlessly unifies two established interference management strategies: interference alignment (IA) and interference neutralization (IN). Therefore, this paper presents a novel tristaged AIN scheme to mitigate interference in a multicellular multiple‐input multiple‐output (MIMO) interference multiple access channel (IMAC). In the proposed scheme, the initial stage‐1 involves transmitting message signals from individual transmitters or users to the IR and the receiving base stations (BSs). In stage‐2, the IR neutralizes half of the interference signals by performing IN. Finally, in stage‐3, IA is carried out at the receiver BS terminals, aligning the remaining interference signals equally within the available dimensions. Based on this proposed approach, we determined that for an IR‐aided multicellular MIMO IMAC, the achievable degree of freedom (DoF) is 2N. The proposed approach's robustness and effectiveness have been analyzed through extensive simulations, and these simulation results indicated that the proposed approach outperforms other benchmark interference management techniques in terms of DoF and sum rate, thereby improving user performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. KNOWLEDGE AND THE FIRM: OVERVIEW.
- Author
- Spender, J.-C. and Grant, Robert M.
- Subjects
- KNOWLEDGE management, RESOURCE allocation, BENCHMARKING (Management), DIFFUSION of innovations, TOTAL quality management, RESOURCE management, TECHNOLOGICAL innovations, ORGANIZATIONAL learning, ENTERPRISE resource planning
- Abstract
The explosion of interest in knowledge and its management reflects the trend towards 'knowledge work' and the Information Age, and recognition of knowledge as the principal source of economic rent. The papers in this Special Issue represent an attempt by strategy scholars (and some outside our traditional field) to come to terms with the implications of knowledge for the theory of the firm and its management. They are the product of a convergence of several streams of research which have addressed management implications of knowledge, including the management of technology, the economics of innovation and information, resource-based theory, and organizational learning. At the theoretical level, knowledge-centered approaches of Penrose, Arrow, Hayek and others have been enriched by contributions from evolutionary economists (notably Nelson and Winter) and epistemologists (notably M. Polanyi). At the empirical level, research into innovation and its diffusion originated by Mansfield, Griliches and others has been extended through studies which investigate tacit as well as explicit knowledge, and explore knowledge transfer within as well as across firms. [ABSTRACT FROM AUTHOR]
- Published
- 1996
10. Deviating from the ideal.
- Author
- Barrett, Jacob
- Subjects
- THEORISTS, BENCHMARKING (Management), PROBLEM solving, DYSTOPIAS, POLITICAL philosophy
- Abstract
Ideal theorists aim to describe the ideally just society. Problem solvers aim to identify concrete changes to actual societies that would make them more just. The relation between these two sorts of theorizing is highly contested. According to the benchmark view, ideal theory is prior to problem solving because a conception of the ideally just society serves as an indispensable benchmark for evaluating societies in terms of how far they deviate from it. In this paper, I clarify the benchmark view, argue that existing criticisms of it are unsuccessful, and develop a novel redundancy objection to the benchmark view and the claim of priority it allegedly entails. I then consider the extent to which ideal theory might facilitate problem solving without being prior to it and argue that it can only play a modest role in this regard. The upshot is that ideal theory is neither required for nor especially relevant to problem solving—but it is not completely irrelevant either. It facilitates problem solving to some limited degree, but no more, say, than theorizing about dystopia. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Benchmarking the first generation of production quality Arm‐based supercomputers.
- Author
- McIntosh‐Smith, Simon, Price, James, Poenaru, Andrei, and Deakin, Tom
- Subjects
- SUPERCOMPUTERS, HIGH performance computing, BENCHMARKING (Management), CATALYSTS, SCOUTS (Youth organization members)
- Abstract
In this paper, we present scaling results from two production quality supercomputers that use the first generation of Arm‐based CPUs that have been optimized for scientific workloads. Both systems use Marvell ThunderX2 CPUs, which deliver high core counts and class‐leading memory bandwidth. The first system is Isambard, a Cray XC50 "Scout" system operated by the GW4 Alliance and the UK Met Office as a Tier‐2 national HPC service. The second system is one of three Arm‐based HPE Apollo 70 systems delivered as part of the Catalyst UK project, running at the University of Bristol. We compare scaling results from these two systems with three Cray XC50 systems based on Intel Skylake and Broadwell CPUs. We focus on a range of applications and mini‐apps that are important to the UK national HPC service, ARCHER, and to our project partners. We also compare the performance and maturity of the state‐of‐the‐art toolchains available on Arm‐based HPC systems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
12. A feedback view of behavioural distortions from perceived public service gaps at 'street‐level' policy implementation: The case of unintended outcomes in public schools.
- Author
- Bianchi, Carmine and Salazar Rua, Robinson
- Subjects
- SOCIAL support, STRATEGIC planning, MEDICAL databases, INFORMATION storage & retrieval systems, PERCEPTUAL disorders, EVALUATION of organizational effectiveness, ACADEMIC achievement, BENCHMARKING (Management), GOVERNMENT policy, GROUP decision making, PUBLIC sector, SCHOOL administration, PATIENT education, CORPORATE culture
- Abstract
This paper discusses the limitations and risks associated with the use of output‐oriented measures to assess public school performance. In particular, it questions the capability of performance measures set by institutions external to schools to support sustainable educational outcomes. To this end, the 'street‐level bureaucracy' theory is used in the paper as a basis to analyse the behavioural distortions generated by perceived public service gaps in school‐level policy implementation. Such unintended behavioural effects are often a major cause of disappointing outcomes when test‐based accountability systems are adopted. In the second part of the paper, an insight model of a hypothetical medium‐sized public school located in a poor area of Colombia is used to illustrate how a feedback approach to school performance measurement can support decision‐makers in pursuing sustainable education outcomes and in preventing behavioural distortions from perceived public service gaps at 'street‐level' policy implementation. This analysis outlines an alternative approach to school performance measurement that might help policy makers extend the domain of governmental benchmarks to performance measures and collaborative efforts that reflect the challenges of holistic education in the context where public schools are located. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Review article: Pre‐hospital trauma guidelines and access to lifesaving interventions in Australia and Aotearoa/New Zealand.
- Author
- Andrews, Tim, Meadley, Ben, Gabbe, Belinda, Beck, Ben, Dicker, Bridget, and Cameron, Peter
- Subjects
- WOUNDS & injuries, MEDICAL protocols, BENCHMARKING (Management), HOSPITALS, EMERGENCY medical services, EMERGENCY medicine, EVALUATION of medical care, PATIENT care, TRANSPORTATION of patients
- Abstract
The centralisation of trauma services in western countries has led to an improvement in patient outcomes. Effective trauma systems include a pre‐hospital trauma system. Delivery of high‐level pre‐hospital trauma care must include identification of potential major trauma patients, access and correct application of lifesaving interventions (LSIs) and timely transport to definitive care. Globally, many nations endorse nationwide pre‐hospital major trauma triage guidelines, to ensure a universal approach to patient care. This paper examined clinical guidelines from all 10 EMS in Australia and Aotearoa/New Zealand. All relevant trauma guidelines were included, and key information was extracted. Authors compared major trauma triage criteria, all LSI included in guidelines, and guidelines for transport to definitive care. The identification of major trauma patients varied between all 10 EMS, with no universal criteria. The most common approach to trauma triage included a three‐step assessment process: physiological criteria, identified injuries and mechanism of injury. Disparity between physiological criteria, injuries and mechanism was found when comparing guidelines. All 10 EMS had fundamental LSI included in their trauma guidelines. Fundamental LSI included haemorrhage control (arterial tourniquets, pelvic binders), non‐invasive airway management (face mask ventilation, supraglottic airway devices) and pleural wall needle decompression. Variation in more advanced LSI was evident between EMS. Optimising trauma triage guidelines is an important aspect of a robust and evidence driven trauma system. The lack of consensus in trauma triage identified in the present study makes benchmarking and comparison of trauma systems difficult. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. At the crossroads of logics: Automating newswork with artificial intelligence—(Re)defining journalistic logics from the perspective of technologists.
- Author
- Sirén‐Heikel, Stefanie, Kjellman, Martin, and Lindén, Carl‐Gustav
- Subjects
- PRESS, MASS media, DIGITAL technology, SERIAL publications, RESEARCH methodology, SOCIAL media, ARTIFICIAL intelligence, DISINFORMATION, INTERVIEWING, CONCEPTUAL structures, BENCHMARKING (Management), AUTOMATION, RESEARCH funding, LOGIC
- Abstract
As artificial intelligence (AI) technologies become more ubiquitous for streamlining and optimizing work, they are entering fields representing organizational logics at odds with the efficiency logic of automation. One such field is journalism, an industry defined by a logic enacted through professional norms, practices, and values. This paper examines the experience of technologists developing and employing natural language generation (NLG) in news organizations, looking at how they situate themselves and their technology in relation to newswork. Drawing on institutional logics, a theoretical framework from organizational theory, we show how technologists shape their logic for building these emerging technologies based on a theory of rationalizing news organizations, a frame of optimizing newswork, and a narrative of news organizations misinterpreting the technology. Our interviews reveal technologists mitigating tensions with journalistic logic and newswork by labeling stories generated by their systems as nonjournalistic content, seeing their technology as a solution for improving journalism, enabling newswork to move away from routine tasks. We also find that as technologists interact with news organizations, they assimilate elements from journalistic logic beneficial for benchmarking their technology for more lucrative industries. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Report on G4‐Med, a Geant4 benchmarking system for medical physics applications developed by the Geant4 Medical Simulation Benchmarking Group.
- Author
- Arce, P., Bolst, D., Bordage, M.‐C., Brown, J. M. C., Cirrone, P., Cortés‐Giraldo, M. A., Cutajar, D., Cuttone, G., Desorgher, L., Dondero, P., Dotti, A., Faddegon, B., Fedon, C., Guatelli, S., Incerti, S., Ivanchenko, V., Konstantinov, D., Kyriakou, I., Latyshev, G., and Le, A.
- Subjects
- MEDICAL physics, MEDICAL simulation, NUCLEAR medicine, BENCHMARKING (Management), MONTE Carlo method
- Abstract
Background: Geant4 is a Monte Carlo code extensively used in medical physics for a wide range of applications, such as dosimetry, micro‐ and nanodosimetry, imaging, radiation protection, and nuclear medicine. Geant4 is continuously evolving, so it is crucial to have a system that benchmarks this Monte Carlo code for medical physics against reference data and to perform regression testing. Aims: To respond to these needs, we developed G4‐Med, a benchmarking and regression testing system of Geant4 for medical physics. Materials and Methods: G4‐Med currently includes 18 tests. They range from the benchmarking of fundamental physics quantities to the testing of Monte Carlo simulation setups typical of medical physics applications. Both electromagnetic and hadronic physics processes and models within the prebuilt Geant4 physics lists are tested. The tests included in G4‐Med are executed on the CERN computing infrastructure via the use of the geant‐val web application, developed at CERN for Geant4 testing. The physical observables can be compared to reference data for benchmarking and to results of previous Geant4 versions for regression testing purposes. Results: This paper describes the tests included in G4‐Med and shows the results derived from the benchmarking of Geant4 10.5 against reference data. Discussion: Our results indicate that the Geant4 electromagnetic physics constructor G4EmStandardPhysics_option4 gives a good agreement with the reference data for all the tests. The QGSP_BIC_HP physics list provided an overall adequate description of the physics involved in hadron therapy, including proton and carbon ion therapy. New tests should be included in the next stage of the project to extend the benchmarking to other physical quantities and application scenarios of interest for medical physics. Conclusion: The results presented and discussed in this paper will aid users in tailoring physics lists to their particular application. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. APPLICATION OF PANEL DATA MODELS IN BENCHMARKING ANALYSIS OF THE ELECTRICITY DISTRIBUTION SECTOR.
- Author
- Farsi, Mehdi, Filippini, Massimo, and Greene, William
- Subjects
- ELECTRIC power distribution, ECONOMETRICS, STOCHASTIC analysis, PANEL analysis, BENCHMARKING (Management), ELECTRIC power systems, TOTAL quality management, BUSINESS enterprises
- Abstract
This paper explores the application of several panel data models in measuring productive efficiency of the electricity distribution sector. Stochastic Frontier Analysis has been used to estimate the cost-efficiency of 59 distribution utilities operating over a nine-year period in Switzerland. The estimated coefficients and inefficiency scores are compared across three different panel data models. The results indicate that individual efficiency estimates are sensitive to the econometric specification of unobserved firm-specific heterogeneity. This paper shows that alternative panel models such as the ‘true’ random effects model proposed by Greene (2005) could be used to explore the possible impacts of unobserved firm-specific factors on efficiency estimates. When these factors are specified as a separate stochastic term, the efficiency estimates are substantially higher, suggesting that conventional models could confound efficiency differences with other unobserved variations among companies. On the other hand, a refined specification of unobserved heterogeneity might lead to an underestimation of inefficiencies by mistaking potentially persistent inefficiencies for external factors. Given that the specification of inefficiency and heterogeneity relies on non-testable assumptions, there is no conclusive evidence in favour of one or the other specification. However, this paper argues that alternative panel data models, along with conventional estimators, can be used to obtain approximate lower and upper bounds for companies' efficiency scores. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
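The panel cost-frontier comparison described in the abstract above can be written schematically. The notation below is standard stochastic frontier notation assumed for illustration, not taken from the paper:

```latex
% Pooled stochastic cost frontier for utility $i$ in year $t$:
\ln C_{it} = \beta_0 + x_{it}'\beta + v_{it} + u_{it},
\qquad v_{it} \sim N(0,\sigma_v^2), \quad u_{it} \ge 0 .
% Greene's (2005) ``true'' random effects model adds a firm-specific
% random effect $\alpha_i$, separating time-invariant heterogeneity
% from the inefficiency term $u_{it}$:
\ln C_{it} = \beta_0 + \alpha_i + x_{it}'\beta + v_{it} + u_{it} .
```

Here $C_{it}$ is observed cost and $x_{it}$ collects outputs and input prices. When persistent firm differences are absorbed into $\alpha_i$ rather than $u_{it}$, estimated inefficiency shrinks, which matches the abstract's observation that efficiency estimates are substantially higher under the 'true' random effects specification.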
17. Benchmarking scientific journals from the submitting author's viewpoint.
- Author
- Björk, Bo-Christer and Holmström, Jonas
- Subjects
- BENCHMARKING (Management), SCIENTIFIC literature, SCIENCE writers, SCIENCE publishing, SCHOLARLY publishing, SCIENCE periodical publishing
- Abstract
Authors of scholarly papers to a large extent base the decision on where to submit their manuscripts on the prestige of journals, taking little account of other possible factors. Information concerning such factors is in fact often not available. This paper argues for the establishment of methods for benchmarking scientific journals, taking into account a wider range of journal performance parameters than is currently available. A model for how prospective authors determine the value of submitting to a particular journal is presented. The model includes eight factors that influence an author's decision and 21 other underlying factors. The model is a qualitative one. The method proposes to benchmark groups of journals by application of the factors. Initial testing of the method has been undertaken in one discipline. [ABSTRACT FROM AUTHOR]
- Published
- 2006
18. Bycatch Beknown: Methodology for jurisdictional reporting of fisheries discards – Using Australia as a case study.
- Author
- Kennelly, Steven J.
- Subjects
- FISHERIES, SHRIMP fisheries, ELECTRONIC surveillance, BENCHMARKING (Management), CASE studies
- Abstract
Bycatch remains one of the most important issues in the world's fisheries so its estimation and reporting have been highlighted in many international, regional and jurisdictional guidelines and policies. This paper describes a simple methodology to estimate jurisdictional discards, using Australia's first national bycatch report as a case study. The methodology involves: (a) identifying annual landings for all fisheries and methods; (b) deriving retained:discard ratios for each; (c) where ratios are lacking, using substitute ratios from similar fisheries; (d) applying the ratios from (b) and (c) to the data from (a) to obtain totals; and (e) scoring the quality of the discard information using the US Tier Classification System weighted by estimated discard levels. The results for Australia revealed that, during the last decade, commercial fisheries annually discarded 42.5% of what was caught (87,983t). 70% came from just eight fisheries/methods with 30% coming from the other 299. The Queensland East Coast Prawn Trawl fishery contributed 28.5% of the national total. The quality of discard information was reasonable across most jurisdictions, with a national score of 59.1%. The best quality data came from the Commonwealth due to its observer and (more recent) Electronic Monitoring programmes. Those data also showed that fishers' logbook information under‐estimated levels of discards (determined from observer data) by 89.7%. This paper provides: (a) the means to develop benchmarks in bycatch management and estimation against which jurisdictions can be compared and performances tracked; and (b) for Australia, priority areas for management intervention to reduce discarding and improve its monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
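The five estimation steps (a)-(e) in the abstract above can be sketched in code. All fishery data, ratios, and tier scores below are invented for illustration, and the ratio is expressed here as discards per tonne landed, a simplification of the report's retained:discard ratios; the tier scores are assumed values, not the US Tier Classification System's actual scores.

```python
# Illustrative sketch of jurisdictional discard estimation, steps (a)-(e).
# All names and numbers are invented, not figures from the report.

# Step (e) ingredient: assumed quality score per data-quality tier
# (higher = better-quality discard information).
TIER_SCORE = {1: 1.0, 2: 0.6, 3: 0.3}

def estimate_jurisdictional_discards(fisheries):
    """fisheries: list of dicts with annual landings in tonnes, a discard
    ratio (discards per tonne landed) or None, a substitute ratio from a
    similar fishery, and a data-quality tier."""
    total_discards = 0.0
    weighted_score = 0.0
    for f in fisheries:
        # Steps (b)-(c): use the fishery's own ratio, else a substitute
        # ratio borrowed from a similar fishery.
        ratio = f["ratio"] if f["ratio"] is not None else f["proxy_ratio"]
        # Step (d): apply the ratio to the landings from step (a).
        discards = f["landings_t"] * ratio
        total_discards += discards
        # Step (e): quality score weighted by estimated discard level.
        weighted_score += TIER_SCORE[f["tier"]] * discards
    return total_discards, weighted_score / total_discards

fisheries = [
    {"landings_t": 10_000, "ratio": 0.40, "proxy_ratio": None, "tier": 1},
    {"landings_t": 5_000, "ratio": None, "proxy_ratio": 0.25, "tier": 3},
]
total, quality = estimate_jurisdictional_discards(fisheries)
print(f"{total:.0f} t discarded; weighted quality {quality:.2f}")
```

Weighting the quality score by estimated discard tonnage, as in step (e), means a jurisdiction's score is dominated by the fisheries that discard the most, which is the behaviour the abstract's national score of 59.1% implies.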
19. Helenos: A realistic benchmark for distributed transactional memory.
- Author
- Kobyliński, Paweł, Siek, Konrad, Baranowski, Jan, and Wojciechowski, Paweł T.
- Subjects
- COMPUTER storage devices, MULTIPROCESSORS, BENCHMARKING (Management), APPLICATION software
- Abstract
Summary: Transactional memory (TM) is an approach to concurrency control that aims to make writing parallel programs both effective and simple. The approach has been initially proposed for nondistributed multiprocessor systems, but it is gaining popularity in distributed systems to synchronize tasks at large scales. Efficiency and scalability are often the key issues in TM research; thus, performance benchmarks are an important part of it. However, while standard TM benchmarks like the Stanford Transactional Applications for Multi‐Processing suite and STMBench7 are available and widely accepted, they do not translate well into distributed systems. Hence, the set of benchmarks usable with distributed TM systems is very limited, and must be padded with microbenchmarks, whose simplicity and artificial nature often makes them uninformative or misleading. Therefore, this paper introduces Helenos, a realistic, complex, and comprehensive distributed TM benchmark based on the problem of the Facebook inbox, an application of the Cassandra distributed store. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. Introduction.
- Author
- Fisher, Colin
- Subjects
- BUSINESS ethics, BENCHMARKING (Management), SOCIAL responsibility of business, CONFERENCES & conventions
- Abstract
The section introduces a series of articles highlighted at the Challenge of Business Ethics conference in Cambridge, England. The diversity of the papers disguised a unity of contradiction, which linked papers that addressed the same key themes and tensions from opposite perspectives. The corporate social responsibility (CSR) approach faces the problem of converting formal, generalized intentions into practical and consistent action. The paper by Ioanna Foka in this special edition picks up on this theme and provides a framework for benchmarking organizations' performance on CSR. Other contributors, implicitly or explicitly, consider the sublation of this tension between a focus on the organization and a focus on the individual.
- Published
- 2003
- Full Text
- View/download PDF
21. Atom‐to‐atom Mapping: A Benchmarking Study of Popular Mapping Algorithms and Consensus Strategies.
- Author
- Lin, Arkadii, Dyubankova, Natalia, Madzhidov, Timur I., Nugmanov, Ramil I., Verhoeven, Jonas, Gimadiev, Timur R., Afonina, Valentina A., Ibragimova, Zarina, Rakhimbekova, Assima, Sidorov, Pavel, Gedich, Andrei, Suleymanov, Rail, Mukhametgaleev, Ravil, Wegner, Joerg, Ceulemans, Hugo, and Varnek, Alexandre
- Subjects
- ALGORITHMS, BENCHMARKING (Management), DATA scrubbing
- Abstract
In this paper, we compare the most popular Atom‐to‐Atom Mapping (AAM) tools: ChemAxon,[1] Indigo,[2] RDTool,[3] NameRXN (NextMove),[4] and RXNMapper,[5] which implement different AAM algorithms. An open‐source RDTool program was optimized, and its modified version ("new RDTool") was considered together with several consensus mapping strategies. The Condensed Graph of Reaction approach was used to calculate chemical distances and to develop the "AAM fixer" algorithm for automated correction of erroneous mappings. The benchmarking calculations were performed on a Golden dataset containing 1851 manually mapped and curated reactions. The best‐performing RXNMapper program, together with the AAM fixer, was applied to map the USPTO database. The Golden dataset, the mapped USPTO, and the optimized RDTool are available in the GitHub repository https://github.com/Laboratoire‐de‐Chemoinformatique. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. Benchmarking flood risk reduction in the Elbe River.
- Author
-
Grabs, W.
- Subjects
FLOOD risk ,BENCHMARKING (Management) ,FLOOD control ,FLOODS ,ENVIRONMENTAL monitoring ,MANAGEMENT - Abstract
The past decade has seen the development and overall acceptance of the concept of Integrated Flood Risk Management. This approach strives to balance the positive and negative effects of riverine floods and combines this with risk management concepts. In practice, this has led to a diversification of flood management practices that go beyond traditional structural flood protection measures towards non-structural measures. These include giving more space to rivers, backward relocation of dykes, re-naturalisation of flood plains and a suite of improved information systems, including improved flood forecasting services, promoting flood risk awareness and self-help capabilities. The paper describes in some detail the process of benchmarking to support effective planning, implementation and monitoring of integrated flood risk management activities, which require a set of quantifiable measures against which progress in flood risk management can be referenced. The extreme Elbe River floods of 2002 and 2013 triggered the development and implementation of the Elbe Flood Protection Action Plan, which has largely been implemented. The paper shows the basic concepts of the Elbe Flood Protection Plan and its actions and puts these in the context of a benchmarking framework. The paper concludes that although elements of a benchmarking concept are outlined in the Action Plan, the establishment of benchmarking practices as a tool in flood risk management is in its infancy. The development of a consistent benchmarking procedure would have the potential to further improve the effectiveness of Integrated Flood Risk Management practices in national and international river basins. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
23. Best practices in teaching echocardiography to cardiology fellows: a review of the evidence.
- Author
-
Ruden, Emily A., Way, David P., Nagel, Rollin W., Cheek, Fern, and Auseon, Alex J.
- Subjects
EVALUATION of teaching ,TEACHING methods ,BENCHMARKING (Management) ,CLINICAL competence ,CONFIDENCE intervals ,ECHOCARDIOGRAPHY ,ERIC (Information retrieval system) ,INFORMATION storage & retrieval systems ,MEDICAL databases ,STUDY & teaching of medicine ,MEDLINE ,ONLINE information services ,SYSTEMATIC reviews ,EVIDENCE-based medicine ,EDUCATIONAL outcomes ,DATA analysis software - Abstract
Background Best practices in the teaching of performance and interpretation of echocardiography to cardiology fellows are unknown, and thus, it has traditionally been performed through an apprenticeship model. This review summarizes the existing literature describing evidence-based teaching of echocardiography. Methods A comprehensive search of multiple scientific and educational databases included prospective studies describing an educational intervention for teaching echocardiography to physicians. A total of 288 articles were retrieved, and 10 articles were included in our review. The Medical Education Research Study Quality Instrument (MERSQI), a validated rubric designed to measure the methodological quality of educational research, was used to assign a comprehensive score to each paper. Results The articles were categorized by educational themes as follows: focused curriculum-based training, simulation, and assessment of competency. Individual study MERSQI scores varied from 8 to 13 (mean 10.55) on a scale of 18 points. The distribution of each group's median score (focused curriculum-based training 11.64; simulation 12.92; assessment of competency 9.39) was analyzed using boxplots with a 95% confidence interval. The median MERSQI score for the assessment of competency group was significantly lower than the others. Conclusions A review of the data exploring best practices in teaching echocardiography shows only limited evidence describing the curricular and assessment components of an overall educational system, rather than one-on-one clinical teaching. Future papers should explore the application of point-of-care teaching and the impact of interventions on patient outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. LESC: Superpixel cut‐based local expansion for accurate stereo matching.
- Author
-
Cheng, Xianjing, Zhao, Yong, Yang, Wenbang, Hu, Zhijun, Yu, Xiaomin, Sang, Haiwei, and Zhang, Guiying
- Subjects
HIGH resolution imaging ,MATHEMATICAL optimization ,IMAGE quality in imaging systems ,OPTICAL resolution ,BENCHMARKING (Management) - Abstract
The rapid estimation of the accurate disparity between pixels is the goal of stereo matching. However, this is very difficult for 3D label‐based methods due to the huge search space of 3D labels, especially for high‐resolution images. In this paper, a novel superpixel cut‐based method is proposed in an attempt to obtain an accurate disparity map efficiently, combining multi‐layer superpixel optimization with iterative local α‐expansion in parallel. For the multi‐layer superpixel optimization, feature point optimization is designed to obtain accurate candidate labels, which are set for most pixels using a non‐local cost aggregation strategy; per‐pixel labels of the corresponding superpixels are updated from the candidate label sets on the small‐size superpixel layer, and then the middle‐ to large‐size superpixel layers are updated progressively using the same non‐local cost aggregation strategy. To provide more prior information for identifying weakly textured and textureless regions in non‐local cost aggregation, a weighted combination of "intensity + gradient + binary image" is proposed for constructing an optimal minimum spanning tree (MST) to calculate the aggregated matching cost and obtain the labels with the minimum aggregated matching cost. Moreover, a local patch surrounding the corresponding superpixels is designed to accelerate superpixel optimization in parallel, and a neighborhood structure, comprising superpixel neighborhoods and patch neighborhoods, is presented to optimize the algorithm. For the iterative local α‐expansion, three layers of patch structure corresponding to the superpixel neighborhood structure are proposed for optimizing the algorithm. The experimental results show that higher accuracy is achieved by this method compared with several known state‐of‐the‐art stereo methods on KITTI 2015 and Middlebury benchmark V3, the standard benchmarks for testing stereo matching methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. A review on potentials and challenges of nanolubricants as promising lubricants for electric vehicles.
- Author
-
Ahmed Abdalglil Mustafa, Waleed, Dassenoy, Fabrice, Sarno, Maria, and Senatore, Adolfo
- Subjects
ELECTRIC motors ,BENCHMARKING (Management) ,TRIBOLOGY ,INTERNAL combustion engines ,CORROSION resistance ,THERMAL resistance - Abstract
The most remarkable difference between electric vehicles (EVs) and conventional ones is that conventional vehicles depend on the fuel burning of an internal combustion engine, while the emerging EVs operate on electric motors. These alterations create staggering shifts in both lubricants' market demand and performance specifications. Lubricants for electrical powertrains constitute greases, transmission oils, and lubricants for auxiliary systems, and do not rely on engine oils as internal combustion vehicles do. The new standards will be more focused on lubricants' electrical properties, such as breakdown voltage and conductivity, coupled with tribological performance under high rpm, corrosion resistance, and thermal management benchmarks. This paper thematically reviews the different studies performed with nanolubricants and how they match EVs' operational requirements. Conclusions from this study can be considered guidelines for the potential application of nanolubricants in EVs and for possible future research on the topic. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Benchmarking within a DEA framework: setting the closest targets and identifying peer groups with the most similar performances.
- Author
-
Ruiz, José L. and Sirvent, Inmaculada
- Subjects
PEERS ,DATA envelopment analysis ,BENCHMARKING (Management) - Abstract
Data envelopment analysis (DEA) is widely used as a benchmarking tool for improving performance of organizations. For that purpose, DEA analyses provide information on both target setting and peer identification. However, the identification of peers is actually a by‐product of DEA. DEA models seek a projection point of the unit under evaluation on the efficient frontier of the production possibility set, which is used to set targets, while peers are identified simply as the members of the so‐called reference sets, which consist of the efficient units that determine the projection point as a combination of them. In practice, the selection of peers is crucial for benchmarking, because organizations need to identify a peer group in their sector or industry that represents actual performances from which to learn. In this paper, we argue that DEA benchmarking models should incorporate into their objectives criteria for the selection of suitable benchmarks among peers, in addition to considering the setting of appropriate targets (as usual). Specifically, we develop models having two objectives: setting the closest targets and selecting the most similar reference sets. Thus, we seek to establish targets that require the least effort from organizations for their achievement in addition to identifying peer groups with the most similar performances, which are potential benchmarks to emulate and improve. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Caveat Compounder: A Warning about Using the Daily CRSP Equal-Weighted Index to Compute Long-Run Excess Returns.
- Author
-
CANINA, LINDA, MICHAELY, RONI, THALER, RICHARD, and WOMACK, KENT
- Subjects
COMPOUND annual growth rate ,STOCK price indexes ,PRICE indexes ,RATE of return ,INVESTMENTS ,BENCHMARKING (Management) ,PRICES of securities ,STOCK prices ,RATE of return on stocks ,REGRESSION analysis - Abstract
This paper issues a warning that compounding daily returns of the Center for Research in Security Prices (CRSP) equal-weighted index can lead to surprisingly large biases. The difference between the monthly returns compounded from the daily tapes and the monthly CRSP equal-weighted indices is almost 0.43 percent per month, or 6 percent per year. This difference amounts to one-third of the average monthly return and is large enough to reverse the conclusions of a paper using the daily tape to compute the return on the benchmark portfolio. We also investigate the sources of these biases and suggest several alternative strategies to avoid them. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
28. The World Price of Covariance Risk.
- Author
-
Harvey, Campbell R.
- Subjects
SECURITIES ,RATE of return ,ANALYSIS of covariance ,FINANCE ,CAPITAL assets pricing model ,RISK assessment ,INVESTMENTS ,PORTFOLIO management (Investments) ,RISK management in business ,BENCHMARKING (Management) ,FINANCIAL performance ,GLOBALIZATION - Abstract
In a financially integrated global market, the conditionally expected return on a portfolio of securities from a particular country is determined by the country's world risk exposure. This paper measures the conditional risk of 17 countries. The reward per unit of risk is the world price of covariance risk. Although the tests provide evidence on the conditional mean variance efficiency of the benchmark portfolio, the results show that countries' risk exposures help explain differences in performance. Evidence is also presented which indicates that these risk exposures change through time and that the world price of covariance risk is not constant. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
29. Critically classifying: UK e-government website benchmarking and the recasting of the citizen as customer.
- Author
-
Mosse, Benjamin and Whitley, Edgar A.
- Subjects
GOVERNMENT websites -- Access control ,BENCHMARKING (Management) ,CONSUMER preferences ,PRIVATE sector ,TOTAL quality management ,WEB development ,CUSTOMER services - Abstract
In recent years, discussion of the provision of government services has paid particular attention to notions of customer choice and improved service delivery. However, there appears to be a marked shift in the relationship between the citizen and the state, moving from government being responsive to the needs of citizens to viewing citizens explicitly as customers. This paper argues that this change is being accelerated by government use of techniques like benchmarking, which have been widely used in the private sector. To illustrate this point, the paper focuses on the adoption of website benchmarking techniques by the public sector. The paper argues that the essence of these benchmarking technologies, a process comprising both finding and producing truth, is fundamentally based on the act of classifying, and draws on Martin Heidegger's etymological enquiry to reinterpret classification as a dynamic movement towards order that both creates and obfuscates truth. In so doing, it demonstrates how Heidegger's seminal ideas can be adapted for critical social research by showing that technology is more than an instrument, as it has epistemic implications for what counts as truth. This stance is used as the basis for understanding empirical work reporting on a UK government website benchmarking project. Our analysis identifies the means involved in producing the classifications inherent in such benchmarking projects and relates these to the more general move that is recasting the relationship between the citizen and the state and increasingly blurring the boundaries between the state and the private sector. Recent developments in other attempts by the UK government to use private-sector technologies and approaches indicate ways in which this move might be challenged. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
30. The Development of a Benchmarking Methodology to Assist in Managing the Enhancement of University Research Quality.
- Author
-
Nicholls, Miles G.
- Subjects
UNIVERSITIES & colleges ,EDUCATIONAL quality ,RESEARCH grants ,EDUCATIONAL finance ,BENCHMARKING (Management) ,FEDERAL aid to research ,STOCK exchanges ,EDUCATION research - Abstract
The paper proposes a metric, the research quality index (RQI), for assessing and tracking university research quality. The RQI is a composite index that encompasses the three main areas of research activity: publications, research grants and higher degree by research activity. The public availability of such an index will also facilitate benchmarking (internally, competitively and generically) by academic units in universities. This has become an important activity in Australia with the proposed introduction of the Research Quality Framework (RQF) as the future research funding mechanism for Australian universities. The RQF is a quality-based system that will replace the existing funding system that is focused on the volume of research output, not quality. Benchmarking, using the RQI, will allow academic units to track their progress towards their quality targets and facilitates internal and competitive benchmarking, allowing academic units to assess the efficacy of their research quality enhancement strategies and policies on an annual basis. The paper illustrates the compilation and operation of the RQI. The RQI is a short-term tracking methodology for use in between the cyclical major research quality assessments. With modifications, it is applicable in a wide range of countries. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. Lexical data augmentation for sentiment analysis.
- Author
-
Xiang, Rong, Chersoni, Emmanuele, Lu, Qin, Huang, Chu‐Ren, Li, Wenjie, and Long, Yunfei
- Subjects
DEEP learning ,SEMANTICS ,SUPPORT vector machines ,PHONOLOGICAL awareness ,RESEARCH evaluation ,SUBJECT headings ,MACHINE learning ,COMPARATIVE studies ,BENCHMARKING (Management) ,KNOWLEDGE base ,DESCRIPTIVE statistics ,ARTIFICIAL neural networks ,ALGORITHMS - Abstract
Machine learning methods, especially deep learning models, have achieved impressive performance in various natural language processing tasks, including sentiment analysis. However, deep learning models are more demanding for training data. Data augmentation techniques are widely used to generate new instances based on modifications to existing data or by relying on external knowledge bases to address annotated data scarcity, which hinders the full potential of machine learning techniques. This paper presents our work using part‐of‐speech (POS) focused lexical substitution for data augmentation (PLSDA) to enhance the performance of machine learning algorithms in sentiment analysis. We exploit POS information to identify words to be replaced and investigate different augmentation strategies to find semantically related substitutions when generating new instances. The choice of POS tags as well as a variety of strategies, such as semantic‐based substitution methods and sampling methods, are discussed in detail. Performance evaluation focuses on the comparison between PLSDA and two previous lexical substitution‐based data augmentation methods, one of which is thesaurus‐based and the other lexicon‐manipulation based. Our approach is tested on five English sentiment analysis benchmarks: SST‐2, MR, IMDB, Twitter, and AirRecord. Hyperparameters such as the candidate similarity threshold and the number of newly generated instances are optimized. Results show that six classifiers (SVM, LSTM, BiLSTM‐AT, bidirectional encoder representations from transformers [BERT], XLNet, and RoBERTa) trained with PLSDA achieve an accuracy improvement of more than 0.6% compared with the two previous lexical substitution methods, averaged over the five benchmarks. Introducing POS constraints and well‐designed augmentation strategies can improve the reliability of lexical data augmentation methods. Consequently, PLSDA significantly improves the performance of sentiment analysis algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. DISCUSSION.
- Author
-
MORS, WALLACE P.
- Subjects
CONSUMER credit ,CREDIT control ,STATISTICS ,BENCHMARKING (Management) ,CONSUMERISM - Abstract
The author comments on the article "Credit Regulations and Consumer Buying," by Duncan McC. Holthausen. The author explains that he agrees with Holthausen on the importance of having adequate consumer credit statistics. In his article, Holthausen presents suggestions for improving the figures, which fall into four broad categories: eliminating non-consumer credit items from the statistics, adding consumer credit items not covered, bringing benchmark figures up to date, and expanding and improving the reporting samples used to project benchmark figures. The author critically analyzes these categories.
- Published
- 1952
33. Narrative review: status of key performance indicators in contemporary hospital pharmacy practice.
- Author
-
Lloyd, Georgia F., Bajorek, Beata, Barclay, Peter, and Goh, Sue
- Subjects
HOSPITALS ,BENCHMARKING (Management) ,CLINICAL medicine ,HOSPITAL pharmacies ,MEDLINE ,ONLINE information services ,SYSTEMATIC reviews ,KEY performance indicators (Management) - Abstract
Aim The aim of this review was to explore the status of key performance indicators (KPIs) in Australian hospital pharmacy practice. Data sources For this narrative review, databases (MEDLINE, PubMed and EBSCO) were searched for relevant publications within the period from April 1980 to April 2014 using the following search terms: hospital pharmacy, key performance indicators, performance measures, clinical indicators and benchmarking. The inclusion criteria were as follows: full text papers (papers only available as abstracts were discarded) and English language. Reference lists of selected papers were also searched to identify additional literature. Results While there are established competencies, standards and quality use of medicines (QUM) indicators for hospital pharmacy in Australia, there are no standardised KPIs relating to the performance and practice of hospital pharmacy. International research has demonstrated that KPIs are valuable tools for measuring pharmacy performance; the need for KPIs is highlighted in research from the UK, USA, Canada, New Zealand and Australia. Particular challenges associated with KPI implementation include: the need for relevance to all stakeholders; difficulties in measuring pharmacists' activities due to the inherent nature of their work; lack of resources for data collection; limited understanding of KPIs; and negative attitudes toward KPIs by some pharmacists. Conclusion Before nationally standardised KPIs are introduced into Australian hospital pharmacy practice, attention must be paid to developing relevant measures through careful consultation with all relevant stakeholders, including pharmacists themselves. KPIs should provide relevant results, be easy to measure and highlight the value of hospital pharmacy services in a resource-friendly manner. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
34. A method to detect internal leakage of hydraulic cylinder by combining data augmentation and multiscale residual CNN.
- Author
-
He, Qingchuan, Ruan, Huiqi, Pan, Jun, and Lyu, Xiaotian
- Subjects
HYDRAULIC cylinders ,DATA augmentation ,FEEDFORWARD neural networks ,CONVOLUTIONAL neural networks ,BENCHMARKING (Management) - Abstract
Developing a method to detect internal leakage in the hydraulic cylinders used in Electro‐Hydrostatic Actuators (EHAs) is important for preventing serious malfunctions in aircraft. At present, internal leakage in an EHA cannot be accurately detected using operational data alone. This paper proposes a convolutional neural network (CNN) based method to detect internal leakage in a hydraulic cylinder according to the relationship between the operational state parameters of the EHA and leakage in the hydraulic cylinder. A method is presented to align multi‐source signals of different forms by using the motor current as a benchmark. Because the number of monitoring signals is relatively small, a feedforward neural network (FFNN) based data augmentation method is proposed to enlarge the input data set. A general method for detecting internal leakage by combining signal alignment, data augmentation and a multiscale residual CNN is proposed. The experimental results show that the proposed method can accurately detect internal leakage in a hydraulic cylinder operating under non‐stationary load and velocity conditions, with a detection accuracy of 99.8%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. U‐FPNDet: A one‐shot traffic object detector based on U‐shaped feature pyramid module.
- Author
-
Ke, Xiao and Li, Jianping
- Subjects
AUTOMOBILE driving ,TRAFFIC monitoring ,BENCHMARKING (Management) ,PIXELS ,CAMERAS - Abstract
In the field of automatic driving, identifying vehicles and pedestrians is the starting point for other automatic driving techniques, and using the information collected by the camera to detect traffic targets is particularly important. The main bottleneck in traffic object detection is that targets of the same category may appear at very different scales. For example, the pixel size of cars may range from 30 to 300 px, which causes instability in positioning and classification. In this paper, a multi‐dimension feature pyramid is constructed in order to solve the multi‐scale problem. The feature pyramid is built by developing a U‐shaped module and using a cascade method. In order to verify the effectiveness of the U‐shaped module, we also designed a new one‐shot detector, U‐FPNDet. The model first extracts the basic feature map using the base network and constructs the multi‐dimension feature pyramid. Next, a pyramid pooling module is used to obtain more context information from the scene. Finally, the detection network is run on each level of the pyramid to obtain the final result by non‐maximum suppression (NMS). Using this method, state‐of‐the‐art performance is achieved on both detection and classification on commonly used benchmarks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. A model‐based method for rotor speed estimation of synchronous generators using wide area measurement system.
- Author
-
Hosseini, Hossein and Afsharnia, Saeed
- Subjects
SYNCHRONOUS generators ,ELECTRIC power systems ,ESTIMATION theory ,BENCHMARKING (Management) - Abstract
In this paper, a novel method is proposed to estimate the rotor speed of synchronous generators in steady‐state and dynamic conditions using phasor measurement unit data. This method uses the synchronous generator model to find the relationship between generator rotor speed and the frequencies at different buses of the power system. The principles of the proposed method are first presented through a simple network consisting of a generator connected to a transmission line. Afterward, the general formulation is developed. To demonstrate the effectiveness of our method, the IEEE 9‐bus and IEEE 39‐bus power systems are used as the benchmark systems. Different tests are carried out under three conditions: normal operation, the presence of measurement errors, and uncertainty in the power system parameters. In addition, the estimation results are compared with the results of other estimation methods. The results show the high accuracy of the proposed method under different conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. The performance of socially responsible equity mutual funds: Evidence from Sweden.
- Author
-
Leite, Carlos, Cortez, Maria Ceu, Silva, Florinda, and Adcock, Christopher
- Subjects
ETHICAL investments ,PORTFOLIO management (Investments) ,BENCHMARKING (Management) ,DECISION making ,INVESTMENT management - Abstract
This paper presents a comprehensive analysis of socially responsible (SR) funds in Sweden by assessing fund managers' abilities and performances across different market states. These issues are analyzed at the aggregate and individual fund levels. The paper also presents several new statistical tests that allow more precise inferences about differences in performance and the variability in fund returns arising from different benchmarks. In general, SR and conventional funds perform similarly to the market. At the aggregate level, SR funds investing in Sweden and Europe perform similarly to conventional funds, while those investing globally tend to underperform. This underperformance seems to be linked with poor selectivity abilities of global SR fund managers. For individual funds, the performance of both types of funds is more similar. Most funds perform similarly in crisis periods compared to non‐crisis periods. Overall, our results are consistent with a mature market for SR investing and support the view that the similar performance of SR and conventional funds is associated with the mainstreaming of SR investment in Sweden. These findings encourage SR investing both by socially conscious investors, who wish to align their social values with their investment decisions, as well as by conventional investors, who will not be penalized by investing in these funds. We also call attention to the difficulties investors face when trying to identify funds with high social standards, considering that there is scarce information on the extent to which each fund (SR or conventional) holds stocks that comply with ethical and social criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Globalized and bounded Nelder-Mead algorithm with deterministic restarts for tuning controller parameters: Method and application.
- Author
-
Butt, Khurram, Rahman, Ramhuzaini A., Sepehri, Nariman, and Filizadeh, Shaahin
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,SELF-tuning controllers ,FREQUENCY tuning ,BENCHMARKING (Management) - Abstract
This paper develops and examines an optimization algorithm for simulation-based tuning of controller parameters. The proposed algorithm globalizes the Guin augmented variant of Nelder-Mead's nonlinear downhill simplex by means of deterministic restarts, a linearly growing memory vector, and a moving initial simplex. First, the effectiveness of the algorithm is tested using 10 complex and multimodal optimization benchmarks. The algorithm achieves the global minima of all benchmarks and compares favorably against evolutionary, swarm, and other globalized local-search multimodal optimization algorithms in the probability of finding the global minimum and in numerical cost. Next, the proposed algorithm is applied to tuning sliding mode controller parameters for a servo pneumatic position control application. The experimental results reveal that the system with sliding mode controller parameters tuned using the proposed algorithm, targeting smooth position control with maximum possible accuracy, performs as desired and eliminates the need for manual online tuning. The results are also compared with the performance of the same servo pneumatic system with parameters tuned using manual online tuning in an earlier published work. The system with controller parameters tuned using the proposed algorithm shows an improvement in accuracy of 28.9% in sinusoidal tracking and 42.2% in multiple-step polynomial tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
39. An adaptive chaos particle swarm optimization for tuning parameters of PID controller.
- Author
-
Nie, Shan‐Kun, Wang, Yu‐Jia, Xiao, Shanli, and Liu, Zhifeng
- Subjects
PARTICLE swarm optimization ,PID controllers ,BENCHMARKING (Management) ,STOCHASTIC convergence ,ALGORITHMS - Abstract
An adaptive chaos particle swarm optimization (ACPSO) is presented in this paper to tune the parameters of a proportional-integral-derivative (PID) controller. To avoid local minima, we introduce a constriction factor. Meanwhile, chaotic searching is combined with particle swarm optimization to improve the ability of the proposed algorithm. A series of experiments is performed on six benchmark functions to confirm its performance. It is found that the ACPSO achieves better solution quality in solving global optimization problems while avoiding premature convergence. On this basis, the proposed algorithm is applied to tune the PID controller's parameters. The performance of the ACPSO is compared with that of different inspired algorithms, and the results show that the ACPSO is more robust and efficient when used to find the optimal parameters of a PID controller. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
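The two ACPSO ingredients named in the abstract above, a constriction factor and chaotic search, can be sketched as a plain PSO with Clerc's constriction coefficient and logistic-map initialization. This is a hypothetical sketch, not the authors' algorithm; the sphere objective and all parameter values are illustrative.

```python
# Minimal constriction-factor PSO with logistic-map (chaotic) initialization.
import numpy as np

def acpso_sketch(fun, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    # Chaotic initialization: iterate the logistic map to spread particles.
    z = rng.uniform(0.1, 0.9, (n, dim))
    for _ in range(10):
        z = 4.0 * z * (1.0 - z)          # logistic map, fully chaotic at r = 4
    x = lo + (hi - lo) * z
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(fun, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    # Clerc's constriction factor for c1 = c2 = 2.05 (chi ~ 0.7298).
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(fun, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

sphere = lambda p: float(np.sum(p**2))
best_x, best_f = acpso_sketch(sphere, dim=2)
```

For PID tuning, `fun` would instead simulate the closed-loop response for a candidate gain vector and return an error integral such as ISE.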
40. Benchmarking community/primary care musculoskeletal services: A narrative review and recommendation.
- Author
-
Burgess, Roanna, Lewis, Martyn, and Hill, Jonathan C.
- Subjects
MEDICAL care standards ,MUSCULOSKELETAL system diseases ,MEDICAL quality control ,COMMUNITY health services ,HEALTH outcome assessment ,BENCHMARKING (Management) ,PRIMARY health care ,NATIONAL health services ,RISK assessment - Abstract
Introduction: High quality data on service performance is essential in healthcare to evidence efficacy, efficiency, and value. There remains a paucity of publicly reported data in community and primary care musculoskeletal (MSK) services. There is also a lack of guidance on which metrics MSK services should be collecting and reporting, and how this data could be used to directly improve patient outcomes, experiences, and value. Method: A narrative review of the evidence around benchmarking MSK services was undertaken with a focus on how to develop routine data collection within community/primary care settings, and how to develop benchmarking capabilities for the future, looking towards a national MSK audit. This evidence was triangulated with the findings from recent MSK data studies undertaken by the authors and emerging UK policy and guidance in this area. Recommendations: To enable MSK benchmarking, services need to collect consistent, standardised outcomes; we have therefore developed a recommendation on a minimum MSK 'core outcome set' of Patient Reported Outcome Measures (PROMs) and Patient Reported Experience Measures (PREMs) (PROMs: MSK‐HQ, NPRS, WPAI; PREMs: National MSK PREM). In addition, we make recommendations on the use of a standardised evidence‐based method for case‐mix adjustment and outlier identification (using the following baseline demographic and clinical factors: age, sex, ethnicity, pain site, comorbidities, duration of symptoms, previous surgery, previous pain episodes), alongside considerations on how this data should be integrated and reported within NHS systems. Conclusions: Capturing high quality MSK data in a standardised, consistent, and sustainable way is a significant challenge.
Policyholders, commissioners, managers, and clinicians need to be realistic with expectations, and take time to explore barriers to implementation including funding, digital infrastructure/interoperability, data sharing/governance, digital literacy, and local/national leadership. Next steps include developing a national MSK audit programme to provide a benchmarking model to support continuous improvements in care quality for patients living with MSK conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. Optimal direct mailing modelling based on data envelopment analysis.
- Author
-
Mahdiloo, Mahdi, Noorizadeh, Abdollah, and Farzipoor Saen, Reza
- Subjects
DATA envelopment analysis ,GROUP decision making ,CUSTOMER relationship management ,CUSTOMER satisfaction ,BENCHMARKING (Management) ,MATHEMATICAL models - Abstract
Data envelopment analysis (DEA) is a mathematical programming technique frequently used for measuring and benchmarking the efficiency of homogeneous decision-making units (DMUs). This paper proposes a new use of DEA for customer scoring, and particularly for direct mailing modelling. Moreover, because DEA models suffer from some weaknesses, namely an unrealistic weighting scheme for the inputs and outputs and incomplete ranking among efficient DMUs, the present paper compares different ways of solving these problems and concludes that the common set of weights method, owing to several advantages, outperforms the other procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
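The DEA efficiency scoring the abstract above builds on can be sketched as a small linear program per DMU. Below is a minimal input-oriented CCR model in multiplier form, solved with SciPy's LP solver; a hypothetical illustration with an invented one-input/one-output dataset, not the paper's model.

```python
# Input-oriented CCR DEA in multiplier form: for DMU o, maximize weighted
# outputs subject to weighted inputs of o equal to 1 and every DMU's
# weighted-output/weighted-input gap being non-positive.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o given inputs X (n_dmu x m) and outputs Y (n_dmu x s)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s), then input weights v (m).
    c = np.concatenate([-Y[o], np.zeros(m)])          # linprog minimizes, so negate
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v . x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                         # u . y_j - v . x_j <= 0, all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun                                   # efficiency score in (0, 1]

X = np.array([[2.0], [4.0], [1.0]])   # one input per DMU
Y = np.array([[2.0], [2.0], [1.0]])   # one output per DMU
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
```

With one input and one output the score reduces to each DMU's output/input ratio relative to the best ratio, so the first and third DMUs are efficient and the second scores 0.5.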
42. A framework for benchmarking competency assessment models.
- Author
-
Kasser, Joseph, Hitchins, Derek, Frank, Moti, and Zhao, Yang Yang
- Subjects
BENCHMARKING (Management) ,OUTCOME-based education ,MANAGEMENT science ,LIBRARY inventories ,SYSTEMS engineering - Abstract
This paper discusses the need for competent systems engineers, the differences between nine current ways of assessing competencies (competency models), and the difficulty of comparing the competency models due to the different ways each model groups the competencies. The paper then introduces a competency model maturity framework (CMMF) for benchmarking the competency models of systems engineers. The paper benchmarks the nine models using the CMMF; a surprising finding was an error of omission in all nine models. The paper shows that the CMMF can also be used as the basis for developing an original model for a specific organization at a specific time and place, and concludes with suggestions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
43. Modelling the Blind Principal Bid Basket Trading Cost.
- Author
-
Giannikos, Christos, Guirguis, Hany, and Suen, Tin Shan
- Subjects
BID price ,FINANCIAL markets ,STOCK prices ,ASSET management ,LIQUIDITY (Economics) ,STOCKBROKERS ,BENCHMARKING (Management) - Abstract
A blind principal bid (BPB) is one of the mechanisms for simultaneously trading a basket of stocks at a pre-determined execution price. In a BPB, asset managers auction a basket of stocks directly to liquidity providers who do not know the identities of the individual stocks in the basket. Unlike other methods of trading, the cost and composition of the BPB basket are not reported in a standard and timely manner. Complete basket data are available only to the asset manager and the broker who won the auction. The current literature contains very little information on the BPB phenomenon, largely due to a lack of public data for research. This paper analyses a unique dataset of 140 executed baskets, building on the seminal papers of Kavajecz and Keim (2005) and Stoll (1978a, b) to develop empirical and structural models of BPB trading costs. Our research provides novel insights into the dynamics of pricing BPB trading costs, a topic that has rarely been examined in the literature. The research reported here also has significant practical applications. Asset managers obtain a benchmark for evaluating the lowest bid, and brokers obtain qualitative insights that can aid them in formulating their bids. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
44. EXTRAPOLATION OF PURCHASING POWER PARITIES USING MULTIPLE BENCHMARKS AND AUXILIARY INFORMATION: A NEW APPROACH.
- Author
-
Rao, D. S. Prasada, Rambaldi, Alicia, and Doran, Howard
- Subjects
ECONOMETRIC models ,PURCHASING power parity ,REGRESSION analysis ,EXTRAPOLATION ,BENCHMARKING (Management) ,STATE-space methods ,LOGICAL prediction ,PRICE indexes - Abstract
The paper presents an econometric framework for the construction of a consistent panel of purchasing power parities (PPPs) which makes it possible to combine all the PPP benchmark data from various phases of the International Comparison Program with the data on national price movements in the form of implicit deflators from national accounts. The method improves upon the current practice used in the construction of the Penn World Tables (PWT), and similar tables produced by the World Bank, which tend to be anchored on a selected benchmark. The econometric formulation is based on a regression model for the national price levels where the disturbances are assumed to be heteroskedastic and spatially correlated across countries. The regression model along with data on country-specific price movements are combined using a state-space formulation and optimum predictions of PPPs are obtained. As a property of the method presented in the paper, we show that the resulting PPP predictions are weighted averages of extrapolations of PPPs from different benchmarks; thus the method provides a formal approach which has a simple intuitive interpretation. The smoothed PPP predictions (and standard errors) obtained through the state-space model are produced for both ICP-participating and non-participating countries and non-benchmark years. A complete tableau of PPPs for 141 countries spanning the period 1970 to 2005 is compiled using the method. Results for some selected countries are presented and the new series are compared and contrasted with the currently available PWT series. Extrapolated series for the remaining countries are available from the authors upon request. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
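The state-space idea in the abstract above, combining sparse benchmark observations with national price movements to produce smoothed predictions, can be sketched as a univariate local-level Kalman filter with an RTS smoothing pass. This is a hypothetical one-country illustration with invented noise variances, not the authors' multi-country model.

```python
# Local-level Kalman filter + RTS smoother: log PPP follows a random walk
# driven by the relative price movement (drift); benchmarks are noisy
# observations available only in some years.
import numpy as np

def smooth_ppp(drift, obs, obs_var=0.01, state_var=0.001):
    """drift[t]: log relative price movement; obs[t]: benchmark log PPP or nan."""
    T = len(drift)
    a = np.zeros(T); P = np.zeros(T)              # filtered mean / variance
    a_pred = np.zeros(T); P_pred = np.zeros(T)    # one-step predictions
    mean, var = 0.0, 1e6                          # diffuse prior
    for t in range(T):
        mean, var = mean + drift[t], var + state_var   # predict: walk + drift
        a_pred[t], P_pred[t] = mean, var
        if not np.isnan(obs[t]):                  # update with a benchmark, if any
            k = var / (var + obs_var)
            mean, var = mean + k * (obs[t] - mean), (1 - k) * var
        a[t], P[t] = mean, var
    s = a.copy()                                  # RTS backward smoothing pass
    for t in range(T - 2, -1, -1):
        g = P[t] / P_pred[t + 1]
        s[t] = a[t] + g * (s[t + 1] - a_pred[t + 1])
    return s

drift = np.full(6, 0.02)                          # constant inflation differential
obs = np.array([0.0, np.nan, np.nan, np.nan, np.nan, 0.10])  # two benchmarks
smoothed = smooth_ppp(drift, obs)
```

Between benchmarks the smoothed path interpolates by cumulating the price movements, which is the "weighted average of extrapolations from different benchmarks" property the abstract describes.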
45. Output and Productivity Performance of Hong Kong and Singapore's Transport and Communications Sector, 1990 to 2005.
- Author
-
Lee, Boon L. and Shepherd, William
- Subjects
LABOR productivity ,TRANSPORTATION ,COMMUNICATION ,BENCHMARKING (Management) - Abstract
This paper uses the industry of origin approach to analyze value added and labor productivity outcomes arising from progressive liberalization of government and from statutory board control of transport and communications in Singapore. The paper compares these outcomes with those from the market-oriented, more privatized transport and communications sector in Hong Kong, for the benchmark year 2004 and a review period from 1990 to 2005. The study is among the first to carefully compare labor productivity in specific sectors between the two countries. Although Singapore generally recorded higher levels of labor productivity, there was some catch-up by Hong Kong in the later part of the review period. There was also substantial variation in labor productivity performance within sectoral branches in the two sectors. The study suggests there is some evidence that the different political-economic structures and policy approaches to deregulation and liberalization played a role in determining productivity performance in the transport and communications sectors in Singapore and Hong Kong. The analysis suggests an increasing focus on privatization as the driving force for further liberalization of the transport and communications sector in Singapore. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
46. References.
- Subjects
EDUCATIONAL accreditation ,BENCHMARKING (Management) ,HIGHER education ,EDUCATIONAL quality ,QUALITY assurance - Abstract
Discusses published materials about educational accreditation in the United States. Benchmarking in higher education; Survey of accreditation issues; Quality assurance in education.
- Published
- 2004
47. The Attitudes of British National Health Service Managers and Clinicians Towards the Introduction of Benchmarking.
- Author
-
Jones, C. S.
- Subjects
BENCHMARKING (Management) ,MEDICAL care - Abstract
This paper describes an empirical study, conducted in three acute hospitals, of the attitudes of central managers, medical managers and clinicians towards the adoption of benchmarking. Benchmarking was portrayed in The New NHS White Paper (1997) as an important means of improving efficiency over the next decade. The present paper examines the context of change and the nature of benchmarking. Findings are presented in seven sections, including: the understanding which respondents had of benchmarking; their willingness to be involved in benchmarking; the existence of strategies and policies for implementing benchmarking; the relevance of existing costing information; and the role of networks in facilitating benchmarking. The study concludes that the process of change adopted contradicted most of the factors associated with creating receptivity to change, and that the publication of the National Reference Costs seemed to have more relevance to resource planning at central National Health Service Management Executive level than to effecting improvements at operational level in acute hospitals. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
48. THE EMERGENCE OF MULTI-INSPECTORATE INSPECTIONS: 'GOING IT ALONE IS NOT AN OPTION'.
- Author
-
Mordaunt, Enid
- Subjects
BENCHMARKING (Management) ,WORK ethic - Abstract
Drawing on data from HM Inspectorate of Prisons, HM Inspectorate of Probation, the Office for Standards in Education and the Social Services Inspectorate, this paper develops a typology of inspection, classified according to the focus of inspection. Five basic inspection types emerge, namely single institutional, multi-service, thematic, survey and monitoring review. The typology is further categorized by a range of characteristics, resulting in a series of variants. The paper then focuses on the particular characteristic of the multi-inspectorate approach to inspection, because this is seen to offer a significant development in inspection practice that is set to expand and develop in the future. By examining operational examples of this approach it becomes clear that inspectorates are affecting the working practices of one another as they use the multi-inspectorate approach as an exercise in benchmarking. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
49. Noisy Time-Series Prediction using Pattern Recognition Techniques.
- Author
-
Singh, Sameer
- Subjects
BENCHMARKING (Management) ,STOCK price indexes ,TIME series analysis - Abstract
Time-series prediction is important in physical and financial domains. Pattern recognition techniques for time-series prediction are based on structural matching of the current state of the time-series with previously occurring states in historical data for making predictions. This paper describes a Pattern Modelling and Recognition System (PMRS) which is used for forecasting benchmark series and the US S&P financial index. The main aim of this paper is to evaluate the performance of such a system on noise-free time-series and on time-series injected with additive Gaussian noise. The results show that the addition of Gaussian noise leads to better forecasts, and that the standard deviation of the Gaussian noise has an important effect on PMRS performance. PMRS results are compared with the popular Exponential Smoothing method. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
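The Exponential Smoothing baseline that the abstract above compares PMRS against is simple to state in code: the forecast is a level updated as a weighted average of the latest observation and the previous level. A minimal sketch; the smoothing constant and the example series are illustrative.

```python
# Simple exponential smoothing: one-step-ahead forecast equals the final
# smoothed level, with level_t = alpha * y_t + (1 - alpha) * level_{t-1}.
def ses_forecast(series, alpha=0.3):
    """Return the one-step-ahead forecast for the given series."""
    level = series[0]                 # initialize the level at the first value
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
    return level

print(ses_forecast([10.0, 12.0, 11.0, 13.0]))
```

Larger `alpha` weights recent observations more heavily; on a constant series the forecast equals that constant regardless of `alpha`.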
50. Incentives, Discretion, and Asset Valuation in Closed-End Mutual Funds.
- Author
-
Chandar, N. and Bricker, R.
- Subjects
MUTUAL funds ,LABOR incentives ,SECURITIES trading ,BENCHMARKING (Management) ,ASSETS (Accounting) ,FINANCIAL markets - Abstract
This paper studies earnings management using 363 closed-end mutual fund firm-years of data. Closed-end fund assets consist of unrestricted and restricted securities, and realized and unrealized income. While unrestricted securities are not subject to earnings management, restricted security values are largely discretionary. Managerial valuation of restricted securities is modeled as contingent on unrestricted returns relative to a performance benchmark. Four unrestricted performance regions are identified. Known multi-period compensation incentives become the basis for hypothesizing earnings management behaviors in the regions in the form of restricted security valuation. Across several benchmarks, the results are consistent with multi-period maximization rather than simpler single-period compensation maximization or income smoothing. Funds with extreme unrestricted performance show relatively larger income-decreasing earnings management, and funds with returns slightly below the benchmark show relatively larger income-increasing earnings management than those slightly above. These results clarify the relationship between complex earnings management behavior and managerial incentives. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF