15,854 results for "METRICS"
Search Results
2. A Novel Risk-Adjusted Metric to Compare Hospitals on Their Antibiotic Prescribing at Hospital Discharge
- Author
Livorsi, Daniel J, Merchant, James A, Cho, Hyunkeun, Goetz, Matthew Bidwell, Alexander, Bruce, Beck, Brice, and Goto, Michihiko
- Subjects
Biomedical and Clinical Sciences, Clinical Sciences, 8.1 Organisation and delivery of services, Good Health and Well Being, antibiotic stewardship, hospital discharge, metrics, risk-adjustment, pneumonia, community-acquired, Biological Sciences, Medical and Health Sciences, Microbiology, Clinical sciences
- Abstract
Background: Antibiotic overuse at hospital discharge is common, but there is no metric to evaluate hospital performance at this transition of care. We built a risk-adjusted metric for comparing hospitals on their overall post-discharge antibiotic use. Methods: This was a retrospective study across all acute-care admissions within the Veterans Health Administration during 2018-2021. For patients discharged to home, we collected data on antibiotics and relevant covariates. We built a zero-inflated negative binomial mixed model with two random intercepts for each hospital to predict post-discharge antibiotic exposure and length of therapy (LOT). Data were split into training and testing sets to evaluate model performance using absolute error. Hospital performance was determined by the predicted random intercepts. Results: 1,804,300 patient-admissions across 129 hospitals were included. Antibiotics were prescribed to 41.5% of patients while hospitalized and to 19.5% at discharge. Median LOT among those prescribed post-discharge antibiotics was 7 days (IQR 4-10). The predictive model detected post-discharge antibiotic use with fidelity, including accurate identification of any exposure (area under the precision-recall curve = 0.97) and reliable prediction of post-discharge LOT (mean absolute error = 1.48). Based on this model, 39 (30.2%) hospitals prescribed antibiotics less often than expected at discharge and used shorter LOT than expected. Twenty-eight (21.7%) hospitals prescribed antibiotics more often than expected at discharge and used longer LOT. Conclusion: A model using electronically available data was able to predict antibiotic use at hospital discharge and showed that some hospitals were more successful in reducing antibiotic overuse at this transition of care. This metric may help hospitals identify opportunities for improved antibiotic stewardship at discharge.
- Published
- 2024
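A minimal sketch of how the two performance figures reported in item 2 can be computed on a held-out test split, assuming hypothetical prediction arrays (the paper's model is a zero-inflated negative binomial mixed model; sklearn's average precision is used here as the usual estimator of the area under the precision-recall curve):

```python
# Sketch: the two model-performance measures reported above -- AUPRC for
# any post-discharge exposure, and MAE for predicted length of therapy.
# The arrays are hypothetical stand-ins for a held-out test split.
import numpy as np
from sklearn.metrics import average_precision_score, mean_absolute_error

y_any_exposure = np.array([1, 0, 0, 1, 1, 0])               # observed: any post-discharge antibiotic
p_any_exposure = np.array([0.9, 0.2, 0.1, 0.8, 0.7, 0.3])   # model-predicted probability

y_lot = np.array([7, 0, 0, 10, 4, 0])                       # observed length of therapy (days)
yhat_lot = np.array([6.2, 0.5, 0.1, 11.0, 5.3, 0.4])        # model-predicted LOT

print("AUPRC:", average_precision_score(y_any_exposure, p_any_exposure))
print("MAE:  ", mean_absolute_error(y_lot, yhat_lot))
```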
3. Using Vehicle Miles Traveled Instead of Level of Service as a Metric of Environmental Impact for Land Development Projects: Progress in California
- Author
Volker, Jamey, Hosseinzade, Rey, and Handy, Susan
- Subjects
implementation, land use, level of service, metrics, urban development, vehicle miles of travel
- Abstract
Senate Bill (SB) 743 (2013) and its related regulations eliminated automobile level of service (LOS) and replaced it with vehicle miles traveled (VMT) as the primary transportation impact metric for land development projects under the California Environmental Quality Act. Actual implementation of the LOS-to-VMT shift was left up to lead agencies, primarily local governments. The LOS-to-VMT shift was expected to create many challenges, given the often-limited resources of local governments, the entrenched use of LOS, and the perceived lack of established practice regarding VMT estimation, mitigation, and monitoring. With those concerns in mind, researchers at the University of California, Davis investigated how local governments have been implementing the LOS-to-VMT shift for land development projects. This policy brief summarizes the findings from that investigation.
- Published
- 2024
4. On Computational Indistinguishability and Logical Relations
- Author
Lago, Ugo Dal, Galal, Zeinab, Giusti, Giulia, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Kiselyov, Oleg, editor
- Published
- 2025
- Full Text
- View/download PDF
5. Beyond Heatmaps: A Comparative Analysis of Metrics for Anomaly Localization in Medical Images
- Author
Zimmerer, David, Maier-Hein, Klaus, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Sudre, Carole H., editor, Mehta, Raghav, editor, Ouyang, Cheng, editor, Qin, Chen, editor, Rakic, Marianne, editor, and Wells, William M., editor
- Published
- 2025
- Full Text
- View/download PDF
6. Including local actors' perspective in neighborhood sustainability assessment: evidence from Dubai's sustainable city
- Author
Dessouky, Nermin, Wheeler, Stephen, and Salama, Ashraf M.
- Published
- 2024
- Full Text
- View/download PDF
7. Budgets and biologicals: The bio-economization of HIV governance.
- Author
Tseng, Po-Chia
- Abstract
Changing global responses to HIV/AIDS entail shifting biological loci of surveillance which are believed to constitute HIV risk. Meanwhile, various local institutions and organizations are mobilized to play a key role in HIV service delivery and surveillance. Drawing on a socio-material approach to the body and the economy, this study theorizes the emergence of three 'HIV service bio-economies' devised to provide HIV services while controlling HIV transmission in Taiwan. Instead of presuming a divide between the social and the biological, it analyzes how different bodies are produced through differing modes of HIV surveillance and economization, buttressed by global health sciences, state budgets and quantitative metrics. The analysis of multiple ontologies of HIV underscores the political nature of risk-framing in a transnational context, but also how certain bodies incapable of being enrolled in these economies could be further marginalized – a process which might be understood as an ontological politics of HIV. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Consequences of arbitrary binning the midpoint category in survey data: an illustration with student satisfaction in the National Student Survey.
- Author
Pollet, Thomas V., Bilalić, Merim, and Shepherd, Lee
- Subjects
ARBITRARY constants, STUDENTS, METHODOLOGY, PHENOMENOLOGY, EVALUATION
- Abstract
Arbitrarily placing cut-offs in data, i.e. binning, is recognised as poor statistical practice. We explore the consequences of using arbitrary cut-offs in two large datasets, the National Student Survey (2019 and 2022). These are nationwide surveys aimed at capturing student satisfaction amongst UK undergraduates. For these survey data, it is common to group the responses to the question on student satisfaction on a five-point Likert scale into '% satisfied' based on two categories. These % satisfied figures are then used in further metrics. We examine the consequences of using three rather than two categories for the rankings of courses and institutions, as well as the consequences of excluding the midpoint from the calculations. Across all courses, grouping the midpoint with satisfied leads to a median shift of 8.40% and 11.41% in satisfaction for 2019 and 2022, respectively. Excluding the midpoint from the calculations leads to a median shift of 4.20% and 5.70% in satisfaction for 2019 and 2022, respectively. While the overall stability of the rankings is largely preserved, individual courses or institutions exhibit sizeable shifts. Depending on the analysis, the most extreme shifts in rankings are between 13 and 79 ranks for courses and between 24 and 416 ranks for institutions. Our analysis thus illustrates the potentially profound consequences of arbitrarily grouping categories for individual institutions and courses. We offer some recommendations on how this issue can be addressed, but primarily we caution against reliance on arbitrary grouping of response categories in survey data such as the NSS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
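The binning schemes compared in item 8 reduce to simple arithmetic over response counts. A sketch with hypothetical counts for one 5-point Likert item (the numbers are illustrative, not NSS data):

```python
# Sketch: how '% satisfied' shifts under different treatments of the
# midpoint of a 5-point Likert item (hypothetical response counts).
import numpy as np

counts = np.array([40, 60, 150, 400, 350])  # responses 1..5 (1 = very dissatisfied)

def pct_satisfied(counts, scheme):
    total = counts.sum()
    if scheme == "top2":                # common binning: 4-5 = satisfied
        return 100 * counts[3:].sum() / total
    if scheme == "midpoint_satisfied":  # midpoint grouped with satisfied: 3-5
        return 100 * counts[2:].sum() / total
    if scheme == "exclude_midpoint":    # midpoint dropped from the denominator
        return 100 * counts[3:].sum() / (total - counts[2])
    raise ValueError(scheme)

for scheme in ("top2", "midpoint_satisfied", "exclude_midpoint"):
    print(scheme, round(pct_satisfied(counts, scheme), 2))
```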
9. Reinforcement learning based task offloading of IoT applications in fog computing: algorithms and optimization techniques.
- Author
Allaoui, Takwa, Gasmi, Kaouther, and Ezzedine, Tahar
- Subjects
DEEP reinforcement learning, REINFORCEMENT learning, PROCESS capability, MATHEMATICAL optimization, INTERNET of things
- Abstract
In recent years, fog computing has become a promising technology that supports computationally intensive and time-sensitive applications, especially when dealing with Internet of Things (IoT) devices with limited processing capability. In this context, offloading can push resource-intensive tasks closer to the end devices at the network edge. This allows user equipment to profit from the fog computing environment by offloading their tasks to fog resources. Thus, computation offloading mechanisms can overcome the resource constraints of devices and enhance the system's performance by minimizing delay and extending the battery lifetime of devices. In this regard, designing an algorithm to decide which tasks to offload and where to execute them is crucial. Recently, there has been a growing interest in utilizing Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) to address computation offloading mechanisms in the context of fog computing. This paper reviews the research conducted on RL- and DRL-based computation offloading mechanisms for IoT applications in the fog environment. We provide a comprehensive and detailed survey, analyzing and classifying the research papers in terms of RL techniques, objectives, architecture, and use cases. In particular, we identify the advantages and weaknesses of each paper. We then systematically elaborate on open issues and future research directions that are crucial for the next decade. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Automated Visualization of Program Architecture Components for the Swift Language.
- Author
ФРАНКІВ, О. О. and ГЛИБОВЕЦЬ, М. М.
- Subjects
BUILDING design & construction, SOFTWARE measurement, COMPUTER software quality control, STRUCTURAL components, COMPUTER software
- Published
- 2024
11. 'Mattering' the circular economy: tackling the Achilles' heel of sustainable places via adopting a critical-relational perspective.
- Author
Ersoy, Aksel and Lagendijk, Arnoud
- Subjects
CIRCULAR economy, GOVERNMENT policy, JARGON (Terminology), SUSTAINABILITY, TOPOLOGY
- Abstract
The transition towards a circular economy (CE) is seen as vital for developing sustainable places. CE is used as a new buzzword, as well as an inducement to innovate and change socio-economic practices, by a diverse set of actors to meet sustainability and other goals. Genuine transformation, we argue here, requires those practices to seriously alter discourses and metrics. We adopt a material critical-relational perspective, drawing on the assemblage notion of (counter)actualisation. Our contribution is both conceptual and empirical. Conceptually, we develop an assemblage-based framework featuring practices, discourses and metrics. Empirically, we apply this to national CE policy and local initiatives in the Netherlands. Our results point out both passions and challenges to come to a genuinely transformative discourse and use of metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Catch, Engage, Retain: Audience-Oriented Journalistic Role Performance in Canada.
- Author
Blanchett, Nicole, Brin, Colette, and Duncan, Stuart
- Subjects
EVIDENCE gaps, DATA analytics, SOCIAL media, SOCIAL interaction, SOCIAL services
- Abstract
To understand audience-oriented journalistic role performance, one must understand how journalists conceptualize and cater to their audience. Giving the audience what it wants is a complex endeavor, with varying goals and hybridized end results, in newsrooms with fewer resources serving increasingly polarized audiences. Through a triangulation of data—content analysis at the subdimension level to examine the range and hybridity of audience-oriented journalistic product presenting the civic, service and infotainment roles; a survey to identify journalists' attitudes toward the use of audience data and social media in their work; and interviews with journalists that revealed how their journalistic practice and audience perceptions were impacted by quantitative (metrics and analytics) and qualitative data (comments/social media interactions)—this research fills a gap in understanding about the connection between journalists, their audiences, and audience data when it comes to journalistic role performance. Findings show that in Canada the infotainment role is a significant part of reporting, but entertaining often comes with a goal of educating, as does service journalism. There are no "bad" journalistic roles, but there are a lot of journalists trying to figure out which ones might best catch, engage, and retain an ever-shrinking news audience. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Review of ecological valuation and equivalency analysis methods for assessing temperate nearshore submerged aquatic vegetation.
- Author
Pausch, Rachel E., Hale, Jessica R., Kiffney, Peter, Sanderson, Beth, Azat, Sara, Barnas, Katie, Chesney, W. Bryant, Cosentino‐Manning, Natalie, Ehinger, Stephanie, Lowry, Dayv, and Marx, Steve
- Subjects
HABITAT suitability index models, WATER management, ANTHROPOGENIC effects on nature, SPECIES diversity, KELPS, MACROCYSTIS
- Abstract
Nearshore seagrass, kelp, and other macroalgae beds (submerged aquatic vegetation [SAV]) are productive and important ecosystems. Mitigating anthropogenic impacts on these habitats requires tools to quantify their ecological value and the debits and credits of impact and mitigation. To summarize and clarify the state of SAV habitat quantification and available tools, we searched peer‐reviewed literature and other agency documents for methods that either assigned ecological value to or calculated equivalencies between impact and mitigation in SAV. Out of 47 tools, there were 11 equivalency methods, 7 of which included a valuation component. The remaining valuation methods were most commonly designed for seagrasses and rocky intertidal macroalgae rather than canopy‐forming kelps. Tools were often designed to address specific resource policies and associated habitat evaluation. Frequent categories of tools and methods included those associated with habitat equivalency analyses and those that scored habitats relative to reference or ideal conditions, including models designed for habitat suitability indices and the European Union's Water and Marine Framework Directives. Over 29 tool input metrics spanned 3 spatial scales of SAV: individual shoots or stipes, bed or site, and landscape or region. The most common metric used for both seagrasses and macroalgae was cover. Seagrass tools also often employed density measures, and some categories used measures of tissue content (e.g., carbon, nitrogen). Macroalgal tools for rocky intertidal habitats frequently included species richness or incorporated indicator species to assess habitat. We provide a flowchart for decision‐makers to identify representative tools that may apply to their specific management needs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. A Systematic Review on Sequential Pattern Mining-Types, Algorithms and Applications.
- Author
Jamshed, Aatif, Mallick, Bhawna, and Bharti, Rajendra Kumar
- Subjects
SEQUENTIAL pattern mining, DATA mining, RESEARCH personnel, CLASSIFICATION, ALGORITHMS
- Abstract
Sequential Pattern Mining (SPM) is a vital area of data mining focused on uncovering meaningful patterns and subsequences within sequential data, such as time-series and transactional datasets. Despite its significance, existing reviews often overlook the rapid advancements and diverse applications of SPM techniques, leading to gaps in understanding their effectiveness and scalability. This paper provides a systematic review of SPM techniques and their applications from 2018 to 2024. A total of 1,440 articles were identified, with 31 selected based on rigorous screening criteria. The review categorizes these articles based on method type, dataset, metrics, and application domains, offering a structured analysis that emphasizes practical outcomes. Key findings reveal that while several SPM techniques have demonstrated significant improvements in accuracy and efficiency, challenges remain in scaling these methods to handle large datasets and complex patterns. The review highlights practical applications in fields such as special children assessment analysis, vehicle trajectory prediction in vehicular ad hoc networks (VANET), sitemap generation, and SPM based on uncertain databases. These insights underscore the need for robust algorithms tailored to address specific challenges within these domains. This work contributes a comprehensive overview of SPM, methodical classification of the reviewed literature, and identification of future research directions. By synthesizing current trends and practical implications, this study serves as a valuable resource for researchers and practitioners, guiding them in selecting appropriate methods for their specific needs and fostering further innovations in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Reshaping school discipline with metrics: an examination of teachers' disciplinary practices with ClassDojo.
- Author
Manolev, Jamie, Sullivan, Anna, and Tippett, Neil
- Subjects
SCHOOL discipline, TEACHERS, NEOLIBERALISM, DIGITAL technology
- Abstract
Education is increasingly infiltrated by technology and datafication. This techno-data amplification is entangled with neoliberalism and the emphasis on calculation and measurement it brings, often through metrics. This article critically examines how metrics are shaping discipline practices in schools through ClassDojo, a popular platform for managing student behaviour. Little is known about how ClassDojo is implemented in schools, and how its dependence on metrics is impacting school discipline practices. Through a critical qualitative inquiry, we examined teachers' practices with ClassDojo, and found they operate via techniques of control, and that metrics are central to these techniques. We draw on the concept of 'metric power' to understand how these school discipline practices manifest as forms of power. We argue ClassDojo's metrics operate as powerful narrowing pedagogical devices that fixate on measurement and lead to practices which operate via neoliberal governing rationalities that reshape school discipline. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Global Metrics for Terrestrial Biodiversity.
- Author
Burgess, Neil D., Ali, Natasha, Bedford, Jacob, Bhola, Nina, Brooks, Sharon, Cierna, Alena, Correa, Roberto, Harris, Matthew, Hargey, Ayesha, Hughes, Jonathan, McDermott-Long, Osgur, Miles, Lera, Ravilious, Corinna, Rodrigues, Ana Ramos, van Soesbergen, Arnout, Sihvonen, Heli, Seager, Aimee, Swindell, Luke, Vukelic, Matea, and Durán, América Paz
- Subjects
GENETIC variation, DATABASES, CIVIL society, ECOSYSTEMS, DECISION making
- Abstract
Biodiversity metrics are increasingly in demand for informing government, business, and civil society decisions. However, it is not always clear to end users how these metrics differ or for what purpose they are best suited. We seek to answer these questions using a database of 573 biodiversity-related metrics, indicators, indices, and layers, which address aspects of genetic diversity, species, and ecosystems. We provide examples of indicators and their uses within the state–pressure–response–benefits framework that is widely used in conservation science. Considering complementarity across this framework, we recommend a small number of metrics considered most pertinent for use in decision-making by governments and businesses. We conclude by highlighting five future directions: increasing the importance of national metrics, ensuring wider uptake of business metrics, agreeing on a minimum set of metrics for government and business use, automating metric calculation through use of technology, and generating sustainable funding for metric production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Network analysis and teaching excellence as a concept of relations.
- Author
Hayes, Aneta and Garnett, Nick
- Subjects
HIGHER education, DECISION trees, THEORY of knowledge, SOCIAL context, METHODOLOGY
- Abstract
The aim of this paper is to foreground network analysis as a statistical lens through which higher education institutions can articulate their own process of striving for teaching excellence, and how it is constituted in their own contexts. The paper offers an approach to analysis that extends the frontiers of methodologies in 'measurement' of teaching excellence; one that responds to the shortcomings of the current methodologies, critiqued for being reductive, performative, alienating, and promoting closure and convergence in how they assess teaching excellence. We review epistemological and methodological shifts in conceptualising teaching excellence and measurement that are required to work with our methodology, as well as provide statistical details, for anyone who wishes to reproduce our profiled examples. We thus build in the paper a link between the theory of (teaching) excellence and practice (of measurement) and champion a theory-based approach to the methodology of educational metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Longitudinal bibliometric investigation of published orthodontic research 2013–23: are we investigating patient-reported outcomes?
- Author
Liu, Catherine, Seehra, Jadbinder, and Cobourne, Martyn T
- Subjects
CORRECTIVE orthodontics, DISTRIBUTION (Probability theory), ORTHODONTICS, DATABASES, SCIENTIFIC observation
- Abstract
Background: The published literature represents the fundamental basis of any academic specialty, including orthodontics. Orthodontic research outputs provide useful insight into clinical and research priorities, which can help inform future research efforts and resource outputs. In recent years, the need for more patient-reported outcomes in orthodontic research has been highlighted. Objectives: (1) To identify the most commonly reported research subjects in orthodontics between 2013 and 2023; (2) to identify the main outcomes and types of study design associated with this research, including study design related to patient-reported outcomes; and (3) to identify trends in this research activity based upon these findings. Materials and methods: A literature search was performed in a single electronic database (Scopus) to return all indexed publications with relevance to orthodontics published from 2013 to 2023. The 50 most-cited publications per year were then identified. Publication characteristics were extracted using a data collection sheet. Descriptive statistics including frequency distributions were calculated. Results: A total of 14,397 publications were identified. Publications on orthodontic bonding made up 7.02% of all output, followed by materials (5.88%) and tooth movement (5.42%). Subsequent analysis of the most-cited publications per year revealed that the most frequently published subjects were aligners (12.5%), orthodontic tooth movement (9.45%), and digital workflow (9.09%), and the most common study designs were in vitro (19.09%) and retrospective observational studies (15.45%). The most common outcome type was morphological features of malocclusion (26.9%). Conversely, patient-focused measures were reported in only 12.7% of studies. Conclusions: Orthodontic research outputs are dynamic but do show consistent research interest in certain subjects. There is a predilection for the reporting of clinician-focused outcomes; whilst these have some value, more effort should be focused on conducting rigorous and robust studies that include patient-reported outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Measuring Abstraction Levels of Sculptural Objects.
- Author
Lyon, Gordon E. and Lyon, Merritt R.
- Subjects
SCATTER diagrams, SCULPTURE, ARITHMETIC, ENTROPY, INTERNET
- Abstract
Freestanding sculpture can be scored for abstraction using order scales such as the three-rank 'realistic,' 'mixed,' and 'abstract.' Subjective but invaluable, the ranks support ordering but not arithmetic. Numeric limitations can be addressed via information-theoretic metrics that measure collective uncertainty about a sculpture; to this end 60 participants view 25 images of sculpture via an internet questionnaire. They (i) score each depicted object and (ii) type a caption stating what impression the object evokes. Captions for the image are simplified to their English simple subject, sorted into classification categories and counted. A metric converts an image's categories and counts into viewers' uncertainty. The investigation examines four-rank and five-rank scales plus metrics CC (category count), H (entropy), and EC ≡ 2^H (equalized category count). Medians of scores or uncertainties reduce variability as appropriate. Comparisons of scores vs uncertainties show the two generally correlate well. Results also highlight inherent design tradeoffs. Scale/metric pairings affect both ranks (which should be statistically distinct) and rank/uncertainty correlations. Two good combinations are [four-rank; median CC] and [five-rank; median EC]. Metric CC is easy to compute and correlates slightly better than EC, although the latter supports five evenly separated ranks. Shifting focus to image scatterplots, the pair [four-rank median score; CC] yields a very strong correlation. Once uncertainties are linked to ranks to 'calibrate' them, one gets a quantitative sense of the rank order scale. The augmented framework offers the speed and ease of scoring along with valid numeric estimates of uncertainty unavailable from ranks alone. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
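The three metrics named in item 19 are straightforward to compute from caption category counts; a sketch with hypothetical captions (H is Shannon entropy in bits, so EC = 2^H):

```python
# Sketch: the three uncertainty metrics named above, computed from the
# category counts of viewers' captions for one image (captions hypothetical).
import math
from collections import Counter

captions = ["bird", "bird", "dancer", "wave", "bird", "abstract form", "wave"]
counts = Counter(captions)

CC = len(counts)                                   # category count
total = sum(counts.values())
H = -sum((n / total) * math.log2(n / total) for n in counts.values())  # entropy (bits)
EC = 2 ** H                                        # equalized category count, EC = 2^H

print(f"CC = {CC}, H = {H:.3f} bits, EC = {EC:.3f}")
```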
20. Critical care clinical pharmacy value-based metrics: Time to terminate widget counting.
- Author
Buckley, Mitchell S and Roberts, Russel J
- Subjects
LABOR productivity, OCCUPATIONAL roles, BENCHMARKING (Management), RESPONSIBILITY, PATIENT care, INTENSIVE care units, CRITICAL care medicine, EMPLOYEES' workload
- Abstract
The article highlights the value-based metrics that need to be considered in critical care clinical pharmacy. Topics mentioned include the pitfalls of clinical pharmacy productivity metrics, the main paradigm shift for the pharmacy profession, a description of key performance indicators, the importance of providing direct patient care, and the use of the medication regimen complexity-intensive care unit (MRC-ICU) score.
- Published
- 2024
- Full Text
- View/download PDF
21. What Environmental Metrics Are Used in Scientific Research to Estimate the Impact of Human Diets?
- Author
Aceves-Martins, Magaly, Lofstedt, Anneli, Godina Flores, Naara Libertad, Ortiz Hernández, Danielle Michelle, and de Roos, Baukje
- Abstract
Background/Objectives: Metrics drive diagnosis, and metrics will also drive our response to the challenge of climate change. Recognising how current scientific research defines and uses metrics of the environmental impact of human diets is essential to understand which foods, food groups, or dietary patterns are associated with a higher environmental impact. Methods: This research, aided by artificial intelligence (AI), aimed to search, map, and synthesise current evidence on the commonly used definitions and metrics of the environmental impacts of human diets. Results: We identified 466 studies measuring the environmental impact of diets. Most studies were from North American or European countries (67%), with data mainly from high-income countries (81%). Most studies did not include methods to recall the provenance of the foods consumed. Most (53%) of the studies used only one metric to estimate the environmental impact of human diets, with 82% of the studies using greenhouse gas emissions (GHGE). Conclusions: Agreement on how the environmental impact of diets is measured, and more comprehensive and accurate data on the environmental impact of single foods, is essential to better understand what changes in food systems are needed, at a consumer and policy level, to make a well-meaning change towards a more sustainable diet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Can Robotic Therapy Improve Performance in Activities of Daily Living? A Randomized Controlled Trial in Sub-Acute Spinal Cord Injured Patients.
- Author
Lozano-Berrio, Vicente, Alcobendas-Maestro, Mónica, Perales-Gómez, Raquel, Pérez-Borrego, Yolanda, Gil-Agudo, Angel, Polonio-López, Begoña, Cortés, Camilo, and de los Reyes-Guzmán, Ana
- Subjects
CERVICAL cord, SPINAL cord injuries, SPINAL cord, ACTIVITIES of daily living, ASSISTIVE technology
- Abstract
(1) Background: The influence of robotic therapy on the activities of daily living (ADL) performance of patients with sub-acute cervical spinal cord injury (SCI) is unclear. (2) Methods: 31 subjects with cervical SCI, randomly assigned to an intervention or a control group, completed the training over 40 sessions. In each session, all subjects received 30 min of upper-extremity conventional therapy. In addition, subjects in the control group received another 30 min of conventional therapy, whereas subjects in the intervention group received 30 min of robotic therapy with Armeo Spring (Hocoma AG, Volketswil, Switzerland), in which the ADL of drinking was trained using the exoskeleton. Feasibility and efficacy measures, such as clinical scales and kinematic indices, together with usability questionnaires, were used for assessment at baseline and at the end of the study (week 10). (3) Results: The intervention group significantly improved on the feeding and grooming items of the Spinal Cord Independence Measure scale. The improvement in movement smoothness during the drinking activity was greater in the intervention group than in the control group (p = 0.034). (4) Conclusions: The findings of this study reveal that patients with cervical SCI improve their ADL performance with robotic therapy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Image de-photobombing benchmark.
- Author
Patel, Vatsa S., Agrawal, Kunal, Baraheem, Samah S., Yousif, Amira, and Nguyen, Tam V.
- Subjects
DEEP learning, SIGNAL-to-noise ratio, INPAINTING, RESEARCH personnel, BENCHMARKING (Management)
- Abstract
Removing photobombing elements from images is a challenging task that requires sophisticated image inpainting techniques. Despite the availability of various methods, their effectiveness depends on the complexity of the image and the nature of the distracting element. To address this issue, we conducted a benchmark study to evaluate 10 state-of-the-art photobombing removal methods on a dataset of over 300 images. Our study focused on identifying the most effective image inpainting techniques for removing unwanted regions from images. We annotated the photobombed regions that require removal and evaluated the performance of each method using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and Fréchet inception distance (FID). The results show that image inpainting techniques can effectively remove photobombing elements, but more robust and accurate methods are needed to handle various image complexities. Our benchmarking study provides a valuable resource for researchers and practitioners to select the most suitable method for their specific photobombing removal task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
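Two of the benchmark metrics in item 23, PSNR and SSIM, are available in scikit-image; a sketch assuming placeholder file paths and a recent scikit-image version (FID requires a pretrained feature network and is omitted):

```python
# Sketch: scoring an inpainted (de-photobombed) result against a reference
# with PSNR and SSIM via scikit-image. Paths are placeholders.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = io.imread("reference.png")   # ground-truth image
restored = io.imread("inpainted.png")    # method output, same shape

psnr = peak_signal_noise_ratio(reference, restored)
ssim = structural_similarity(reference, restored, channel_axis=-1)  # color images
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```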
24. Precision Metrics: A Narrative Review on Unlocking the Power of KPIs in Radiology for Enhanced Precision Medicine.
- Author
Lastrucci, Andrea, Wandael, Yannick, Barra, Angelo, Miele, Vittorio, Ricci, Renzo, Livi, Lorenzo, Lepri, Graziano, Gulino, Rosario Alfio, Maccioni, Giovanni, and Giansanti, Daniele
- Subjects
KEY performance indicators (Management), TECHNOLOGICAL innovations, RADIATION protection, DIGITAL health, INDIVIDUALIZED medicine
- Abstract
(Background) Over the years, there has been increasing interest in adopting a quality approach in radiology, leading to the strategic pursuit of specific and key performance indicators (KPIs). These indicators in radiology can have significant impacts ranging from radiation protection to integration into digital healthcare. (Purpose) This study aimed to conduct a narrative review on the integration of key performance indicators (KPIs) in radiology with specific key questions. (Methods) This review utilized a standardized checklist for narrative reviews, including the ANDJ Narrative Checklist, to ensure thoroughness and consistency. Searches were performed on PubMed, Scopus, and Google Scholar using a combination of keywords related to radiology and KPIs, with Boolean logic to refine results. From an initial yield of 211 studies, 127 were excluded due to a lack of focus on KPIs. The remaining 84 studies were assessed for clarity, design, and methodology, with 26 ultimately selected for detailed review. The evaluation process involved multiple assessors to minimize bias and ensure a rigorous analysis. (Results and Discussion) This overview highlights the following: KPIs are crucial for advancing radiology by supporting the evolution of imaging technologies (e.g., CT, MRI) and integrating emerging technologies like AI and AR/VR. They ensure high standards in diagnostic accuracy, image quality, and operational efficiency, enhancing diagnostic capabilities and streamlining workflows. KPIs are vital for radiological safety, measuring adherence to protocols that minimize radiation exposure and protect patients. The effective integration of KPIs into healthcare systems requires systematic development, validation, and standardization, supported by national and international initiatives. Addressing challenges like CAD-CAM technology and home-based radiology is essential. Developing specialized KPIs for new technologies will be key to continuous improvement in patient care and radiological practices. (Conclusions) In conclusion, KPIs are essential for advancing radiology, while future research should focus on improving data access and developing specialized KPIs to address emerging challenges. Future research should focus on expanding documentation sources, improving web search methods, and establishing direct connections with scientific associations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. A Predictive Model for Benchmarking the Performance of Algorithms for Fake and Counterfeit News Classification in Global Networks.
- Author
Azeez, Nureni Ayofe, Misra, Sanjay, Ogaraku, Davidson Onyinye, and Abidoye, Ademola Philip
- Subjects
NEWS websites, DATA integrity, DATA analytics, FAKE news, CLASSIFICATION algorithms
- Abstract
The pervasive spread of fake news in online social media has emerged as a critical threat to societal integrity and democratic processes. To address this pressing issue, this research harnesses supervised AI algorithms to classify fake news. Algorithms such as the Passive Aggressive Classifier, perceptron, and decision stump undergo meticulous refinement for text classification tasks, leveraging 29 models trained on diverse social media datasets. Sensors can be utilized for data collection. Data preprocessing involves rigorous cleansing and feature vector generation using TF-IDF and Count Vectorizers. The models' efficacy in classifying genuine news from falsified or exaggerated content is evaluated using metrics like accuracy, precision, recall, and more. To obtain the best-performing algorithm for each of the datasets, a predictive model was developed, through which SG performs best in Dataset 1 (0.681190), BernoulliRBM in Dataset 2 (0.933789), LinearSVC in Dataset 3 (0.689180), and BernoulliRBM in Dataset 4 (0.026346). This research illuminates strategies for classifying fake news, offering potential solutions to ensure information integrity and democratic discourse, thus carrying profound implications for academia and real-world applications. This work also suggests the strength of sensors for data collection in IoT environments, big data analytics for smart cities, and sensor applications, which contribute to maintaining the integrity of information within urban environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
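A minimal sketch of one pipeline described in item 25 (TF-IDF features feeding a Passive Aggressive Classifier) on a toy corpus; the texts and labels are hypothetical stand-ins for the paper's social media datasets:

```python
# Sketch: TF-IDF + Passive Aggressive Classifier for fake-news detection,
# evaluated with accuracy/precision/recall on a hypothetical toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["official report confirms figures", "shocking miracle cure exposed",
         "senate passes budget bill", "celebrity secretly a lizard"] * 25
labels = [0, 1, 0, 1] * 25  # 0 = genuine, 1 = fake

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3, random_state=42)
model = make_pipeline(TfidfVectorizer(), PassiveAggressiveClassifier(max_iter=1000))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "precision:", precision_score(y_te, pred),
      "recall:", recall_score(y_te, pred))
```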
26. DevOps Metrics and KPIs: A Multivocal Literature Review.
- Author
Amaro, Ricardo, Pereira, Rúben, and Mira da Silva, Miguel
- Published
- 2024
- Full Text
- View/download PDF
27. Harmonizing Quality Improvement Metrics Across Global Trial Networks to Advance Paediatric Clinical Trials Delivery.
- Author
Attar, Sabah, Price, Angie, Hovinga, Collin, Stewart, Breanne, Lacaze-Masmonteil, Thierry, Bonifazi, Fedele, Turner, Mark A., and Fernandes, Ricardo M.
- Subjects
CONSENSUS (Social sciences), RESEARCH funding, INTERPROFESSIONAL relations, CLINICAL trials, EXPERIMENTAL design, PEDIATRICS, DRUG approval, WORLD health, THEMATIC analysis, QUALITY assurance
- Abstract
Background: Despite global efforts to improve paediatric clinical trials, significant delays continue in paediatric drug approvals. Collaboration between research networks is needed to address these delays. This paper is a first step to promote interoperability between paediatric networks from different jurisdictions by comparing drivers for, and content of, metrics about clinical trial conduct. Methods: Three paediatric networks (the Institute for Advanced Clinical Trials for Children, the Maternal Infant Child and Youth Research Network, and conect4children) have each developed metrics to address delays and create efficiencies. We identified the methodology by which each network selected its metrics, described the metrics of each network, and mapped consistency to come to a consensus about core metrics that networks could share. Results: Metric selection was driven by site quality improvement in one network (11 metrics), by network performance in one network (13 metrics), and by both in one network (five metrics). The domains of metrics were research capacity/capability, site identification/feasibility, trial start-up, and recruitment/enrolment. The network driven by site quality improvement did not have indicators for capacity/capability or identification/feasibility. Fifteen metrics for trial start-up and conduct were identified. Metrics related to site approvals were found in all three networks. The themes for metrics can inform the development of 'shared' metrics. Conclusion: We found disparity in drivers, methodology, and metrics. Tackling this disparity will result in a unified approach to addressing delays in paediatric drug approvals. Collaborative work to define interoperable metrics globally is outlined. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Resilience Metrics for Socio-Ecological and Socio-Technical Systems: A Scoping Review.
- Author
Steinmann, Patrick, Tobi, Hilde, and van Voorn, George A. K.
- Subjects
SOCIOTECHNICAL systems, ECOLOGICAL resilience, RESEARCH personnel
- Abstract
An increased interest in the resilience of complex socio-ecological and socio-technical systems has led to a variety of metrics being proposed. An overview of these metrics and their underlying concepts would support identifying useful metrics for applications in science and engineering. This study undertakes a scoping review of resilience metrics for systems straddling the societal, ecological, and technical domains to determine how resilience has been measured, the conceptual differences between the proposed approaches, and how they align with the domains of their case studies. We find that a wide variety of resilience metrics have been proposed in the literature. Conceptually, ten different quantification approaches were identified. Four different disturbance types were observed, including sudden, continuous, multiple, and abruptly ending disturbances. Surprisingly, there is no strong pattern regarding socio-ecological systems being studied using the "ecological resilience" concept and socio-technical systems being studied using the "engineering resilience" concept. As a result, we recommend that researchers use multiple resilience metrics in the same study, ideally following different conceptual approaches, and compare the resulting insights. Furthermore, the used metrics should be mathematically defined, the included variables explained and their units provided, and the chosen functional form justified. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
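Item 28 does not spell out its ten quantification approaches, but a common member of the "engineering resilience" family is the integrated-performance ratio over a disturbance window; a sketch with a hypothetical performance trajectory:

```python
# Sketch: one common "engineering resilience" quantification -- the ratio of
# integrated system performance during a disturbance window to the
# pre-disturbance baseline (trajectory hypothetical).
import numpy as np

t = np.linspace(0, 10, 101)                 # time
performance = np.where((t > 2) & (t < 6),   # sudden disturbance at t = 2,
                       0.6 + 0.1 * (t - 2), # linear recovery until t = 6
                       1.0)

baseline = 1.0
resilience = np.trapz(performance, t) / (baseline * (t[-1] - t[0]))
print(f"integrated-performance resilience = {resilience:.3f}")  # 1.0 = no loss
```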
29. SYSTEMATIC REVIEW OF THE CIRCULAR ECONOMY PERFORMANCE ASSESSMENT SYSTEM UNDER INTERNATIONAL MANAGEMENT PARADIGMS.
- Author
de Jesus Lopes, Eliana, da Silva Lima, Leandra Silvestre, Maldonado Bonilla, María Alejandra, and Bouzon, Marina
- Subjects
META-analysis, CIRCULAR economy, NATURAL resources management, PERFORMANCE evaluation, PERFORMANCE management, SECONDARY analysis, SUPPLY chains, NATURAL resources
- Published
- 2024
- Full Text
- View/download PDF
30. Exploring the Effectiveness of Shallow and L2 Learner-Suitable Textual Features for Supervised and Unsupervised Sentence-Based Readability Assessment.
- Author
Kostadimas, Dimitris, Kermanidis, Katia Lida, and Andronikos, Theodore
- Subjects
LANGUAGE ability testing, NATURAL languages, MACHINE learning, CLASSIFICATION, RESEARCH evaluation
- Abstract
Simplicity in information found online is in demand from diverse user groups seeking better text comprehension and easy, timely consumption of information. Readability assessment, particularly at the sentence level, plays a vital role in aiding specific demographics, such as language learners. In this paper, we research model evaluation metrics, strategies for model creation, and the predictive capacity of features and feature sets in assessing readability based on sentence complexity. Our primary objective is to classify sentences as either simple or complex, shifting the focus from entire paragraphs or texts to individual sentences. We approach this challenge as both a classification and a clustering task. Additionally, we emphasize our tests on shallow features that, despite their simplistic nature and ease of use, seem to yield decent results. Leveraging the TextStat Python library and the WEKA toolkit, we employ a wide variety of shallow features and classifiers. By comparing the outcomes across different models, algorithms, and feature sets, we aim to offer valuable insights into optimizing the setup. We draw our data from sentences sourced from Wikipedia's corpus, a widely accessed online encyclopedia catering to a broad audience. We strive to take a deeper look at what leads to more accurate readability classification in datasets that appeal to audiences such as Wikipedia's, assisting in the development of improved models and new features for future applications with low feature-extraction/processing times. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
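A minimal sketch of the shallow-feature approach in item 30, using the textstat library and a simple classifier; the sentences and labels are hypothetical stand-ins for the Wikipedia-derived corpus:

```python
# Sketch: shallow textual features extracted with textstat, fed to a
# simple classifier for sentence-level simple/complex prediction.
import textstat
from sklearn.linear_model import LogisticRegression

def shallow_features(sentence):
    return [
        textstat.syllable_count(sentence),
        textstat.lexicon_count(sentence),       # word count
        textstat.flesch_reading_ease(sentence),
    ]

sentences = ["The cat sat on the mat.",
             "Notwithstanding prior stipulations, the committee deliberated interminably."] * 20
labels = [0, 1] * 20  # 0 = simple, 1 = complex

X = [shallow_features(s) for s in sentences]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([shallow_features("Dogs bark loudly.")]))
```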
31. ECLIM-SEHOP: how to develop a platform to conduct academic trials for childhood cancer.
- Author
Juan-Ribelles, Antonio, Bautista, Francisco, Cañete, Adela, Rubio-San-Simón, Alba, Alonso-Saladrigues, Anna, Hladun, Raquel, Rives, Susana, Dapena, Jose Luís, Fernández, Jose María, Lassaletta, Álvaro, Cruz, Ofelia, Ramírez-Villar, Gemma, Fuster, Jose Luís, de Heredia, Cristina Diaz, García-Ariza, Miguel, Quiroga, Eduardo, del Mar Andrés, María, Verdú-Amorós, Jaime, Molinés, Antonio, and Herrero, Blanca
- Abstract
Introduction: The ECLIM-SEHOP platform was created in 2017. Its main objective is to establish the infrastructure to allow Spanish participation in international academic collaborative clinical trials, observational studies, and registries in pediatric oncology. The aim of this manuscript is to describe the activity conducted by ECLIM-SEHOP since its creation. Methods: The platform's database was queried to provide an overview of the studies integrally and partially supported by the organization. Data on trial recruitment and set-up/conduct metrics from its creation until November 2023 were extracted. Results: ECLIM-SEHOP has supported 47 studies: 29 clinical trials and 18 observational studies/registries that have recruited a total of 5250 patients. Integral support has been given to 25 studies: 16 trials recruiting 584 patients and nine observational studies/registries recruiting 278 patients. The trials include front-line studies for leukemia, lymphoma, brain and solid extracranial tumors, and other key transversal topics such as off-label use of targeted therapies and survivorship. The mean time from regulatory authority submission to first patient recruited was 12.2 months, and from first international site open to first Spanish site open was 31.3 months. Discussion: The ECLIM-SEHOP platform has remarkably improved the availability and accessibility of international academic clinical trials and has facilitated the centralization of resources in childhood cancer treatment. Despite progressive improvement in clinical trial set-up metrics, timings should still be improved. The program has contributed to leveling survival rates in Spain with those of other European countries that presented major differences in the past. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Aligning With Metrics: Differential Impact of IT and Organizational Metrics on Cognitive Coordination in Top Management Teams.
- Author
Siamionava, Katsiaryna, Mitra, Sabyasachi, and Westerman, George
- Subjects
SENIOR leadership teams, CHIEF information officers, CROSS-functional teams, INFORMATION technology management, INFORMATION technology
- Abstract
Achieving cognitive coordination in cross-functional teams is a perennial challenge in organizations. It is especially challenging in the context of information systems, where strategic alignment remains a top concern of leaders despite decades of research on the topic. In this study, we develop and empirically validate a theoretical model that explores the role of communicating about performance metrics in fostering cognitive coordination between the Chief Information Officer (CIO) and top management team (TMT). Building on a theoretical lens of transactive memory systems, we hypothesize how the use of unit-specific metrics of information technology (IT) performance and collective metrics of organizational performance can differentially influence mutual trust and shared understanding in the CIO–TMT relationship. Through a survey of 268 CIOs, an experiment with 106 participants using a novel IT leadership game, and an algorithmic analysis of 3200+ articles in a trade publication, we find that communications using narrowly focused IT unit metrics improve mutual trust between the CIO and the TMT, while communications using broader organizational metrics (along with mutual trust) increase shared understanding of IT's role in improving organizational performance. Our multimethod study adds an important new facet to the rich literature on IT strategic alignment as well as the use of performance metrics in operations management. We discuss its implications for both theory and practice in improving cognitive coordination among the CIOs and TMT. Our model and findings are also relevant to other cross-functional teams where specialized individuals must collaborate to achieve collective goals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Interregional Trade in Russia: Gravity Approach
- Author
Konstantin Nikolaevich Salnikov and Alexander Yurievich Filatov
- Subjects
spatial economics, gravity models of trade, interregional trade, metrics, distance matrix, russia, Economics as a science, HB71-74
- Abstract
The paper analyzes interregional trade in Russia using gravity models. The model estimates the trade elasticity with respect to the size of exporting and importing regions and the distance between them. In addition, the impact on trade of additional factors, such as the common border of trading regions, the presence or absence of railroads, land or sea borders with other countries, is studied. Special attention is given to the issue of measuring distances between regions. The influence of the method of calculating the distance matrix (from the simplest orthodromic to the proposed weighted matrix of the shortest road and rail distances) on the coefficients of the models is studied. The all-Russian estimates of trade elasticities by the size of the exporting and importing region, equal to 1.15 and 1.05, showed high accuracy and robustness to the set of factors included in the model, the observation period, and the distance matrix. Both values were greater than one, which is significantly higher than typical estimates for international trade. This suggests that large and wealthy regions in Russia trade more, further increasing their welfare, while small and depressed regions are unable to escape the poverty trap, further increasing the current high level of regional heterogeneity. Distance is also very important in Russia (the elasticity of trade with respect to distance is –1.15, which is much higher than the world average, but still lower than the previous estimates for Siberia and the Russian Far East). This indicates insufficient transport infrastructure, higher costs of information search, transactions, contract execution, and other difficulties associated with long-distance trade. The absence of railroads in a region reduces its trade by about one-third, while neighboring regions increase the quantity of goods transported between them by about 75%. An external land or sea border facilitates domestic imports, some of which are re-exported abroad and some are consumed with the money earned from exports. At the same time, domestic exports from border regions, which cannot compete with external exports, are reduced. The method of calculating the distance matrix has a significant effect on the elasticity of trade with respect to distance, and to a limited extent on other coefficients of the model. In this case, it is recommended to use the weighted matrix proposed in this paper, which uses road distances for nearby regions and rail distances for distant regions.
- Published
- 2024
- Full Text
- View/download PDF
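The elasticities reported in item 33 are the coefficients of a log-linear gravity regression; a sketch on synthetic data generated with the paper's reported values (1.15, 1.05, -1.15), which OLS should recover (the paper's actual covariates also include borders and railroad access):

```python
# Sketch: log-linear gravity specification -- the trade elasticities with
# respect to exporter size, importer size, and distance are the slopes of
# a regression like this one (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
ln_size_exp = rng.normal(10, 1, n)     # ln(size of exporting region)
ln_size_imp = rng.normal(10, 1, n)     # ln(size of importing region)
ln_dist = rng.normal(7, 0.5, n)        # ln(distance between regions)
ln_trade = (1.15 * ln_size_exp + 1.05 * ln_size_imp
            - 1.15 * ln_dist + rng.normal(0, 0.3, n))

X = sm.add_constant(np.column_stack([ln_size_exp, ln_size_imp, ln_dist]))
print(sm.OLS(ln_trade, X).fit().params)  # recovers ~1.15, ~1.05, ~-1.15
```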
34. Review, framework, and future perspectives of Geographic Knowledge Graph (GeoKG) quality assessment
- Author
Shu Wang, Peiyuan Qiu, Yunqiang Zhu, Jie Yang, Peng Peng, Yan Bai, Gengze Li, Xiaoliang Dai, and Yanmin Qi
- Subjects
Geographic Knowledge Graph (GeoKG), Quality Assessment, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), assessment indicators, metrics, quality evaluation, Mathematical geography. Cartography, GA1-1776, Geodesy, QB275-343
- Abstract
High-quality Geographic Knowledge Graphs (GeoKGs) are highly anticipated for their potential to provide reliable semantic support in geographical knowledge reasoning, training Geographic Large Language Models (Geo-LLMs), enabling geographical recommendation, and facilitating various geospatial knowledge-driven tasks. However, the field of GeoKG research lacks a standardized quality assessment methodology and clearly defined evaluative indicators. This research uses the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to conduct a systematic review of literature and standards in the field of GeoKG in an effort to fill this gap. First, using lifecycle theory as a guide, we outline and propose five groups comprising twenty assessment criteria, with their accompanying calculation techniques, for evaluating GeoKG quality. Then, expanding on this foundation, we present a streamlined evaluation scheme for GeoKGs that relies on just seven key measures, discussing their applicability, utility, and weight scheme in greater detail. Applying the GeoKG quality framework, we identify three key tasks as priorities: the creation of specialized assessment tools, the formation of worldwide standards, and the building of large-scale, high-quality GeoKGs. We believe this thorough and systematic GeoKG quality assessment technique will help construct high-quality GeoKGs and promote GeoKGs as an engine for geo-intelligence applications including Geospatial Artificial Intelligence (GeoAI) systems, Sustainable Development Goals (SDGs) analyzers, and Virtual Geographic Environments (VGEs) models.
- Published
- 2024
- Full Text
- View/download PDF
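Item 34 describes a streamlined scheme of seven key measures with a weight scheme; a sketch of a weighted aggregate score, where the measure names, scores, and equal weights are illustrative assumptions, not the paper's own:

```python
# Sketch: weighted aggregate over a small set of quality measures, in the
# spirit of a streamlined seven-measure scheme. All names, scores, and
# weights below are illustrative assumptions.
measures = {"accuracy": 0.92, "completeness": 0.78, "consistency": 0.85,
            "timeliness": 0.70, "accessibility": 0.95, "relevancy": 0.80,
            "trustworthiness": 0.88}
weights = {k: 1 / len(measures) for k in measures}  # equal weights as a default

quality = sum(weights[k] * measures[k] for k in measures)
print(f"aggregate GeoKG quality score = {quality:.3f}")
```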
35. Evaluating Audience Engagement as a Measure of Digital Diplomacy Effectiveness
- Author
M. M. Bazlutckaia, A. N. Sytnik, and N. A. Tsvetkova
- Subjects
engagement, digital diplomacy, us, russia, social media, telegram, vk, effectiveness, metrics, International relations, JZ2-6530
- Abstract
This article presents relevant tools for analyzing engagement in digital diplomacy, where 'engagement' is defined as the two-way interaction between a digital diplomacy channel on social media and its users. Existing methods for evaluating engagement often fail to fully utilize the available data or lack the flexibility needed for specific diplomatic purposes. Using the case study of US digital diplomacy in Russia, this paper addresses two research questions: (1) To what extent are the automated evaluation metrics developed by data aggregator platforms applicable for analyzing engagement in digital diplomacy? (2) Do these metrics help identify significant patterns in digital diplomacy? The findings of our pilot study indicate that the automated engagement metrics provided by the Popsters and TGStat data aggregator services effectively measure engagement in terms of audience reach and expansion, geographic distribution of traffic, overall engagement, and engagement based on content type (text, photo, video, or link) or text length. The analysis of selected metrics reveals two key trends in US digital diplomacy in Russia: since 2022 there has been a notable shift in focus toward Telegram as the primary platform, and while the content partially reaches the existing Russian-speaking audience, it struggles to expand its overall reach. The authors provide six recommendations for diplomatic agencies and consulting firms to enhance the tools for analyzing digital diplomacy engagement: (1) adopt automated engagement metrics that account for the interface differences among social media platforms; (2) incorporate built-in topic modeling of posts; (3) integrate sentiment analysis of comments; (4) develop mechanisms to detect traffic manipulation; (5) track multi-level reposts, the timing and spread of hashtag diplomacy, and audience growth considering these processes; and (6) evaluate digital diplomacy activity concerning responses to comments. The authors advocate for closer collaboration between the academic community and diplomatic practitioners to improve engagement metrics and thereby enhance the effectiveness of digital diplomacy.
- Published
- 2024
- Full Text
- View/download PDF
36. Using CVSS scores can make more informed and more adapted Intrusion Detection Systems
- Author
-
Robin Duraz, David Espes, Julien Francq, and Sandrine Vaton
- Subjects
Cybersecurity ,Metrics ,Machine Learning ,Intrusion Detection ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Intrusion Detection Systems (IDSs) are essential cybersecurity components. Earlier cyberattack detection methods relied mostly on signatures and rules, but there has been a paradigm shift in the last decade, with Machine Learning (ML) enabling more efficient and flexible statistical methods. However, ML often suffers from the lack, and improper use, of cybersecurity information, whether for proper evaluation or for improving performance. This paper shows that using a de facto standard in cybersecurity, the Common Vulnerability Scoring System (CVSS), can improve IDSs at different levels: from helping to train an IDS, to evaluating its performance more properly, even taking into account systems with different protection requirements. This paper introduces Cyber Informedness, a new metric that incorporates cybersecurity information to give a more informed representation of performance, influenced by the severity of the attacks encountered. Consequently, this metric is also able to differentiate the performance of IDSs when security requirements (Confidentiality, Integrity, and Availability) are defined using CVSS's environmental parameters. Finally, sub-parts of this metric can be integrated into the training-phase loss of Neural Network (NN)-based IDSs to build IDSs that better detect more severe attacks.
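The paper defines Cyber Informedness precisely; as a hedged sketch of the general idea, the snippet below weights the true-positive side of informedness (TPR + TNR - 1) by per-attack CVSS base scores, so that missing a severe attack costs more than missing a minor one. The function names and exact weighting are illustrative assumptions, not the paper's formula:

def severity_weighted_recall(y_true, y_pred, cvss):
    # y_true/y_pred: 1 = attack, 0 = benign; cvss: per-sample base score (0-10).
    tp = sum(c for t, p, c in zip(y_true, y_pred, cvss) if t == 1 and p == 1)
    pos = sum(c for t, c in zip(y_true, cvss) if t == 1)
    return tp / pos if pos else 0.0

def specificity(y_true, y_pred):
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    return tn / neg if neg else 0.0

def weighted_informedness(y_true, y_pred, cvss):
    # Informedness = TPR + TNR - 1, with TPR weighted by attack severity.
    return severity_weighted_recall(y_true, y_pred, cvss) \
        + specificity(y_true, y_pred) - 1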
- Published
- 2024
- Full Text
- View/download PDF
37. Success factors for measuring agile process changes and their metrics.
- Author
-
Müller, Johannes, Ammersdörfer, Theresa, Luo, Shupei, Grau, Raphael, Inkermann, David, and Albers, Albert
- Subjects
INNOVATION management ,ARTIFICIAL intelligence ,TECHNOLOGICAL innovations ,DIGITAL technology ,DEEP learning - Abstract
Agile ways of thinking and acting have spread beyond the software industry to the development of mechatronic products. Goals such as better customer integration, greater responsiveness, and shorter development times are often pursued. However, companies often face the challenge of being unable to measure the success of introducing agile methods. The perceived effects often deviate from the expected effects and are also perceived very differently by the development team and management. For this reason, success factors were developed as part of this research to measure the success of agile methods rather than the degree of "agilization". To support the measurement of this success, the success factors were combined with quantitative metrics. In addition, a case study is used to investigate which of these metrics can be identified by conducting retrospectives within the team. [ABSTRACT FROM AUTHOR]
- Published
- 2024
38. Recognition and Rewards in Academia – Recent Trends in Assessment
- Author
-
Kramer, Bianca and Bosman, Jeroen
- Published
- 2024
- Full Text
- View/download PDF
39. Connecting the dots – A literature review on learning analytics indicators from a learning design perspective.
- Author
-
Ahmad, Atezaz, Schneider, Jan, Griffiths, Dai, Biedermann, Daniel, Schiffner, Daniel, Greller, Wolfgang, and Drachsler, Hendrik
- Subjects
DATA analytics ,EDUCATIONAL tests & measurements ,EDUCATIONAL technology ,DESCRIPTIVE statistics ,SYSTEMATIC reviews ,CONCEPTUAL structures ,PROBLEM-based learning ,LEARNING strategies ,COMPUTER assisted instruction ,EDUCATIONAL attainment - Abstract
Background: During the past decade, the increasingly heterogeneous field of learning analytics has been critiqued for an over-emphasis on data-driven approaches at the expense of paying attention to learning designs. Method and objective: In response to this critique, we investigated the role of learning design in learning analytics through a systematic literature review. 161 learning analytics (LA) articles were examined to identify indicators that were based on learning design events and their associated metrics. Through this research, we address two objectives: first, to achieve a better alignment between learning design and learning analytics by proposing a reference framework that presents possible connections between the two; second, to present how LA indicators and metrics have been researched and applied in the past. Results and conclusion: In our review, we found that a number of learning analytics papers did indeed consider learning design activities for harvesting user data. We also found a consistent increase in the number and quality of indicators and their evolution over the years. Lay Description: What is already known about this topic? Learning design (LD) is the pedagogic process used in teaching/learning that leads to the creation and sequencing of learning activities and the environment in which they occur. Learning analytics (LA) is the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. There are multiple studies on the alignment of LA and LD, but research shows that there is still room for improvement. What this paper adds? It proposes a framework connecting LA indicators with the activity outcomes from the LD; it demonstrates how learning events/objectives and learning activities are associated with LA indicators and how an indicator is formed from (several) LA metrics; and it aims to assist the LA research community in identifying commonly used concepts and terminologies: what to measure, and how to measure. Implications for practice and/or policy: This article can help course designers, teachers, students, and educational researchers gain a better understanding of the application of LA, and can further help LA researchers connect their research with LD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. An exploratory evaluation of code smell agglomerations.
- Author
-
Santana, Amanda, Figueiredo, Eduardo, Alves Pereira, Juliana, and Garcia, Alessandro
- Subjects
SOFTWARE measurement ,SOURCE code ,SYSTEMS design ,SMELL ,ENVY - Abstract
A code smell is a symptom of design or code decisions that may degrade a system's modularity. For example, smells may indicate inheritance misuse, excessive coupling, or excessive size. When two or more code smells occur in the same snippet of code, they form a code smell agglomeration. Few studies evaluate how agglomerations may impact code modularity. In this work, we evaluate which aspects of modularity are hindered by agglomerations. This way, we can support practitioners in improving their code by refactoring the code involved in agglomerations found to be harmful to system modularity. We analyze agglomerations composed of four types of code smells: Large Class, Long Method, Feature Envy, and Refused Bequest. We then conduct a comparison study between 20 systems mined from the Qualitas Corpus dataset and 10 systems mined from GitHub. In total, we analyzed 1789 agglomerations in 30 software projects from both repositories. We rely on frequent itemset mining and non-parametric hypothesis testing for our analysis. Agglomerations formed by two or more Feature Envy smells occur with significant frequency in the source code of both repositories. Agglomerations formed by different smell types impact modularity more than classes with only one smell type and classes without smells. For some metrics, when Large Class appears alone, it has a significant and large impact compared to classes that have two or more method-level smells of the same type. We have identified which agglomerations are more frequent in the source code and how they may impact code modularity. Consequently, we provide supporting evidence of which agglomerations developers should refactor to improve code modularity. [ABSTRACT FROM AUTHOR]
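As a hedged sketch of the mining step, the snippet below applies apriori from the mlxtend library to a one-hot table of smells per class to surface frequently co-occurring smell sets, then compares a modularity metric across groups with a Mann-Whitney U test. The data, metric values, and the 50% support threshold are illustrative, not the paper's setup:

import pandas as pd
from mlxtend.frequent_patterns import apriori
from scipy.stats import mannwhitneyu

# One row per class, one boolean column per smell type (hypothetical data).
df = pd.DataFrame([
    {"LargeClass": True,  "LongMethod": True,  "FeatureEnvy": False, "RefusedBequest": False},
    {"LargeClass": False, "LongMethod": True,  "FeatureEnvy": True,  "RefusedBequest": False},
    {"LargeClass": True,  "LongMethod": True,  "FeatureEnvy": True,  "RefusedBequest": False},
])

# Smell sets co-occurring in at least 50% of classes are candidate agglomerations.
agglomerations = apriori(df, min_support=0.5, use_colnames=True)
print(agglomerations)

# Non-parametric comparison of a modularity metric between smelly and clean classes.
stat, p = mannwhitneyu([0.42, 0.55, 0.61], [0.71, 0.80, 0.77])
print(f"U = {stat}, p = {p:.3f}")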
- Published
- 2024
- Full Text
- View/download PDF
41. Core Potentials: The Consensus Segmentation Conjecture.
- Author
-
Santiago Arguello, Anahy, Scholz, Guillaume E., and Stadler, Peter F.
- Abstract
Segmentations are partitions of an ordered set into non-overlapping intervals. The Consensus Segmentation or Segmentation Aggregation problem is a special case of the median problems with applications in time series analysis and computational biology. A wide range of dissimilarity measures for segmentations can be expressed in terms of potentials, a special type of set-functions. In this contribution, we shed more light on the properties of potentials, and how such properties affect the solutions of the Consensus Segmentation problem. In particular, we disprove a conjecture stated in 2021, and we provide further insights into the theoretical foundations of the problem. [ABSTRACT FROM AUTHOR]
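For readers unfamiliar with the setting, one standard formulation from the segmentation-aggregation literature (an illustrative choice, not necessarily the exact potentials studied in the paper) is:

\[
d(S,T) \;=\; \bigl|\{\,(i,j) : i \sim_S j \;\oplus\; i \sim_T j\,\}\bigr|,
\qquad
S^{\ast} \;=\; \operatorname*{arg\,min}_{S} \sum_{k=1}^{m} d(S, S_k),
\]

where $S_1,\dots,S_m$ are input segmentations of $\{1,\dots,n\}$, $i \sim_S j$ means that $i$ and $j$ lie in the same interval of $S$, and $\oplus$ denotes exclusive or.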
- Published
- 2024
- Full Text
- View/download PDF
42. Accuracy measurement of automatic remeshing tools
- Author
-
Mattia Sullini
- Subjects
remeshing ,benchmark ,accuracy ,analytic ,metrics ,Architecture ,NA1-9428 ,Architectural drawing and design ,NA2695-2793 - Abstract
Surface reconstruction from raw point clouds generates polygonal meshes characterised by typical local issues that need to be addressed through a global reorganisation of mesh connectivity during the remeshing process. In the applications for which remeshing algorithms have historically been developed, the problem can be addressed without affecting the mesh polycount; for the authoring of 3D assets intended for real-time rendering applications, however, maintaining a low polycount is mandatory. There is thus a trade-off between the number of polygons and accuracy. Hence, in cases where accuracy is a primary concern, such as with Digital Cultural Heritage objects, the topology of the mesh that ensures this reduction becomes crucial, since at an equal polycount the dimensional discrepancy between the original and different simplified meshes can vary significantly. It is therefore vital to evaluate the characteristic quality obtainable from different remeshing algorithms with simplification; yet despite the importance of the matter, there is a lack of metrics for performing such comparative benchmarks. This study aims to address this gap by defining, and testing on real-world case studies, a Shape Complexity Index that is applicable uniformly and consistently to geometric shapes of generic complexity. The index makes it possible to pre-determine a target polycount and subsequently normalise signed-distance computations among the different simplified test meshes, laying the basis for both quantitative and qualitative analysis of the accuracy achievable by any automatic remeshing tool. Additionally, the proposed workflow can be employed as an instrument for planning, monitoring, and reporting on a digitisation campaign for Cultural Heritage objects.
DOI: https://doi.org/10.20365/disegnarecon.32.2024.10
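A minimal sketch of the kind of signed-distance measurement described above, using the trimesh library; the file names and sample count are placeholders:

import numpy as np
import trimesh

# Reference (high-poly) mesh and a simplified candidate; paths are hypothetical.
original = trimesh.load("scan_original.ply")
simplified = trimesh.load("scan_remeshed.ply")

# Sample points on the simplified surface and measure signed distance to the
# original (positive values lie inside the original in trimesh's convention).
points, _ = trimesh.sample.sample_surface(simplified, 10_000)
d = trimesh.proximity.signed_distance(original, points)

print("mean abs deviation:", np.mean(np.abs(d)))
print("RMS deviation:", np.sqrt(np.mean(d ** 2)))
print("max deviation:", np.max(np.abs(d)))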
- Published
- 2024
43. The evolving landscape of publishing in human reproduction: an analysis of scientometric data, open-access publishing, and article processing charges
- Author
-
Hakki Uzun, Görkem Akça, Berat Sönmez, Erdem Orman, Yakup Kaçan, and Eyüp Dil
- Subjects
Metrics ,Open access ,Cost ,Article processing charge ,Medicine (General) ,R5-920 ,Reproduction ,QH471-489 - Abstract
Abstract Background This bibliometric study aims to examine the associations of journals in the field of human reproduction with their access types and article processing charges, to evaluate the evolving landscape of publishing in human reproduction. Methods The primary databases, including the Clarivate Analytics Master Journal List, Scopus®, PubMed, and the Directory of Open Access Journals, were scrutinized to identify pertinent journals within the realm of human reproduction, utilizing keywords such as reproductive, reproduction, fertility, and infertility. Journals were excluded if they were not actively publishing in English or primarily focused on reproductive health, men’s health, sexual medicine, embryogenesis, developmental biology, or veterinary medicine concerning animal reproduction. A thorough characterization of the journals was conducted, followed by a comparative analysis of citation metrics and article processing charges across various access models. Results Forty-one journals were included in the study. A significant increase in the proportion of gold and diamond open-access journals was observed, rising from 42% (13 out of 31) to 53.6% (22 out of 41) by 2023. Hybrid journals demonstrated superior citation metrics compared to diamond open-access journals. For hybrid journals, a statistically significant, moderately positive correlation was found between article processing charges and CiteScore (rs (27) = 0.515, p
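The reported correlation is a Spearman rank statistic; a minimal sketch of how such a value is computed with scipy, on made-up APC and CiteScore figures, is:

from scipy.stats import spearmanr

# Hypothetical APC (USD) and CiteScore values for a handful of hybrid journals.
apc = [2500, 3200, 1800, 4000, 2900]
citescore = [3.1, 4.5, 2.2, 5.0, 3.8]

rho, p = spearmanr(apc, citescore)  # rank-based, robust to skewed cost data
print(f"r_s = {rho:.3f}, p = {p:.3f}")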
- Published
- 2024
- Full Text
- View/download PDF
44. Development model for assessing the structural complexity of programs
- Author
-
A.S. Rvanova, N.S. Kolyeva, and M.V. Panova
- Subjects
algorithm complexity ,metrics ,analysis ,estimation ,graph ,graph vertices ,modeling ,Home economics ,TX1-1110 ,Economics as a science ,HB71-74 - Abstract
The research is devoted to estimating the structural complexity of programs. An algorithm for finding cyclomatic routes through program executions is described. To date, two directions for obtaining complexity estimates of software modules have been defined: structural and statistical. Both connect program complexity with the labor intensity of development. The structural complexity of program modules is determined by the number of interacting components and by the number and complexity of the links between them. The complexity of a program's behavior depends to a large extent on the set of routes through which it is executed. The complexity metric obtained from these positions allows us to estimate the cost of designing the program as a whole, as well as to identify the modules that are likely to contain the most errors, especially logical ones.
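The structural direction described here appears to be in the spirit of McCabe's cyclomatic number, which for a control-flow graph is M = E - N + 2P (edges, nodes, connected components). A minimal sketch with networkx, on an illustrative graph, is:

import networkx as nx

# Control-flow graph of a small routine (nodes = basic blocks, edges = jumps);
# the structure here is illustrative, a single if/else.
cfg = nx.DiGraph()
cfg.add_edges_from([
    ("entry", "cond"), ("cond", "then"), ("cond", "else"),
    ("then", "exit"), ("else", "exit"),
])

# McCabe's cyclomatic number: M = E - N + 2P, with P = 1 for a single routine.
E, N = cfg.number_of_edges(), cfg.number_of_nodes()
P = nx.number_weakly_connected_components(cfg)
M = E - N + 2 * P
print("cyclomatic complexity:", M)  # 5 - 5 + 2 = 2 -> two independent routes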
- Published
- 2024
- Full Text
- View/download PDF
45. Approaches to Programming and 3D Modeling in the Automated Certification of Industrial Robots’ Metrics
- Author
-
V.A. Kyrylovych, O.O. Dobrzhanskyi, A.R. Kravchuk, E.S. Pukhovskiy, and O.V. Pidtychenko
- Subjects
technology ,robot ,automation ,certification ,modeling ,metrics ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
This work focuses on programming features and 3D modeling in the automated certification of metrics for the manipulative-system links of nearly all modern single-arm, single-grip industrial robots (IRs), using the well-known CoppeliaSim software environment. The results of an analysis of existing methods and approaches to automated metric certification are presented. The conceptual meaning of the term 'metric certification', together with CoppeliaSim's functional capabilities and free availability, determined its selection as the software tool for automating the certification process. This enabled spatial 3D modeling with full-scale virtual models of each link, and of the manipulative system as a whole, for the analyzed industrial robots, which is a necessary component of the certification process. It ensures reliability and precision in the subsequent synthesis of the necessary elements of robotic technologies, such as optimizing the placement of equipment in the IR's working area and forming the optimal trajectory of movement for the links of the manipulative system, with working tools or a grip, during technological interaction with objects of manipulation. An analysis was performed of the tools and instruments that allow primarily spatial factors affecting IR metrics to be taken into account, such as the geometric parameters of the IR construction, tools, and grip, as well as possible limitations due to the constructive-technological features of the technological equipment. Recommendations are provided for assembling a complex of modeling tools and for using automated certification to support decision-making in real-world conditions when designing and synthesizing robotic mechanoassembly technologies. This makes the certification process practically significant for engineering use. The materials of this study are intended for researchers, engineers, students, and postgraduates who deal with problems and practical tasks of industrial robotics in terms of automated modeling and analysis, with subsequent application in the technological preparation of robotic mechanoassembly production.
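A hedged sketch of scripting CoppeliaSim from Python via its ZMQ remote API (package coppeliasim-zmqremoteapi-client), of the kind such automated certification might use; the scene path '/UR5', the joint naming, and the presence of a running CoppeliaSim instance are all assumptions:

from coppeliasim_zmqremoteapi_client import RemoteAPIClient

client = RemoteAPIClient()           # connects to a running CoppeliaSim instance
sim = client.require('sim')

robot = sim.getObject('/UR5')        # hypothetical robot model in the scene
joint = sim.getObject('/UR5/joint')  # first joint of that model

# Read pose data that a metric-certification pass could log per link.
position = sim.getObjectPosition(joint, sim.handle_world)
angle = sim.getJointPosition(joint)
print(position, angle)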
- Published
- 2024
- Full Text
- View/download PDF
46. Insights into PCI and DDCI as Key Metrics for Measuring Subnational Competitiveness in Vietnam
- Author
-
Phuong-Duy Nguyen, Tan-Phong Dinh, and Bich-Ngoc Pham-Thi
- Subjects
business environment ,competitiveness ,economic governance ,investment climate ,metrics ,Political institutions and public administration (General) ,JF20-2112 ,Public law ,K3150 ,Law of Europe ,KJ-KKZ - Abstract
The purpose of this study is to review and understand Vietnam’s measurement of competitiveness for its local authorities. This paper delves into the application of two fundamental metrics, namely the Provincial Competitiveness Index (PCI) and the Department and District Competitiveness Index (DDCI). As Vietnam continues its trajectory of rapid economic development and regional integration, understanding the dynamics of competitiveness at the subnational level is of great importance. Drawing upon an overview of the competitiveness concept, a case study, and multiple perspectives, the article provides a holistic understanding of subnational competitiveness metrics in Vietnam. The results offer valuable insights for policymakers, researchers, and stakeholders involved in subnational development strategies and economic governance frameworks in Vietnam and beyond. This study also indicates future opportunities and challenges within research on pillars/indices and indicators and their impact on creating ease of doing business at the subnational level.
- Published
- 2024
- Full Text
- View/download PDF
47. Enhancing IoT scalability and security through SDN
- Author
-
Diyar HAMAD, Khirota YALDA, Nicolae TAPUS, and Ibrahim Taner OKUMUS
- Subjects
sdn ,iot ,floodlight ,metrics ,security ,scalability ,Automation ,T59.5 ,Information technology ,T58.5-58.64 - Abstract
The proliferation of Internet of Things (IoT) devices introduces significant challenges in network security and scalability. Software-Defined Networking (SDN) emerges as a promising framework to offer more flexible and intelligent network management capabilities. This paper delves into the integration of SDN principles within an IoT environment to bolster security and scalability. Through a proof-of-concept deployment utilizing the Floodlight controller and Mininet-WiFi, the approach involves the collection and analysis of data on average latency, packet loss, throughput, and jitter to evaluate network performance. Additionally, Python scripts and Wireshark are employed for an in-depth network analysis. The findings illustrate that SDN integration can adeptly manage the augmented network load, evidenced by minimal increases in latency and packet loss, while maintaining acceptable throughput and jitter levels. Furthermore, the Floodlight controller's capability to identify and counter Distributed Denial of Service (DDoS) attacks underscores its potential to enhance IoT network security. The results affirm that SDN can significantly elevate the scalability and security of IoT networks, presenting a viable solution to manage the escalating demands of IoT deployments. Future endeavors will aim to extend the network scale and investigate alternative SDN controllers to substantiate the scalability of the conclusions.
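A minimal sketch of the kind of testbed described, wiring a Mininet-WiFi topology to a remote Floodlight controller; the node names, SSID, and controller address are assumptions:

#!/usr/bin/env python
from mininet.node import RemoteController
from mn_wifi.net import Mininet_wifi
from mn_wifi.cli import CLI

net = Mininet_wifi()
sta1 = net.addStation('sta1')
sta2 = net.addStation('sta2')
ap1 = net.addAccessPoint('ap1', ssid='iot-net', mode='g', channel='1')
c0 = net.addController('c0', controller=RemoteController,
                       ip='127.0.0.1', port=6653)  # Floodlight's OpenFlow port

net.configureWifiNodes()
net.addLink(sta1, ap1)
net.addLink(sta2, ap1)

net.build()
c0.start()
ap1.start([c0])

net.pingAll()   # quick latency/packet-loss check between IoT stations
CLI(net)        # drop to the CLI for iperf throughput and jitter tests
net.stop()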
- Published
- 2024
- Full Text
- View/download PDF
48. Improved Decoding Algorithms for Convolutional Codes
- Author
-
Kateryna Sosnenko
- Subjects
convolutional codes ,viterbi algorithm ,fpga basis ,metrics ,Cybernetics ,Q300-390 - Abstract
Introduction. The considered implementation of the Viterbi algorithm reduces the hardware and time costs of decoding convolutional code sequences, and can be used for semi-natural (hardware-in-the-loop) modeling of existing data transmission systems (for example, in satellite communication). The purpose of the article is to show how, when modeling the processes of encoding and decoding convolutional codes with the improved Viterbi algorithm implemented on FPGA-type programmable logic devices, the number of clock cycles for reading metrics and tracks from RAM was halved. The results. The two-fold decrease in the number of read cycles for metrics and tracks (input sequences or back-pointers) is achieved by jointly processing pairs of receiver nodes that share the same two source nodes. The relatively small cost of a hardware branch-metric calculator makes it possible to organize parallel calculation, comparison, and multiplexing of the metrics and tracks of two sources at the inputs of block RAM. Dual-port block memory makes it possible to speed up the decoding process significantly (up to two times) and to dispense with metric and track buffer registers. Conclusions. The Viterbi decoder is widely used in communication systems and is a practical method of error correction at high signal transmission speeds in modern telecommunication systems. It is designed for decoding convolutional codes and is optimal in the sense of minimizing the probability of error. Its advantage is that its complexity is a linear function of the number of symbols in the codeword sequence. In addition, the Viterbi algorithm is widely used in pattern-recognition systems based on hidden Markov models.
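To make the trellis bookkeeping concrete, below is a minimal software model of a hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal). It sketches the algorithm itself, not the FPGA optimization described in the article:

G = (0b111, 0b101)   # generator polynomials (7, 5 octal), K = 3, rate 1/2
N_STATES = 4         # 2^(K-1) encoder states

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                 # newest bit in the MSB
        out += [parity(reg & G[0]), parity(reg & G[1])]
        state = reg >> 1                       # shift register advances
    return out

def viterbi_decode(received):
    INF = float("inf")
    pm = [0] + [INF] * (N_STATES - 1)          # path metrics, start in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(0, len(received), 2):
        r0, r1 = received[t], received[t + 1]
        new_pm = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                # Hamming branch metric between expected and received pair
                bm = (parity(reg & G[0]) != r0) + (parity(reg & G[1]) != r1)
                ns = reg >> 1
                if pm[s] + bm < new_pm[ns]:    # add-compare-select step
                    new_pm[ns] = pm[s] + bm
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(N_STATES), key=lambda s: pm[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]          # two trailing zeros flush the encoder
coded = encode(msg)
coded[3] ^= 1                      # inject a single channel error
print(viterbi_decode(coded) == msg)  # True: the error is corrected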
- Published
- 2024
- Full Text
- View/download PDF
49. Towards improving aspect-oriented software reusability estimation
- Author
-
Aws A. Magableh, Hana’a Bani Ata, Ahmad A. Saifan, and Adnan Rawashdeh
- Subjects
Aspects ,Aspect-oriented ,AO ,Quality attribute ,Metrics ,Reuse ,Medicine ,Science - Abstract
Abstract Nowadays, many organizations opt for Aspect-Oriented Programming (AOP), an enhancement to Object-Oriented Programming (OOP), due to the addition of a number of concepts that assist in producing more flexible and reusable components. One of the most important elements AOP supports is software reuse, which is based on reusability attributes. These attributes indicate the possibility of reusing one or more components in the development of a new system, and reusability is one of the most essential attributes for evaluating the quality of a system’s components. Thus far, little attention has been paid to the process of measuring AOP reusability, and it has not yet been standardized. The objective of the current study is to arrive at a reasonable measurement of AOP software reuse, a topic that is significant for researchers and offers several advantages for organizations. Although numerous models have been built to estimate the reusability of software, most of them are not dedicated to Aspect-Oriented Software (AOS). In this study, a model has been designed for AOS reusability estimation and measurement based on a new equation involving five attributes that have a range of positive and negative impacts on AOS reusability. Three of those attributes, namely coupling, cohesion, and design size, have been included in previous studies; this study proposes complexity and generality as two new attributes to be considered. Each of these attributes was measured using the metrics also proposed in this study. A new equation to calculate AOS reusability was constructed based on the most important reusability attributes and metrics. Seven aspect projects were employed as a case study to apply the proposed equation. After the proposed equation was applied to the selected projects, we obtained new reusability values to compare with the values resulting from the previous equation; the differences indicate that the proposed reusability metrics and attributes have a significant effect.
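A hedged sketch of what a weighted-sum estimate over these five attributes could look like; the weights, signs, and normalization below are placeholders, not the paper's fitted equation:

# Cohesion and generality are assumed to help reuse; coupling, complexity,
# and design size to hinder it. All weights here are illustrative only.
WEIGHTS = {
    "cohesion": +0.30,
    "generality": +0.20,
    "coupling": -0.25,
    "complexity": -0.15,
    "design_size": -0.10,
}

def reusability(attrs: dict[str, float]) -> float:
    # attrs: attribute name -> measurement normalized to [0, 1].
    return sum(WEIGHTS[a] * attrs[a] for a in WEIGHTS)

score = reusability({"cohesion": 0.8, "generality": 0.6, "coupling": 0.4,
                     "complexity": 0.5, "design_size": 0.3})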
- Published
- 2024
- Full Text
- View/download PDF
50. DATA-DRIVEN INSIGHTS INTO SOCIAL MEDIA'S EFFECTIVENESS IN DIGITAL COMMUNICATION
- Author
-
Bade Sudarshan Chakravarthy, B. Uma Rani, and K. Karunakaran
- Subjects
digital marketing ,social media ,strategies ,engagement ,metrics ,content ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
In the ever-evolving landscape of digital marketing, the role of social media in shaping effective communication strategies is paramount. This research paper delves into data-driven insights to examine how social media channels contribute to the success of digital communication efforts, focusing on the multifaceted aspects of social media's impact on brand visibility, engagement, and customer interaction. The data was analyzed quantitatively using descriptive statistics, while Analysis of Variance (ANOVA) was used to establish associations between variables. The study revealed that the major factors influencing social media's effectiveness in digital communication are content, customer interaction, and social media metrics. As a result of this comprehensive analysis, the paper not only provides valuable insights into the current state of social media's effectiveness in digital communication but also offers practical recommendations for businesses looking to optimize their social media strategies. In an era where data reigns supreme, these findings serve as a guide for harnessing the power of social media as a dynamic and influential tool in the digital communication arsenal.
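As a minimal illustration of the ANOVA step, scipy's one-way F-test on hypothetical engagement scores grouped by content type:

from scipy.stats import f_oneway

# Hypothetical engagement scores grouped by content strategy.
text_posts = [3.2, 2.8, 3.5, 3.0]
photo_posts = [4.1, 4.6, 3.9, 4.3]
video_posts = [5.0, 4.8, 5.2, 4.7]

F, p = f_oneway(text_posts, photo_posts, video_posts)
print(f"F = {F:.2f}, p = {p:.4f}")  # small p suggests engagement differs by content type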
- Published
- 2024
- Full Text
- View/download PDF