82,841 results
Search Results
252. 'We believe we will succeed... because we will "soma kwa bidii"': acknowledging the key role played by aspirations for 'being' in students' navigations of secondary schooling in Tanzania.
- Author
-
Adamson, Laela
- Subjects
SECONDARY school students ,CLASSROOM environment ,SOCIAL change ,DATA analysis - Abstract
With the dramatic global expansion of secondary schooling there has been significant research interest in how education is related to future aspirations, with important calls to acknowledge the connections between processes of aspiring and young people's social, economic and cultural circumstances. This paper presents findings from thematic analysis of interview, participant observation and classroom observation data from an ethnographic study in two secondary schools in Tanzania. It argues that an important, and often overlooked, aspect of this complex process is the way in which aspirations for the future are connected not only to present realities, but also to aspirations in the present. Focusing on students' aspirations relating to 'being a "good" student' and being able to 'soma kwa bidii' or 'study hard', this paper uses the conceptual language of the capability approach to assert the importance of considering aspirations for 'being' in education in conjunction with future aspirations for 'becoming'. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
253. Derivational and Compounding Processes in The Hausa and Labur (Jaaku) Languages: Areas of Similarities and Differences.
- Author
-
Fanti, Muhammad Shehu
- Subjects
DESCRIPTIVE statistics ,LINGUISTIC analysis ,WORD formation (Grammar) ,HAUSA language ,DATA analysis - Abstract
This paper examines the relationship between derivational and compounding processes in the Hausa and Labur (Jaaku) languages. It compares the two languages by showing the similarities and differences in their derivation and compound formation. Although the languages (i.e., Hausa and Labur) belong to different language phyla, the paper identifies areas of similarity and difference in their derivational and compounding processes, and enumerates and compares some of the processes of word formation in each language. A descriptive approach was employed in analyzing the data obtained, adopting the comparative analytical model of Nida (1949) and the linguistic analysis of Carl (1996), cited in Rubba (2004). It is believed that the paper contributes to the current trend in the field of language comparison. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
254. Velocity equation for grenades while impacting on dry sand media.
- Author
-
Macko, Martin, Xuan Son Bui, Phanthavong, Kongsathit, Duc Hung Pham, Van Gion Do, Van Minh Do, and Jiri Skala
- Subjects
IMPACT (Mechanics) ,VELOCITY measurements ,EQUATIONS of motion ,PARAMETER estimation ,DATA analysis - Abstract
This paper deals with the collision of sphere-shaped grenades with sand media. The central issue of the article is establishing an empirical velocity equation of the grenade while impacting on sand, which is used to solve the motion equations of the mechanical mechanism inside the impact grenade fuze. The paper focuses on impact velocities lower than 5 m s⁻¹. An experiment was conducted to study the velocity of the grenade while impacting on dry sand. A high-speed camera was used to capture the grenade positions, and the grenade velocity during the impact process was derived from these video data. Several types of fitting curves are used to regress the velocity equation of the grenade while interacting with the sand media, and the best-fitting model is chosen. The results show that the regression curve has a high correlation with the experimental data for grenade velocities below 5 m s⁻¹. The resulting regression equation is useful for analyzing the working ability of the inertial mechanism inside the impact grenade, or for analyzing and choosing appropriate parameters for each part of the inertial mechanism to meet its required characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
255. Analysis of Missingness Scenarios for Observational Health Data.
- Author
-
Zamanian, Alireza, von Kleist, Henrik, Ciora, Octavia-Andreea, Piperno, Marta, Lancho, Gino, and Ahmidi, Narges
- Subjects
MISSING data (Statistics) ,DATA analysis ,HEALTH facilities ,SENSITIVITY analysis ,DATA modeling - Abstract
Simple Summary: This paper argues for the importance of considering domain knowledge when dealing with missing data in healthcare. We identify fundamental missingness scenarios in healthcare facilities and show how they impact missing data analysis methods. Despite the extensive literature on missing data theory and cautionary articles emphasizing the importance of realistic analysis for healthcare data, a critical gap persists in incorporating domain knowledge into missing data methods. In this paper, we argue that the remedy is to identify the key scenarios that lead to data missingness and investigate their theoretical implications. Based on this proposal, we first introduce an analysis framework in which we investigate how different observation agents, such as physicians, influence data availability, and then scrutinize each scenario with respect to the steps in the missing data analysis. We apply this framework to the case study of observational data in healthcare facilities. We identify ten fundamental missingness scenarios and show how they influence the identification step for missing data graphical models, inverse probability weighting estimation, and exponential tilting sensitivity analysis. To emphasize how domain-informed analysis can improve method reliability, we conduct simulation studies under the influence of various missingness scenarios. We compare the results of three common methods in medical data analysis: complete-case analysis, MissForest imputation, and inverse probability weighting estimation. The experiments are conducted for two objectives: variable mean estimation and classification accuracy. We advocate for our analysis approach as a reference for observational health data analysis. Beyond that, we also posit that the proposed analysis framework is applicable to other medical domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
256. E-COMMERCE DATA MINING ANALYSIS BASED ON USER PREFERENCES AND ASSOCIATION RULES.
- Author
-
ZHIYING FAN
- Subjects
ASSOCIATION rule mining ,DATA mining ,RECOMMENDER systems ,DATA analysis ,ELECTRONIC commerce - Abstract
Improving the sales of e-commerce platforms is the primary goal of this paper. This paper studies the data of e-commerce product recommendations from the perspective of user preferences and association rules. The characteristics of positive and reverse association rules in data mining are analyzed. Then, a multi-dimensional association rule calculation method is proposed: a data attribute unit set is created, and by analyzing each attribute's weighting coefficient and similarity, the attribute confidence degree is obtained and the data are preprocessed. An example is given to verify the effectiveness of the proposed method. The recommendation engine based on user preferences and association rules significantly improves the accuracy, recall rate, and prediction coverage of e-commerce recommendation systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
257. The potential of creative uses of metonymy for climate protest.
- Author
-
O'Dowd, Niamh A
- Subjects
METONYMS ,CLIMATE change ,DATA analysis ,DESCRIPTIVE statistics ,SOCIAL change - Abstract
This paper develops the notion of metonymy scenarios by exploring the social and cognitive dimensions of various creative uses of metonymy in a collection of digital banners created for the Global Climate Strike movement. The paper argues that the banners exploit existing metonymic relationships to activate dominant anthropocentric discourses in society, and to subvert them via processes of recontextualisation and reappropriation, in order to challenge system conventions and normative attitudes regarding climate change. The literature to date has not adequately considered metonymy as a dynamic and scenario-activating cognitive operation, nor has it thoroughly investigated the relationship between metonymy and irony. However, the data analysed here show that several creative uses of metonymy, including twice-true metonymy, metonymy in combination with metaphor, and the juxtaposition of different metonymies are markers of what this paper posits as metonymic mininarratives or scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
258. MCGCL: A multi-contextual graph contrastive learning-based approach for POI recommendation.
- Author
-
Han, Xueping and Wang, Xueyong
- Subjects
REPRESENTATIONS of graphs ,INFORMATION retrieval ,RECOMMENDER systems ,RANDOM walks ,DATA analysis - Abstract
This paper focuses on the point-of-interest (POI) recommendation task. Recently, graph representation learning-based POI recommendation models have gained significant attention due to the powerful modeling capacity of graph-structured data. Despite their effectiveness, we find that recent methods struggle to effectively utilize information from POIs that have not been checked in, which can limit their performance. Hence, in this paper, we propose a new model, named the multi-contextual graph contrastive learning (MCGCL) model, which introduces contrastive learning into graph representation learning-based methods. First, MCGCL extracts interactions between POIs under different contextual factors from user check-in records using predefined graph structure information. Next, it samples important POI sets for different contextual factors using a random walk-based method. Then, it introduces a new contrastive learning loss that incorporates contextual information into traditional contrastive learning to enhance its ability to capture contextual information. Finally, MCGCL employs a graph neural network (GNN) model to learn representations of users and POIs. Extensive experiments on real-world datasets have demonstrated the effectiveness of MCGCL on the POI recommendation task compared to representative POI recommendation approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
259. Investigating the 'Bolsonaro effect' on the spread of the Covid-19 pandemic: An empirical analysis of observational data in Brazil.
- Author
-
Razafindrakoto, Mireille, Roubaud, François, Castilho, Marta Reis, Pero, Valeria, and Saboia, João
- Subjects
COVID-19 pandemic ,SOCIAL dynamics ,SOCIAL distancing ,DATA analysis ,COVID-19 vaccines ,CONSPIRACY theories - Abstract
Brazil counts among the countries hardest hit by the Covid-19 pandemic. A great deal has been said about the negative role played by President Bolsonaro's denialism, but relatively few studies have attempted to measure precisely what impact it actually had on the pandemic. Our paper conducts econometric estimates based on observational data at the municipal level to quantitatively assess the 'Bolsonaro effect' over time from March 2020 to December 2022. To our knowledge, this paper presents the most comprehensive investigation of Bolsonaro's influence on the spread of the pandemic from two angles: considering Covid-19 mortality and two key transmission mitigation channels (social distancing and vaccination); and exploring the full pandemic cycle (2020–2022) and its dynamics over time. Controlling for a rich set of relevant variables, our results find a strong and persistent 'Bolsonaro effect' on the death rate: municipalities that were more pro-Bolsonaro recorded significantly more fatalities. Furthermore, evidence suggests that the president's attitude and decisions negatively influenced the population's behaviour. Firstly, pro-Bolsonaro municipalities presented a lower level of compliance with social distancing measures. Secondly, vaccination was relatively less widespread in places more in favour of the former president. Finally, our analysis points to longer-lasting and damaging repercussions. Regression results are consistent with the hypothesis that the 'Bolsonaro effect' impacted not only Covid-19 vaccination but vaccination campaigns in general, thereby jeopardizing the historical success of the National Immunization Program in Brazil. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
260. FORECASTING AND PRICING OF ROAD VEHICLES: AN INITIAL CONCEPT.
- Author
-
Kováčová, Natália and Veľký, Patrik
- Subjects
USED car sales & prices ,PRICES ,USED cars ,CONGESTION pricing - Abstract
- Published
- 2024
261. Peace in Perspective: A Visual Data Analysis of the Global Peace Index 2023.
- Author
-
Nasar, Hani and Naqvi, Itrat Batool
- Subjects
DATA analysis ,REGIONAL disparities ,PEACE ,SAFETY factor in engineering ,PEACEBUILDING - Abstract
This paper delves into the Global Peace Index (GPI), a comprehensive metric evaluating global peace through factors like safety, security, ongoing conflict, and militarization. The objective is to dissect the GPI's assessment of global peace, identifying key factors that influence a nation's peace status. Employing statistical analysis alongside data visualization techniques, the study methodically examines the GPI's multifaceted criteria. The findings underscore the critical role of safety and security, alongside the impact of ongoing conflicts, in determining a nation's peace status. Notably, the analysis reveals pronounced regional disparities in peace, illustrating the intricate challenges of bolstering global peace. The paper concludes by proposing targeted peace-building initiatives, advocating for a holistic strategy to foster safer, more secure societies globally. It is recommended to address the underlying issues identified, marking a significant step towards realizing the aspirations of the GPI for enhanced global peace. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
262. Study on Abnormal Pattern Detection Method for In-Service Bridge Based on Lasso Regression.
- Author
-
Zhong, Huaqiang, Hu, Hao, Hou, Ning, and Fan, Ziyuan
- Subjects
BRIDGES ,BRIDGE floors ,STRUCTURAL health monitoring ,REGRESSION analysis ,DATA mining ,STRUCTURAL models - Abstract
The real-time operational safety of in-service bridges has received wide attention in recent years. By fully utilizing the health monitoring data of bridges, a structural abnormal pattern detection method based on data mining can be established to effectively ensure the safety of in-service bridges. This paper takes a large-span arch bridge as the research object, analyzes the time-based variation of the main monitoring data of the structure, establishes Lasso regression models for load characteristic indicators and vertical bending fundamental frequency of the structure under different time scales, and uses the residuals of the Lasso model to indicate the structural state and identify abnormal patterns. Firstly, the monitoring data of bridge structural temperature, girder end displacement, and girder acceleration were analyzed, and the interrelationships were studied to extract characteristic parameters of structural load characteristics and structural frequency. Then, the time-varying patterns of structural response were analyzed, and Lasso regression models and their regression variables were discussed based on monitoring data under two different time scales: daily cycle and annual cycle. The abnormal pattern detection method for bridge structures was developed. Finally, the effectiveness of this method was verified by taking the bridge deck pavement replacement as the abnormal pattern. The research results indicate that the proposed bridge structure abnormal pattern detection method based on Lasso regression can effectively monitor changes in the state of the bridge, and the residual dispersion of the model established on the annual cycle scale is relatively smaller than that on the daily cycle scale, resulting in better abnormal detection performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
263. Are China's Renewable Energy Products Competitive in the Context of RCEP?
- Author
-
Qing Guo and Jingyao Wen
- Subjects
RENEWABLE energy industry ,MARKET share ,ENERGY industries ,DATA analysis - Abstract
In recent years, in order to address climate change and energy depletion, countries around the world have been constantly promoting energy transformation and structural upgrading. The entry into force of RCEP has broadened the international East Asian new energy market. The development of renewable energy trade in the context of RCEP has attracted significant economic, environmental, and technical attention. Against the background of RCEP, this study uses the constant market share (CMS) model and the weighted revealed comparative advantage (IRCA) index to assess the export competitiveness of renewable energy products from 2006 to 2021. The data in this paper were obtained from the UN Comtrade database according to the categorical statistics of HS codes. The results show that: (1) the overall competitiveness of Chinese renewable energy products shows an upward trend; (2) the comparative advantages of Chinese renewable energy products are strong, with some differences among industries; (3) the growth effect is the main reason for the fluctuation of Chinese renewable energy product exports; and (4) the signing of RCEP has injected new vitality into Chinese renewable energy trade. Finally, based on these conclusions, the paper puts forward corresponding policy proposals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
264. SHEWHART CONTROL CHARTS -- AN IRREPLACEABLE TOOL OF EXPLANATORY DATA ANALYSIS WITH UNDERESTIMATED POTENTIAL.
- Author
-
Shper, Vladimir, Sheremetyeva, Svetlana, Smelov, Vladimir, and Khunuzidi, Elena
- Subjects
MATHEMATICAL statistics ,STATISTICAL process control ,STATISTICS ,QUALITY control charts ,DATA analysis - Abstract
This paper discusses issues related to the gap between the traditional theory of control charts and the real problems practitioners encounter. We consider both the general reasons for this discrepancy and various examples of misunderstanding. The trend to develop statistics as a mathematical branch of science in the area of statistical process control has led to (i) ignoring many real complexities; and (ii) creating many new types of charts that rarely help practitioners improve their processes. We offer some practical advice, such as introducing two types of assignable causes of variation (internal and external); rejecting the traditional assumption that repetitive measurements are always normally distributed; and a simple, practically convenient technique for calculating control chart limits for highly non-normal data. As the main direction for future efforts, we propose starting a discussion about introducing Shewhart control charts into school curricula. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
265. Predicting Student Performance in Online Learning: A Multidimensional Time-Series Data Analysis Approach.
- Author
-
Shou, Zhaoyu, Xie, Mingquan, Mo, Jianwen, and Zhang, Huibing
- Subjects
ONLINE education ,AT-risk students ,DATA analysis ,DEEP learning ,COGNITIVE styles ,TEACHING methods ,TIME series analysis - Abstract
As an emerging teaching method, online learning is becoming increasingly popular among learners. However, one of the major drawbacks of this learning style is the lack of effective communication and feedback, which can lead to a higher risk of students failing or dropping out. In response to this challenge, this paper proposes a student performance prediction model based on multidimensional time-series data analysis, considering multidimensional data such as students' learning behaviors, assessment scores, and demographic information. The model extracts the characteristics of students' learning behaviors and captures the connections between multiple characteristics to better explore the impact of multiple factors on students' performance. The model helps teachers individualize education for students at different levels of proficiency and identifies at-risk students as early as possible so that teachers can intervene in a timely manner. In experiments on the Open University Learning Analytics Dataset (OULAD), the model achieved 74% accuracy and a 73% F1 score in a four-category prediction task, and 99.08% accuracy and a 99.08% F1 score in an early risk prediction task. The model outperforms the benchmark model in both multi-class prediction and early prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
266. Reflexive Content Analysis: An Approach to Qualitative Data Analysis, Reduction, and Description.
- Author
-
Nicmanis, Mitchell
- Subjects
CONTENT analysis ,DATA analysis ,REFLEXIVITY - Abstract
Content analysis, initially a quantitative technique for identifying patterns in qualitative data, has evolved into a widely used qualitative method. However, this evolution has resulted in a confusing array of differing qualitative content analysis approaches that lack clear distinction from other methods. To address these issues, this paper introduces reflexive content analysis (RCA), a transtheoretical and flexible researcher-oriented method for the description and reduction of manifest qualitative data. RCA is used to identify patterns in the overt surface meanings of qualitative data through a hierarchical structure of quantifiable analytical strata called codes, subcategories, and categories. Each stratum exists on a continuum of abstraction, with codes being closest to the original data and categories being the most abstract. During each stage of the RCA process, reflexivity is regarded as a valuable analytical resource that is crucial for ensuring adequate description of the data. RCA is intended to be used as a method for data analysis, not a methodology, and can therefore be integrated with various methodological and epistemological approaches. This paper provides an introductory guide to conducting RCA. It first presents an overview of existing challenges in qualitative content analysis methods, followed by a rationale for the development of RCA. Then, the foundational principles of RCA and key concepts that support the method are discussed. The paper culminates by outlining the process for conducting an inductive RCA within a qualitative framework, using a previous application of the method as a reference point. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
267. Review of Ground Penetrating Radar Applications for Bridge Infrastructures.
- Author
-
Boldrin, Paola, Fornasari, Giacomo, and Rizzo, Enzo
- Subjects
GROUND penetrating radar ,SOCIAL development ,REINFORCED concrete ,STRUCTURAL health monitoring ,DATA analysis - Abstract
Infrastructure bridges play a crucial role in fostering economic and social development. However, the adverse effects of natural hazards and weather degradation, coupled with escalating traffic volumes, pose a significant threat. The resultant strain on the structure can lead to undue stress, elevating the risk of a critical asset failure. Hence, non-destructive testing (NDT) has become indispensable in the surveillance of bridge infrastructure. Its primary objectives include ensuring safety, optimizing structural integrity, minimizing repair costs, and extending the lifespan of bridges. NDT techniques can be applied to both existing and newly constructed bridge structures. However, it is crucial to recognize that each NDT method comes with its own set of advantages and limitations tailored to specific tasks; no single method can provide an effective and unequivocal diagnosis on its own. Among the various NDT methods, Ground Penetrating Radar (GPR) has emerged as one of the most widely employed techniques for monitoring bridges, offering an efficient and timely assessment of the structural conditions of infrastructure. In fact, recent technical regulations now mandate the use of GPR for bridge monitoring and characterization, underscoring its significance in ensuring the structural health and longevity of these critical infrastructures. Recognizing the pivotal role of NDT in this context, this paper aims to elucidate recent scientific work on the application of GPR to bridge engineering structures. The exploration commences with a focus on studies conducted both at the model level within laboratory settings and on real cases. Subsequently, the discussion extends to the characterization and monitoring of the bridge's main elements: slab, beam, and pillar.
By delving into these scientific experiences, this paper intends to provide valuable insights into the efficacy and applicability of GPR in assessing and ensuring the structural integrity of bridges. The paper provides a concise survey of the existing literature on the application of GPR to the assessment of bridges and viaducts constructed with masonry and reinforced concrete, considering journal articles and proceedings available in open databases. Various approaches employed in both laboratory and field settings are explored and juxtaposed. Additionally, the paper discusses novel processing and visualization approaches, shedding light on advances in techniques for interpreting GPR data in the context of bridge and viaduct evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
268. BYSTANDING INTERVIEW WITH THE VOICELESS: A MIXED METHOD APPROACH TO FIELDWORK IN CHINESE SENSITIVE SURVEY.
- Author
-
Zhaoyin CHU, Siling DONG, and Jingwen YANG
- Subjects
NUCLEAR power plants ,RURAL population ,PUBLIC opinion ,DATA analysis - Abstract
Conducting public opinion surveys on sensitive topics like NIMBY ("Not in My Backyard") facilities among China's rural population (often voiceless) is challenging. To advance relevant research, a new mixed-method approach is proposed in this paper based on fieldwork in Huizhou, China, concerning the T Nuclear Power Plant. The approach combines the paper-and-pencil interviewer-administered questionnaires (PAPIAQ) and bystanding interviews (BI). While PAPIAQ prioritizes researcher neutrality, potentially creating a barrier for less educated respondents, BI fosters deeper interaction and clarifies responses. This strengthens both data reliability (through PAPIAQ illustration) and BI text validity (through deeper understanding). Coded BI texts and statistically analysed PAPIAQ data (via elaboration procedure) ultimately generate a more valid and reliable understanding of rural sentiments on sensitive issues. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
269. Integration Approaches for Heterogeneous Big Data: A Survey.
- Author
-
Alma'aitah, Wafa' Za'al, Quraan, Addy, AL-Aswadi, Fatima N., Alkhawaldeh, Rami S., Alazab, Moutaz, and Awajan, Albara
- Subjects
BIG data ,DATA warehousing ,DATA integration ,ORGANIZATIONAL goals ,RESEARCH personnel ,DATA analysis - Abstract
Modern organizations are currently wrestling with strenuous challenges relating to the management of heterogeneous big data, which combines data from various sources and varies in type, format, and content. The heterogeneity of the data makes it difficult to analyze and integrate. This paper presents big data warehousing and federation as viable approaches for handling big data complexity. It discusses their respective advantages and disadvantages as strategies for integrating, managing, and analyzing heterogeneous big data. Data integration is crucial for organizations to make use of organizational data, and organizations have to weigh the benefits and drawbacks of both data integration approaches to identify the one that responds to their needs and objectives. This paper also presents an analysis of these two data integration approaches and identifies challenges associated with the selection of either approach. A thorough understanding of the merits and demerits of these two approaches is crucial for practitioners, researchers, and decision-makers to select the approach that enables them to handle complex data, boost their decision-making process, and best align with their needs and expectations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
270. Exploration of Project Cost Management Challenges Based on Process Approach.
- Author
-
Titarenko, Boris and Titarenko, Roman
- Subjects
COST control ,PROJECT management ,BUDGET ,OPEN-ended questions ,DATA analysis - Abstract
Effective cost management is necessary to implement projects successfully. However, in practice, organizations may face unique challenges that need to be addressed. To that end, the paper explores project cost management challenges, using survey responses from 51 managerial/senior-level staff to open-ended questions on project cost management processes. Data analysis, supported by a process approach, was conducted using systematic coding. The findings reveal that managing projects creates challenges connected to the project cost management processes of (1) cost management planning, (2) cost estimation, (3) cost budgeting, and (4) cost control. The research allowed for the discovery of a set of project cost management challenges, the definition of their causes and effects, and the development of possible countermeasures. Implications and recommendations for future research are given. [ABSTRACT FROM AUTHOR]
- Published
- 2024
271. Intelligent Analysis of Import and Export in Green Trade Barrier Based on Big Data Analysis.
- Author
-
Mao, Yu and Lu, Shan
- Subjects
TRADE regulation ,PARTICLE swarm optimization ,SUSTAINABLE development ,DATA analysis ,ECONOMIC development ,BIG data - Abstract
With the rapid development of economic globalisation, global economic and trade activities are escalating. However, environmental problems, and the emergence of the green economy in response to them, have led to the widespread introduction of green trade barriers. These barriers implicitly limit the development of trade activities. This paper focuses on the export difficulties caused by green trade barriers and proposes a method to quantify discrete product characteristics, explore the internal characteristics of commodities, and decide optimally on intended export regions. Firstly, the discrete features of products are quantified by a quantitative transformation method. Secondly, the quantified data are used to derive the best decision on export regions through the support vector regression (SVR) method. Particle swarm optimisation is used to optimise the SVR parameters to achieve high-precision decision making. Comparison with historical data from the industry park shows the identification accuracy of the optimised SVR model to be better than that of the traditional regression model. This finding presents a novel perspective for developing imports and exports against the background of green trade barriers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
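The abstract above describes tuning SVR hyperparameters with particle swarm optimisation (PSO). As a rough illustration of the optimisation step only, here is a minimal pure-Python PSO minimising a stand-in objective; in the paper's setting the objective would be the cross-validation error of an SVR over its hyperparameters (`toy_loss`, the bounds, and all parameter values below are our assumptions, not the authors' configuration):

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser; returns (best_position, best_value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in for the real objective (hypothetical): in the paper's setting this
# would be the validation error of an SVR as a function of (C, gamma).
def toy_loss(p):
    c, gamma = p
    return (c - 10.0) ** 2 + (gamma - 0.5) ** 2

best, val = pso_minimize(toy_loss, bounds=[(0.1, 100.0), (0.001, 1.0)])
```

On this convex toy surface the swarm should settle close to the known optimum (10, 0.5); swapping `toy_loss` for an SVR validation score reproduces the general shape of the tuning loop the abstract describes.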
272. The impact of the enterprise financial risk management function on financial performance in Bosnia and Herzegovina.
- Author
-
Abdić, Adem, Rovčanin, Adnan, and Abdić, Ademir
- Subjects
FINANCIAL risk management ,FINANCIAL performance ,INDUSTRIAL management ,STATISTICAL sampling ,DATA analysis - Abstract
Adequate enterprise financial risk management (EFRM) represents a leading competitive advantage of enterprises, one that determines market survival and business success in an uncertain global environment. Over time, EFRM has become a constituent part of the integral business dealings of enterprises and one of the strategic functions of enterprise management. The main purpose of the paper is to explore the effects of the EFRM function/system on the financial performance of enterprises in Bosnia and Herzegovina (BiH). The primary data for the research were collected by means of a structured questionnaire. The target population consists of large enterprises that operated continuously in the territory of BiH (2013-2017). The enterprises were selected by random sampling, yielding a sample of 72 enterprises. Appropriate descriptive and inferential statistical methods were used in the data analysis, and panel data analysis was used to assess the effects of the EFRM function on financial performance. The scientific contribution of the paper lies in the fact that the research is the first for Bosnia and Herzegovina to analyse the effects of the EFRM function on enterprise financial performance (EFP). The results show that there are no systematic, statistically significant differences between large enterprises that engage in risk management ('hedgers') and enterprises that do not ('non-hedgers') in BiH. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
273. From training to practice: long-term perspectives of graphic facilitation used in organisations.
- Author
-
Hautopp, Heidi and Ørngreen, Rikke
- Subjects
GRAPHIC arts ,ORGANIZATION management ,EMPLOYEE attitudes ,KNOWLEDGE management ,DATA analysis - Abstract
Graphic facilitation is a growing international practice, often used to describe what professionals do when visually facilitating group processes. Although the professional arena has grown, there is a lack of empirical research in the field, especially regarding long-term perspectives on applying the practice in organisations. This paper investigates employees' experiences and competence development over time within graphic facilitation. The study followed three employees, first in a 2-day basic graphic facilitation course, then in follow-up interviews eight months and two years after completing the course. The empirical data were analysed on the basis of a literature review on long-term perspectives, focusing on three themes: 1. the graphic facilitation practice at the individual, group, and organisational levels; 2. contextual knowledge and knowing about the participants; 3. the relation between objects, processes, and the competencies needed. The findings show that all three employees, from different organisations, continued to use graphic facilitation and found it valuable for giving new insights into and overviews of processes and tasks. The methods aided in creating common ground and goals. The employees activated their contextual organisational knowledge to aid the process and found that being sensitive to various groups' needs and personal preferences was effective when applying the methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
274. Enhancing Learning Analytics in Open-Source Software Mailing Archives using Machine Learning and Process Discovery Techniques.
- Author
-
Mukala, Patrick and Ullah, Obaid
- Subjects
DATA analysis ,MACHINE learning ,MINING software ,NATURAL language processing ,INSTITUTIONAL repositories - Abstract
Existing evidence indicates that Free/Libre Open-Source Software (FLOSS) ecosystems offer extensive learning opportunities. Community members actively participate in various activities, both during their interactions with peers and while utilizing these environments. Given that FLOSS repositories contain valuable data on participant interactions and activities, our study focuses on analyzing knowledge exchange and interactions within emails to track learning activities across different phases of the learning process, with a focus on the first phase (Initiation). In this paper, we leverage Natural Language Processing (NLP) and Machine Learning (ML) techniques within a process mining framework. Specifically, we employ NLP techniques to analyze the contents of emails and messages exchanged in these FLOSS repositories to generate event logs for the purpose of modeling learning patterns. Subsequently, we construct corresponding event logs, which serve as input to Disco, the process mining tool, for learning process discovery in these environments. The output comprises visual workflow nets that we interpret as representations of learning activity traces within FLOSS, capturing their sequential occurrences. To enhance the understanding of these models, we incorporate additional statistical details for contextualization and description. This approach enables a nuanced exploration of learning dynamics within FLOSS environments, emphasizing the role of NLP and ML in uncovering valuable insights on how FLOSS participants acquire and exchange knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
275. LEARNING ANALYTICS VISUALISATIONS OF SCHOOL DROPOUT IN VOCATIONAL EDUCATION.
- Author
-
Samaras, Christos, Mavroudi, Anna, and Verykios, Vassilios S.
- Subjects
SCHOOL dropouts ,VOCATIONAL education ,SCHOOL absenteeism ,SCHOOL year ,DATA analysis - Abstract
Vocational Education and Training (VET) suffers from an increasing rate of student dropout in the school sector, yet it is largely neglected in learning analytics (LA) research. At the same time, school dropout rates are rising consistently worldwide, and researchers are trying to better understand this phenomenon. Some concentrate on its relation to chronic absenteeism from school and to students' socioeconomic background. In addition, other contextual factors can come into play to explain this complex and challenging social phenomenon. Research also shows that the use of LA could contribute to alleviating school dropout. The main purpose of this paper is to present an LA system that visualizes student dropout in the VET school context, together with the reasons behind it. The paper describes both the backend and the frontend of the system. Regarding the former, it focuses on the LA system's design and architecture. Regarding the latter, it presents a set of interactive LA visualisations provided by the system based on existing student data on school dropout. In particular, the paper demonstrates the LA visualisations via a case study in a large VET school in Greece, based on student data gathered over four consecutive school years. The LA visualisations show students' dropouts across years in the school unit, the trajectories of students' dropouts in a specific school year, and the degree of association between a geographical area (such as the area of student residence) and chronic absenteeism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
276. Research Methods: Is Agile Different?
- Author
-
Gromova, Elizaveta and Tomé, Eduardo
- Subjects
RESEARCH methodology ,LITERATURE reviews ,DATA analysis ,RESEARCH ,AGILE manufacturing systems - Abstract
The fourth industrial revolution was a radically new round in the evolution of many processes in society, including management processes. The digital revolution has provoked the development of new management models and concepts. Agile manufacturing is one such managerial production concept, meeting the modern challenges and requirements of the business environment. The literature review as a research method plays an important role in these studies. This paper presents a systematic literature review of papers on agile manufacturing published in the SCOPUS and Web of Science databases since 2000. Specifically, we aim to analyse the research methods used on this very popular topic in economics and management. We believe that by identifying the methods used in the agile field we may understand the nature of the research. Agile manufacturing methods are meant to be at the forefront in terms of efficiency, but in this paper we review research methods to check how the research itself has been conducted. We therefore believe this research is useful for scientists and practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
277. Into the Future with Cloud: A Comparison with On-premises Data Warehouse.
- Author
-
Noor, Iman, Tariq, Saad Bin, Shabbir, Aisha, and Aksa, Mary
- Subjects
CLOUD computing ,WAREHOUSE management ,DATA analysis ,COST analysis ,DATA recovery - Abstract
The need for data is growing at an extremely steep rate in the ever-digital realm, where terms like "big data" are becoming a thing of the past. All this development requires modern and advanced data handling techniques, with which users and researchers can analyze and predict vast amounts of data efficiently. Data warehouses are centralized repositories of data used for business intelligence activities such as analysis and reporting. This paper places a comparative emphasis on two different types of data warehouse: on-premises and cloud. On-premises warehouses are physically housed inside an organization's infrastructure, while cloud data warehouses are online-accessible repositories for data stored on cloud platforms. The paper provides a comparative analysis of both types in terms of deployment, scalability, flexibility, query management, cost analysis, access and integration, data security, data storage, data recovery, self-service capabilities and, not least, speed and performance. It further highlights the evolution of data warehouses to the cloud and accentuates the growing demand for an efficient data warehouse, due to the amplification of the volume, velocity, variety, value, and veracity of incoming data in all realms. Furthermore, it provides an in-depth analysis of the advantages of the most suitable data warehouse and discusses the limitations of both. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
278. Tax Heterogeneity and Misallocation.
- Author
-
Kaymak, Barış and Schott, Immo
- Subjects
BUSINESS tax ,TAX rates ,INCOME distribution ,FINANCIAL statements ,DATA analysis - Abstract
Companies face different effective marginal tax rates on their income. This can be detrimental to allocative efficiency unless taxes offset other distortions in the economy. This paper estimates the effect of tax rate heterogeneity on aggregate productivity in distorted economies with multiple frictions. Using firm-level balance-sheet data and estimates of marginal tax rates, we find that tax heterogeneity reduces total factor productivity by about 3 percent. Our findings highlight the positive correlation between marginal tax rates and other distortions to capital and especially labor. This implies that tax rate heterogeneity exacerbates the distortionary effects of other frictions in the economy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
279. Quantitative and qualitative similarity measure for data clustering analysis.
- Author
-
AlShaqsi, Jamil, Wang, Wenjia, Drogham, Osama, and Alkhawaldeh, Rami S.
- Subjects
CLUSTER analysis (Statistics) ,EUCLIDEAN distance ,DATA analysis ,K-means clustering ,HAMMING distance - Abstract
This paper introduces a novel similarity function, named QQ-Means (Qualitative and Quantitative-Means), that evaluates both the quantitative and qualitative similarities between data instances. The values are naturally scaled to fall within the range of −1 to 1: the magnitude signifies the extent of quantitative similarity, while the sign denotes qualitative similarity. The effectiveness of QQ-Means for cluster analysis is tested by incorporating it into the K-means clustering algorithm. We compare the results of the proposed measure with commonly used distance or similarity measures such as Euclidean distance, Hamming distance, mutual information, Manhattan distance, and Chebyshev distance. These measures are also applied to the classic K-means algorithm or its variations to ensure consistency in the experimental procedure and conditions. The QQ-Means similarity metric was evaluated on gene-expression datasets and real-world complex datasets. The experimental findings demonstrate the effectiveness of the novel similarity measurement method in extracting valuable information from the data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
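The abstract above gives the range and the sign/magnitude interpretation of QQ-Means but not its formula. Purely as an illustration of a similarity that behaves this way (sign carries qualitative agreement, magnitude carries quantitative closeness, values in [−1, 1]), one hypothetical construction might be:

```python
import math

def qq_similarity(x, y, ref):
    """Toy QQ-style similarity in [-1, 1] (a hypothetical construction,
    not the paper's actual formula).

    Qualitative part (sign): +1 when x and y deviate from the reference
    point `ref` (e.g. the dataset mean) in the same direction on a majority
    of features, otherwise -1.
    Quantitative part (magnitude): decays with Euclidean distance, so
    identical instances score 1 in absolute value.
    """
    agree = sum(1 for xi, yi, ri in zip(x, y, ref) if (xi - ri) * (yi - ri) >= 0)
    sign = 1.0 if 2 * agree >= len(x) else -1.0
    dist = math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))
    return sign / (1.0 + dist)

ref = [0.0, 0.0, 0.0]
s_same = qq_similarity([1.0, 2.0, 1.0], [1.0, 2.0, 1.0], ref)    # -> 1.0
s_opp = qq_similarity([1.0, 2.0, 1.0], [-1.0, -2.0, -1.0], ref)  # negative
```

Plugging such a function into K-means in place of Euclidean distance (maximising similarity rather than minimising distance) mirrors the experimental setup the abstract describes.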
280. A Systematic Review of Sophisticated Predictive and Prescriptive Analytics in Child Welfare: Accuracy, Equity, and Bias.
- Author
-
Hall, Seventy F., Sage, Melanie, Scott, Carol F., and Joseph, Kenneth
- Subjects
CHILD welfare ,RISK assessment ,SOCIAL workers ,PREDICTION models ,DATA analysis ,RESEARCH funding ,DATA analytics ,DECISION making ,DESCRIPTIVE statistics ,MANN Whitney U Test ,SYSTEMATIC reviews ,STATISTICS ,MACHINE learning ,ALGORITHMS - Abstract
Child welfare agencies increasingly use machine learning models to predict outcomes and inform decisions. These tools are intended to increase accuracy and fairness but can also amplify bias. This systematic review explores how researchers addressed ethics, equity, bias, and model performance in their design and evaluation of predictive and prescriptive algorithms in child welfare. We searched EBSCO databases, Google Scholar, and reference lists for journal articles, conference papers, dissertations, and book chapters published between January 2010 and March 2020. Sources had to report on the use of algorithms to predict child welfare-related outcomes and either suggest prescriptive responses or apply their models to decision-making contexts. We calculated descriptive statistics and conducted Mann-Whitney U tests and Spearman's rank correlations to summarize and synthesize findings. Of 15 articles, fewer than half considered ethics, equity, or bias or engaged participatory design principles as part of model development/evaluation. Only one-third involved cross-disciplinary teams. Model performance was positively associated with the number of algorithms tested and the sample size. No other statistical tests were significant. Interest in algorithmic decision-making in child welfare is growing, yet there remains no gold standard for ameliorating bias, inequity, and other ethics concerns. Our review demonstrates that these efforts are not reported consistently in the literature and that a uniform reporting protocol may be needed to guide research. In the meantime, computer scientists might collaborate with content experts and stakeholders to ensure they account for the practical implications of using algorithms in child welfare settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
281. Effect of Number of Exploration Criteria in Data-driven Mineral Potential Mapping Approaches.
- Author
-
Agah, A., Ghadirisufi, E., and Yousefi, M.
- Subjects
MINERALIZATION ,ANALYTICAL geochemistry ,COPPER ,PREDICTION models ,DATA analysis - Abstract
Copyright of International Journal of Engineering Transactions C: Aspects is the property of International Journal of Engineering (IJE) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
282. A parsimonious Bayesian predictive model for forecasting new reported cases of West Nile disease.
- Author
-
Hosseini, Saman, Cohnstaedt, Lee W., Humphreys, John M., and Scoglio, Caterina
- Subjects
WEST Nile fever ,LOGISTIC distribution (Probability) ,PROBABILITY density function ,DATA analysis ,ACCURACY - Abstract
In researching predictive models of West Nile virus disease, one discovers that most models involve numerous parameters and extensive information, contributing to unnecessary complexity. Another frequently encountered challenge is the lead time, the period for which predictions are made, which is often too short. This paper addresses these issues by introducing a parsimonious method based on ICC curves, offering a logistic distribution model derived from the vector-borne SEIR model. Unlike existing models relying on diverse environmental data, our approach exclusively utilizes historical and present infected human cases (the number of new cases). With a year-long lead time, the predictions extend throughout the 12 months, gaining precision as new data emerge. Theoretical conditions are derived to minimize Bayesian loss, enhancing predictive precision. We construct a Bayesian forecasting probability density function using carefully selected prior distributions. Applying these functions, we predict month-specific infections nationwide, rigorously evaluating accuracy with probabilistic metrics. Additionally, HPD credible intervals at the 90%, 95%, and 99% levels are computed. Precision assessment is conducted for the HPD intervals, measuring the proportion of intervals that do not include the actual reported cases for 2020-2022. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
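The abstract above sketches a logistic model for within-season case timing. The following toy calculation illustrates only the general idea of spreading an annual case total over months with logistic CDF increments; the function names and the parameters `mu` and `s` are hypothetical, and the paper's actual model (derived from a vector-borne SEIR model, with Bayesian priors) is not reproduced here:

```python
import math

def logistic_cdf(t, mu, s):
    """CDF of the logistic distribution with location mu and scale s."""
    return 1.0 / (1.0 + math.exp(-(t - mu) / s))

def monthly_cases(total, mu, s):
    """Spread an annual case total over months 1..12 using logistic increments."""
    return [total * (logistic_cdf(m, mu, s) - logistic_cdf(m - 1, mu, s))
            for m in range(1, 13)]

# mu and s are hypothetical season-timing parameters, not fitted values
cases = monthly_cases(total=1000, mu=7.6, s=0.8)
```

As new monthly counts arrive, `total`, `mu`, and `s` could be re-estimated, which is the sense in which such predictions "gain precision as new data emerge."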
283. Vibration measurements of paper prints and the data analysis.
- Author
-
Kilikevicius, Arturas, Jurevicius, Mindaugas, Urbanavicius, Robertas, Turla, Vytautas, Kilikeviciene, Kristina, and Fursenko, Antanas
- Subjects
PAPER industry ,DATA analysis ,ANALYSIS of covariance ,PRINTING industry ,QUANTIZATION (Physics) - Abstract
This paper discusses the scatter of the intensity of vibration signals of paper prints and analyses their mechanical parameters by applying the theory of covariance functions. An important practical problem, before starting the printing process for colour prints, is to adjust the paper sheet tension between printing machine sections so that fixed raster points are correctly positioned. The results of measuring the intensity of vibration signals at the fixed points were presented on a time scale in the form of arrays (matrices). Estimates of cross-covariance functions between digital arrays yield measures of vibration intensity, and estimates of auto-covariance functions of single arrays were calculated while changing the quantization interval on the time scale. Applying normed auto-covariance and cross-covariance functions enables a reduction of pre-printing experimental measurements, which saves time (a real concern for industry). The tension force depends on the mechanical properties of the paper sheet and print; these characteristics depend on the paper type, the layers of printing colours, and the positioning of the coverage. For the calculations, the software Matlab 7 was used in a batch-statement environment.
- Published
- 2020
- Full Text
- View/download PDF
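The abstract above relies on normed auto- and cross-covariance estimates of vibration signal arrays (computed there in Matlab 7). A minimal pure-Python sketch of such an estimator, under our own choice of normalization (the paper's exact estimator is not given), might look like:

```python
import math

def norm_cross_cov(x, y, max_lag):
    """Normed cross-covariance estimates of two equal-length signals for
    lags 0..max_lag; with y = x this is the normed auto-covariance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    out = []
    for lag in range(max_lag + 1):
        c = sum((x[i] - mx) * (y[i + lag] - my) for i in range(n - lag)) / (n - lag)
        out.append(c / (sx * sy))
    return out

# Toy stand-in for a measured vibration record
signal = [math.sin(0.3 * i) for i in range(200)]
auto = norm_cross_cov(signal, signal, max_lag=10)  # auto[0] is 1.0 up to rounding
```

Varying the sampling step of `signal` before estimation corresponds to changing the quantization interval on the time scale, as described in the abstract.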
284. THE FIVE STAGES OF BUSINESS ANALYTICS.
- Author
-
WOLNIAK, Radosław and GREBSKI, Wies
- Subjects
BUSINESS analytics ,PROCESS capability ,POLISH literature ,LEGAL literature - Abstract
Purpose: The goal of the paper is to analyze the main features, benefits and problems of business analytics usage. Design/methodology/approach: Critical literature analysis; analysis of international literature from major databases, and of Polish literature and legal acts connected with the researched topic. Findings: The paper explores the main concepts of business analytics, including descriptive, real-time, diagnostic, predictive, and prescriptive analytics. Each stage of development builds upon the previous one, addressing specific needs in data analysis and decision-making. The paper also presents a detailed comparison of the five types of business analytics, showcasing their unique characteristics, techniques, and applications. Understanding these differences helps organizations select the analytics type appropriate to their requirements and drive success. As technology and data processing capabilities advance, business analytics continues to evolve. Embracing the power of data and analytics grants organizations a competitive advantage, unlocking opportunities and driving innovation. Integrating analytics into decision-making processes is essential for thriving in a data-driven world, ensuring sustained growth and success in an ever-changing marketplace. Originality/value: A detailed analysis of all subjects related to the problems connected with prescriptive analytics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
285. Using qualitative study designs to understand treatment burden and capacity for self-care among patients with HIV/NCD multimorbidity in South Africa: A methods paper.
- Author
-
van Pinxteren, Myrna, Mbokazi, Nonzuzo, Murphy, Katherine, Mair, Frances S., May, Carl, and Levitt, Naomi S.
- Subjects
NON-communicable diseases ,EXPERIMENTAL design ,EVALUATION of medical care ,RESEARCH ,CAREGIVERS ,MIDDLE-income countries ,RESEARCH methodology ,BURDEN of care ,DISEASES ,INTERVIEWING ,PATIENTS ,QUALITATIVE research ,COMPARATIVE studies ,CONCEPTUAL structures ,LOW-income countries ,DECISION making ,RESEARCH funding ,EPIDEMICS ,DATA analysis ,JUDGMENT sampling ,HEALTH self-care ,HIV ,MEDICAL research ,EVALUATION - Abstract
Background: Low- and middle-income countries (LMICs), including South Africa, are currently experiencing multiple epidemics: HIV and the rising burden of non-communicable diseases (NCDs), leading to different patterns of multimorbidity (the occurrence of two or more chronic conditions) than those experienced in high-income settings. These adversely affect health outcomes, increase patients' perceived burden of treatment, and impact the workload of self-management. This paper outlines the methods used in a qualitative study exploring the burden of treatment among people living with HIV/NCD multimorbidity in South Africa. Methods: We undertook a comparative qualitative study to examine the interaction between individuals' treatment burden (self-management workload) and their capacity to take on this workload, using the dual lenses of Burden of Treatment Theory (BoTT) and the Cumulative Complexity Model (CuCoM) to aid conceptualisation of the data. We interviewed 30 people with multimorbidity and 16 carers in the rural Eastern Cape and urban Cape Town between February and April 2021. Data were analysed through framework analysis. Findings: This paper discusses the methodological procedures considered when conducting qualitative research among people with multimorbidity in low-income settings in South Africa. We highlight the decisions made when developing the research design, recruiting participants, and selecting field sites. We also explore the data analysis processes and reflect on the positionality of the research project and researchers. Conclusion: This paper illustrates the decision-making processes involved in conducting this qualitative research and may be helpful in informing future research aiming to qualitatively investigate treatment burden among patients in LMICs. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
286. Global scientific collaboration: A social network analysis and data mining of the co-authorship networks.
- Author
-
Isfandyari-Moghaddam, Alireza, Saberi, Mohammad Karim, Tahmasebi-Limoni, Safieh, Mohammadian, Sajjad, and Naderbeigi, Farahnaz
- Subjects
SOCIAL network analysis ,COOPERATIVE research ,DATA analysis ,DATA mining ,SCIENCE databases ,WEB databases ,COUNTRIES ,HIGH-income countries - Abstract
Co-authorship networks consist of nodes and numerous links indicating the scientific collaboration of researchers. These networks can be studied through social network analysis and data mining techniques. The focus of the article is twofold: the first objective is the analysis of the co-authorship networks of the top 60 countries with the highest number of scientific publications in the world, and the second is the discovery of collaboration patterns in the highly cited papers of these countries. To this end, all scientific publications of the top 60 countries in all fields, as well as their highly cited papers, were included for the study period between 2011 and 2015. The research samples comprised 10,460,999 documents in the first part and 711,025 highly cited papers in the second. The required data were extracted from the Web of Science database. To analyse the co-authorship networks, centrality indices and the clustering coefficient were used. The UCINET, Pajek, VOSviewer and BibExcel software packages were used to map the countries' co-authorship networks and to calculate the indices. Finally, the discovery of collaboration patterns in highly cited papers was studied through association rules. The data indicated that over 95% of documents have been produced by the top 60 countries. In addition, the USA, Germany, England, France and Spain initiated the most co-authorships. Quantitatively, the most powerful collaboration links were between China and the USA, the USA and England, the USA and Germany, and the USA and Canada. The clustering data indicated that the collaborations of the top countries fell into three main clusters. The Friedman test showed a significant difference in the countries' priorities for collaboration; the USA, China, England, Germany, France, Japan and Italy are the top priorities for collaboration, respectively.
The analysis of collaboration patterns in highly cited papers indicated that the USA participates in more than half of the collaboration patterns producing highly cited papers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
287. Digital Analysis of Occlusion in Fixed Partial Implant Prostheses: How to Overcome Age-Related Changes in the Stomatognathic System.
- Author
-
Dib Zakkour, Juan, Dib Zakkour, Sara, Montero, Javier, García-Cenador, Begoña, Flores-Fraile, Javier, and Dib Zaitun, Abraham
- Subjects
DENTAL implants ,COMPUTER software ,STATISTICS ,KRUSKAL-Wallis Test ,DENTAL offices ,ONE-way analysis of variance ,BRIDGES (Dentistry) ,PATIENT satisfaction ,FISHER exact test ,DENTAL occlusion ,COMPARATIVE studies ,RANDOMIZED controlled trials ,SEX distribution ,T-test (Statistics) ,PEARSON correlation (Statistics) ,STOMATOGNATHIC system ,AGING ,MASTICATION ,BLIND experiment ,CHI-squared test ,STATISTICAL sampling ,ELECTROMYOGRAPHY ,DATA analysis ,DATA analysis software ,PERIODONTAL ligament - Abstract
Due to their lack of periodontal ligaments (PLs) and the differences between dental implants and natural teeth, it is necessary to develop a new occlusal scheme to prolong the life of implants and prostheses. The age and sex of patients must be considered because of their effects on the physiology of the stomatognathic system. Operators must manage all these changes to obtain good sensations during mastication and a better occlusal scheme for fixed partial implant prostheses. Dentists should try to protect this type of prosthesis using the adjacent teeth and the PL. This is why new digital systems were created. The combination of T-Scan® (digital software for occlusal analysis) and electromyography (EMG) could allow doctors to find areas where action is needed and to find suitable solutions for the problems generated by conventional methods of occlusal analysis (such as articulating paper). In this study, a new method for establishing occlusion on fixed partial implant prostheses was created, combining digital systems with conventional articulating paper. The method consists of asking the patient to bite down with different forces and in different situations in an attempt to achieve Implant-Protected Occlusion (IPO). The use of digital systems was shown to be more effective than using conventional systems alone. This new method allows a safer mode of occlusion which protects implants and prostheses, accounting for all the differences between them and natural teeth, and increasing patient satisfaction. It also helps to overcome the changes in the stomatognathic system as age increases, adjusting the occlusion to age-related changes in the PLs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
288. Refugee education: homogenized policy provisions and overlooked factors of disadvantage.
- Author
-
Molla, Tebeje
- Subjects
REFUGEES ,HIGHER education ,EDUCATIONAL attainment ,DATA analysis - Abstract
For forcibly displaced people, high educational attainment is economically and socially empowering. Using experiences of African refugee youth in Australia as an empirical case and drawing on the capability approach to social justice, this paper aims to assess the substantiveness of education opportunities of refugees. Qualitative data were generated through policy review and semi-structured interviews. The analysis shows that not only are refugees invisible in equity policies, but educational inequality is also framed homogeneously as a lack of access. The restrictive framing disregards differences in people's ability to convert resources into valuable outcomes. Specifically, the paper identifies four overlooked factors of educational inequality among African refugee youth: early disadvantage, limited navigational capacity, adaptive preferences, and racial stereotypes. Without an expansive view of disadvantage, it is hardly possible to break the link between marginal social position and low educational attainment of refugees. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
289. PEAD.txt: Post-Earnings-Announcement Drift Using Text.
- Author
-
Meursault, Vitaly, Liang, Pierre Jinghong, Routledge, Bryan R., and Scanlon, Madeline Marco
- Subjects
EARNINGS announcements ,MATHEMATICAL models ,DATA analysis ,PUBLIC companies ,DEEP learning ,FINANCIAL statements ,QUARTERLY reports ,STOCK prices - Abstract
We construct a new numerical measure of earnings announcement surprises, standardized unexpected earnings call text (SUE.txt), that does not explicitly incorporate the reported earnings value. SUE.txt generates a text-based post-earnings-announcement drift (PEAD.txt) larger than the classic PEAD. The magnitude of PEAD.txt is considerable even in recent years when the classic PEAD is close to 0. We explore our text-based empirical model to show that the calls' news content is about details behind the earnings number and the fundamentals of the firm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
290. Assisting researchers in bibliographic tasks: A new usable, real‐time tool for analyzing bibliographies.
- Author
-
Dattolo, Antonina and Corbatto, Marco
- Subjects
SEMANTICS ,BIBLIOGRAPHIC databases ,METADATA ,USER interfaces ,TASK performance ,BIBLIOGRAPHY ,DOCUMENTATION ,BIBLIOGRAPHICAL citations ,DESCRIPTIVE statistics ,INFORMATION retrieval ,STATISTICAL models ,DATA analysis - Abstract
The number of scientific papers is growing together with the development of science itself; but, although large citation indexes are available on an unprecedented scale, some daily activities of researchers remain time-consuming and poorly supported. In this paper, we present Visual Bibliographies (VisualBib), a real-time visual platform, designed using a zz-structure-based model for linking metadata and a narrative, visual approach for showing bibliographies. VisualBib is a usable, advanced, and visual tool, which simplifies the management of bibliographies, supports a core set of bibliographic tasks, and helps researchers during complex analyses of scientific bibliographies. We present the variety of metadata formats and visualization methods, proposing two use-case scenarios. The maturity of the system implementation allowed us to run two studies, evaluating both the effectiveness of VisualBib in providing answers to specific data analysis tasks and its support for experienced users during real-life use. The results of the evaluation are positive and describe an effective and usable platform. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
291. Efforts to Patch Ghana's 'Leaky Educational Pipeline' for Promoting Gender Equity in STEM Field of Study: A Position Paper.
- Author
-
Mireku, Dickson Okoree and Dzamesi, Prosper Dzifa
- Subjects
STEM education ,GENDER role ,DATA analysis - Abstract
This position paper aims to highlight some progressive steps by successive Ghanaian governments to patch the leaks in Ghana's educational pipeline for training females for careers in the field of Science, Technology, Engineering and Mathematics (STEM). Documentary analysis techniques were employed to review the literature and follow the line of discussion on the topic. The review found that at the end of British rule in 1957, Ghana adopted various science and technology policies geared towards pushing it into the class of front-runners in modern science and technology. Adopting these policies became necessary after Ghana assessed the pivotal roles that science and technology would play in its economic development agenda. The Gender Parity Index in primary and secondary school enrolments in Ghana between 2011 and 2020 increased from 0.96 to 1.01, indicating that the difference in the rates at which males and females were admitted to STEM programmes narrowed. Through the government of Ghana's interventions, the gender gap was reduced, a situation that supports the authors' position against that of some social critics who hold the view that Ghana is among the countries still struggling to patch 'leaks' in its educational pipeline for promoting gender balance in STEM education. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
292. Addressing depression and comorbid health conditions through solution-focused brief therapy in an integrated care setting: a randomized clinical trial.
- Author
-
Cooper, Zach W, Mowbray, Orion, Ali, Mohammed K., and Johnson, Leslie C. M.
- Subjects
PREVENTION of mental depression ,ANXIETY prevention ,PUBLIC hospitals ,REPEATED measures design ,BODY mass index ,GLYCOSYLATED hemoglobin ,AFRICAN Americans ,T-test (Statistics) ,DATA analysis ,RESEARCH funding ,PRIMARY health care ,STATISTICAL sampling ,QUESTIONNAIRES ,HYPERTENSION ,HISPANIC Americans ,BEHAVIOR ,TREATMENT effectiveness ,BRIEF psychotherapy ,RANDOMIZED controlled trials ,WHITE people ,DESCRIPTIVE statistics ,RURAL health clinics ,MEDICAL records ,ACQUISITION of data ,ANALYSIS of variance ,STATISTICS ,SYSTOLIC blood pressure ,COMORBIDITY ,WELL-being ,OBESITY ,DIABETES - Abstract
Background: Co-occurring physical and mental health conditions are common, but effective and sustainable interventions are needed for primary care settings. Purpose: Our paper analyzes the effectiveness of a Solution-Focused Brief Therapy (SFBT) intervention for treating depression and co-occurring health conditions in primary care. We hypothesized that individuals receiving the SFBT intervention would have statistically significant reductions in depressive and anxiety symptoms, systolic blood pressure (SBP), hemoglobin A1C (HbA1c), and body mass index (BMI) when compared to those in the control group. Additionally, we hypothesized that the SFBT group would have increased well-being scores compared to the control group. Methods: A randomized clinical trial was conducted at a rural federally qualified health center. Eligible participants scored ≥ 10 on the Patient Health Questionnaire (PHQ-9) and met criteria for co-occurring health conditions (hypertension, obesity, diabetes) evidenced by chart review. SFBT participants (n = 40) received three SFBT interventions over three weeks in addition to treatment as usual (TAU). The control group (n = 40) received TAU over three weeks. Measures included depression (PHQ-9) and anxiety (GAD-7), well-being (Human Flourishing Index), and SFBT scores, along with physical health outcomes (blood pressure, body mass index, and hemoglobin A1c). Results: Of 80 consented participants, 69 completed all measures and were included in the final analysis. 80% identified as female and the mean age was 38.1 years (SD = 14.5). Most participants were white (72%) followed by Hispanic (15%) and Black (13%). When compared to TAU, SFBT intervention participants had significantly greater reductions in depression (baseline: M = 18.17, SD = 3.97, outcome: M = 9.71, SD = 3.71) and anxiety (baseline: M = 14.69, SD = 4.9, outcome: M = 8.43, SD = 3.79). 
SFBT intervention participants also had significantly increased well-being scores (baseline: M = 58.37, SD = 16.36, outcome: M = 73.43, SD = 14.70) when compared to TAU. Changes in BMI and blood pressure were not statistically significant. Conclusion: The SFBT intervention demonstrated efficacy in reducing depressive and anxiety symptoms and increasing well-being but did not affect cardio-metabolic parameters over a short period of intervention. Trial Registration: The study was pre-registered at ClinicalTrials.gov Identifier: NCT05838222 on 4/20/2023. *M = Mean, SD = Standard deviation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
293. Efficient Detection of Irrelevant User Reviews Using Machine Learning.
- Author
-
Kim, Cheolgi and Kim, Hyeon Gyu
- Subjects
MACHINE learning ,LANGUAGE models ,TOURIST attractions ,KEYWORD searching ,DATA analysis - Abstract
User reviews such as SNS feeds and blog posts have been widely used to extract opinions, complaints, and requirements about a given place or product from users' perspectives. However, during the process of collecting them, many reviews that are irrelevant to a given search keyword can be included in the results. Such irrelevant reviews may distort the results of data analysis. In this paper, we discuss a method to detect irrelevant user reviews efficiently by combining various oversampling and machine learning algorithms. About 35,000 user reviews collected from 25 restaurants and 33 tourist attractions in Ulsan Metropolitan City, South Korea, were used for learning, where the ratio of irrelevant reviews in the two kinds of data sets was 53.7% and 71.6%, respectively. To deal with skewness in the collected reviews, oversampling algorithms such as SMOTE, Borderline-SMOTE, and ADASYN were used. To build a model for the detection of irrelevant reviews, RNN, LSTM, GRU, and BERT were adopted and compared, as they are known to provide high accuracy in text processing. The performance of the detection models was examined through experiments, and the results showed that the BERT model presented the best performance, with an F1 score of 0.965. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
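The oversampling step described in the entry above can be illustrated with a minimal, dependency-free sketch of the SMOTE idea: generate synthetic minority-class points by interpolating between a sample and one of its nearest minority-class neighbours. The function name and toy data are invented for illustration; a real pipeline would use a library implementation such as imbalanced-learn before training the BERT-style classifier:

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Create n_new synthetic minority samples by interpolating between
    a random minority point and one of its k nearest minority neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: dist(p, q))[:k]
        q = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(p, q)))
    return synthetic

# toy 2-D feature vectors for the under-represented class
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_pts = smote_like(minority, 4)
print(len(new_pts))  # -> 4
```

Because each synthetic point lies on a segment between two existing minority points, the new samples stay inside the minority region rather than duplicating observations, which is the property that makes SMOTE-style oversampling preferable to naive replication for skewed review data.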
294. Iterative Data-adaptive Autoregressive (IDAR) whitening procedure for long and short TR fMRI.
- Author
-
Kun Yue, Webster, Jason, Grabowski, Thomas, Shojaie, Ali, and Jahanian, Hesamoddin
- Subjects
FUNCTIONAL magnetic resonance imaging ,AUTOREGRESSIVE models ,ERROR rates ,DATA analysis - Abstract
Introduction: Functional magnetic resonance imaging (fMRI) has become a fundamental tool for studying brain function. However, the presence of serial correlations in fMRI data complicates data analysis, violates the statistical assumptions of analysis methods, and can lead to incorrect conclusions in fMRI studies. Methods: In this paper, we show that conventional whitening procedures designed for data with longer repetition times (TRs) (>2 s) are inadequate for the increasing use of short-TR fMRI data. Furthermore, we comprehensively investigate the shortcomings of existing whitening methods and introduce an iterative whitening approach named "IDAR" (Iterative Data-adaptive Autoregressive model) to address these shortcomings. IDAR employs high-order autoregressive (AR) models with flexible and data-driven orders, offering the capability to model complex serial correlation structures in both short-TR and long-TR fMRI datasets. Results: Conventional whitening methods, such as AR(1), ARMA(1,1), and higher-order AR, were effective in reducing serial correlation in long-TR data but largely ineffective in short-TR data. In contrast, IDAR significantly outperformed conventional methods in addressing serial correlation, power, and Type-I error for both long-TR and especially short-TR data. However, IDAR could not simultaneously address residual correlations and inflated Type-I error effectively. Discussion: This study highlights the urgent need to address the problem of serial correlation in short-TR (<1 s) fMRI data, which are increasingly used in the field. Although IDAR can address this issue for a wide range of applications and datasets, the complexity of short-TR data necessitates continued exploration and innovative approaches. These efforts are essential to simultaneously reduce serial correlations and control Type-I error rates without compromising analytical power. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
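The AR(1) baseline that data-adaptive schemes like the IDAR approach above generalize can be sketched in a few lines: estimate the lag-1 autocorrelation of the series and subtract the predicted component from each time point. This is a textbook illustration, not the paper's method; the data are synthetic and the function name is an assumption:

```python
import random

def ar1_prewhiten(y):
    """Estimate the lag-1 autocorrelation rho of y and return the
    prewhitened series y_t - rho * y_{t-1}."""
    n = len(y)
    mean = sum(y) / n
    num = sum((y[t] - mean) * (y[t - 1] - mean) for t in range(1, n))
    den = sum((v - mean) ** 2 for v in y)
    rho = num / den
    whitened = [y[t] - rho * y[t - 1] for t in range(1, n)]
    return rho, whitened

# synthetic AR(1) noise with true rho = 0.8
rng = random.Random(1)
y = [0.0]
for _ in range(500):
    y.append(0.8 * y[-1] + rng.gauss(0.0, 1.0))

rho, whitened = ar1_prewhiten(y)  # rho should land near 0.8
```

The paper's point is that for short-TR data this single-lag correction leaves substantial residual correlation, which motivates the higher-order, data-driven AR orders in IDAR.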
295. Performance Evaluation of Structural Health Monitoring System Applied to Full-Size Composite Wing Spar via Probability of Detection Techniques.
- Author
-
Galasso, Bernardino, Ciminello, Monica, Apuleo, Gianvito, Bardenstein, David, and Concilio, Antonio
- Subjects
STRUCTURAL health monitoring ,FIBER optics ,RELIABILITY in engineering ,COMPOSITE structures ,DATA analysis - Abstract
Probability of detection (POD) is an acknowledged means of evaluation for many investigations aiming at detecting some specific property of a subject of interest. For instance, it has had many applications in Non-Destructive Evaluation (NDE), aimed at identifying defects within structural architectures, and can easily be used for structural health monitoring (SHM) systems, meant as a compact and more integrated evolution of the former technology. In this paper, a probability of detection analysis is performed to estimate the reliability of an SHM system, applied to a wing box composite spar for bonding line quality assessment. Such a system is based on distributed fiber optics deployed on the reference component at specific locations for detecting strains; the attained data are then processed by a proprietary algorithm whose capability was already tested and reported in previous works, even at full-scale level. A finite element (FE) model, previously validated by experimental results, is used to simulate the presence of damage areas, whose effect is to modify strain transfer between adjacent parts. Numerical data are used to verify the capability of the SHM system in revealing the presence of the modeled physical discontinuities with respect to a specific set of loads, running along the beam up to cover its complete extension. The POD is then estimated through the analysis of the collected data sets, wide enough to assess the global SHM system performance. The results of this study ultimately aim to improve the current strategies adopted for SHM of bonding lines by characterizing the detailed behavior of the system as assessed to date. The activities herein reported have been carried out within the RESUME project. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
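A hit/miss POD analysis of the kind described above can be illustrated with an empirical detection-rate curve over flaw-size bins. Practical POD studies (e.g. those following NDE reliability handbooks) typically fit a logistic or log-odds model rather than raw bin fractions, and the function name and toy hit/miss data below are invented for illustration:

```python
def empirical_pod(trials, bins):
    """Empirical probability of detection: the fraction of detected flaws
    per flaw-size bin. `trials` is a list of (size, detected) pairs with
    detected in {0, 1}; `bins` is a list of (lo, hi) half-open intervals."""
    pod = []
    for lo, hi in bins:
        hits = [detected for size, detected in trials if lo <= size < hi]
        pod.append(sum(hits) / len(hits) if hits else None)
    return pod

# hypothetical hit/miss outcomes for simulated debonds of growing size
trials = [(1, 0), (1, 0), (2, 0), (2, 1), (3, 1), (3, 1), (4, 1), (4, 1)]
print(empirical_pod(trials, [(0, 2), (2, 4), (4, 6)]))  # -> [0.0, 0.75, 1.0]
```

The monotone rise of the curve with flaw size is the behavior a POD study quantifies, typically summarized by the size detectable with 90% probability at 95% confidence (a90/95).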
296. First Release of the Optimal Cloud Analysis Climate Data Record from the EUMETSAT SEVIRI Measurements 2004–2019.
- Author
-
Bozzo, Alessio, Doutriaux-Boucher, Marie, Jackson, John, Spezzi, Loredana, Lattanzio, Alessio, and Watts, Philip D.
- Subjects
ICE clouds ,TIME series analysis ,ORBITS (Astronomy) ,DATA analysis ,LIDAR - Abstract
Clouds are key to understanding the atmosphere and climate, and a long series of satellite observations provide invaluable information to study their properties. EUMETSAT has published Release 1 of the Optimal Cloud Analysis (OCA) Climate Data Record (CDR), which provides a homogeneous time series of cloud properties of up to two overlapping layers, together with uncertainties. The OCA product is derived using the 15 min Spinning Enhanced Visible and Infrared Imager (SEVIRI) measurements onboard Meteosat Second Generation (MSG) in geostationary orbit and covers the period from 19 January 2004 until 31 August 2019. This paper presents the validation of the OCA cloud-top pressure (CTP) against independent lidar-based estimates and the quality assessment of the cloud optical thickness (COT) and cloud particle effective radius (CRE) against a combination of products from satellite-based active and passive instruments. The OCA CTP is in good agreement with the CTP sensed by lidar for low thick liquid clouds and substantially below in the case of high ice clouds, in agreement with previous studies. The retrievals of COT and CRE are more reliable when constrained by solar channels and are consistent with other retrievals from passive imagers. The resulting cloud properties are stable and homogeneous over the whole period when compared against similar CDRs from passive instruments. For CTP, the OCA CDR and the near-real-time OCA products are consistent, allowing for the use of OCA near-real time products to extend the CDR beyond August 2019. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
297. Towards Robust Pansharpening: A Large-Scale High-Resolution Multi-Scene Dataset and Novel Approach.
- Author
-
Wang, Shiying, Zou, Xuechao, Li, Kai, Xing, Junliang, Cao, Tengfei, and Tao, Pin
- Subjects
REMOTE sensing ,ENVIRONMENTAL monitoring ,LAND cover ,DATA analysis ,PIXELS ,MULTISPECTRAL imaging - Abstract
Pansharpening, a pivotal task in remote sensing, involves integrating low-resolution multispectral images with high-resolution panchromatic images to synthesize an image that is both high-resolution and retains multispectral information. These pansharpened images enhance precision in land cover classification, change detection, and environmental monitoring within remote sensing data analysis. While deep learning techniques have shown significant success in pansharpening, existing methods often face limitations in their evaluation, focusing on restricted satellite data sources, single scene types, and low-resolution images. This paper addresses this gap by introducing PanBench, a high-resolution multi-scene dataset containing all mainstream satellites and comprising 5898 pairs of samples. Each pair includes a four-channel (RGB + near-infrared) multispectral image of 256 × 256 pixels and a mono-channel panchromatic image of 1024 × 1024 pixels. To avoid irreversible loss of spectral information and achieve a high-fidelity synthesis, we propose a Cascaded Multiscale Fusion Network (CMFNet) for pansharpening. Multispectral images are progressively upsampled while panchromatic images are downsampled. Corresponding multispectral features and panchromatic features at the same scale are then fused in a cascaded manner to obtain more robust features. Extensive experiments validate the effectiveness of CMFNet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
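As a point of contrast with the learned cascaded fusion in CMFNet above, classic component-substitution pansharpening can be sketched with the Brovey transform: scale each (already upsampled) multispectral band by the ratio of the panchromatic value to the mean band intensity at that pixel. This is a standard baseline, not the paper's method; the nested-list image representation and toy values are illustrative:

```python
def brovey_pansharpen(ms_bands, pan):
    """Brovey-transform pansharpening: each multispectral band value is
    scaled by pan / intensity, where intensity is the per-pixel mean over
    bands. Assumes ms_bands are already resampled to the pan grid."""
    sharpened = []
    for band in ms_bands:
        out_band = []
        for r, band_row in enumerate(band):
            row = []
            for c, v in enumerate(band_row):
                intensity = sum(b[r][c] for b in ms_bands) / len(ms_bands)
                row.append(v * pan[r][c] / intensity if intensity else 0.0)
            out_band.append(row)
        sharpened.append(out_band)
    return sharpened

# two toy 2x2 multispectral bands and a matching panchromatic image
ms = [[[2.0, 2.0], [2.0, 2.0]],
      [[4.0, 4.0], [4.0, 4.0]]]
pan = [[6.0, 6.0], [6.0, 6.0]]
out = brovey_pansharpen(ms, pan)
print(out[0][0][0], out[1][0][0])  # -> 4.0 8.0
```

Ratio-based substitution like this injects pan-resolution spatial detail but can distort spectra, which is exactly the irreversible spectral loss that learned fusion networks such as CMFNet aim to avoid.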
298. Lessons of the COVID-19 Pandemic for Ambulance Service in Kazakhstan.
- Author
-
Messova, Assylzhan, Pivina, Lyudmila, Ygiyeva, Diana, Batenova, Gulnara, Dyussupov, Almas, Jamedinova, Ulzhan, Syzdykbayev, Marat, Adilgozhina, Saltanat, and Bayanbaev, Arman
- Subjects
CROSS-sectional method ,HELPLINES ,DATA analysis ,RESEARCH funding ,EMERGENCY medical services ,RETROSPECTIVE studies ,AMBULANCES ,RESEARCH methodology ,ANALYSIS of variance ,FRIEDMAN test (Statistics) ,STATISTICS ,DATA analysis software ,COVID-19 pandemic - Abstract
Background: Emergency medical services (EMS) are intended to provide people with immediate, effective, and safe access to the healthcare system. The effects of pandemics on emergency medical services (EMS) have not been studied sufficiently. The aim of this paper is to assess the frequency and structure of calls at an ambulance station in Kazakhstan during 2019–2023. Methods: A retrospective analysis was conducted to estimate the incidence of emergency assistance cases from 2019 to 2023. Results: An analysis of the structure and number of ambulance calls before, during, and after the pandemic did not reveal significant changes, except for calls in urgency category IV. The number of urgency category IV patients handled by ambulances fell by factors of 2 and 1.7 in 2020 and 2021, respectively, which appears to be related to quarantine measures. In 2022 and 2023, category IV calls were 4.7 and 4.5 times higher than in 2019. Conclusions: This study's findings suggest no changes in the dynamics of ambulance calls, except urgency category IV calls. The number of category IV urgent calls decreased significantly during the COVID-19 pandemic and increased in the post-pandemic period. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
299. Exploring how green innovation moderates the relationship between innovation strategies and CSR performance in the African States.
- Author
-
Omonijo, Oluwole Nurudeen and Zhang, Yunsheng
- Subjects
MARKET value ,INNOVATIONS in business ,GREEN marketing ,DATA analysis - Abstract
Innovation is increasingly recognized as a linchpin for CSR transformation within organizations. Beyond mere compliance, businesses are embracing innovation as a strategic imperative to address societal challenges. Using panel data analysis spanning 2014 to 2023, this paper investigates the relationship between innovation strategies and Corporate Social Responsibility (CSR) performance, with a special emphasis on the moderating effect of Green Innovation (GI). The panel data analysis shows that green innovation is an important moderator that lessens the potential harm that China's high-tech may cause to CSR. Results further show that green innovation can positively moderate the relationship between China's high-tech and CSR. In addition, we found several other key variables (state equity, innovation level, market value) that improve firms' innovation and CSR performance. The findings of this study have important implications for practitioners, who should give sustainable innovation strategies top priority, while policymakers are advised to encourage ecologically friendly behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
300. On the part that NMR should play in mass spectrometry metabolomics in natural products studies.
- Author
-
Borges, Ricardo M. and Magno Teixeira, Andrew
- Subjects
NATURAL products ,NUCLEAR magnetic resonance ,MASS spectrometry ,METABOLOMICS ,DATA analysis - Abstract
The field of metabolomics has witnessed remarkable growth in the context of natural products studies, with Mass Spectrometry (MS) being the predominant analytical tool for data acquisition. However, MS has inherent limitations when it comes to the structural elucidation of key metabolites, which can hinder comprehensive compound identification. This review paper discusses the integration of Nuclear Magnetic Resonance (NMR) spectroscopy as a complementary technique to address these limitations. We explore the concept of Quality Control (QC) samples, emphasizing their potential use for in-depth compound annotation and identification. Additionally, we discuss NMR's advantages, limitations, and strategies to enhance sensitivity. We present examples where MS alone falls short in delivering accurate compound identification and introduce various tools for NMR compound identification in complex mixtures and the integration of MS and NMR data. Finally, we delve into the concept of DBsimilarity to broaden the chemical space understanding, aiding in compound annotation and the creation of compound lists for specific sample analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF