71,115 results on '"Term (time)"'
Search Results
2. Construct Validity in Software Engineering
- Author
-
Dag I. K. Sjøberg and Gunnar R. Bergersen
- Subjects
Empirical research ,Quantitative analysis (finance) ,Computer science ,business.industry ,Cheating ,Statistical conclusion validity ,Construct validity ,Common ground ,Software engineering ,business ,Software ,Conceptual level ,Term (time) - Abstract
Empirical research aims to establish generalizable claims from data. Such claims involve concepts that often must be measured indirectly by using indicators. Construct validity is concerned with whether one can justifiably make claims at the conceptual level that are supported by results at the operational level. We report a quantitative analysis of the awareness of construct validity in the software engineering literature between 2000 and 2019 and a qualitative review of 83 articles about human-centric experiments published in five high-quality journals between 2015 and 2019. Over the two decades, the appearance of the term construct validity in the literature increased sevenfold. Some of the articles we reviewed employed various ways to ensure that the indicators span the concept in an unbiased manner. We also found articles that reuse formerly validated constructs. However, the articles disagree about how to define construct validity. Several interpret construct validity excessively by including threats to internal, external, or statistical conclusion validity. A few articles also include fundamental challenges of a study, such as cheating and misunderstandings of experiment material. The diversity of topics discussed leads us to recommend a minimalist approach to construct validity. We propose seven guidelines to establish a common ground for addressing construct validity in software engineering.
- Published
- 2023
3. Understanding the Long-Term Evolution of Mobile App Usage
- Author
-
Sasu Tarkoma, Tong Li, Pan Hui, Yong Li, and Yali Fan
- Subjects
Computer Networks and Communications ,Emerging technologies ,Computer science ,business.industry ,Mobile computing ,Mobile apps ,Service provider ,Popularity ,Term (time) ,World Wide Web ,Market research ,mental disorders ,The Internet ,Electrical and Electronic Engineering ,business ,Software - Abstract
The prevalence of smartphones has promoted the popularity of mobile apps in recent years. In this paper, we study how mobile app usage evolves over a long-term period. We first introduce an app usage collection platform named Carat, from which we gathered the app usage records of 1,465 users from 2012 to 2017. We then conduct the first study of the long-term evolution process at the macro level (app category) and the micro level (individual app). We discover that, on both levels, there is a growth stage enabled by the introduction of new technologies, followed by a plateau stage caused by high correlations between app categories and by a Pareto effect in individual app usage, respectively. Additionally, the evolution of individual app usage undergoes an elimination stage due to fierce intra-category competition. The inter-diversity of app-category and individual app usage exhibits opposing trends: app-category usage assimilates while individual app usage diversifies. Nevertheless, the intra-diversity of both app-category and individual app usage declines over time. We also demonstrate country barriers in app-category usage. We further investigate how different demographics affect the evolutionary processes of app usage. Our study provides useful implications for app developers, market intermediaries, and service providers.
- Published
- 2023
4. Detecting Dependency-Related Sentiment Features for Aspect-Level Sentiment Classification
- Author
-
Changxi Zhu, Jingyun Xu, Xing Zhang, Xingwei Tan, and Yi Cai
- Subjects
Dependency (UML) ,Artificial neural network ,Computer science ,Polarity (physics) ,Dependency relation ,business.industry ,Parse tree ,computer.software_genre ,Term (time) ,Human-Computer Interaction ,Syntactic structure ,Artificial intelligence ,business ,computer ,Software ,Sentence ,Natural language processing - Abstract
Aspect-level sentiment classification aims to classify the sentiment polarity of a given aspect term or aspect category in a sentence. For sentiment classification towards a given aspect term, since a sentence may contain more than one aspect term, some opinions in the sentence may not be modifiers of the given aspect term. It is therefore necessary to capture the relevant opinion for a certain aspect term. Previous works use the relative distance between an aspect term and all other words in a sentence to capture the nearest opinion to the aspect term. This can be infeasible when the sentence has a complex syntactic structure. In this paper, we detect the dependency relation between an aspect term and its related sentiment words in the dependency parse tree. We then integrate this relationship into a CNN and a Bi-LSTM, respectively. Experiments show that the related sentiment features for an aspect term are helpful for models to discriminate its sentiment polarity, and our proposed models achieve state-of-the-art results among neural network models.
- Published
- 2023
5. Impact of Technical Parameters for Short- and Long-term Analysis of Stock Behavior
- Author
-
E.R. Al Silni Ahmed and S.B. Goyal
- Subjects
010302 applied physics ,business.industry ,Computer science ,Deep learning ,02 engineering and technology ,General Medicine ,Variation (game tree) ,021001 nanoscience & nanotechnology ,01 natural sciences ,Term (time) ,Recurrent neural network ,Software ,Work (electrical) ,0103 physical sciences ,Econometrics ,Position (finance) ,Artificial intelligence ,0210 nano-technology ,business ,Stock (geology) - Abstract
Stock price forecasting is a time series problem that predicts the future price or status of a company by analysing time-indexed values. As the price of a stock varies with time, its behavior can be analyzed by different machine learning approaches. In this work, a methodology is proposed to evaluate a stock's position as it varies over time using a deep learning approach, the recurrent neural network (RNN). The methodology uses technical parameters for both long-term and short-term analysis of any stock or share. The approach also advises investors whether to buy or sell a stock for the long term, giving returns at very low risk. This work hybridizes correlation analysis with a deep learning approach for stock price and long-term behavior analysis. The proposed approach, termed time-lagged weight-optimized RNN (TL-WO-RNN), effectively predicts the technical parameters, from which stock behavior is in turn predicted. The result analysis was performed on data from different sectors, such as telecom, power, manufacturing, finance, and software. The results show the effectiveness of the TL-WO-RNN algorithm compared to existing work.
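The abstract does not detail the TL-WO-RNN internals, but the time-lagged framing behind any such model can be sketched generically (the price series and lag count are hypothetical):

```python
def lagged_windows(series, n_lags):
    """Turn a price series into (lagged-inputs, next-value) training pairs,
    the standard supervised framing behind a time-lagged recurrent model."""
    pairs = []
    for t in range(n_lags, len(series)):
        pairs.append((series[t - n_lags:t], series[t]))
    return pairs

# Hypothetical daily closing prices:
prices = [101.0, 102.5, 101.8, 103.2, 104.0, 103.5]
for x, y in lagged_windows(prices, 3):
    print(x, "->", y)
```

Each pair feeds the previous `n_lags` observations to the network as input and the next observation as the target; an RNN consumes the window one step at a time.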
- Published
- 2023
6. Comparison of common adverse neonatal outcomes among preterm and term infants at the National Referral Hospital in Tanzania: a case-control study
- Author
-
Siriel Nanzia Massawe, Sylvester Leonard Lyantagaye, Erik Bongcam-Rudloff, Raphael Z. Sangeda, and Bernadether Terentius Rugumisa
- Subjects
medicine.medical_specialty ,Tanzania ,biology ,Referral ,business.industry ,Neonatal outcomes ,Emergency medicine ,Case-control study ,Medicine ,infants ,prematurity ,biology.organism_classification ,business ,Term (time) - Abstract
Background: The first month of life is the most critical in a child's health because it is associated with the highest risk of adverse health outcomes. In Tanzania, the risk of adverse health outcomes in preterm infants is five times higher than in term infants. The objective of this study was to assess common adverse health outcomes and compare the risk of such outcomes between preterm and term infants in Tanzania within the first 28 days of life. Methods: This was a case-control study involving preterm (cases) and term (controls) infants delivered at the Muhimbili National Hospital between August and October 2019. The medical records of 222 pairs of cases and controls were reviewed. Logistic regression was used to compare the risk of neonatal outcomes between the study groups. Statistical significance was set at a P-value < 0.05 with a 95% confidence interval. Results: Preterm infants had an increased risk of mortality (OR = 7.2, 95% CI: 3.4-15.1), apnea (OR = 4.7, 95% CI: 3.4-15.1), respiratory distress syndrome (OR = 4.8, 95% CI: 3.2-7.3), necrotizing enterocolitis (OR = 5.5, 95% CI: 1.2-25.3), anemia (OR = 4.3, 95% CI: 2.8-6.6), pneumonia (OR = 2.7, 95% CI: 1.6-4.6), and sepsis (OR = 2.6, 95% CI: 1.7-3.9) in the first month of life compared to term infants. No differences in the risk of intraventricular hemorrhage, bronchopulmonary dysplasia, patent ductus arteriosus, and jaundice were observed between preterm and term infants. Conclusion: The findings of this study inform the Tanzanian health sector about the most common and highest-risk neonatal outcomes in preterm infants. Additionally, to promote neonates' health, the health sector needs to consider preventing and treating the most common and highest-risk adverse neonatal outcomes in preterm infants.
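The odds ratios above come from 2x2 contingency tables; a minimal sketch of that computation (with hypothetical counts, not the study's data) is:

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio with a Woolf (log-transform) 95% confidence interval
    from a 2x2 exposure-by-outcome table."""
    a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: an outcome among preterm (exposed) vs term (unexposed) infants
or_, lo, hi = odds_ratio_ci(40, 182, 7, 215)
print(f"OR = {or_:.1f} (95% CI: {lo:.1f}-{hi:.1f})")
```

Logistic regression, as used in the study, generalizes this single-table computation by adjusting for covariates; the exponentiated coefficient plays the role of the OR.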
- Published
- 2022
7. Long-term Effects of Uncomplicated Traumatic Hyphema on Corneal and Lenticular Clarity
- Author
-
Furkan Emre Sogut, Mustafa Salih Karatepe, Pinar Kosekahya, and Ali Keles
- Subjects
medicine.medical_specialty ,Ophthalmology ,law ,business.industry ,CLARITY ,Medicine ,business ,Traumatic hyphema ,Term (time) ,law.invention - Abstract
Purpose: To evaluate the long-term effects of uncomplicated traumatic hyphema on endothelial morphology, anterior segment structure, and corneal and lenticular densitometry. Methods: In this retrospective comparative study, eyes with a history of uncomplicated traumatic hyphema were compared with the unaffected contralateral eyes. Corneal endothelial cell properties were captured using specular microscopy. Anterior segment analysis, corneal densitometry (12-mm corneal diameter), and lens densitometry measurements were performed using the Pentacam imaging system. Results: Measurements were obtained at a mean follow-up of 49.5 ± 15.8 months after injury. The average endothelial cell density was significantly lower in the study group than in the control group (2,506.6 ± 294.0 cells/mm² vs. 2,665.7 ± 195.0 cells/mm², p = 0.020). There was no difference between the groups in polymegathism or pleomorphism (p = 0.061 and p = 0.558, respectively). All the investigated corneal tomographic and angle parameters were similar in both groups (all p > 0.05). The corneal densitometry values in all concentric zones and layers showed no statistically significant difference between the groups (p > 0.05 for all). The lens zone 1 densitometry value was significantly higher in the study group than in the control group (9.6% ± 1.1% vs. 8.9% ± 1.2%, p = 0.031). No difference was observed in zones 2 and 3 (p = 0.170 and p = 0.322, respectively). The degree of hyphema was not correlated with endothelial cell loss or lenticular clarity loss (p = 0.087 and p = 0.294, respectively). Conclusions: Even if traumatic hyphema is uncomplicated, long-term outcomes indicate endothelial cell loss and increased lenticular density.
- Published
- 2022
8. A Socioeconomic Ripple Effect Analysis of Integrative National Construction Standards Codification Efforts: System Dynamics Approach
- Author
-
Jae-Ho Choi, Young Hoon Kwak, and Yongsoo Lee
- Subjects
Emergency management ,business.industry ,Computer science ,Process (engineering) ,Strategy and Management ,Investment (macroeconomics) ,System dynamics ,Term (time) ,Risk analysis (engineering) ,Order (exchange) ,Code (cryptography) ,Electrical and Electronic Engineering ,business ,Uncertainty analysis - Abstract
South Korea established the National Construction Standards Center to efficiently manage the national construction standards and orchestrate various efforts, such as developing and exporting the unified code system and conducting code-reform research to secure safety and improve disaster prevention for buildings and infrastructure. Significant national budgets must be put in place to develop, promote, and continually improve the unified construction code system. To confirm the appropriateness of this continuous investment, this article estimates the socioeconomic ripple effect of these coordinated efforts over the next 30 years by developing a hybrid method that combines the analytic hierarchy process and system dynamics (SD). The interacting behavior of the developed SD model is illustrated parametrically. We found that the center's integrative codification efforts, including the development and diffusion of the unified code system, have the effect of offsetting the decreasing national construction budget. We also found that Monte Carlo multivariate simulation-based uncertainty analysis on the developed SD model is well suited to effectively quantifying and integrating various benefits to empower decision makers, in particular by providing information on both the mean and the most conservative socioeconomic ripple effects over the long term.
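The Monte Carlo uncertainty analysis the article describes can be sketched generically (a toy, not the authors' SD model; the sampled inputs, distributions, and benefit formula are all hypothetical):

```python
import random
import statistics

def simulate_ripple_effect(n_runs=10_000, seed=42):
    """Monte Carlo uncertainty sketch: sample uncertain model inputs, propagate
    them through a placeholder benefit model over a 30-year horizon, and report
    both the mean and a conservative (5th-percentile) estimate, mirroring the
    mean/most-conservative reporting mentioned in the abstract."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        adoption_rate = rng.uniform(0.4, 0.9)   # hypothetical uncertain input
        annual_saving = rng.gauss(100.0, 20.0)  # hypothetical benefit per year
        outcomes.append(adoption_rate * annual_saving * 30)  # 30-year horizon
    outcomes.sort()
    mean = statistics.mean(outcomes)
    conservative = outcomes[int(0.05 * n_runs)]  # 5th percentile
    return mean, conservative

mean, conservative = simulate_ripple_effect()
print(f"mean: {mean:.0f}, conservative: {conservative:.0f}")
```

The real study propagates samples through the SD model's feedback loops rather than a closed-form product, but the sample-propagate-summarize loop is the same.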
- Published
- 2022
9. A Novel Discriminative Dictionary Pair Learning Constrained by Ordinal Locality for Mixed Frequency Data Classification
- Author
-
Hong Yu, Guoyin Wang, Qian Yang, and Yongfang Xie
- Subjects
business.industry ,Computer science ,Locality ,Data classification ,Pattern recognition ,Computer Science Applications ,Term (time) ,Constraint (information theory) ,Computational Theory and Mathematics ,Discriminative model ,Sample size determination ,Norm (mathematics) ,Artificial intelligence ,business ,Information Systems - Abstract
A dilemma faced by classification is that, in some applications, data are not collected at the same frequency. We investigate mixed frequency data in a new way and recognize them as a special style of multi-view data, in which each view is collected at a different sampling frequency. This paper proposes a discriminative dictionary pair learning method constrained by ordinal locality for mixed frequency data classification (abbreviated DPLOL-MF). This method integrates a synthesis dictionary and an analysis dictionary into a dictionary pair, which not only avoids the high computational cost caused by the $\ell_0$- or $\ell_1$-norm constraint but also handles the sampling-frequency inconsistency. DPLOL-MF utilizes the synthesis dictionary to learn class-specific reconstruction information and employs the analysis dictionary to generate coding coefficients by analyzing samples. In particular, the ordinal locality preserving term is leveraged to constrain the atoms of the dictionary pair, further encouraging the learned pair to be more discriminative. Besides, we design a specific classification scheme for the inconsistent sample sizes of mixed frequency data. This paper illustrates a novel idea for solving the classification task on mixed frequency data, and the experimental results demonstrate the effectiveness of the proposed method.
- Published
- 2022
10. Diversity of interpretations of the concept 'patient-centered care for breast cancer patients'
- Author
-
Ingeborg Engelberts, Elise Pel, Maartje Schermer, and Public Health
- Subjects
Acknowledgement ,Psychological intervention ,Breast Neoplasms ,Consistency (negotiation) ,Breast cancer ,Nursing ,SDG 3 - Good Health and Well-being ,Component (UML) ,Patient-Centered Care ,Health care ,medicine ,Humans ,Quality of Health Care ,business.industry ,Health Policy ,Interpretation (philosophy) ,Public Health, Environmental and Occupational Health ,Patient-centered care ,medicine.disease ,Epistemology ,Term (time) ,Clinical Practice ,Content analysis ,Female ,business ,Psychology ,Diversity (business) - Abstract
Rationale, aims and objectives: Patient-centered care is considered a vital component of good-quality care for breast cancer patients. Nevertheless, the implementation of this valuable concept in clinical practice appears to be difficult. The goal of this study is to bridge the gap between the theoretical elaboration of "patient-centered care" and clinical practice. To that purpose, a scoping analysis was performed of the application of the term "patient-centered care in breast cancer treatment" in present-day literature. Method: For data extraction, a literature search was performed, extracting references that were published in 2018 and included the terms "patient-centered care" and "breast cancer". The articles were systematically traced for answers to the following three questions: "What is patient-centered care?", "Why perform patient-centered care?", and "How to realize patient-centered care?". For the content analysis, these answers were coded and assembled into meaningful clusters until separate themes arose that concur with various interpretations of the term "patient-centered care". Results: A total of 60 publications were retained for analysis. Traced answers to the three questions "what", "why", and "how" varied considerably in recent literature concerning breast cancer treatment. Despite the inconsistent use of the term "patient-centered care", we did not find any critical consideration of the nature of the concept, regardless of the applied interpretation. Interventions that are supposed to contribute to the heterogeneous concept of patient-centered care seem to be judged desirable virtually without empirical justification. Conclusions: We propose, contrary to previous efforts to define "patient-centered care" more accurately, to embrace the heterogeneity of the concept and apply "patient-centered care" as an umbrella term for all healthcare that intends to contribute to the acknowledgement of the person in the patient. For the justification of measures to realize patient-centered care for breast cancer patients, instead of a mere contribution to the abstract concept, we insist on the demonstration of desirable real-world effects.
- Published
- 2022
11. Short Term and Long term Building Electricity Consumption Prediction Using Extreme Gradient Boosting
- Author
-
Singh Pratima and Tyagi Sakshi
- Subjects
Consumption (economics) ,General Computer Science ,business.industry ,Econometrics ,Environmental science ,Electricity ,Extreme gradient boosting ,business ,Term (time) - Abstract
Background: Electricity is an essential commodity in today's high-tech world. Electricity demand has increased rapidly with growing urbanization (smart buildings and the large-scale use of smart devices). Building a reliable and accurate electricity consumption prediction model becomes necessary as energy demand increases. In recent studies, prediction models such as support vector regression (SVR), gradient boosting decision tree (GBDT), artificial neural network (ANN), random forest (RF), and extreme gradient boosting (XGBoost) have been compared for electricity consumption prediction, and XGBoost was found to be the most efficient, which motivates this research. Objective: The objective of this research is to propose a model that predicts future electricity consumption over different time horizons, short term and long term, using the extreme gradient boosting method, while reducing prediction errors. Based on the predictions, the best and worst predicted days are also identified. Methods: The method used in this research is extreme gradient boosting for future building electricity consumption prediction. It performs predictions for different time horizons (short term and long term) and different seasons (summer and winter). The model was designed for a house located in Paris. Results: The model has been trained and tested on the dataset, and its predictions are accurate, with lower error rates than other machine learning techniques: an RMSE of 140.45 and an MAE of 28, the lowest errors among the baseline prediction models. Conclusion: A model robust to all conditions should be built by enhancing the prediction mechanism so that it depends on only a few factors to predict electricity consumption.
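The gradient-boosting idea underlying XGBoost can be sketched from scratch with decision stumps (a toy illustration of boosting on residuals, not the paper's XGBoost pipeline and without XGBoost's regularization; the hourly consumption data are hypothetical):

```python
def fit_stump(X, residuals):
    """Best single-feature threshold split minimising squared error on residuals."""
    best = None
    n = len(X)
    for j in range(len(X[0])):
        for thr in sorted({row[j] for row in X}):
            left = [residuals[i] for i in range(n) if X[i][j] <= thr]
            right = [residuals[i] for i in range(n) if X[i][j] > thr]
            if not left or not right:
                continue
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, thr, lmean, rmean)
    _, j, thr, lmean, rmean = best
    return lambda row, j=j, thr=thr, lm=lmean, rm=rmean: lm if row[j] <= thr else rm

def gradient_boost(X, y, n_rounds=200, lr=0.3):
    """Toy gradient boosting for regression: each round fits a stump to the
    current residuals (the negative gradient of the squared loss)."""
    base = sum(y) / len(y)
    stumps, pred = [], [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(X, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, X)]
    return lambda row: base + lr * sum(s(row) for s in stumps)

# Hypothetical hourly features: [hour_of_day, outdoor_temp_C] -> kWh consumed
X = [[0, 5], [6, 8], [9, 15], [12, 20], [18, 12], [21, 7]]
y = [0.4, 0.9, 1.8, 2.5, 3.0, 1.2]
model = gradient_boost(X, y)
print([round(model(row), 2) for row in X])
```

XGBoost adds second-order gradients, shrinkage, and tree regularization on top of this loop, which is why it tends to win the comparisons the abstract cites.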
- Published
- 2022
12. Signed Social Networks With Biased Assimilation
- Author
-
Claudio Altafini, Yiguang Hong, Lingfei Wang, and Guodong Shi
- Subjects
Physics::Physics and Society ,Social network ,business.industry ,Computer Science::Social and Information Networks ,Domain (mathematical analysis) ,Computer Science Applications ,Term (time) ,Control and Systems Engineering ,Econometrics ,Exponent ,Social networking (online) ,Bifurcation ,Analytical models ,Network topology ,Stability analysis ,Hypercubes ,Topology ,Biased assimilation ,opinion dynamics ,signed social networks ,Hypercube ,Electrical and Electronic Engineering ,Probability Theory and Statistics ,Extreme value theory ,business ,Signed graph ,Value (mathematics) ,Mathematics - Abstract
A biased assimilation model of opinion dynamics is a nonlinear model in which opinions exchanged in a social network are multiplied by a state-dependent term having the bias as exponent, expressing the bias of the agents toward their own opinions. The aim of this article is to extend the biased assimilation model to signed social networks. We show that while for structurally balanced networks polarization to an extreme value of the opinion domain (the unit hypercube) always occurs regardless of the value of the bias, for structurally unbalanced networks a stable state of indecision (corresponding to the centroid of the opinion domain) also appears, at least for small values of the bias. When the bias grows and passes a critical threshold, which depends on the amount of "disorder" encoded in the signed graph, a bifurcation occurs and opinions again become polarized. Funding: National Natural Science Foundation of China (61733018); Australian Research Council (DP190103615); Swedish Research Council (2020-03701).
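For the unsigned baseline that the article extends, the biased-assimilation update (in its Dandekar-Goel-Lee form) can be sketched as follows; the two-agent network, weights, and initial opinions are illustrative, and the signed-network extension itself is not reproduced here:

```python
def biased_assimilation_step(x, W, b):
    """One synchronous update of the biased-assimilation opinion model on a
    nonnegatively weighted network: agent i weighs its neighbours' aggregate
    opinion s_i by x_i**b (support) and the complement by (1 - x_i)**b
    (opposition); b > 0 is the bias exponent, opinions live in [0, 1]."""
    n = len(x)
    new_x = []
    for i in range(n):
        s = sum(W[i][j] * x[j] for j in range(n))  # weighted neighbour opinion
        d = sum(W[i])                              # total incoming weight
        num = x[i] + x[i] ** b * s
        den = num + (1 - x[i]) + (1 - x[i]) ** b * (d - s)
        new_x.append(num / den)
    return new_x

# Two agents, symmetric unit weights, bias b = 2; both start above 0.5:
x = [0.8, 0.7]
W = [[0, 1], [1, 0]]
for _ in range(100):
    x = biased_assimilation_step(x, W, 2.0)
print(x)  # both opinions polarise towards the extreme value 1
```

With negative (antagonistic) weights, as in the signed networks of the article, the same update must be modified; that is exactly where the indecision state and the bifurcation analysed in the paper appear.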
- Published
- 2022
13. Design of a Social Robot Interact With Artificial Intelligence by Versatile Control Systems
- Author
-
Md. Mizanur Rahman, Mohammad Shamim Islam, M. Shamim Hossain, and Ghulam Muhammad
- Subjects
Social robot ,Android phone ,business.industry ,Control system ,Robot ,The Internet ,Robotics ,Artificial intelligence ,Electrical and Electronic Engineering ,Mechatronics ,business ,Instrumentation ,Term (time) - Abstract
Robots are a combination of mechatronics, computer science, and artificial intelligence. Robotics is a branch of engineering that involves the conception, design, manufacture, and operation of robots. Whenever a robot has to interact with human society, it has to adopt a special skill called human-robot interaction, and thus the term social robot comes into play. A social robot has to be able to express emotions, communicate through high-level dialogue, use natural cues, and learn to recognize models of other agents. An autonomous social robot does not simply follow orders; it acts on its own. To make the robot more interactive and communicative, many sensors and modules have to be used along with its moving mechanism. A social robot therefore becomes complex and expensive. To overcome this complexity and cost, this paper introduces a social robot design combining embedded systems, the Internet of Robotic Things (IORT), and the Android operating system, so that the robot can interact and communicate with humans, is intelligent enough to solve complex mathematics, and can follow the operator's commands simultaneously. By using the Internet as the robot's source of information and an Android phone as part of the robot's sensory and control system, and connecting them to the robot's embedded system wirelessly, we not only make the robot more advanced and intelligent but also reduce the construction cost by a significant amount.
- Published
- 2022
14. A Data Fusion Powered Bi-Directional Long Short Term Memory Model for Predicting Multi-Lane Short Term Traffic Flow
- Author
-
Wenjian Liu and Lumin Xing
- Subjects
Computer science ,business.industry ,Mechanical Engineering ,Real-time computing ,Process (computing) ,Cloud computing ,Sensor fusion ,Traffic flow ,Computer Science Applications ,Term (time) ,Multiple time dimensions ,Automotive Engineering ,Information Framework ,business ,Intelligent transportation system - Abstract
In intelligent transportation systems (ITS), accurate, real-time prediction of short-term multi-lane traffic flow from existing traffic data is an important part of urban traffic planning, management, and control. The data generated while vehicles are driving is multi-source, spatio-temporal, and dynamic. Combining this data with high-performance or cloud computing, a new spatio-temporal information framework is designed, which is of great significance for the analysis and prediction of traffic flow, as well as for intelligent traffic management, services, and decision-making. This paper analyzes the statistical characteristics of urban road traffic flow in the dimensions of both time and space, through the spatial-temporal correlation between multi-lane short-term traffic flows at single and multiple observation points. We constructed a data fusion powered bi-directional long short-term memory (DFBD-LSTM) model for individual-lane and aggregate traffic flow, then used this model to predict multi-lane short-term traffic flow. By treating individual-lane traffic flow and aggregate traffic flow as different variables, the model produces more accurate predictions, which can better guide travel, alleviate congestion of the urban road network to a certain extent, and improve the utilization rate and transport efficiency of roads.
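The use of individual-lane and aggregate flows as distinct input variables can be sketched as a feature-construction step (an illustration under an assumed data layout, not the DFBD-LSTM itself; the lane counts are hypothetical):

```python
def fuse_lane_features(lane_flows, window):
    """Build model inputs from multi-lane counts: for each time step, stack the
    last `window` observations of every individual lane plus the aggregate
    (sum across lanes), and predict the next per-lane counts."""
    n_lanes = len(lane_flows)
    T = len(lane_flows[0])
    aggregate = [sum(lane_flows[k][t] for k in range(n_lanes)) for t in range(T)]
    samples = []
    for t in range(window, T):
        features = []
        for lane in lane_flows:
            features.extend(lane[t - window:t])   # per-lane history
        features.extend(aggregate[t - window:t])  # aggregate history
        target = [lane[t] for lane in lane_flows]
        samples.append((features, target))
    return samples

# Hypothetical 5-minute vehicle counts for three lanes:
lanes = [[12, 15, 14, 18, 20],
         [ 8, 11, 10, 13, 16],
         [ 5,  7,  6,  9, 11]]
samples = fuse_lane_features(lanes, window=3)
print(len(samples), len(samples[0][0]))  # 2 samples, (3 lanes + aggregate) * 3 lags = 12 features
```

In the paper these windows would feed a bidirectional LSTM; here only the data-fusion framing is shown.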
- Published
- 2022
15. ADP-Based Spacecraft Attitude Control Under Actuator Misalignment and Pointing Constraints
- Author
-
Hongyang Dong, Xiaowei Zhao, Haoyang Yang, and Qinglei Hu
- Subjects
Lyapunov stability ,T1 ,Spacecraft ,TL ,Computer science ,business.industry ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Function (mathematics) ,Term (time) ,Attitude control ,Dynamic programming ,TA ,Control and Systems Engineering ,Control theory ,TJ ,Electrical and Electronic Engineering ,Actuator ,business - Abstract
This paper is devoted to real-time optimal attitude reorientation control of rigid spacecraft. In particular, two typical practical problems, actuator misalignment and forbidden pointing constraints, are considered. Within the framework of adaptive dynamic programming (ADP), a novel constrained optimal attitude control scheme is proposed. In this design, a special reward function is developed to characterize the environment feedback and handle the pointing constraints. Notably, a novel augmented term is introduced into the reward function to overcome the difficulty induced by actuator misalignment. By virtue of Lyapunov stability theory, the ultimate boundedness of the state error and the optimality of the proposed method are guaranteed. Finally, the effectiveness and performance of the developed ADP-based controller are evaluated not only by numerical simulations but also by experimental tests on a hardware-in-the-loop platform.
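In generic ADP terms (a sketch, not the paper's exact formulation; $Q$, $R$, the barrier $B$, the forbidden half-angle $\theta_f$, and the dynamics $f$ are illustrative symbols), a constraint-aware reward might take the form

```latex
r(x,u) = x^{\top} Q x + u^{\top} R u + B(x), \qquad
B(x) \to \infty \ \text{as the boresight angle } \theta(x) \to \theta_f ,
```

and the optimal cost $V^{*}$ that the critic network approximates satisfies the HJB condition

```latex
0 = \min_{u} \Big[ r(x,u) + \nabla V^{*}(x)^{\top} f(x,u) \Big].
```

The barrier-style term keeps constraint satisfaction inside the learned value function rather than bolted on afterwards, which is the role the abstract assigns to its special reward function.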
- Published
- 2022
16. Geothermal pavements: field observations, numerical modelling and long-term performance
- Author
-
Suksun Horpibulsuk, Arul Arulrajah, Guillermo A. Narsilio, Nikolas Makasis, Xiaoying Gu, and Yaser Motamedi
- Subjects
Field (physics) ,Petroleum engineering ,business.industry ,020209 energy ,0211 other engineering and technologies ,02 engineering and technology ,Geotechnical Engineering and Engineering Geology ,Term (time) ,0202 electrical engineering, electronic engineering, information engineering ,Earth and Planetary Sciences (miscellaneous) ,Environmental science ,business ,Geothermal gradient ,Energy (signal processing) ,Thermal energy ,021101 geological & geomatics engineering - Abstract
Geothermal pavement systems are a novel type of energy geostructure. They use sub-surface structures to exchange heat with the ground and, thereby, provide thermal energy in addition to structural support. The thermo-activation of pavements has been largely overlooked in the literature. This research focuses on the development of a detailed three-dimensional (3D) finite-element (FE) model to explore the thermal performance of geothermal pavement systems. The 3D FE model developed was successfully validated with both data measured from a full-scale experiment undertaken in Adelaide, South Australia and other published data. The validated model is further employed to evaluate the long-term performance of a geothermal pavement system under both a traditional system configuration and a hybrid system. Furthermore, a life-cycle cost analysis is performed to explore the cost implication of such pavement systems. Results show that a geothermal pavement with total pipe length of 640 m, or a hybrid system (a geothermal pavement system with a pipe length of 320 m and an auxiliary system) can provide for sufficient space heating and cooling for a typical residential building in Australia. It is found that, compared with conventional heating and cooling systems, the geothermal pavement system is indeed a cost-effective solution. This research study indicates that this pavement technology can be successfully implemented in the field and accurately modelled using FE techniques.
- Published
- 2022
17. Long-term effectiveness of the midwifery initiated oral health-dental service program on maternal oral health knowledge, preventative dental behaviours and the oral health status of children in Australia
- Author
-
Mariana S. Sousa, Ariana Kong, Ravi Srinivas, Hannah G Dahlen, Maree Johnson, Albert Yaacoub, Ajesh George, Amy R. Villarosa, Shilpi Ajwani, and Sameer Bhole
- Subjects
Service (business) ,medicine.medical_specialty ,stomatognathic diseases ,business.industry ,Family medicine ,Medicine ,General Medicine ,Oral health ,business ,General Dentistry ,Term (time) - Abstract
Background: Early childhood caries remains a public health challenge and many interventions to manage this disease have focused on prevention during early infancy. Promoting oral health during pregnancy may also improve the oral health of children; however, evidence in Australia is limited. The Midwifery Initiated Oral Health-Dental Service (MIOH-DS) was developed to train midwives to promote maternal oral health, and a large trial showed the program substantially improved the oral health status, knowledge and behaviours of pregnant women. This study evaluated the long-term effectiveness of the program (post trial) on maternal oral health knowledge, preventative dental behaviours, and early childhood caries in offspring. Methods: A prospective cohort study was conducted in three large metropolitan health services in Sydney, Australia. The study followed 204 women and their children three to four years after participating in the original MIOH-DS trial (intervention and control groups). The outcome measures included child dental decay (cariogenic bacteria) and a maternal oral health knowledge and behaviours questionnaire. Descriptive statistics were used to analyse the main outcomes and a regression model was constructed to explore predictors of dental decay among children. Results: There were no significant differences across the outcome measures between the MIOH-DS participants (mother/child) and control groups except for a small difference in maternal oral health knowledge. Most mothers across both groups demonstrated high oral health knowledge and positive oral health practices, and the regression model found that these outcomes provided a protective effect (low levels of bacteria and dental caries) among children.
Some aspects of oral health remained poorly understood by mothers in both groups: the oral health impact of sugary foods and drinks, at-risk feeding practices, and the recommended age for first dental visits. Conclusions: The long-term impact of the program demonstrates the effectiveness of improving maternal oral health knowledge and preventative behaviours in reducing the risk of early childhood caries, although no specific effect of the MIOH-DS program was found. Although oral health knowledge was high across participants, the findings suggest the need for reinforced education around feeding, diet and dental visiting through postnatal early childhood services to sustain improvements.
- Published
- 2022
18. A Physics-Informed Deep Learning Paradigm for Traffic State and Fundamental Diagram Estimation
- Author
-
Qiang Du, Kuang Huang, Zhaobin Mo, Xuan Di, and Rongye Shi
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Small data ,Relation (database) ,business.industry ,Mechanical Engineering ,Deep learning ,Diagram ,Machine Learning (cs.LG) ,Computer Science Applications ,Term (time) ,Microscopic traffic flow model ,Data efficiency ,Component (UML) ,Automotive Engineering ,Artificial intelligence ,business ,Algorithm - Abstract
Traffic state estimation (TSE) bifurcates into two categories, model-driven and data-driven (e.g., machine learning, ML), each of which suffers from either deficient physics or small data. To mitigate these limitations, recent studies introduced a hybrid paradigm, physics-informed deep learning (PIDL), which contains both model-driven and data-driven components. This paper contributes an improved version, called physics-informed deep learning with a fundamental diagram learner (PIDL+FDL), which integrates ML terms into the model-driven component to learn a functional form of a fundamental diagram (FD), i.e., a mapping from traffic density to flow or velocity. The proposed PIDL+FDL has the advantage of performing the TSE learning, model parameter identification, and FD estimation simultaneously. We demonstrate the use of PIDL+FDL to solve popular first-order and second-order traffic flow models and reconstruct the FD relation as well as model parameters that are outside the FD terms. We then evaluate the PIDL+FDL-based TSE using the Next Generation SIMulation (NGSIM) dataset. The experimental results show the superiority of the PIDL+FDL in terms of improved estimation accuracy and data efficiency over advanced baseline TSE methods, as well as the capacity to properly learn the unknown underlying FD relation., arXiv admin note: substantial text overlap with arXiv:2101.06580
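To make the hybrid loss concrete, here is a minimal NumPy sketch of a PIDL-style objective for a first-order (LWR) model. As an illustration only, not the paper's architecture, it assumes a Greenshields-type FD with parameters `v_free` and `rho_max` and evaluates the physics residual with central finite differences:

```python
import numpy as np

def greenshields_flow(rho, v_free, rho_max):
    """Hypothetical parametric FD: Q(rho) = v_free * rho * (1 - rho/rho_max)."""
    return v_free * rho * (1.0 - rho / rho_max)

def pidl_loss(rho_grid, rho_obs, mask, v_free, rho_max, dt, dx, lam=1.0):
    """Data loss on observed cells plus the LWR physics residual
    d(rho)/dt + d(Q(rho))/dx = 0, approximated by central differences
    on a (time, space) density grid."""
    data_loss = np.mean((rho_grid[mask] - rho_obs[mask]) ** 2)
    q = greenshields_flow(rho_grid, v_free, rho_max)
    drho_dt = (rho_grid[2:, 1:-1] - rho_grid[:-2, 1:-1]) / (2 * dt)
    dq_dx = (q[1:-1, 2:] - q[1:-1, :-2]) / (2 * dx)
    physics_loss = np.mean((drho_dt + dq_dx) ** 2)
    return data_loss + lam * physics_loss
```

In PIDL+FDL the FD itself would be a small neural network and the residual would be evaluated by automatic differentiation; this sketch only shows how a data term and a physics term combine into one loss.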
- Published
- 2022
19. Short-Term Forecast of Bicycle Usage in Bike Sharing Systems: A Spatial-Temporal Memory Network
- Author
-
Xinyu Li, Xiaohu Zhang, Wenzhong Shi, Yang Xu, Lei Wang, and Qi Chen
- Subjects
Feature engineering ,Service (systems architecture) ,business.industry ,Computer science ,Mechanical Engineering ,Deep learning ,Reliability (computer networking) ,Demand forecasting ,Machine learning ,computer.software_genre ,Computer Science Applications ,Term (time) ,Task (project management) ,Robustness (computer science) ,Automotive Engineering ,Artificial intelligence ,business ,computer - Abstract
Bike-sharing systems have made notable contributions to cities by providing green and sustainable mobility services to users. Over the years, many studies have been conducted to understand or anticipate the usage of these systems, with the hope of informing their future development. One important task is to accurately predict usage patterns of the systems. Although many deep learning algorithms have been developed in recent years to support travel demand forecasting, they have mainly been used to predict traffic volume or speed on roadways. Few studies have applied them to bike-sharing systems. Moreover, these studies usually focus on a single dataset or study area, so the effectiveness and robustness of the prediction algorithms have not been systematically evaluated. In this study, we propose a Spatial-Temporal Memory Network (STMN) to predict short-term usage of bicycles in bike-sharing systems. The framework employs Convolutional Long Short-Term Memory models and a feature engineering technique to capture the spatial-temporal dependencies in historical data for the prediction task. Four testing sites are used to evaluate the model: two station-based systems (Chicago and New York) and two dockless bike-sharing systems (Singapore and New Taipei City). By assessing STMN against several baseline models, we find that STMN achieves the best overall performance in all four cities. The model also achieves superior performance in urban areas with varying levels of bicycle usage and during peak periods when demand is high. The findings suggest the reliability of STMN in predicting bicycle usage for different types of bike-sharing systems.
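As a sketch of how such spatial-temporal inputs are typically framed for a ConvLSTM-style model (the exact STMN pipeline and feature engineering are not specified here; shapes and names are illustrative):

```python
import numpy as np

def make_st_samples(usage, n_steps):
    """Frame a (T, H, W) grid of bike pick-up counts into
    (samples, n_steps, H, W) input windows and (samples, H, W) targets,
    the shape a ConvLSTM-style predictor would consume."""
    X, y = [], []
    for t in range(n_steps, usage.shape[0]):
        X.append(usage[t - n_steps:t])  # last n_steps snapshots
        y.append(usage[t])              # next snapshot to predict
    return np.stack(X), np.stack(y)
```

Each sample pairs a short history of city-wide usage grids with the grid to be predicted, so spatial and temporal dependencies are preserved jointly.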
- Published
- 2022
20. Subdomain Adaptation Transfer Learning Network for Fault Diagnosis of Roller Bearings
- Author
-
Bin Yang, Xinxin He, Naipeng Li, and Zhijian Wang
- Subjects
business.industry ,Generalization ,Computer science ,Pattern recognition ,Conditional probability distribution ,Fault (power engineering) ,Field (computer science) ,Term (time) ,Control and Systems Engineering ,Artificial intelligence ,Limit (mathematics) ,Electrical and Electronic Engineering ,business ,Transfer of learning ,Adaptation (computer science) - Abstract
Due to the data distribution discrepancy, fault diagnosis models trained with labeled data in one scene are likely to fail when classifying unlabeled data acquired from other scenes. Transfer learning can generalize a model trained successfully in one scene to fault diagnosis in other scenes. However, existing transfer methods pay little attention to adaptively reducing marginal and conditional distribution biases, and also ignore the relative contributions of the two biases and of the network layers, which limits classification performance and generalization in practice. To overcome these weaknesses, we established a new fault diagnosis model, called the subdomain adaptation transfer learning network (SATLN). First, two convolutional building blocks were stacked to extract transferable features from raw data. Then, pseudo-label learning was applied to construct a target subdomain for each class. Furthermore, subdomain adaptation was combined with domain adaptation to reduce marginal and conditional distribution biases simultaneously. Finally, a dynamic weight term was applied to adaptively adjust the contributions of both discrepancies and of each network layer. The SATLN method was tested on six transfer tasks. The results demonstrate the effectiveness and superiority of SATLN in the cross-domain fault diagnosis field.
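A minimal sketch of the marginal-plus-subdomain alignment idea, using a linear-kernel MMD and a fixed mixing weight `mu` (SATLN's actual kernels, pseudo-label amendment, and dynamic weighting are more elaborate; everything here is illustrative):

```python
import numpy as np

def mmd(xs, xt):
    """Squared MMD with a linear kernel: ||mean(xs) - mean(xt)||^2."""
    return float(np.sum((xs.mean(0) - xt.mean(0)) ** 2))

def satln_alignment_loss(fs, ys, ft, yt_pseudo, n_classes, mu=0.5):
    """Marginal MMD between source and target features, plus class-wise
    (subdomain) MMD computed with target pseudo-labels, mixed by mu."""
    marginal = mmd(fs, ft)
    conditional, used = 0.0, 0
    for c in range(n_classes):
        s, t = fs[ys == c], ft[yt_pseudo == c]
        if len(s) and len(t):
            conditional += mmd(s, t)
            used += 1
    if used:
        conditional /= used
    return (1 - mu) * marginal + mu * conditional
```

When source and target feature distributions coincide class by class, both terms vanish, which is the state the adaptation loss drives the network toward.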
- Published
- 2022
21. Short-Term Lateral Behavior Reasoning for Target Vehicles Considering Driver Preview Characteristic
- Author
-
Chengliang Yin, Haiping Du, Ronghui Liu, Zhisong Zhou, Yafei Wang, and Chongfeng Wei
- Subjects
Computer science ,business.industry ,Mechanical Engineering ,Automotive Engineering ,Probabilistic logic ,Cognition ,Artificial intelligence ,Hidden Markov model ,business ,Computer Science Applications ,Term (time) - Abstract
A timely understanding of the lateral behavior of target vehicles (TVs) is essential for the decision-making and control of the host vehicle. Existing physical model-based methods, such as the motion-based method and the multiple centerline-based method, are generally constructed from TV pose and longitudinal velocity, and tend to ignore the TV's preview driving characteristic and other useful information such as lateral velocity and yaw rate. To address these issues, a driver preview and multiple centerline model-based probabilistic behavior recognition architecture is proposed for timely and accurate TV lateral behavior prediction. First, a driver preview model is used to describe the vehicle's preview driving characteristic, and the TV's preview lateral offset and preview lateral velocity are calculated from TV states and road reference information. Then, the preview lateral offset and preview lateral velocity are combined with the multiple centerline model for TV lateral behavior reasoning based on an interacting multiple model-based probabilistic behavior recognition algorithm. With this method, the TV's preview driving characteristic and lateral motion states are combined for precise TV lateral behavior description. Furthermore, to predict short-term lateral behavior, a preview lateral velocity-dependent transition probability matrix model constructed with the Gaussian cumulative distribution function is proposed. Simulation and experimental results show that the proposed method, by considering the vehicle's preview driving characteristic, predicts TV lateral behavior earlier than the conventional method.
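The transition-matrix idea can be sketched as a two-state (lane keep / lane change) matrix whose switching probability follows a Gaussian CDF of the preview lateral velocity. The parameters `mu` and `sigma` and the second row are hypothetical, not taken from the paper:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Standard Gaussian cumulative distribution function via erf."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def transition_matrix(v_preview, mu=0.4, sigma=0.15):
    """2-state TPM (lane keep, lane change): the keep-to-change probability
    grows with the magnitude of preview lateral velocity via a Gaussian CDF.
    The persistence of the change state (0.9) is purely illustrative."""
    p_change = norm_cdf(abs(v_preview), mu, sigma)
    return [[1.0 - p_change, p_change],
            [0.1, 0.9]]
```

A larger preview lateral velocity thus shifts probability mass toward the lane-change state earlier, which is the mechanism behind the earlier predictions reported.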
- Published
- 2022
22. A Domain-Specific Bayesian Deep-Learning Approach for Air Pollution Forecast
- Author
-
Qi Zhang, Jacqueline C.K. Lam, Yang Han, and Victor O. K. Li
- Subjects
Information Systems and Management ,010504 meteorology & atmospheric sciences ,Computer science ,business.industry ,Deep learning ,Bayesian probability ,Feature selection ,010501 environmental sciences ,Machine learning ,computer.software_genre ,01 natural sciences ,Term (time) ,Feature (machine learning) ,Artificial intelligence ,Time series ,business ,computer ,Air quality index ,0105 earth and related environmental sciences ,Information Systems ,Interpretability - Abstract
Given that poor air quality has obvious negative health impacts, predicting air pollution concentration is crucial and beneficial for public health. Motivated by recent advancements in deep-learning time series prediction, this study proposes a domain-specific Bayesian deep-learning model for long-term air pollution forecasting in China and the United Kingdom. Our proposed model carries three novelties. First, we integrate domain-specific knowledge, encoding the strong statistical relationship between PM2.5 and PM10 as a regularization term. Second, we include an attention layer capable of capturing an influential historical feature, the recursive temporal correlation of air quality data, in our pollution prediction. Third, results generated from different multi-step forecast strategies are combined based on corresponding uncertainty measures to improve our model's performance. Using Beijing and London as our case studies, our results show that the Bayesian deep-learning model outperforms the baseline models. In particular, the incorporation of domain-specific knowledge into the Bayesian deep-learning model reduces prediction errors, whilst the integration of Bayesian techniques allows the fusing of different forecast strategies to improve prediction accuracy. Feature selection can be performed and additional influential domain-specific features can be added in the future to further improve our deep-learning model's prediction accuracy and interpretability.
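A minimal sketch of the first novelty: treat the PM2.5-PM10 statistical relationship as a regularization term added to the prediction loss. The ratio and weight below are illustrative placeholders, and the paper's actual formulation sits inside a Bayesian deep-learning model:

```python
import numpy as np

def pm_loss(pred_pm25, obs_pm25, pred_pm10, ratio=0.6, lam=0.1):
    """Prediction MSE plus a penalty keeping predicted PM2.5 close to a
    typical fraction of predicted PM10 (ratio and lam are hypothetical)."""
    mse = np.mean((pred_pm25 - obs_pm25) ** 2)
    reg = np.mean((pred_pm25 - ratio * pred_pm10) ** 2)
    return mse + lam * reg
```

Forecasts that violate the known inter-pollutant relationship are penalized even when they fit the PM2.5 observations, which is how domain knowledge constrains the model.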
- Published
- 2022
23. Short-term wind power prediction with harmony search algorithm: Belen region
- Author
-
Esra Saraç Eşsiz
- Subjects
Artificial neural network ,Computer science ,business.industry ,Mühendislik ,Wind power forecasting ,General Medicine ,Term (time) ,Engineering ,ComputerApplications_MISCELLANEOUS ,Harmony search ,Artificial intelligence ,business ,Physics::Atmospheric and Oceanic Physics ,Renewable Energy,Wind Power,Artificial neural networks,Feature selection,Short-term forecast - Abstract
Wind power is the fastest-growing technology among alternative energy production sources. Reliable forecasting of short-term wind power plays a critical role in the acquisition of most of the generated energy. In this study, short-term wind power forecasting is performed using radial basis artificial neural networks, with forecast error and cost minimized by the harmony search algorithm. Experimental results show that we can predict wind power with fewer features and less error by using the harmony search algorithm. A 7% improvement in RMSE has been achieved with the proposed method for short-term wind power prediction.
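For reference, a minimal harmony search in plain Python, minimizing an arbitrary objective (the study applies it to forecast error and feature selection; the parameters below are generic defaults, not those of the paper):

```python
import random

def harmony_search(objective, dim, n_iter=2000, hms=10, hmcr=0.9, par=0.3,
                   bounds=(-5.0, 5.0), seed=0):
    """Minimal harmony search: keep a memory of candidate vectors, build new
    harmonies by memory consideration / pitch adjustment / random selection,
    and replace the worst member whenever the new harmony improves on it."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(n_iter):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-0.1, 0.1)
            else:                              # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```

For feature selection, the same loop can search over binary masks instead of real vectors, scoring each mask by the forecast RMSE of a network trained on the selected features.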
- Published
- 2022
24. TermEnsembler
- Author
-
Vid Podpečan, Senja Pollak, Nada Lavrač, Anže Vavpetič, and Andraž Repar
- Subjects
050101 languages & linguistics ,Computer science ,business.industry ,Communication ,05 social sciences ,Evolutionary algorithm ,02 engineering and technology ,Library and Information Sciences ,computer.software_genre ,File format ,Ensemble learning ,Language and Linguistics ,Domain (software engineering) ,Term (time) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Artificial intelligence ,business ,Language industry ,computer ,Natural language processing - Abstract
This paper describes TermEnsembler, a bilingual term extraction and alignment system utilizing a novel ensemble learning approach to bilingual term alignment. In the proposed system, the processing starts with monolingual term extraction from a language industry standard file type containing aligned English and Slovenian texts. The two separate term lists are then automatically aligned using an ensemble of seven bilingual alignment methods, which are first executed separately and then merged using the weights learned with an evolutionary algorithm. In the experiments, the weights were learned on one domain and tested on two other domains. When evaluated on the top 400 aligned term pairs, the precision of term alignment is over 96%, while the number of correctly aligned multi-word unit terms exceeds 30% when evaluated on the top 400 term pairs.
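The ensemble-merging step can be sketched as a weighted sum of per-method alignment scores (the scores and weights below are illustrative; TermEnsembler learns the weights with an evolutionary algorithm):

```python
def ensemble_align(method_scores, weights):
    """Merge per-method bilingual alignment scores: each method maps a
    (source_term, target_term) pair to a score; the ensemble sums the
    weighted scores and ranks pairs from best to worst."""
    combined = {}
    for w, scores in zip(weights, method_scores):
        for pair, s in scores.items():
            combined[pair] = combined.get(pair, 0.0) + w * s
    return sorted(combined, key=combined.get, reverse=True)
```

A pair favored by a highly weighted method rises in the merged ranking even if weaker methods score it low, which is the behavior the learned weights exploit.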
- Published
- 2022
25. Improving term candidates selection using terminological tokens
- Author
-
Antoni Oliver and Mercè Vàzquez
- Subjects
Computer science ,mètode de filtrat TSR ,tokens terminológicos ,02 engineering and technology ,term candidates ,termes candidats ,Library and Information Sciences ,Security token ,computer.software_genre ,Language and Linguistics ,Ranking (information retrieval) ,Reduction (complexity) ,automatic term extraction ,corpus específicos de dominio ,tokens terminològics ,Natural language processing (Computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Tractament del llenguatge natural (Informàtica) ,unidades terminológicas ,Selection (genetic algorithm) ,términos candidatos ,business.industry ,Communication ,05 social sciences ,Rank (computer programming) ,extractores de terminología ,unitats terminològiques ,terminological tokens ,método de filtrado TSR ,extractors de terminologia ,Term (time) ,Identification (information) ,extracció automàtica de termes ,terminological units ,TBXTools ,Tratamiento del lenguaje natural (Informática) ,020201 artificial intelligence & image processing ,Artificial intelligence ,extracción automática de términos ,0509 other social sciences ,050904 information & library sciences ,business ,Precision and recall ,computer ,domain-specific corpora ,corpus específics de domini ,Natural language processing ,TSR filtering method ,terminology extraction - Abstract
The identification of reliable terms from domain-specific corpora using computational methods is a task that has to be validated manually by specialists, which is a highly time-consuming activity. To reduce this effort and improve term candidate selection, we implemented the Token Slot Recognition method, a filtering method based on terminological tokens which is used to rank term candidates extracted from domain-specific corpora. This paper presents the implementation of our term candidate filtering method within linguistic and statistical approaches to automatic term extraction, using several domain-specific corpora in different languages. We observed that the filtering method outperforms term candidate selection by raw frequency, ranking a higher number of terms at the top of the term candidate list; for statistical term extraction, the improvement is between 15% and 25% in both precision and recall. Our analyses further revealed a reduction in the number of term candidates to be validated manually by specialists. In conclusion, the number of term candidates extracted automatically from domain-specific corpora has been reduced significantly using the Token Slot Recognition filtering method, so term candidates can be easily and quickly validated by specialists.
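A minimal sketch of Token Slot Recognition-style filtering: score each candidate by the share of its tokens that are known terminological tokens, then rank (the token inventory and scoring details here are illustrative, not the method's exact formulation):

```python
def tsr_score(candidate, term_tokens):
    """Fraction of a candidate's tokens that are known terminological tokens."""
    toks = candidate.lower().split()
    return sum(t in term_tokens for t in toks) / len(toks)

def rank_candidates(candidates, term_tokens):
    """Rank extracted term candidates so token-rich terms float to the top,
    leaving fewer low-value candidates for specialists to validate."""
    return sorted(candidates, key=lambda c: tsr_score(c, term_tokens),
                  reverse=True)
```

Candidates whose tokens never occur in validated terms sink to the bottom and can be cut before manual validation, which is the source of the reported reduction in workload.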
- Published
- 2022
26. Understanding and Predicting the Short-Term Passenger Flow of Station-Free Shared Bikes: A Spatiotemporal Deep Learning Approach
- Author
-
Ziyan Feng, Huijun Sun, Xu Bao, Guang Wang, Ximing Chang, and Jianjun Wu
- Subjects
Flow (mathematics) ,Computer science ,business.industry ,Mechanical Engineering ,Deep learning ,Automotive Engineering ,Artificial intelligence ,business ,Machine learning ,computer.software_genre ,computer ,Computer Science Applications ,Term (time) - Published
- 2022
27. Long-Term Urban Traffic Speed Prediction With Deep Learning on Graphs
- Author
-
James J.Q. Yu, Christos Markos, and Shiyao Zhang
- Subjects
Process (engineering) ,business.industry ,Computer science ,Mechanical Engineering ,Deep learning ,Logistics & Transportation ,Machine learning ,computer.software_genre ,0801 Artificial Intelligence and Image Processing, 0905 Civil Engineering, 1507 Transportation and Freight Services ,Computer Science Applications ,Term (time) ,Information extraction ,Software deployment ,Automotive Engineering ,Granularity ,Artificial intelligence ,Architecture ,business ,Focus (optics) ,computer - Abstract
Traffic speed prediction is among the foundations of advanced traffic management, and the gradual deployment of Internet of Things sensors is empowering data-driven approaches to the prediction. Nonetheless, existing research mainly focuses on short-term traffic prediction, covering forecasts up to one hour into the future. Previous long-term prediction approaches suffer from error accumulation or exposure bias, or generate future data of low granularity. In this paper, a novel data-driven, long-term, high-granularity traffic speed prediction approach is proposed based on recent developments in graph deep learning. The proposed model utilizes a predictor-regularizer architecture to embed the spatial-temporal data correlation of traffic dynamics in the prediction process. Graph convolutions are widely adopted in both sub-networks for geometrical latent information extraction and reconstruction. To assess the performance of the proposed approach, comprehensive case studies are conducted on real-world datasets, and consistent improvements over baselines can be observed. This work is among the pioneering efforts on network-wide long-term traffic speed prediction. The design principles of the proposed approach can serve as a reference point for future transportation research leveraging deep learning.
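The graph-convolution building block both sub-networks rely on can be sketched with the standard propagation rule (a generic sketch, not the paper's exact architecture):

```python
import numpy as np

def graph_conv(adj, feats, weight):
    """One graph-convolution layer, H' = D^{-1/2}(A+I)D^{-1/2} H W:
    add self-loops, symmetrically normalize by node degree, then mix
    each node's features with its road-network neighbors'."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight
```

Stacking such layers lets speed information propagate along the road graph, which is how spatial correlation enters the long-term forecast.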
- Published
- 2022
28. Quantized dissipative control based on T–S fuzzy model for wind generation systems
- Author
-
Xiao Cai, Jun Wang, Kaibo Shi, Tingting Jiang, and Shouming Zhong
- Subjects
Wind power ,business.industry ,Computer science ,Applied Mathematics ,Control (management) ,Fuzzy model ,Function (mathematics) ,Fuzzy logic ,Computer Science Applications ,Term (time) ,Control and Systems Engineering ,Control theory ,Dissipative system ,Electrical and Electronic Engineering ,business ,Instrumentation - Abstract
In this paper, we address the extended dissipativity (ED) performance of delayed T–S fuzzy model (TSFM)-based wind generation systems (WGSs). First, the concept of a coupled leakage time-varying delay (CLTVD) is proposed to construct a more general TSFM. Second, a relaxed term with a time-delay-product function (TDPF) is introduced. Then, a suitable Lyapunov-Krasovskii functional (LKF) is constructed, which can handle the delay and its derivative. Third, by using valid integral inequalities, new stabilization criteria are established. Meanwhile, the desired fuzzy quantized control with CLTVD is designed. Finally, simulation results are given to show the validity and superiority of the derived results.
- Published
- 2022
29. Scenario and Sensitivity Based Stability Analysis of the High Renewable Future Grid
- Author
-
Ahmad Shabir Ahmadyar, Gregor Verbic, Mehdi Garmroodi, Hesamodin Marzooghi, David J. Hill, and Ruidong Liu
- Subjects
Computer science ,business.industry ,Stability (learning theory) ,Energy Engineering and Power Technology ,System stability ,Grid ,Stability assessment ,Term (time) ,Renewable energy ,Reliability engineering ,Electric power system ,Sensitivity (control systems) ,Electrical and Electronic Engineering ,business - Abstract
It can be expected that the power systems of the future will be significantly different from today's, especially due to increasing renewable energy sources (RESs), storage systems, and price-responsive users leading to large uncertainty and complexity. The operation of these future grids (FGs) at levels of renewable energy approaching 100% will require all the usual stability analyses, along with new issues, in a much more complex situation than has been encountered in the past. In fact, how close we can get to this desired level will likely depend on the assessed stability limits. Therefore, in this study, we use a novel scenario-sensitivity-contingency based framework to evaluate system stability along possible evolution pathways towards high-renewable FGs. As a case study, we carry out our analyses based on proposed future scenarios and sensitivities for the Australian FG. Using a simulation platform that encompasses market simulation, load flow calculation and stability assessment, the impact of grid strength, level of prosumers, and utility storage on the stability of the FG is studied, and quantified indices for long-term stability are devised. The results of this study enable us to address the underlying stability issues of FGs.
- Published
- 2022
30. Deep Learning-Based Incorporation of Planar Constraints for Robust Stereo Depth Estimation in Autonomous Vehicle Applications
- Author
-
Reza Hoseinnezhad, Alireza Bab-Hadiashar, Ruwan Tennakoon, and Weiqin Chuah
- Subjects
Pixel ,Plane (geometry) ,Computer science ,business.industry ,Mechanical Engineering ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Convolutional neural network ,Computer Science Applications ,Term (time) ,Planar ,Automotive Engineering ,Benchmark (computing) ,Computer vision ,Artificial intelligence ,Affine transformation ,business - Abstract
In autonomous vehicles, depth information for the environment surrounding the vehicle is commonly extracted using time-of-flight (ToF) sensors such as LiDARs and RADARs. These sensors have limitations that may substantially degrade the quality and utility of the depth information. An alternative solution is depth estimation from stereo pairs. However, stereo matching and depth estimation often fail in ill-posed regions, including areas with repetitive patterns or textureless surfaces, which are commonly found on planar surfaces. This paper focuses on designing an efficient deep-learning framework for stereo depth estimation that is robust against such ill-posed regions. Observing that the disparities of all pixels belonging to a planar area (scene plane) viewed by two rectified stereo images can be described by an affine transformation, our proposed method predicts pixel-wise affine transformation parameters based on the depth information encoded in the aggregated cost volume. We also introduce a propagation term which enforces that all pixels belonging to the same scene plane are transformed using the same parameters. Disparity can then be computed by multiplying the predicted affine parameters with the corresponding pixel locations. The proposed method was evaluated on several benchmark datasets. We obtain competitive results while reducing the processing time of a common convolutional neural network (CNN) for stereo matching by 50%. Analysis of the findings shows that our method can produce reliable results in the ill-posed regions that challenge current state-of-the-art methods.
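The core observation can be sketched directly: given per-pixel plane parameters (a, b, c), the disparity of a pixel on that scene plane is an affine function of its image coordinates (the array layout is illustrative):

```python
import numpy as np

def disparity_from_affine(params, height, width):
    """Recover a disparity map from per-pixel plane parameters (a, b, c):
    for a rectified stereo pair, a scene plane gives d(u, v) = a*u + b*v + c
    at pixel column u and row v. params has shape (height, width, 3)."""
    v, u = np.mgrid[0:height, 0:width]
    a, b, c = params[..., 0], params[..., 1], params[..., 2]
    return a * u + b * v + c
```

Textureless planar regions thus need only three consistent parameters rather than a per-pixel match, which is why the propagation term that shares parameters across a plane stabilizes the ill-posed regions.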
- Published
- 2022
31. Improving multi-target cooperative tracking guidance for UAV swarms using multi-agent reinforcement learning
- Author
-
Jie Li, Lincheng Shen, Zhihong Liu, and Wenhong Zhou
- Subjects
Computational complexity theory ,Artificial neural network ,Computer science ,business.industry ,Mechanical Engineering ,Aerospace Engineering ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Dot product ,Pointwise mutual information ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,Regularization (mathematics) ,Term (time) ,Reinforcement learning ,Artificial intelligence ,business ,Reciprocal - Abstract
Multi-Target Tracking Guidance (MTTG) in unknown environments has great potential value in applications for Unmanned Aerial Vehicle (UAV) swarms. Although Multi-Agent Deep Reinforcement Learning (MADRL) is a promising technique for learning cooperation, most existing methods cannot scale well to decentralized UAV swarms due to their computational complexity or global information requirements. This paper proposes a decentralized MADRL method using the maximum reciprocal reward to learn cooperative tracking policies for UAV swarms. The method reshapes each UAV's reward with a regularization term defined as the dot product of the reward vector of all neighbor UAVs and the corresponding dependency vector between the UAV and its neighbors. The dependence between UAVs can be directly captured by a Pointwise Mutual Information (PMI) neural network without complicated aggregation statistics. Then, the experience-sharing Reciprocal Reward Multi-Agent Actor-Critic (MAAC-R) algorithm is proposed to learn the cooperative sharing policy for all homogeneous UAVs. Experiments demonstrate that the proposed algorithm improves UAV cooperation more effectively than the baseline algorithms and stimulates a rich form of cooperative tracking behaviors in UAV swarms. Moreover, the learned policy scales better to other scenarios with more UAVs and targets.
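The reward reshaping itself can be sketched in one line: each UAV adds to its own reward the dot product of its neighbors' rewards with the dependency vector (here passed in directly; in the paper the dependencies are estimated by a PMI network):

```python
import numpy as np

def reciprocal_reward(own_reward, neighbor_rewards, dependency):
    """Reshape a UAV's reward with a regularization term: the dot product
    of the neighbors' reward vector and the dependency vector between
    this UAV and those neighbors."""
    return own_reward + float(np.dot(dependency, neighbor_rewards))
```

A UAV whose actions strongly influence a neighbor (high dependency weight) is credited with part of that neighbor's reward, which pushes decentralized agents toward cooperative tracking.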
- Published
- 2022
32. Long-Term IaaS Selection Using Performance Discovery
- Author
-
Athman Bouguettaya, Sheik Mohammad Mostakim Fattah, and Sajib Mistry
- Subjects
FOS: Computer and information sciences ,Skyline ,Information Systems and Management ,Computer Networks and Communications ,Computer science ,business.industry ,Quality of service ,Workload ,Machine learning ,computer.software_genre ,Computer Science Applications ,Term (time) ,Set (abstract data type) ,Computer Science - Distributed, Parallel, and Cluster Computing ,Hardware and Architecture ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Artificial intelligence ,business ,computer ,Selection (genetic algorithm) - Abstract
We propose a novel framework to select IaaS providers according to a consumer's long-term performance requirements. The proposed framework leverages free short-term trials to discover the unknown QoS performance of IaaS providers. We design a temporal skyline-based filtering method to select candidate IaaS providers for the short-term trials. A novel cooperative long-term QoS prediction approach is developed that utilizes the past trial experiences of similar consumers using a workload replay technique. We propose a new trial workload generation model that estimates a provider's long-term performance in the absence of past trial experiences. The confidence of the prediction is measured based on the trial experience of the consumer. A set of experiments is conducted on real-world datasets to evaluate the proposed framework., Comment: 14 pages, accepted and to appear in IEEE Transactions on Services Computing
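The skyline filtering step can be sketched as Pareto dominance over QoS vectors (a single-time-slot sketch; the paper's temporal version and its QoS dimensions are richer):

```python
def skyline(providers):
    """Keep providers that no other provider dominates, where a QoS vector
    dominates another if it is at least as good on every dimension and
    strictly better on at least one (higher is better here).
    A temporal variant would apply this per time slot."""
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in providers
            if not any(dominates(q, p) for q in providers if q is not p)]
```

Only the non-dominated providers proceed to the short-term trials, which keeps the trial budget focused on plausible long-term candidates.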
- Published
- 2022
33. Forward to the Past: Short-Term Effects of the Rent Freeze in Berlin
- Author
-
Anja M. Hahn, Konstantin A. Kholodilin, and Sofie R. Waltl
- Subjects
History ,Labour economics ,Polymers and Plastics ,Scope (project management) ,Supply disruption ,business.industry ,media_common.quotation_subject ,Strategy and Management ,Economic rent ,Urban policy ,legal uncertainty ,Management Science and Operations Research ,[SHS.ECO]Humanities and Social Sciences/Economics and Finance ,Industrial and Manufacturing Engineering ,Term (time) ,Renting ,first-generation rent control ,urban policy ,rent freeze ,supply disruptions ,[SHS.GESTION]Humanities and Social Sciences/Business administration ,Substitution effect ,Business and International Management ,business ,media_common - Abstract
In 2020, Berlin introduced a rigorous rent-control policy responding to soaring prices by capping rents: the Mietendeckel (rent freeze). The German Constitutional Court revoked the policy only one year later. Although the policy was successful in lowering rents while it lasted, its consequences for Berlin's rental market and nearby markets are not clear a priori. This article evaluates the short-term causal supply-side effects in terms of prices, quantities, and landlords' strategic behavior. We develop a theoretical framework capturing the key features of first-generation rent control policies and Berlin-specific aspects. Using a rich pool of detailed rent advertisements, predictions are tested, and further empirical causal inference techniques are applied to compare price trajectories of dwellings inside and outside the policy's scope. Mechanically, advertised rents drop significantly upon the policy's enactment. A substantial rent gap emerges along Berlin's administrative border, and rapidly growing rents are observed in Berlin's (unregulated) adjacent municipalities. Landlords adopted a hedging strategy to insure themselves against the risk of contractually long-term fixed low rents following a potentially unconstitutional law. Whereas this hedge was beneficial for landlords, the risk was borne entirely by tenants. Moreover, the number of available properties for rent dropped significantly, a share of which appears to be permanently lost to the rental sector. This hampers a successful housing search for first-time renters and people moving within the city. Overall, negative consequences for renters appear to outweigh positive ones. This paper was accepted by Victoria Ivashina, finance. Supplemental Material: The online appendix and data are available at https://doi.org/10.1287/mnsc.2023.4775.
- Published
- 2023
34. Helping People Living with HIV Learn Skills to Manage Their Care
- Author
-
Kevin Fiscella, Jonathan Tobin, Subrina Farah, Wendi Cross, Jennifer Carroll, and Amneris Luque
- Subjects
medicine.medical_specialty ,business.industry ,Internal medicine ,medicine.medical_treatment ,Medicine ,Smoking cessation ,business ,Nicotine replacement ,Term (time) - Published
- 2023
35. Efficiency and reliability of sewage purification in long-term exploitation of the municipal wastewater treatment plant with activated sludge and hydroponic system
- Author
-
Michał Marzec and Karolina Jóźwiakowska
- Subjects
Activated sludge ,Wastewater ,Waste management ,business.industry ,Environmental science ,Sewage ,Sewage treatment ,General Medicine ,business ,Reliability (statistics) ,Term (time) - Published
- 2023
36. Elevating the Value of Health to Guide Decision-Making in the Long Term
- Author
-
Roger D. Vaughan and Sandro Galea
- Subjects
Male ,Adult ,2019-20 coronavirus outbreak ,Coronavirus disease 2019 (COVID-19) ,Databases, Factual ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,Decision Making ,Disease-Free Survival ,Sex Factors ,Life Expectancy ,Medicine ,Humans ,Disabled Persons ,Aged ,Aged, 80 and over ,Research & Analysis ,business.industry ,Public Health, Environmental and Occupational Health ,Health Status Disparities ,Middle Aged ,United States ,Term (time) ,Opinions, Ideas, & Practice ,Life expectancy ,Female ,business ,Value (mathematics) ,Demography - Abstract
Objectives. To estimate total life expectancy (TLE), disability-free life expectancy (DFLE), and disabled life expectancy (DLE) by US state for women and men aged 25 to 89 years and examine the cross-state patterns. Methods. We used data from the 2013–2017 American Community Survey and the 2017 US Mortality Database to calculate state-specific TLE, DFLE, and DLE by gender for US adults and hypothetical worst- and best-case scenarios. Results. For men and women, DFLEs and DLEs varied widely by state. Among women, DFLE ranged from 45.8 years in West Virginia to 52.5 years in Hawaii, a 6.7-year gap. Men had a similar range. The gap in DLEs across states was 2.4 years for women and 1.6 years for men. The correlation among DFLE, DLE, and TLE was particularly strong in southern states. The South is doubly disadvantaged: residents have shorter lives and spend a greater proportion of those lives with disability. Conclusions. The stark variation in DFLE and DLE across states highlights the large health inequalities present today across the United States, which have significant implications for individuals’ well-being and US states’ financial costs and medical care burden.
- Published
- 2023
37. Short- and Long-Term Health Consequences of Gaps in Health Insurance Coverage among Young Adults
- Author
-
Amber Gautam, Dmitry Tumin, and Gabrielle Horne
- Subjects
Medically Uninsured ,Insurance, Health ,Adolescent ,Health consequences ,Leadership and Management ,business.industry ,Health Policy ,Public Health, Environmental and Occupational Health ,Health Services Accessibility ,Insurance Coverage ,United States ,Preventive care ,Term (time) ,Young Adult ,Cross-Sectional Studies ,Environmental health ,Health insurance ,Humans ,Medicine ,Longitudinal Studies ,Young adult ,business - Abstract
In cross-sectional data, gaps in health insurance coverage are associated with worse health and lower utilization of preventive services. The authors investigated whether these associations persisted 2-6 years after disruption of insurance coverage in a cohort of young adults. Data from the National Longitudinal Survey of Youth 1997, a longitudinal cohort study of participants who were ages 13-17 years in 1997, were analyzed. Annual interview data from 2007 through 2017 were included and analyzed in 2021. Health outcomes (general self-rated health, annual preventive care use, and work-related health limitations) in each year were regressed on insurance coverage status, classified as: continuous private coverage, continuous public coverage, gap in coverage, or year-round lack of coverage. In a series of models, insurance coverage status was lagged by 2, 4, or 6 years to capture long-term associations with health outcomes. The analytic sample included 8197 young adults contributing 49,580 observations. Contemporaneous gaps in coverage were associated with 17% lower odds of reporting better self-rated health (odds ratio [OR]: 0.83, 95% confidence interval [CI]: 0.78, 0.88
- Published
- 2022
38. Centroid Estimation With Guaranteed Efficiency: A General Framework for Weakly Supervised Learning
- Author
-
Jane J. You, Jian Yang, Chen Gong, and Masashi Sugiyama
- Subjects
Computer Science::Machine Learning ,Computer science ,02 engineering and technology ,Minimum-variance unbiased estimator ,Artificial Intelligence ,Hinge loss ,0202 electrical engineering, electronic engineering, information engineering ,business.industry ,Applied Mathematics ,Supervised learning ,Centroid ,Estimator ,Term (time) ,Benchmarking ,ComputingMethodologies_PATTERNRECOGNITION ,Efficiency ,Computational Theory and Mathematics ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Supervised Machine Learning ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Algorithms ,Software - Abstract
In this paper, we propose a general framework termed "Centroid Estimation with Guaranteed Efficiency" (CEGE) for Weakly Supervised Learning (WSL) with incomplete, inexact, and inaccurate supervision. The core of our framework is to devise an unbiased and statistically efficient risk estimator that is applicable to various forms of weak supervision. Specifically, by decomposing the loss function (e.g., the squared loss and hinge loss) into a label-independent term and a label-dependent term, we discover that only the latter is influenced by the weak supervision and is related to the centroid of the entire dataset. Therefore, by constructing two auxiliary pseudo-labeled datasets with synthesized labels, we derive unbiased estimates of the centroid based on the two auxiliary datasets, respectively. These two estimates are further linearly combined with a properly decided coefficient which makes the final combined estimate not only unbiased but also statistically efficient. This is better than some existing methods that consider only the unbiasedness of estimation and ignore statistical efficiency. The good statistical efficiency of the derived estimator is guaranteed, as we theoretically prove that it acquires the minimum variance when estimating the centroid. Extensive experiments on a large number of benchmark datasets demonstrate that our CEGE generally obtains better performance than the existing approaches related to typical WSL problems, including semi-supervised learning, positive-unlabeled learning, multiple instance learning, and label noise learning.
- Published
- 2022
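The key step in the abstract above, linearly combining two unbiased estimates with a coefficient chosen for statistical efficiency, can be sketched under simplifying assumptions: two independent unbiased estimators with known variances and synthetic data (none of the numbers come from the paper). For independent estimators, the minimum-variance weight is the classical w = v2 / (v1 + v2).

```python
import numpy as np

rng = np.random.default_rng(0)
true_centroid = 2.0

# Two independent unbiased estimates of the same quantity with different
# variances, standing in for the estimates derived from the two auxiliary
# pseudo-labeled datasets in the paper.
est1 = true_centroid + rng.normal(0.0, 1.0, 10_000)   # variance 1.0
est2 = true_centroid + rng.normal(0.0, 2.0, 10_000)   # variance 4.0

# The combination w*est1 + (1-w)*est2 is unbiased for any w; its variance
# w^2*v1 + (1-w)^2*v2 is minimized at w = v2 / (v1 + v2).
v1, v2 = 1.0, 4.0
w = v2 / (v1 + v2)
combined = w * est1 + (1 - w) * est2
```

The combined estimate stays unbiased while its variance drops below that of either input estimator, which is the "guaranteed efficiency" idea in miniature.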
39. Effects of long-term exposure to the low-earth orbit environment on drag augmentation systems
- Author
-
S. A. Impey, James Beck, Stephen Hobbs, Ian Holbrough, Adrianus I. Aria, Jennifer Kingston, and Zaria Serfontein
- Subjects
020301 aerospace & aeronautics ,Spacecraft ,Atomic Oxygen Undercutting ,business.industry ,Process (engineering) ,Aerospace Engineering ,02 engineering and technology ,Aluminised Kapton ,01 natural sciences ,Term (time) ,Material Degradation ,0203 mechanical engineering ,Low earth orbit ,Drag ,0103 physical sciences ,Space Debris ,Orbit (dynamics) ,Environmental science ,Low Earth Orbit ,Drag Sails ,Aerospace engineering ,business ,010303 astronomy & astrophysics ,Space debris - Abstract
Spacecraft in low-Earth orbit are exposed to environmental threats which can lead to material degradation and component failures. The presence of atomic oxygen and collisions with orbital debris have detrimental effects on the structures, thus affecting their performance. Cranfield University has developed a family of drag augmentation systems (DAS) for end-of-life de-orbit of satellites, addressing the space debris challenge and ensuring that satellites operate responsibly and sustainably. De-orbit devices are stowed on-orbit for the duration of the mission lifetime and, once deployed, must withstand this harsh low-Earth environment until re-entry, a process which can take several years. The DAS' deployable aluminised Kapton sails are particularly susceptible to undercutting by atomic oxygen. In preparation for commercialising the DAS, Cranfield University and Belstead Research Ltd. have submitted several joint proposals to better understand the degradation process of the drag sail materials and to qualify the materials for the specific application of drag sails in low Earth orbit (LEO). This paper will outline the proposals and the expected benefits from the projects. Additionally, collisions with debris could accelerate the degradation of the system and generate additional debris. This paper will discuss a future ESABASE2 risk assessment study aiming to quantify the probability of collisions between the deployed drag sail and orbital debris. The atmospheric models required to simulate the aforementioned risks are complex and often fail to accurately predict the performance or degradation observed in the space environment. A previous UKSA Pathfinder project highlighted this issue when different atmospheric models with varying levels of solar activity yielded drastically different re-entry times. Since Cranfield University has two deployed drag sails in orbit, previous de-orbit analysis performed using STELA and DRAMA will be updated and the simulations will be compared to actual data. This paper will conclude with a summary of the different on-going research projects at Cranfield University related to commercialising the DAS family. This research will benefit the wider space community by expanding the understanding of the effects of long-term exposure on certain materials, as well as improving the validity of future low-Earth atmospheric models.
- Published
- 2022
40. Missing Air Pollution Data Recovery Based on Long-Short Term Context Encoder
- Author
-
Yangwen Yu, Jacqueline C.K. Lam, and Victor O. K. Li
- Subjects
Information Systems and Management ,business.industry ,Computer science ,Air pollution ,medicine ,Context (language use) ,Environmental economics ,business ,medicine.disease_cause ,Encoder ,Information Systems ,Data recovery ,Term (time) - Published
- 2022
41. Unsupervised Health Indicator Construction by a Novel Degradation-Trend-Constrained Variational Autoencoder and Its Applications
- Author
-
Chen Dingliang, Jianghong Zhou, and Yi Qin
- Subjects
Computer science ,business.industry ,Reliability (computer networking) ,Contrast (statistics) ,Pattern recognition ,Construct (python library) ,Rotary machine ,Autoencoder ,Computer Science Applications ,Term (time) ,Control and Systems Engineering ,Hidden variable theory ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Degradation (telecommunications) - Abstract
The health indicator (HI) affects the accuracy and reliability of the remaining useful life (RUL) prediction model. The hidden variables of a variational autoencoder (VAE) can represent the HI values for a life-cycle dataset with an obvious degradation trend. However, for an irregular dataset from a rotary machine, it is still a great challenge to construct an HI that effectively represents the machinery degradation tendency. Therefore, this work proposes a novel degradation-trend-constrained VAE (DTC-VAE) to construct an HI vector with a distinct degradation trend. Firstly, multi-dimensional time-domain and frequency-domain characteristics are calculated from the collected vibration samples. Secondly, a new degradation-constraint loss term is proposed and introduced into the VAE to construct the DTC-VAE. Thirdly, with the multi-dimensional features and the DTC-VAE, various HIs can be generated without supervision. The proposed method is applied to construct the HI vectors of bearing life-cycle datasets and gear fatigue datasets, and then macroscopic-microscopic-attention-based LSTM (MMALSTM) is used to predict the corresponding RULs with the constructed HIs. Several comparison experiments show that the proposed unsupervised HI construction approach is superior to other typical methods, and that the obtained HI vectors are more suitable for RUL prediction.
- Published
- 2022
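The degradation-constraint idea above, penalizing health-indicator sequences that fail to trend monotonically over the life cycle, can be sketched independently of the VAE. The penalty form below (sum of positive first differences of an indicator that should decay) is an illustrative assumption, not the paper's exact loss term.

```python
import numpy as np

def trend_penalty(hi):
    """Penalize upward bumps in a health indicator that should decrease
    monotonically over the life cycle: sum the positive first differences.
    (Illustrative form; the DTC-VAE adds an analogous constraint term
    to the VAE training loss.)"""
    diffs = np.diff(np.asarray(hi, dtype=float))
    return float(np.sum(np.clip(diffs, 0.0, None)))

# A monotone HI sequence incurs no penalty; a bump upward is penalized.
clean_penalty = trend_penalty([1.0, 0.8, 0.6, 0.3])   # strictly decreasing
bump_penalty = trend_penalty([1.0, 0.8, 0.9, 0.3])    # one upward bump of 0.1
```

Added to a reconstruction loss with a weight, such a term steers the latent HI toward sequences with a distinct degradation trend.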
42. Regional Refined Long-Term Predictions Method of Usable Frequency for HF Communication Based on Machine Learning Over Asia
- Author
-
Wenxing An, Jian Wang, and Cheng Yang
- Subjects
Computer science ,business.industry ,Reliability (computer networking) ,Broadcasting ,USable ,Communications system ,Machine learning ,computer.software_genre ,Term (time) ,Coupling (computer programming) ,Maximum usable frequency ,Wireless ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,computer - Abstract
Due to the unique advantages of long distance, no relays, and flexible deployment, HF communication plays a vital role in military communication, disaster relief, and global broadcasting. To satisfy the spectrum planning requirements of the next-generation intelligent HF communication system, machine learning is introduced to develop a long-term prediction method for the usable working frequency of HF wireless communication. Specifically, the refined mapping model of the maximum usable frequency (MUF) propagation factor for one hop on the F2 layer is first reconstructed using a statistical machine learning method. Then, new mapping models for the conversion factors of the optimum working frequency (OWF) and the highest probable frequency (HPF) are proposed, using fine-grained solar activity parameters coupled with two geomagnetic activity parameters. The proposed model has higher prediction accuracy for the MUF, OWF, and HPF over Asia. Compared with the ITU recommended model, the root-mean-square errors of MUF, OWF, and HPF are reduced by 1.18 MHz, 1.64 MHz, and 1.06 MHz, and the accuracies are improved by 10.89%, 15.47%, and 9.10%, respectively. The proposed model enables intelligent frequency planning for HF communication and has great potential for improving HF communication quality, reliability, and efficiency.
- Published
- 2022
43. Neighborhood Preserving and Weighted Subspace Learning Method for Drift Compensation in Gas Sensor
- Author
-
Wanfeng Shang, Xinyu Wu, Zhengkun Yi, and Tiantian Xu
- Subjects
business.industry ,Computer science ,Gaussian ,020208 electrical & electronic engineering ,Pattern recognition ,02 engineering and technology ,Function (mathematics) ,Computer Science Applications ,Term (time) ,Weighting ,Human-Computer Interaction ,symbols.namesake ,ComputingMethodologies_PATTERNRECOGNITION ,Discriminative model ,Control and Systems Engineering ,Classifier (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Software ,Distribution (differential geometry) ,Subspace topology - Abstract
This article presents a novel discriminative subspace-learning-based unsupervised domain adaptation (DA) method for the gas sensor drift problem. Many existing subspace learning approaches assume that the gas sensor data follow a certain distribution, such as a Gaussian, an assumption that often does not hold in real-world applications. In this article, we address this issue by proposing a novel discriminative subspace learning method for DA with neighborhood preserving (DANP). We introduce two novel terms, the intraclass graph term and the interclass graph term, to embed the graphs into DA. Besides, most existing methods ignore the influence of the subspace learning on the classifier design. To tackle this issue, we present a novel classifier design method (DANP+) that incorporates the DA ability of the subspace into the learning of the classifier. A weighting function is introduced to assign different weights to different dimensions of the subspace. We have verified the effectiveness of the proposed methods by conducting experiments on two public gas sensor datasets in comparison with state-of-the-art DA methods.
- Published
- 2022
44. Modeling long-term video semantic distribution for temporal action proposal generation
- Author
-
Sicheng Zhao, Xiaoshuai Sun, Tingting Han, and Jun Yu
- Subjects
Computer science ,business.industry ,Cognitive Neuroscience ,Context (language use) ,ENCODE ,Machine learning ,computer.software_genre ,Semantics ,Computer Science Applications ,Term (time) ,Action (philosophy) ,Artificial Intelligence ,Benchmark (computing) ,Embedding ,Segmentation ,Artificial intelligence ,business ,computer - Abstract
Video temporal segmentation plays a vital role in video analysis, since many higher-level computer vision tasks rely on it. Some recent efforts have been dedicated to generating temporal action proposals for long and untrimmed videos, which requires methods that generate accurate boundaries for video semantics. In this paper, we propose a novel and efficient Temporal Distribution Network (TDN) to model the long-term distribution of video semantic units (a video dictionary). Firstly, we encode the semantics and context relations of video segments with a boundary-specified video embedding method. Then, based on temporal convolutional layers, the TDN enumerates all possible temporal locations in one pass and generates proposals with high action confidence scores by capturing the long-term distributions of video semantics. We validate our method on temporal action proposal generation and action detection tasks. Experimental results on two benchmark datasets, THUMOS14 and ActivityNet-1.3, show that the proposed method significantly outperforms state-of-the-art approaches. Our model obtains high-quality action proposals at a much faster speed.
- Published
- 2022
45. ACCURACY OF ULTRASOUND VERSUS CLINICAL FETAL WEIGHT ESTIMATION AT TERM WITH ACTUAL BIRTH WEIGHT IN KENYATTA NATIONAL HOSPITAL
- Author
-
Koigi Kamau and Daniel K. Wanjaria
- Subjects
Estimation ,Pediatrics ,medicine.medical_specialty ,business.industry ,Obstetrics ,Birth weight ,Ultrasound ,Fetal weight ,Clinical method ,Term (time) ,Correlation ,medicine ,Population study ,business - Abstract
Purpose: The purpose of this study was to correlate fetal weight estimation by ultrasound and clinical methods with actual birth weight in KNH. Methodology: This is a prospective comparative study. The design was suitable because it enabled comparison of the predictive value, sensitivity, and specificity in estimating fetal weight, which is known after birth. The study area was the KNH obstetric wards. The study population was all pregnant women admitted to the obstetric wards for elective caesarean delivery, and the study period was February-March 2016. Data were analysed using SPSS version 20. Categorical variables were presented as proportions (in tables, bar graphs, or pie charts). Continuous variables were summarized as means or medians and presented in table form. Results: The findings show that the correlation between actual weight and ultrasound-estimated weight was significant (r=0.65, p
- Published
- 2022
46. Forecasting with Economic News
- Author
-
Sergio Consoli, Sebastiano Manzan, and Luca Barbaglia
- Subjects
Statistics and Probability ,History ,Economics and Econometrics ,Computer Science - Computation and Language ,Polymers and Plastics ,business.industry ,Computer Science - Artificial Intelligence ,Sentiment analysis ,Distribution (economics) ,Statistics - Applications ,Industrial and Manufacturing Engineering ,Term (time) ,Newspaper ,Econometrics ,Business cycle ,Economics ,Business and International Management ,Time series ,Statistics, Probability and Uncertainty ,Proxy (statistics) ,business ,Computer Science - Computational Engineering, Finance, and Science ,Economic forecasting ,Social Sciences (miscellaneous) - Abstract
The goal of this paper is to evaluate the informational content of sentiment extracted from news articles about the state of the economy. We propose a fine-grained, aspect-based sentiment analysis that has two main characteristics: 1) we consider only the text in the article that is semantically dependent on a term of interest (aspect-based), and 2) we assign a sentiment score to each word based on a dictionary that we develop for applications in economics and finance (fine-grained). Our dataset includes six large US newspapers, for a total of over 6.6 million articles and 4.2 billion words. Our findings suggest that several measures of economic sentiment closely track business cycle fluctuations and that they are relevant predictors for four major macroeconomic variables. We find significant improvements in forecasting when sentiment is considered along with macroeconomic factors. In addition, we find that sentiment helps explain the tails of the probability distribution across several macroeconomic variables. Comment: 46 pages, 11 figures, to be published in Journal of Business & Economic Statistics
- Published
- 2022
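The fine-grained, aspect-based scoring described in the abstract above can be illustrated with a toy sketch. The sentiment dictionary, the sentences, and the proximity-window heuristic below are all hypothetical stand-ins: the paper uses a purpose-built economics dictionary and dependency parsing to decide which words relate to the aspect term.

```python
# Toy sentiment dictionary (hypothetical; the paper builds a fine-grained
# dictionary tailored to economics and finance).
SENTIMENT = {"strong": 1.0, "growth": 0.5, "weak": -1.0, "decline": -0.5}

def aspect_sentiment(sentence, aspect, window=3):
    """Score only words near the aspect term, as a crude stand-in for
    'semantically dependent on the term of interest' (the paper uses
    dependency parsing rather than a fixed token window)."""
    tokens = sentence.lower().split()
    if aspect not in tokens:
        return 0.0
    i = tokens.index(aspect)
    nearby = tokens[max(0, i - window): i + window + 1]
    return sum(SENTIMENT.get(t, 0.0) for t in nearby)

score = aspect_sentiment("analysts expect strong employment growth this quarter",
                         aspect="employment")
```

Aggregating such per-article scores over time yields the kind of sentiment time series the paper evaluates as macroeconomic predictors.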
47. Forecasting solar energy consumption using a fractional discrete grey model with time power term
- Author
-
Yi Wang and Huiping Wang
- Subjects
Consumption (economics) ,Mathematical optimization ,Economics and Econometrics ,Environmental Engineering ,business.industry ,Environmental science ,Environmental Chemistry ,Management, Monitoring, Policy and Law ,Solar energy ,business ,General Business, Management and Accounting ,Term (time) ,Power (physics) - Abstract
Accurate prediction of energy consumption is an important basis for policymakers to formulate and improve energy policies and measures. In this paper, a new grey prediction model, FDGM(1,1,t^α), is proposed. The grey wolf optimizer (GWO) is used to optimize the fractional order r and the time power α in the model. A numerical example and four sets of solar energy consumption data (France, South Korea, the OECD, and the Asia Pacific region) are used to establish the FDGM(1,1,t^α) model. Based on the idea of metabolism, the solar energy consumption of the above four economies in the next 10 years is predicted. The results show that the FDGM(1,1,t^α) model is more reliable and effective than the other seven grey models. From 2020 to 2029, solar energy consumption in South Korea, the OECD, and the Asia Pacific region will gradually increase; solar energy consumption in France will slowly increase in the next few years and will gradually decrease after reaching a peak in 2026. The grey prediction model FDGM(1,1,t^α) proposed in this paper has strong adaptability and can be used not only for the prediction of solar energy consumption but also for the prediction of other energy sources.
- Published
- 2022
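For context, the classical GM(1,1) grey model that FDGM(1,1,t^α) generalizes (by adding a fractional accumulation order and a time-power term) can be sketched as follows. The consumption series below is synthetic, not data from the paper.

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classical GM(1,1): accumulate the series, fit the whitened equation
    dx1/dt + a*x1 = b by least squares on the mean-generated (background)
    sequence, then difference the fitted accumulated series back."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[n:]                          # forecasts beyond the sample

# Synthetic, roughly exponential consumption series.
series = [10.0, 11.0, 12.1, 13.3, 14.6]
forecast = gm11_forecast(series, steps=2)
```

The FDGM(1,1,t^α) variant replaces the simple cumulative sum with a fractional-order accumulation and adds a t^α term to the grey differential equation, with r and α tuned here by the grey wolf optimizer.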
48. Long-term immunosuppressive therapy leads to poor outcomes in patients with oral squamous cell carcinoma
- Author
-
Kota Morishita, Kohei Okuyama, Mitsunobu Otsuru, Souichi Yanamoto, Tomofumi Naruse, Shin-ichi Yamada, and Masahiro Umeda
- Subjects
Oncology ,medicine.medical_specialty ,Otorhinolaryngology ,business.industry ,Internal medicine ,Medicine ,Surgery ,In patient ,Basal cell ,Oral Surgery ,business ,Pathology and Forensic Medicine ,Term (time) - Published
- 2022
49. Long-Term Effects of a Comprehensive Police Suicide Prevention Program
- Author
-
Louis-Francis Fortin and Brian L. Mishara
- Subjects
business.industry ,Context (language use) ,Suicide rates ,Suicide prevention ,030227 psychiatry ,Term (time) ,03 medical and health sciences ,Psychiatry and Mental health ,0302 clinical medicine ,Medicine ,030212 general & internal medicine ,Extended time ,business ,Demography - Abstract
Abstract. Background: Mishara and Martin (2012) reported decreases in suicides 12 years after implementation of a police suicide prevention program. Aims: We aimed to determine whether the suicide decreases were sustained 10 years later. Method: We examined coroners' investigations of police deaths from 2009 through 2018. Results: From 2009 to 2018, Montreal suicide rates increased, but not significantly compared with the previous 12 years, and the rate for other Quebec police remained significantly higher than Montreal's (p < .006). The 22-year Montreal postprogram rate was significantly lower than the preprogram rate (p < .002), and the 22-year rate for other police during the same years was not significantly different from earlier. Limitations: Uncontrolled factors may have influenced the rates, including the 11% increase in women in the Montreal police. The observed mean aging of Montreal police personnel would have been expected to bias toward finding increases in suicides; nevertheless, the decreases in suicide rates were maintained. Conclusion: The decrease in suicides observed 12 years after the program was sustained for another 10 years and appears related to the program. Rates for comparable police remained higher. A continuing comprehensive suicide prevention program tailored to the context may reduce suicides for extended time periods.
- Published
- 2022
50. Continuous Support Vector Regression for Nonstationary Streaming Data
- Author
-
Jie Lu, Hang Yu, and Guangquan Zhang
- Subjects
Concept drift ,Computer science ,02 engineering and technology ,Machine learning ,computer.software_genre ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Learning ,Artificial Intelligence & Image Processing ,Quadratic programming ,Electrical and Electronic Engineering ,Series (mathematics) ,business.industry ,Process (computing) ,0102 Applied Mathematics, 0801 Artificial Intelligence and Image Processing, 0906 Electrical and Electronic Engineering ,Function (mathematics) ,Computer Science Applications ,Term (time) ,Human-Computer Interaction ,Support vector machine ,Control and Systems Engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Algorithms ,Software ,Information Systems - Abstract
Quadratic programming is the process of solving a special type of mathematical optimization problem. Recent advances in online solutions for quadratic programming problems (QPPs) have created opportunities to widen the scope of applications for support vector regression (SVR). In this vein, efforts to make SVR compatible with streaming data have been met with substantial success. However, streaming data with concept drift remain problematic because the trained prediction function in SVR tends to drift as the data distribution drifts. Aiming to contribute a solution to this aspect of SVR's advancement, we have developed continuous SVR (C-SVR) to solve regression problems with nonstationary streaming data, that is, data where the optimal input-output prediction function can drift over time. The basic idea of C-SVR is to continuously learn a series of input-output functions over a series of time windows to make predictions about different periods. However, strikingly, the learning process in different time windows is not independent. An additional similarity term in the QPP, which is solved incrementally, threads the various input-output functions together by conveying some learned knowledge through consecutive time windows. How much learned knowledge is transferred is determined by the extent of the concept drift. Experimental evaluations with both synthetic and real-world datasets indicate that C-SVR has better performance than most existing methods for nonstationary streaming data regression.
- Published
- 2022
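The abstract above threads consecutive time windows together with a similarity term in the quadratic program. The idea can be sketched with a simpler stand-in model: ridge regression instead of SVR, so each window's problem has a closed form, with an extra penalty mu * ||w - w_prev||^2 transferring knowledge from the previous window. The drifting data stream is synthetic.

```python
import numpy as np

def fit_window(X, y, w_prev, lam=0.1, mu=1.0):
    """Fit one time window's weights with an added similarity term
    mu * ||w - w_prev||^2 carrying knowledge from the previous window
    (a ridge-regression stand-in for the paper's SVR quadratic program).
    Closed form: (X'X + (lam+mu)I) w = X'y + mu * w_prev."""
    d = X.shape[1]
    A = X.T @ X + (lam + mu) * np.eye(d)
    return np.linalg.solve(A, X.T @ y + mu * w_prev)

rng = np.random.default_rng(1)
w_true = np.array([1.0, -1.0])
w = np.zeros(2)
for t in range(5):                        # stream of time windows
    w_true = w_true + 0.1                 # concept drift: target function moves
    X = rng.normal(size=(50, 2))
    y = X @ w_true + rng.normal(0.0, 0.1, 50)
    w = fit_window(X, y, w_prev=w)        # knowledge threads across windows
```

A larger mu transfers more from the previous window (good under mild drift, sluggish under fast drift), which mirrors the paper's point that the amount of transferred knowledge should depend on the extent of the concept drift.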