2,604 results
Search Results
2. Didactic Strategies for the Understanding of the Kalman Filter in Industrial Instrumentation Systems
- Author
- Flórez C., Oscar D., Camargo L., Julián R., and Hurtado, Orlando García
- Abstract
This paper presents an application of the Kalman filter to signal processing in instrumentation systems when environmental conditions generate a large amount of interference in the acquisition of signals from measurement systems. These unwanted interferences consume a significant share of the instrumentation system's resources and carry no useful information. A simulation using the Matlab tool is presented, which greatly facilitates the information processing so that the appropriate actions can be taken according to the information obtained; the approach takes advantage of the resources offered by current embedded systems and obtains the required measurements with sufficient accuracy.
- Published
- 2022
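For orientation, the scalar form of such a filter can be sketched in a few lines of Python. This is a generic textbook formulation, not the authors' Matlab code; the process and measurement noise variances `q` and `r` are illustrative values.

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a near-constant signal x_k = x_{k-1} + w_k."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state is constant, variance grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the measurement residual
        p = (1 - k) * p          # posterior variance
        estimates.append(x)
    return estimates
```

Fed a noisy constant signal, the estimate settles near the signal's level while the gain `k` shrinks, which is the smoothing behavior the abstract describes.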
3. An Operations Research-Based Teaching Unit for Grade 10: The ROAR Experience, Part I
- Author
- Colajanni, Gabriella, Gobbi, Alessandro, Picchi, Marinella, Raffaele, Alice, and Taranto, Eugenia
- Abstract
We introduce "Ricerca Operativa Applicazioni Reali" (ROAR; in English, "Real Applications of Operations Research"), a three-year project for higher secondary schools. Its main aim is to improve students' interest, motivation, and skills related to Science, Technology, Engineering, and Mathematics disciplines by integrating mathematics and computer science through operations research. ROAR offers examples and problems closely connected with students' everyday life or with the industrial reality, balancing mathematical modeling and algorithmics. The project is composed of three teaching units, addressed to grades 10, 11, and 12. The implementation of the first teaching unit took place in Spring 2021 at the scientific high school IIS Antonietti in Iseo (Brescia, Italy). In particular, in this paper, we provide a full description of this first teaching unit in terms of objectives, prerequisites, topics and methods, organization of the lectures, and digital technologies used. Moreover, we analyze the feedback received from students and teachers involved in the experimentation, and we discuss advantages and disadvantages related to distance learning that we had to adopt because of the COVID-19 pandemic.
- Published
- 2023
4. Algorithm-Oriented SIMD Computer Mathematical Model and Its Application
- Author
- Jiang, Yongfeng and Li, Yuan
- Abstract
This paper designs a professional and practical SIMD computer mathematical model based on the SIMD physical machine model combined with the variable addition method. The model is then applied to image collection, processing, and display operations, and a SIMD data-parallel image processing system is established by drawing on the parallel computing advantages of the mathematical model. In addition, the data-parallel image processing algorithm is introduced and the convolutional neural network algorithm is optimized to significantly improve key performance measures such as the accuracy of the application system. The final experimental results show that the highest accuracy of the data-parallel image processing algorithm reaches 93.3% and the lowest error rate reaches 0.11%, which demonstrates the superiority of the SIMD computer mathematical model in image processing applications.
- Published
- 2022
5. An Examination of the Effect of Multidimensionality on Parallel Forms Construction.
- Author
- Ackerman, Terry A.
- Abstract
This paper examines the effect of using unidimensional item response theory (IRT) item parameter estimates of multidimensional items to create weakly parallel test forms using target information curves. To date, all computer-based algorithms that have been devised to create parallel test forms assume that the items are unidimensional. This paper focuses on one such algorithm, which was developed by R. M. Luecht and T. M. Hirsch. Unidimensional item parameter estimates were obtained by calibrating response data generated from two-dimensional item parameters. Using these unidimensional estimates, three sets of test items from a pool of 200 multidimensional items were selected for each of two different test lengths for three differently shaped target information functions. The item parameter estimates were obtained by calibrating five forms of the EAAP Mathematics usage test using the multidimensional IRT calibration program NOHARM. Response data were generated for 2,000 abilities. Observed score differences for each triad, based on the multidimensional item parameters, were then compared. Despite the multidimensionality of the selected items, the created forms appear to be quite parallel both unidimensionally and multidimensionally. Two tables and seven figures are included. (SLD)
- Published
- 1991
6. New Method of Calibrating IRT Models.
- Author
- Jiang, Hai and Tang, K. Linda
- Abstract
This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that it is not easily affected by local optima and saddle points. Because these procedures do not use gradient information, they can be applied easily to higher-dimensional data, even though they converge more slowly than the Newton-Raphson approach. However, the two approaches can be combined to exploit the advantages of both: GA can be used to find a suitable starting point close to the global optimum, and then Newton-Raphson can be used to speed up the convergence. The focus in this paper is on calibrating the unidimensional three-parameter logistic model (3PL) because that is the model most widely used in large-scale standardized tests. Using 3PL model estimates from recent Test of English as a Foreign Language administrations to generate examinee responses, the effectiveness of the new method is demonstrated using simulated data. How to implement the new methods with multidimensional data is discussed. (Contains 3 tables, 2 figures, and 10 references.) (SLD)
- Published
- 1998
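The two-stage idea the abstract describes (a global search to find a good starting point, then Newton-Raphson to finish) can be illustrated on a toy one-dimensional objective. Plain random sampling stands in for a full GA here, and the function `f` is invented for the example:

```python
import random

def f(x):   return (x * x - 1) ** 2 + 0.3 * x   # two local minima; global one near x ≈ -1.04
def fp(x):  return 4 * x ** 3 - 4 * x + 0.3      # first derivative
def fpp(x): return 12 * x * x - 4                # second derivative

def hybrid_minimize(seed=0, n_samples=200, n_newton=20):
    rng = random.Random(seed)
    # Stage 1: global search (stand-in for a GA) picks a promising start point
    x = min((rng.uniform(-2, 2) for _ in range(n_samples)), key=f)
    # Stage 2: Newton-Raphson refines it with fast local convergence
    for _ in range(n_newton):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < 1e-12:
            break
    return x
```

The global stage avoids the shallower minimum near x ≈ 0.96; Newton alone, started badly, could land there.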
7. Estimation of Item Response Models Using the EM Algorithm for Finite Mixtures.
- Author
- American Coll. Testing Program, Iowa City, IA., Woodruff, David J., and Hanson, Bradley A.
- Abstract
This paper presents a detailed description of maximum likelihood parameter estimation for item response models using the general EM algorithm. In this paper the models are specified using a univariate discrete latent ability variable. When the latent ability variable is discrete, the distribution of the observed item responses is a finite mixture, and the EM algorithm for finite mixtures can be used. Maximum likelihood estimates of the item parameters and of the discrete probabilities of the latent ability distribution are given using the EM algorithm for finite mixtures. Results are presented in general for both dichotomous and polytomous item response models. The relation between the EM estimates and the Bock-Aitken marginal maximum likelihood estimates is discussed. Estimates for the item parameters will depend on the specific form of the item response functions, and will usually require iterative numerical procedures. The EM algorithm is the same as the Bock-Aitken algorithm (R. D. Bock and M. Aitken, 1981) for marginal maximum likelihood estimation of the item parameters. (Contains 28 references.) (Author/SLD)
- Published
- 1996
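A miniature version of the setup the abstract describes (a discrete latent ability variable, so the observed responses form a finite mixture) can be sketched with a two-class Bernoulli mixture fitted by EM. The initialization and data below are illustrative, not from the paper:

```python
def em_bernoulli_mixture(data, n_classes=2, n_iter=50):
    """EM for a finite mixture of independent Bernoulli items
    (a discrete-latent-ability IRT model in miniature)."""
    n_items = len(data[0])
    weights = [1.0 / n_classes] * n_classes
    # asymmetric two-class start (0.3 vs 0.7) so the classes can separate
    probs = [[0.3 + 0.4 * c for _ in range(n_items)] for c in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior class membership for each response pattern
        posts = []
        for x in data:
            lik = []
            for c in range(n_classes):
                l = weights[c]
                for j in range(n_items):
                    l *= probs[c][j] if x[j] else (1 - probs[c][j])
                lik.append(l)
            z = sum(lik)
            posts.append([l / z for l in lik])
        # M-step: re-estimate class weights and item success probabilities
        for c in range(n_classes):
            nc = sum(p[c] for p in posts)
            weights[c] = nc / len(data)
            for j in range(n_items):
                probs[c][j] = sum(p[c] * x[j] for p, x in zip(posts, data)) / nc
    return weights, probs
```

On cleanly separated data the latent classes and their item probabilities are recovered, which is the finite-mixture mechanism the paper generalizes.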
8. A Model for Investigating Predictive Validity at Highly Selective Institutions.
- Author
- Gross, Alan L.
- Abstract
A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that the missing data are missing at random (MAR) (i.e., the missing data can be accounted for in terms of the observed measures), and there are no unmeasured variables that underlie the missing data process. The proposed model represents an attempt to account for any unmeasured selection variables by assuming that applicants are first placed into admission categories by the institution and then selected within each category in terms of the observed predictor variables. Thus, although the MAR assumption may not hold for the set of all applicants, it may very well hold within each admission category. The model uses the EM algorithm to obtain estimates of validity separately within each category. The model is quite general and can be used when there are missing data on the predictor and criterion variables, and even if the admission category is not known for each applicant. The proposed model is illustrated in terms of a real life data set for a selective secondary school with over 2,000 applicants. Four tables present analysis data. (Author/SLD)
- Published
- 1993
9. A BPNN Model-Based AdaBoost Algorithm for Estimating Inside Moisture of Oil–Paper Insulation of Power Transformer.
- Author
- Liu, Jiefeng, Ding, Zheshi, Fan, Xianhao, Geng, Chuhan, Song, Boshu, Wang, Qingyin, and Zhang, Yiyi
- Subjects
- POWER transformers, TRANSFORMER insulation, MOISTURE, ALGORITHMS, MACHINE learning, CLASSIFICATION algorithms
- Abstract
The traditional method for transformer moisture diagnosis is to establish empirical equations between feature parameters extracted from frequency domain spectroscopy (FDS) and the transformer’s moisture content. However, the established empirical equation may not be applicable to a novel testing environment, resulting in an unreliable evaluation result. In this regard, it is acknowledged that FDS combined with machine learning is more suitable for estimating moisture content in a variety of test environments. Nonetheless, the accuracy of the estimation results obtained using the existing method is limited by the algorithm’s inability to generalize. To address this issue, we propose an AdaBoost algorithm-enhanced back-propagation neural network (BP_AdaBoost). This study creates a database by extracting feature parameters from the FDS that characterize the insulation states of the prepared samples. Then, using the BP_AdaBoost algorithm and the newly constructed database, the moisture estimation models are trained. Finally, the results of the estimation are discussed in terms of laboratory and field transformers. By comparing the proposed BP_AdaBoost algorithm to other intelligence algorithms, it is demonstrated that it not only performs better in generalization, but also maintains a high level of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
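The AdaBoost reweighting loop at the heart of a scheme like BP_AdaBoost can be sketched with a much simpler weak learner. Threshold stumps replace the BP neural network here purely to keep the example short; the weighted error, alpha, and exponential reweighting steps are the standard algorithm:

```python
import math

def adaboost_stumps(xs, ys, n_rounds=5):
    """Minimal AdaBoost on 1-D data with threshold stumps as weak learners.
    ys are +/-1 labels."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, direction)
    thresholds = sorted(set(xs))
    for _ in range(n_rounds):
        # fit the weak learner: pick the stump with lowest weighted error
        best = None
        for thr in thresholds:
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (sign if x > thr else -sign) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-12)                       # avoid log/div-by-zero
        alpha = 0.5 * math.log((1 - err) / err)     # learner weight
        ensemble.append((alpha, thr, sign))
        # reweight: boost the misclassified samples, then renormalize
        w = [wi * math.exp(-alpha * y * (sign if x > thr else -sign))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    def predict(x):
        s = sum(a * (sg if x > t else -sg) for a, t, sg in ensemble)
        return 1 if s > 0 else -1
    return predict
```

Swapping the stump for a regression/classification network yields the BP_AdaBoost structure the abstract describes.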
10. Future Directions in Computational Mathematics, Algorithms, and Scientific Software. Report of the Panel.
- Author
- Society for Industrial and Applied Mathematics, Philadelphia, PA.
- Abstract
The critical role of computers in scientific advancement is described in this panel report. With the growing range and complexity of problems that must be solved and with demands of new generations of computers and computer architecture, the importance of computational mathematics is increasing. Multidisciplinary teams are needed; these are found in most advanced and industrial laboratories, but rarely in universities. The existing educational opportunities are not producing the required personnel to meet substantial shortages. Therefore, the panel strongly recommends increased federal support for: (1) research in computational mathematics, methods, algorithms, and software for scientific computing; (2) the development of interdisciplinary research teams; (3) the establishment and continued operation of a suitable research infrastructure for the teams; (4) graduate and post-doctoral students directly involved in the research of some interdisciplinary team; and (5) young researchers and cross-disciplinary visitors. In the second section, research opportunities in a number of mathematical areas are described. New modes of research are discussed next, followed by comments on educational needs and a final section on funding considerations. Appendices contain a list of related reports, information on laboratory facilities for scientific computing, and letters and position papers. (MNS)
- Published
- 1985
11. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.
- Author
- Twente Univ., Enschede (Netherlands). Dept. of Education. and Kelderman, Henk
- Abstract
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is desirable if the contingency table becomes too large to store. Special attention is given to log-linear Item Response Theory (IRT) models that are used for the analysis of educational and psychological test data. To calculate the necessary expected sufficient statistics and other marginal sums of the table, a method is described that avoids summing large numbers of elementary cell frequencies by writing them out in terms of multiplicative model parameters and applying the distributive law of multiplication over summation. These algorithms are used in the computer program LOGIMO, and are illustrated with simulated data for 10,000 cases. Two tables, 3 graphs, and a 34-item list of references are included. (Author/SLD)
- Published
- 1991
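Classical iterative proportional fitting, one of the two algorithms the report modifies, can be sketched for a two-way table. This is the textbook version working on the full table, not the report's sufficient-statistics variant for tables too large to store:

```python
def ipf(table, row_targets, col_targets, n_iter=100):
    """Iterative proportional fitting: rescale a seed table until its
    row and column sums match the target marginal sums."""
    t = [row[:] for row in table]
    for _ in range(n_iter):
        for i, rt in enumerate(row_targets):            # fit row margins
            s = sum(t[i])
            t[i] = [v * rt / s for v in t[i]]
        for j, ct in enumerate(col_targets):            # fit column margins
            s = sum(t[i][j] for i in range(len(t)))
            for i in range(len(t)):
                t[i][j] *= ct / s
    return t
```

Starting from a uniform seed, the fitted table is the independence table implied by the margins.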
12. Boolean Algebra Applied to Determination of Universal Set of Knowledge States.
- Author
- Educational Testing Service, Princeton, NJ. and Tatsuoka, Kikumi K.
- Abstract
Diagnosing cognitive errors possessed by examinees can be considered as a pattern classification problem that is designed to classify a sequential input of stimuli into one of several predetermined groups. The sequential inputs in this paper's context are item responses, and the predetermined groups are various states of knowledge resulting from misconceptions or different degrees of incomplete knowledge in a domain. In this study, the foundations of a combinatorial algorithm that will provide the universal set of states of knowledge will be introduced. Each state of knowledge is represented by a list of "can/cannot" cognitive tasks and processes (cognitively relevant attributes or latent variables) that are usually unobservable. A Boolean descriptive function is introduced as a mapping between the attribute space spanned by latent attribute variables and the item response space spanned by the item score variables. This function uncovers the unobservable content of a "black box." Once all possible classes are retrieved explicitly and expressed by a set of ideal item response patterns described by a "can/cannot" list of latent attributes, the notion of bug distributions and statistical pattern classification techniques will enable the accurate diagnosis of students' states of knowledge. Moreover, investigations on algebraic properties of these logically-derived-ideal-response patterns will provide insight into the structures of the test and dataset. There are 11 references and three illustrative tables. (Author/SLD)
- Published
- 1991
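The deterministic core of the mapping the abstract describes (latent attributes to ideal item response patterns) can be sketched as a small Q-matrix-style enumeration. The Q-matrix below is invented for the example:

```python
from itertools import product

def ideal_response_patterns(q_matrix):
    """Enumerate the universal set of knowledge states for a Q-matrix.
    Row k of q_matrix lists the attributes item k requires; an examinee
    answers item k correctly iff all of them are mastered (a deterministic
    'Boolean descriptive function' from attribute space to response space)."""
    n_attrs = max((a for row in q_matrix for a in row), default=-1) + 1
    states = {}
    for mastery in product([0, 1], repeat=n_attrs):
        pattern = tuple(int(all(mastery[a] for a in req)) for req in q_matrix)
        states.setdefault(pattern, []).append(mastery)
    return states
```

Each distinct ideal pattern is one knowledge state; attribute patterns mapping to the same response pattern are indistinguishable from the item scores alone.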
13. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.]
- Author
- Illinois State Psychiatric Inst., Chicago. and Gibbons, Robert D.
- Abstract
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the n x n correlation matrix of the chi_i and the standardized multivariate normal density function. C. W. Dunnett and M. Sobel's formula for the univariate ND function, and R. E. Bohrer and M. J. Schervish's error-bounded algorithm for evaluating F_n for general rho_ij are discussed. Computationally, the latter algorithm is restricted to n = 7; even at n = 7, it can take up to 24 hours to compute a single probability with 10^-3 accuracy on a computer that is capable of about 1-2 million scalar floating point operations per second. This report presents a fast and general approximation (APX) for rectangular regions of the multivariate ND function based on C. E. Clark's (1961) APX to the moments of the maximum of n jointly normal random variables. The performance of this APX compared to special cases in which the exact results are known and error-bounded reduction formulae shows that the APX's accuracy is adequate for many practical applications where multivariate normal probabilities are required. The computational speed of the Clark APX is unparalleled. The error bound for the APX is about 10^-3 regardless of dimensionality, and accuracy increases with increases in rho. The Clark algorithm provides a generalization of Dunnett's (1955) results to the case of general rho_ij, a natural application of which would be a generalization of Dunnett's test to the case of unequal sample sizes among the k + 1 groups (i.e., multiple treatment groups compared to a single control group). One data table is included. (RLC)
- Published
- 1990
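Clark's (1961) moment approximation that the report builds on can be sketched for the two-variable case; these are the standard published formulas, and extending to n variables applies them recursively:

```python
from math import sqrt, exp, pi, erf

def _phi(x):   # standard normal pdf
    return exp(-x * x / 2) / sqrt(2 * pi)

def _Phi(x):   # standard normal cdf
    return 0.5 * (1 + erf(x / sqrt(2)))

def clark_max_moments(m1, v1, m2, v2, rho):
    """Clark's (1961) first two moments of max(X1, X2) for jointly normal
    X1, X2 with means m, variances v, and correlation rho."""
    a = sqrt(v1 + v2 - 2 * rho * sqrt(v1 * v2))
    alpha = (m1 - m2) / a
    e1 = m1 * _Phi(alpha) + m2 * _Phi(-alpha) + a * _phi(alpha)
    e2 = ((m1 * m1 + v1) * _Phi(alpha) + (m2 * m2 + v2) * _Phi(-alpha)
          + (m1 + m2) * a * _phi(alpha))
    return e1, e2 - e1 * e1   # mean and variance of the maximum
```

For two independent standard normals the exact mean of the maximum is 1/sqrt(pi), which the approximation reproduces because Clark's formulas are exact for a single pairwise maximum.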
14. IRT-Based Test Construction. Project Psychometric Aspects of Item Banking No. 15. Research Report 87-2.
- Author
- Twente Univ., Enschede (Netherlands). Dept. of Education. and van der Linden, Wim J.
- Abstract
Four discussions of test construction based on item response theory (IRT) are presented. The first discussion, "Test Design as Model Building in Mathematical Programming" (T. J. J. M. Theunissen), presents test design as a decision process under certainty. A natural way of modeling this process leads to mathematical programming. General models of test construction are discussed, with information about algorithms and heuristics; ideas about the analysis and refinement of test constraints are also considered. The second paper, "Methods for Simultaneous Test Construction" (Ellen Boekkooi-Timminga), gives an overview of simultaneous test construction using zero-one programming. The item selection process is based on IRT. Some objective functions and practical constraints are presented, the construction of parallel tests is considered, and two tables are provided. The third paper, "Automated Test Construction Using Minimax Programming" (Wim J. van der Linden), proposes the use of the minimax principle in IRT test construction and indicates how this results in test information functions deviating less systematically from the target function than for the usual criterion of minimal test length. An alternative approach and some practical constraints are considered. The final paper, "A Procedure To Assess Target Information Functions" (Henk Kelderman), discusses the concept of an information function and its properties. An interpretable function of information is chosen: the probability of a wrong order of the ability estimates of two subjects. (SLD)
- Published
- 1987
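A greedy heuristic conveys the flavor of target-information test assembly discussed above, though the papers themselves use exact mathematical (zero-one and minimax) programming rather than this stand-in. Item information is computed under a 2PL model, and the item pool is invented:

```python
from math import exp

def item_info(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b)."""
    p = 1 / (1 + exp(-a * (theta - b)))
    return a * a * p * (1 - p)

def greedy_assemble(items, target, thetas, test_length):
    """Repeatedly add the item that most reduces the total shortfall from
    the target information curve at the evaluation points thetas."""
    chosen, info = [], [0.0] * len(thetas)
    pool = list(items)
    for _ in range(test_length):
        def shortfall_after(it):
            return sum(max(0.0, target[k] - (info[k] + item_info(it[0], it[1], t)))
                       for k, t in enumerate(thetas))
        best = min(pool, key=shortfall_after)
        pool.remove(best)
        chosen.append(best)
        info = [info[k] + item_info(best[0], best[1], t)
                for k, t in enumerate(thetas)]
    return chosen, info
```

An exact formulation would instead pose the same shortfall as constraints in a 0-1 program, as in the Boekkooi-Timminga and van der Linden papers.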
15. Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
- Author
- Stanford Univ., CA. School of Education., Catholic Univ. of America, Washington, DC. School of Education., Belgard, Maria R., and Min, Leo Yoon-Gee
- Abstract
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated non-linear teaching-learning process is converted into a neat linear form. A theorem is proposed and proven which provides the theoretical basis for treating the teaching-learning process as a piece-wise linear form. By taking probability of success as a negative cost coefficient, a mathematical programming model is proposed for the local optimizations which lead to the global optimization when the theorem is applied. Through this Stage Increment Model, sound and scientific optimization of the teaching-learning process for the individual becomes a reality. (Author/MF)
- Published
- 1971
16. Application of an Automated Item Selection Method to Real Data.
- Author
- Stocking, Martha L.
- Abstract
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
- Published
- 1993
17. The Role of Partial and Best Matches in Knowledge Systems.
- Author
- Rand Corp., Santa Monica, CA. and Hayes-Roth, Frederick
- Abstract
This paper is a theoretical discussion of several functions of knowledge systems based on the idea of partial matching, that is, comparison of two or more descriptions by identification of their similarities. Several knowledge system functions are described in terms of partial or best matchings including analogical reasoning, inductive inference, predicate discovery, pattern-directed inference, semantic interpretation, and speech and image understanding. It is difficult to determine admissible algorithms for these functions; economical solutions seem to be possible only for globally organized knowledge bases. Examples are provided and directions for research are discussed. (Author/SD)
- Published
- 1977
18. Noneconomic Analysis Considerations for Management and Information System for Occupational Education.
- Author
- Management and Information System for Occupational Education, Winchester, MA. and Creager, John A.
- Abstract
In this first of two papers delineating the design of Massachusetts' Management and Information System for Occupational Education (MISOE), the following specific dimensions of MISOE structure and function are considered: (1) the distinction between economic and noneconomic analysis, (2) distinctions among census, sample, and other data, (3) the distinction between descriptive and simulative analysis, and (4) functional levels, management levels, and management scope. Information retrieval and analysis for MISOE necessitates: (1) translation of inquiries into analytic hypotheses, (2) the selection of pertinent MISOE subsystems, data types and levels, analytical operations, and models, (3) performing the analyses and interpreting their results, and (4) reporting the results to the inquiry source. Discussions of general analysis requirements and considerations precede the detailing of specific analytical models and algorithms for MISOE, such as multiple linear regression and factor analysis. Dynamic simulation, linear programming, and nonlinear programming models are discussed, in addition to specific noneconomic analysis factors to consider within and among MISOE's subsystems of static space. Technical reports on MISOE's research methodology are appended. Related documents are available in this issue as VT 018 600, VT 018 602, VT 018 809, and VT 018 810. (AG)
- Published
- 1972
19. Mathematically Improved XGBoost Algorithm for Truck Hoisting Detection in Container Unloading.
- Author
- Wu, Nian, Hu, Wenshan, Liu, Guo-Ping, and Lei, Zhongcheng
- Subjects
- LOADING & unloading, TRUCKS, ALGORITHMS, WEATHER, TRUCK loading & unloading, MATHEMATICAL models, SHIPPING containers
- Abstract
Truck hoisting detection constitutes a key focus in port security, for which no optimal solution has been identified. To address the issues of high costs, susceptibility to weather conditions, and low accuracy in conventional methods for truck hoisting detection, a non-intrusive detection approach is proposed in this paper. The proposed approach utilizes a mathematical model and an extreme gradient boosting (XGBoost) model. Electrical signals, including voltage and current, collected by Hall sensors are processed by the mathematical model, which augments their physical information. Subsequently, the dataset filtered by the mathematical model is used to train the XGBoost model, enabling the XGBoost model to effectively identify abnormal hoists. Improvements were observed in the performance of the XGBoost model as utilized in this paper. Finally, experiments were conducted at several stations. The overall false positive rate did not exceed 0.7% and no false negatives occurred in the experiments. The experimental results demonstrated the excellent performance of the proposed approach, which can reduce the costs and improve the accuracy of detection in container hoisting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. Approximate numerical algorithms and artificial neural networks for analyzing a fractal-fractional mathematical model.
- Author
- Najafi, Hashem, Bensayah, Abdallah, Tellab, Brahim, Etemad, Sina, Ntouyas, Sotiris K., Rezapour, Shahram, and Tariboon, Jessada
- Subjects
- ARTIFICIAL neural networks, MATHEMATICAL models, ALGORITHMS, INTEGRAL equations, VIRUS diseases
- Abstract
In this paper, an analysis of a mathematical model of the coronavirus is carried out by using two fractal-fractional parameters. This dangerous virus infects a person through the mouth, eyes, nose, or hands, which makes it very difficult to avoid. One of the main factors contributing to increasing infections of this deadly virus is crowding. We believe that it is necessary to model this effect mathematically to predict the possible outcomes. Hence, the study of neural network-based models related to the spread of this virus can yield new results. This paper also introduces the use of artificial neural networks (ANNs) to approximate the solutions, which is a significant contribution in this regard. We suggest employing this new method to solve a system of integral equations that explains the dynamics of infectious diseases instead of the classical numerical methods. Our study shows that, compared to the Adams-Bashforth algorithm, the ANN is a reliable candidate for solving the problems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
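The classical Adams-Bashforth scheme that the paper benchmarks its neural-network solver against can be sketched in its two-step form (the paper's fractal-fractional setting is more involved; this is the plain ODE version):

```python
def adams_bashforth2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1}),
    bootstrapped with a single Euler step for the second starting value."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev            # Euler bootstrap step
    t += h
    ys = [y0, y]
    for _ in range(n_steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
        ys.append(y)
    return ys
```

On y' = y the scheme reproduces the exponential to second-order accuracy, the baseline behavior the ANN solver is compared against.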
21. Identifying Anomalous Citations for Objective Evaluation of Scholarly Article Impact.
- Author
- Bai, Xiaomei, Xia, Feng, Lee, Ivan, Zhang, Jun, and Ning, Zhaolong
- Subjects
- APPLIED mathematics, CONFLICT of interests, RESEARCH grants, CITATION analysis, RANKING (Statistics)
- Abstract
Evaluating the impact of a scholarly article is of great significance and has attracted great attention. Although citation-based evaluation approaches have been widely used, these approaches face limitations, e.g., in identifying anomalous citation patterns. Neglecting such patterns inevitably introduces unfairness and inaccuracy into article impact evaluation. In this study, in order to discover anomalous citations and ensure the fairness and accuracy of research outcome evaluation, we investigate the citation relationships between articles using the following factors: collaboration times, the time span of collaboration, citing times, and the time span of citing, to weaken the effect of Conflict of Interest (COI) relationships in the citation network. Meanwhile, we study a special kind of COI, namely the suspected COI relationship. Based on the COI relationship, we further propose the COIRank algorithm, an innovative scheme for accurately assessing the impact of an article. Our method distinguishes citation strength and utilizes the PageRank and HITS algorithms to rank scholarly articles comprehensively. The experiments are conducted on the American Physical Society (APS) dataset. We find that about 80.88% of the 26,366 articles examined contain citations contributed by co-authors, and 75.55% of these articles are cited by authors belonging to the same affiliation, indicating that COI and suspected COI should not be ignored when evaluating the impact of scientific papers objectively. Moreover, our experimental results demonstrate that the COIRank algorithm significantly outperforms state-of-the-art solutions. The validity of our approach is verified using the probability of Recommendation Intensity. [ABSTRACT FROM AUTHOR]
- Published
- 2016
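The ranking step that COIRank builds on can be sketched as plain power-iteration PageRank; in a COIRank-style scheme the citation edges would first be reweighted to weaken suspected COI links before running exactly this step. The three-node graph is illustrative:

```python
def pagerank(links, d=0.85, n_iter=100):
    """Power-iteration PageRank over a directed graph given as
    {node: [cited nodes]}; ranks sum to 1."""
    nodes = sorted(set(links) | {v for tgts in links.values() for v in tgts})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(n_iter):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for src, tgts in links.items():
            if tgts:
                share = d * rank[src] / len(tgts)
                for t in tgts:
                    new[t] += share
        # dangling nodes spread their rank uniformly
        dangling = sum(rank[n] for n in nodes if not links.get(n))
        for n in nodes:
            new[n] += d * dangling / len(nodes)
        rank = new
    return rank
```

On a symmetric citation cycle every article receives the same score, the expected fixed point.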
22. Analysis of Persian Bioinformatics Research with Topic Modeling.
- Author
- Ebrahimi, Fezzeh, Dehghani, Mohammad, and Makkizadeh, Fatemah
- Subjects
- LIFE sciences, RESEARCH, BIOMARKERS, PHONOLOGICAL awareness, MATHEMATICAL models, NATURAL language processing, RESEARCH methodology, BIBLIOMETRICS, MOLECULAR models, BIOINFORMATICS, MATHEMATICS, CITATION analysis, GENE expression, THEORY, MEDICAL research, INFORMATION technology, ALGORITHMS
- Abstract
Purpose. As a scientific field, bioinformatics has drawn remarkable attention from various fields, such as information technology, mathematics, and modern biological sciences, in recent years. The topic models originating from the field of natural language processing have become the focus of attention with the rapid accumulation of biological datasets. Thus, this research is aimed at modeling the topic content of the bioinformatics literature presented by Iranian researchers in the Scopus Citation Database. Methodology. This research was a descriptive-exploratory study, and the studied population included 3899 papers indexed in the Scopus database as of March 9, 2022. The topic modeling was then performed on the abstracts and titles of the papers. A combination of LDA and TF-IDF was utilized for topic modeling. Findings. The data analysis with topic modeling resulted in identifying seven main topics: "Molecular Modeling," "Gene Expression," "Biomarker," "Coronavirus," "Immunoinformatics," "Cancer Bioinformatics," and "Systems Biology." Moreover, "Systems Biology" and "Coronavirus" had the largest and smallest clusters, respectively. Conclusion. The present investigation demonstrated an acceptable performance for the LDA algorithm in classifying the topics included in this field. The extracted topic clusters indicated excellent consistency and topic connection with each other. [ABSTRACT FROM AUTHOR]
- Published
- 2023
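The TF-IDF half of the LDA + TF-IDF combination mentioned in the methodology can be sketched directly; the token lists below are invented:

```python
from math import log

def tf_idf(docs):
    """Plain TF-IDF weights (term frequency x inverse document frequency);
    docs are lists of tokens, and each returned dict maps term -> weight."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {}                               # raw term counts in this doc
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        weights.append({t: (c / len(doc)) * log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

Terms occurring in every document get weight zero, which is the filtering effect that makes TF-IDF a useful companion to LDA.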
23. Variable Selection in Data Analysis: A Synthetic Data Toolkit.
- Author
- Mitra, Rohan, Ali, Eyad, Varam, Dara, Sulieman, Hana, and Kamalov, Firuz
- Subjects
- DATA analysis, MATHEMATICAL analysis, MATHEMATICAL models, ALGORITHMS
- Abstract
Variable (feature) selection plays an important role in data analysis and mathematical modeling. This paper aims to address the significant lack of formal evaluation benchmarks for feature selection algorithms (FSAs). To evaluate FSAs effectively, controlled environments are required, and the use of synthetic datasets offers significant advantages. We introduce a set of ten synthetically generated datasets with known relevance, redundancy, and irrelevance of features, derived from various mathematical, logical, and geometric sources. Additionally, eight FSAs are evaluated on these datasets based on their relevance and novelty. The paper first introduces the datasets and then provides a comprehensive experimental analysis of the performance of the selected FSAs on these datasets, including testing the FSAs' resilience to two types of induced data noise. The analysis has guided the grouping of the generated datasets into four groups of data complexity. Lastly, we provide public access to the generated datasets to facilitate benchmarking of new feature selection algorithms in the field via our GitHub repository. The contributions of this paper aim to foster the development of novel feature selection algorithms and advance their study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
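The abstract's notion of features with known ground-truth roles can be sketched as follows (a hypothetical construction in the same spirit, not one of the paper's ten datasets):

```python
# Hypothetical miniature of a synthetic benchmark dataset: features whose
# relevance, redundancy, and irrelevance are known by construction.
import numpy as np

rng = np.random.default_rng(0)
n = 500
relevant = rng.standard_normal((n, 2))            # truly drive the target
redundant = relevant @ np.array([[1.0], [2.0]])   # linear combination: redundant
irrelevant = rng.standard_normal((n, 3))          # pure noise: irrelevant

y = (relevant[:, 0] + relevant[:, 1] ** 2 > 0.5).astype(int)
X = np.hstack([relevant, redundant, irrelevant])  # ground truth: cols 0-1 relevant,
                                                  # col 2 redundant, cols 3-5 irrelevant
```

A feature selection algorithm run on `X, y` can then be scored directly against these known column roles.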
24. A Novel Weld-Seam Defect Detection Algorithm Based on the S-YOLO Model.
- Author
-
Zhang, Yi and Ni, Qingjian
- Subjects
WELDING defects ,BUTT welding ,MATRIX multiplications ,FEATURE extraction ,ALGORITHMS ,MATHEMATICAL models - Abstract
Detecting small targets and handling target occlusion and overlap are critical challenges in weld defect detection. In this paper, we propose the S-YOLO model, a novel weld defect detection method based on the YOLOv8-nano model and several mathematical techniques, specifically tailored to address these issues. Our approach includes several key contributions. Firstly, we introduce omni-dimensional dynamic convolution, which is sensitive to small targets, for improved feature extraction. Secondly, the NAM attention mechanism enhances feature representation in the region of interest. NAM computes the channel-wise and spatial-wise attention weights by matrix multiplications and element-wise operations, and then applies them to the feature maps. Additionally, we replace the SPPF module with a context augmentation module to improve feature map resolution and quality. To minimize information loss, we utilize Carafe upsampling instead of the conventional upsampling operations. Furthermore, we use a loss function that combines IoU, binary cross-entropy, and focal loss to improve bounding box regression and object classification. We use stochastic gradient descent (SGD) with momentum and weight decay to update the parameters of our model. Through rigorous experimental validation, our S-YOLO model demonstrates outstanding accuracy and efficiency in weld defect detection. It effectively tackles the challenges of small target detection, target occlusion, and target overlap. Notably, the proposed model achieves an impressive 8.9% improvement in mean Average Precision (mAP) compared to the native model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
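The abstract's description of NAM (attention weights from normalization scale factors, applied element-wise to feature maps) can be caricatured in a few lines; the shapes and the exact weighting rule here are assumptions, not the S-YOLO implementation:

```python
# Caricature of a NAM-style channel attention step: per-channel weights
# derived from normalization scale factors, squashed to (0, 1), then applied
# element-wise to the feature map.
import numpy as np

def channel_attention(feats, gamma):
    # feats: (C, H, W) feature map; gamma: (C,) scale factors (e.g. from BN)
    w = np.abs(gamma) / np.abs(gamma).sum()      # normalized channel weights
    s = 1.0 / (1.0 + np.exp(-w * len(gamma)))    # sigmoid squash to (0, 1)
    return feats * s[:, None, None]              # broadcast over H and W

feats = np.ones((4, 2, 2))
gamma = np.array([0.1, 0.4, 0.2, 0.3])
out = channel_attention(feats, gamma)            # channel 1 is kept strongest
```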
25. An Adaptive Inertia and Damping Control Strategy Based on Enhanced Virtual Synchronous Generator Model.
- Author
-
Suvorov, Aleksey, Askarov, Alisher, Ruban, Nikolay, Rudnik, Vladimir, Radko, Pavel, Achitaev, Andrey, and Suslov, Konstantin
- Subjects
SYNCHRONOUS generators ,ADAPTIVE control systems ,TRANSFER functions ,MATHEMATICAL models ,ALGORITHMS - Abstract
In modern converter-dominated power systems, total inertia is very variable and depends on the share of power generated by renewable-based converter-interfaced generation (CIG) at each specific moment. As a result, the limits required by the grid codes on the rate of change of frequency and its nadir or zenith during disturbances become challenging to achieve with conventional control approaches. Therefore, the transition to a novel control strategy of CIG with a grid-forming power converter is relevant. For this purpose, a control algorithm based on a virtual synchronous generator (VSG) is used, which simulates the properties and capabilities of a conventional synchronous generation. However, due to continuously changing operating conditions in converter-dominated power systems, the virtual inertia formed by VSG must be adaptive. At the same time, the efficiency of adaptive algorithms strongly depends on the used VSG structure. In this connection, this paper proposes an enhanced VSG structure for which the transfer function of the active power control loop was formed. With the help of it, the advantages over the conventional VSG structure were proven, which are necessary for the effective adaptive control of the VSG parameters. Then, the analysis of the impact of the VSG parameters on the dynamic response using the transient characteristics in the time domain was performed. Based on the results obtained, adaptive algorithms for independent control of the virtual inertia and the parameters of the VSG damper winding were developed. The performed mathematical modeling confirmed the reliable and effective operation of the developed adaptive control algorithms and the enhanced VSG structure. The theoretical and experimental results obtained in this paper indicate the need for simultaneous development and improvement of both adaptive control algorithms and VSG structures used for this purpose. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
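The core idea, virtual inertia that adapts to the frequency deviation, can be sketched with a heavily simplified first-order swing model; the constants, the adaptation rule, and the omission of angle/power coupling are all illustrative assumptions, not the paper's enhanced VSG structure:

```python
# Simplified "swing" sketch of adaptive virtual inertia: J is raised when the
# frequency deviates, lowering the rate of change of frequency after a load
# step. Forward-Euler integration; all values are invented.
def max_freq_deviation(adaptive, steps=2000, dt=1e-3):
    J0, D, w0 = 2.0, 15.0, 1.0           # base inertia, damping, nominal speed (p.u.)
    w, p_m, p_e = w0, 1.0, 1.0
    worst = 0.0
    for k in range(steps):
        if k == 100:
            p_e = 1.3                     # load step disturbance
        dev = abs(w - w0)
        worst = max(worst, dev)
        J = J0 * (1.0 + 10.0 * dev) if adaptive else J0   # adaptive inertia rule
        w += dt * (p_m - p_e - D * (w - w0)) / J          # swing equation
    return worst

slow = max_freq_deviation(adaptive=True)
fast = max_freq_deviation(adaptive=False)   # adaptive J slows the excursion
```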
26. Reconstruction of Shredded Paper Documents by Feature Matching.
- Author
-
Peng Li, Xi Fang, Lianglu Pan, Yi Piao, and Mengjun Jiao
- Subjects
TEXTURE analysis (Image processing) ,FEATURE extraction ,ALGORITHMS ,MATHEMATICAL models ,IMAGE reconstruction - Abstract
Shredded-paper splicing is the technology of designing an algorithm to splice shredded paper fragments and thereby recover the original document. This paper introduces a splicing algorithm based on texture-feature matching. Using this algorithm, we model and solve the shredded-paper splicing problem, and we apply it to splice both English and Chinese shredded documents. The recovered documents confirm the accuracy of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
27. Partitioning multi-layer edge network for neural network collaborative computing.
- Author
-
Li, Qiang, Zhou, Ming-Tuo, Ren, Tian-Feng, Jiang, Cheng-Bin, and Chen, Yong
- Subjects
GENETIC algorithms ,EDGE computing ,MATHEMATICAL optimization ,CLOUD computing ,MATHEMATICAL models ,ALGORITHMS - Abstract
In recent years there has been a trend toward deploying neural networks on edge devices. While mainstream research mostly concerns single-device edge processing and two-layer edge-cloud collaborative computing, in this paper we propose partitioning a multi-layer edge network for neural network collaborative computing. With the proposed method, sub-models of the neural network are deployed on multi-layer edge devices along the communication path from end users to the cloud. Firstly, we propose an optimal path selection method to form a neural network collaborative computing path with the lowest communication overhead. Secondly, we establish a time-delay optimization mathematical model to evaluate the effects of different partitioning solutions. To find the optimal partition solution, an ordered elitist genetic algorithm (OEGA) is proposed. The experimental results show that, compared with traditional cloud computing, single-device edge computing, and edge-cloud collaborative computing, the proposed multi-layer edge network collaborative computing has a smaller runtime delay with limited bandwidth resources, and because of its pipelined computing characteristics, it has a better response speed when processing a large number of requests. Meanwhile, the OEGA algorithm performs better than conventional methods, and the optimized partitioning method outperforms alternatives such as random and even partitioning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
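A toy version of GA-based partition search can make the idea concrete (this is not the paper's OEGA: no ordered crossover, just elitism plus random refill, over an invented delay model):

```python
# Toy GA-style search for partitioning an 8-layer network across three
# devices. A chromosome is a pair of cut points; delay = pipeline bottleneck
# stage time + fixed per-link transfer cost. All constants are hypothetical.
import random

random.seed(0)
N_LAYERS = 8
COMPUTE = [1.0, 0.5, 2.0]        # hypothetical per-layer compute time per device
LINK = [0.8, 0.8]                # hypothetical transfer cost per cut

def delay(cuts):
    a, b = cuts
    segments = [a, b - a, N_LAYERS - b]              # layers per device
    return max(s * c for s, c in zip(segments, COMPUTE)) + sum(LINK)

def random_cuts():
    return tuple(sorted(random.sample(range(1, N_LAYERS), 2)))

pop = [random_cuts() for _ in range(20)]
for _ in range(30):                                  # evolve: keep elites, refill
    pop.sort(key=delay)
    pop = pop[:5] + [random_cuts() for _ in range(15)]
best = min(pop, key=delay)                           # best partition found
```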
28. HDSAP: heterogeneity-aware dynamic scheduling algorithm to improve performance of nanoscale many-core processors for unknown workloads.
- Author
-
Kia, Keihaneh and Rajabzadeh, Amir
- Subjects
SOFT errors ,MULTICORE processors ,ERROR rates ,ALGORITHMS ,HEURISTIC algorithms ,MATHEMATICAL models - Abstract
Performance growth in processors has continued through increasing the number of processing cores on the chip and scaling the feature size of transistors. However, in the nano-era, side effects of this scaling, such as induced heterogeneities in the performance, power, and soft error rate of identically designed cores, prevent the potential performance from being fully utilized. In this paper, we harness these side effects in shared-memory multicore processors with unknown workloads via a dynamic heuristic scheduling algorithm called HDSAP. HDSAP aims to maximize performance, i.e., minimize the average response time, under power and reliability constraints in the presence of induced heterogeneities. In this regard, we use a mathematical model to quantify task-to-core assignments based on performance variation. We also consider the variation in power to change the selected cores when the power constraint is missed. To meet the reliability constraint, we use N-modular redundancy while remaining aware of the variation in the soft error rate of cores to prevent under- or over-estimation of reliability. To evaluate HDSAP, we run the SPLASH benchmark suite on the Sniper and McPAT simulators. As a result, HDSAP reduces response time by 6%, 8%, and 25% in comparison with similar algorithms under the same power and reliability constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Surge Fault Detection of Aeroengines Based on Fusion Neural Network.
- Author
-
Desheng Zheng, Xiaolan Tang, Xinlong Wu, Kexin Zhang, Chao Lu, and Lulu Tian
- Subjects
ALGORITHMS ,MATHEMATICAL models ,COMPRESSORS ,TIME series analysis ,ROTORS - Abstract
Aeroengine surge is one of the main causes of flight accidents. When a surge occurs, it is hard to detect it in time and take anti-surge measures correctly. Detection methods applied to date rely on mathematical models and expert knowledge; because such models are difficult to build and cover only a limited set of failure modes, they cannot achieve early surge detection. To address these problems, firstly, this paper introduces the data of six main sensors related to aeroengine surge faults, such as total pressure at the compressor (high-pressure rotor) outlet (Pt3), high-pressure compressor rotor speed (N2), and power level angle (PLA). Secondly, for data-set preprocessing, this paper proposes a data standardization preprocessing algorithm based on a batch sliding window (DSPABSW) to build the training, validation, and test sets. Thirdly, an aeroengine surge fault detection fusion neural network (ASFDFNN) is presented and optimized to improve the detection accuracy of aeroengine surge faults. Finally, the experimental results show that the model achieves 95.7%, 93.6%, and 94.7% in precision, recall, and F1-score, respectively, and can consequently detect an aeroengine surge fault 260 ms in advance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
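The batch-sliding-window standardization idea behind DSPABSW might look like the following z-score sketch (window length and exact batching are assumptions):

```python
# Sketch of sliding-window standardization: z-score each window of a sensor
# time series so a detector sees locally normalized inputs. Illustrative
# only; the paper's DSPABSW details are not specified in the abstract.
import numpy as np

def sliding_standardize(x, win):
    out = []
    for i in range(len(x) - win + 1):
        w = x[i:i + win]
        out.append((w - w.mean()) / (w.std() + 1e-8))   # per-window z-score
    return np.stack(out)

signal = np.sin(np.linspace(0.0, 10.0, 100))            # toy Pt3-like sensor trace
windows = sliding_standardize(signal, win=20)           # (81, 20) standardized windows
```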
30. Distributed parameter identification algorithm for large‐scale interconnected systems.
- Author
-
Hamdi, Mounira, Idomhgar, Lhassane, Kamoun, Samira, Chaoui, Mondher, and Kachouri, Abdenaceur
- Subjects
PARAMETER identification ,PARAMETER estimation ,ALGORITHMS ,DISTRIBUTED algorithms ,MATHEMATICAL models - Abstract
This paper deals with the parameter estimation problem of large-scale systems. A recursive distributed parameter estimation algorithm, based on minimization of the prediction error, is developed. Specifically, the class of large-scale systems composed of several interconnected sub-systems is considered. Each interconnected sub-system is modelled by a linear discrete-time state-space mathematical model with unknown parameters. The convergence analysis is then carried out using the Lyapunov approach. The theoretical analysis and simulation results prove the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
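For one sub-system modelled as a linear discrete-time system, a recursive estimator in the prediction-error spirit is standard recursive least squares; this sketch (with made-up parameters) shows the recursion, not the paper's distributed algorithm or its Lyapunov-based convergence analysis:

```python
# Standard recursive least squares for a sub-system modelled as
# y[k] = a*y[k-1] + b*u[k-1] + e[k]; parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.7, -0.3])        # hypothetical (a, b)
theta = np.zeros(2)                       # parameter estimate
P = 100.0 * np.eye(2)                     # estimate covariance

y_prev, u_prev = 0.0, rng.standard_normal()
for _ in range(500):
    phi = np.array([y_prev, u_prev])      # regressor of past input/output
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    K = P @ phi / (1.0 + phi @ P @ phi)   # gain
    theta = theta + K * (y - phi @ theta) # correct by the prediction error
    P = P - np.outer(K, phi @ P)          # covariance update
    y_prev, u_prev = y, rng.standard_normal()
```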
31. An improved cirrus detection algorithm MeCiDA2 for SEVIRI and its validation with MODIS.
- Author
-
Ewald, F., Bugliaro, L., Mannstein, H., and Mayer, B.
- Subjects
ALGORITHMS ,CIRRUS clouds ,SPECTRORADIOMETER ,CLIMATOLOGY ,CLIMATE change mathematical models ,BRIGHTNESS temperature measurement ,REMOTE sensing ,GAUSSIAN processes ,MATHEMATICAL convolutions ,MATHEMATICAL models - Abstract
The article presents a study on an enhanced detection algorithm for cirrus clouds, validated using the Moderate Resolution Imaging Spectroradiometer (MODIS). The study used various methods, such as estimating the effect of stratospheric ozone amount on brightness temperature, remote sensing, and a Gaussian convolution kernel. It concludes that monitoring cirrus coverage is vital due to its role in climate.
- Published
- 2012
- Full Text
- View/download PDF
32. Visual Analysis of Sports Actions Based on Machine Learning and Distributed Expectation Maximization Algorithm.
- Author
-
Luo, Yan
- Subjects
GREEDY algorithms ,ALGORITHMS ,MATHEMATICAL models ,MACHINE learning ,EXPECTATION-maximization algorithms ,SURFACE structure ,SPORTS - Abstract
In order to improve the scientific rigor of sports action analysis, this paper constructs a machine-learning-based sports action analysis model built on the greedy algorithm and the bat algorithm. According to the structural characteristics of the model, its structure is represented in the form of face ordering, that is, the face neighborhood structure. Moreover, this paper judges the degree of similarity between model faces through the quality of this ordering and applies it to the structural similarity matrix between models. In addition, this paper establishes corresponding mathematical models for the shape and structure of the model and constructs the shape similarity matrix, the face neighborhood structure similarity matrix, and the structural similarity matrix between the source model and the target model. Finally, this paper designs and implements CAD model retrieval methods based on the greedy algorithm and the bat algorithm, and designs experiments to compare the performance of the proposed algorithm with traditional algorithms. The experimental results show that the proposed algorithm has obvious advantages in sports action analysis compared with the traditional algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Mathematical Model of Clonal Evolution Proposes a Personalised Multi-Modal Therapy for High-Risk Neuroblastoma.
- Author
-
Italia, Matteo, Wertheim, Kenneth Y., Taschner-Mandl, Sabine, Walker, Dawn, and Dercole, Fabio
- Subjects
TUMOR treatment ,THERAPEUTIC use of antineoplastic agents ,MEDICINE ,GENETICS ,NEUROBLASTOMA ,GENETIC mutation ,SEQUENCE analysis ,MATHEMATICAL models ,ACCURACY ,DRUG resistance ,CANCER patients ,RISK assessment ,SURVIVAL rate ,MEDICAL protocols ,THEORY ,CYCLOPHOSPHAMIDE ,GENOTYPES ,BODY fluid examination ,RARE diseases ,ALGORITHMS ,VINCRISTINE ,PHYSIOLOGY - Abstract
Simple Summary: Neuroblastoma is a rare type of cancer that usually affects children. The high-risk patients' expected survival rate is less than 50%. One reason is the lack of precision in the standard treatment protocol: a one-size-fits-all multi-modal therapy. The study presented in this paper was designed to address this deficit by optimising the use of two chemotherapeutic agents—vincristine and cyclophosphamide—during induction chemotherapy—the part of the protocol that shrinks the primary tumour before surgical removal. We combined a mathematical model and an optimisation algorithm to identify the best chemotherapy schedules for a cohort of virtual patients with different initial tumour compositions. Our results reveal novel strategies to exploit a pair of drugs with different levels of efficacy, provide a platform on which to individualise induction chemotherapy, and lay the foundation for a personalised therapy that leverages targeted therapies, multi-region sequencing, liquid biopsies, and modern computational methods to improve today's multi-modal therapy. Neuroblastoma is the most common extra-cranial solid tumour in children. Despite multi-modal therapy, over half of the high-risk patients will succumb. One contributing factor is the one-size-fits-all nature of multi-modal therapy. For example, during the first step (induction chemotherapy), the standard regimen (rapid COJEC) administers fixed doses of chemotherapeutic agents in eight two-week cycles. Perhaps because of differences in resistance, this standard regimen results in highly heterogeneous outcomes in different tumours. In this study, we formulated a mathematical model comprising ordinary differential equations. The equations describe the clonal evolution within a neuroblastoma tumour being treated with vincristine and cyclophosphamide, which are used in the rapid COJEC regimen, including genetically conferred and phenotypic drug resistance. 
The equations also describe the agents' pharmacokinetics. We devised an optimisation algorithm to find the best chemotherapy schedules for tumours with different pre-treatment clonal compositions. The optimised chemotherapy schedules exploit the cytotoxic difference between the two drugs and intra-tumoural clonal competition to shrink the tumours as much as possible during induction chemotherapy and before surgical removal. They indicate that induction chemotherapy can be improved by finding and using personalised schedules. More broadly, we propose that the overall multi-modal therapy can be enhanced by employing targeted therapies against the mutations and oncogenic pathways enriched and activated by the chemotherapeutic agents. To translate the proposed personalised multi-modal therapy into clinical use, patient-specific model calibration and treatment optimisation are necessary. This entails a decision support system informed by emerging medical technologies such as multi-region sequencing and liquid biopsies. The results and tools presented in this paper could be the foundation of this decision support system. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Mathematical Model and Algorithm of Multi-Index Transportation Problem in the Background of Artificial Intelligence.
- Author
-
Cao, Junfang
- Subjects
MATHEMATICAL models ,ARTIFICIAL intelligence ,MARITIME shipping ,SCIENTIFIC method ,ALGORITHMS ,CHOICE of transportation ,HUMAN activity recognition - Abstract
The development of artificial intelligence has brought rapid changes to human life and great convenience to human activities. The development of various modes of transportation has likewise made travel and commodity transactions more convenient, but it has also added more issues that need to be carefully considered. Because of the diversification of transportation methods, transportation problems now arise in many fields, such as air, water, and land transportation. The development of mathematical models and algorithms for transportation problems is also in full swing, and introducing them into the solution of transportation problems is a major trend. This paper deals with the multi-index transportation problem by establishing a multi-index mathematical model and algorithm to find a scientific transportation scheme for the goods to be transported, so as to save transportation cost and time. Experiments show that the mathematical model established in this paper is highly efficient for solving the multi-index transportation problem. At the same time, the most suitable transportation method can be selected for the goods, and the route planned by the mathematical model and algorithm can reduce the risk to 12.34%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1): an extended and updated framework for modeling biogenic emissions.
- Author
-
Guenther, A. B., Jiang, X., Heald, C. L., Sakulyanontvittaya, T., Duhl, T., Emmons, L. K., and Wang, X.
- Subjects
EMISSIONS (Air pollution) ,MATHEMATICAL models ,BIOGENIC amines ,AEROSOLS ,ESTIMATES ,ALGORITHMS - Abstract
The article focuses on the Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1) which estimates fluxes of 147 biogenic compounds between terrestrial ecosystems and the atmosphere using mechanistic algorithms. It says that emission estimates from MEGAN2.1 are within the range of estimates reported by other approaches. Moreover, the offline version of MEGAN2.1 driving variables and source code is available from http://acd.ucar.edu/guenther/MEGAN/MEGAN.htm.
- Published
- 2012
- Full Text
- View/download PDF
36. Multi-skill resource-constrained multi-modal project scheduling problem based on hybrid quantum algorithm.
- Author
-
Peng, Jun Long, Liu, Xiao, Peng, Chao, and Shao, Yu
- Subjects
ALGORITHMS ,PARTICLE swarm optimization ,SEARCH algorithms ,SCHEDULING ,MATHEMATICAL models ,PROJECT management - Abstract
Numerous studies on project scheduling consider only a single factor, which fails to reflect the actual environment of project operations. In light of this issue, the article synthesizes multiple perspectives and proposes a multi-skill resource-constrained multi-modal project scheduling problem (MRCMPSP). This problem is described, modeled, and solved using the resource capability matrix and other constraints to minimize the project duration. To solve the MRCMPSP effectively and enrich scheduling algorithms, the paper adopts a hybrid quantum algorithm (HQPSO) based on the quantum particle swarm optimization algorithm (QPSO). The HQPSO introduces several improvements, such as the JAYA optimization search, to improve the algorithm's performance. In order to verify the generality, superiority, and effectiveness of the algorithm, independent comparison experiments and practical application experiments are designed based on different case sizes and resource quantities. The experimental results demonstrate that the proposed algorithm has superior convergence performance and solution accuracy and can provide an effective scheduling solution for real cases. Additionally, the article provides targeted management suggestions based on the research findings. Overall, this study contributes a novel mathematical model, a solution algorithm, optimization strategies, and managerial insights, advancing the field of project management research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. The consistency and consensus analysis for group decision-making with incomplete linguistic interval-valued intuitionistic fuzzy preference relations.
- Author
-
Li, Tao, Zhang, Liyuan, and Zhang, Zhenglong
- Subjects
GROUP decision making ,FUZZY sets ,QUALITY function deployment ,CHINESE corporations ,MATHEMATICAL models ,ALGORITHMS ,DECISION making - Abstract
This paper mainly provides a group decision-making (GDM) method based on linguistic interval-valued intuitionistic fuzzy preference relations (LIVIFPRs), where consistency and consensus analysis is conducted. The multiplicative consistency of LIVIFPRs is first introduced, and a consistency-based model is built to ascertain the missing values of an incomplete LIVIFPR. Considering the smallest distance, an optimization model is established to repair the unacceptably multiplicatively consistent LIVIFPR to have acceptable consistency. Meanwhile, the linguistic interval-valued intuitionistic fuzzy priority weights of LIVIFPR are constructed via the optimal solutions of a programming model. Then, Algorithm I for decision-making with one incomplete LIVIFPR is presented. For the GDM problem, the weights of experts are determined before aggregating the individual LIVIFPRs. Moreover, when the consensus of an individual LIVIFPR is unacceptable, a mathematical model is utilized to reach the consensus requirement. Subsequently, Algorithm II for GDM with incomplete LIVIFPRs is proposed step-by-step. Finally, the new GDM method is used to evaluate four Chinese express companies, and the advantages of this approach are demonstrated by performing a comparison analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Legalized Routing Algorithm Based on Linear Programming.
- Author
-
Chen, Chuandong, Tong, Xin, Liu, Qinghai, Chen, Jiarui, and Lin, Zhifeng
- Subjects
LINEAR programming ,ROUTING algorithms ,INTEGER programming ,MATHEMATICAL models ,ALGORITHMS ,PROBLEM solving - Abstract
Legalized routing is an essential part of automatic PCB routing. It resolves wiring conflicts and obtains routing results that comply with the design-rule constraints. Traditional legalized routing mostly relies on trial-and-backtrack methods, but with increasing design complexity and growing numbers of design rules, avoiding wiring conflicts between nets has become increasingly challenging. This paper proposes a legalized routing algorithm based on linear programming to obtain the optimal wiring trajectory under specified topological constraints. First, the corresponding routing model was established based on numerous routing rules, and a routing grid graph was constructed using obstacles as grid points. Secondly, a global routing algorithm was used to obtain the estimated wiring path, and integer linear programming was used to model the legalized routing problem mathematically. Finally, a multi-line simultaneous routing strategy was used to design and implement a detailed routing algorithm, optimizing the routing results. We implemented the algorithm in C++ and thoroughly tested PCB use cases of different sizes. The experimental results show that, compared with the trial-and-backtrack method, our algorithm maintains a 100% routing success rate, good time performance, and excellent routing quality even on large-scale use cases. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
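The integer-programming view can be illustrated on a toy instance: a binary choice variable per (net, candidate path), edge-conflict constraints forbidding overlap, and total wirelength as the objective. Here the tiny model is solved by exhaustive search rather than an ILP solver, and the paths and costs are invented:

```python
# Toy illustration of ILP-style legalized routing: pick one candidate path
# per net, forbid any grid edge from being used twice, minimize total length.
from itertools import product

# per net: list of candidate paths as (length, set of grid edges used)
cands = {
    "net1": [(3, {"e1", "e2"}), (4, {"e4", "e5"})],
    "net2": [(2, {"e2", "e3"}), (3, {"e5", "e6"})],
}

best = None
for choice in product(*cands.values()):          # one path per net
    edges = [e for _, es in choice for e in es]
    if len(edges) != len(set(edges)):            # edge used twice: conflict
        continue
    cost = sum(length for length, _ in choice)
    if best is None or cost < best[0]:
        best = (cost, choice)
# best now holds the cheapest conflict-free assignment
```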
39. Sparse-View Computed Tomography Reconstruction Based on a Novel Improved Prior Image Constrained Compressed Sensing Algorithm.
- Author
-
Li, Xuru, Sun, Xueqin, and Li, Fuzhong
- Subjects
IMAGE reconstruction algorithms ,COMPRESSED sensing ,IMAGE reconstruction ,COMPUTED tomography ,ALGORITHMS ,MATHEMATICAL models ,RADIATION doses - Abstract
The problem of sparse-view computed tomography (SVCT) reconstruction has become a popular research issue because of its significant capacity for radiation dose reduction. However, the reconstructed images often contain serious artifacts and noise from under-sampled projection data. Despite the good results achieved by the prior image constrained compressed sensing (PICCS) method, the reconstructed images may be unsatisfactory because of the image gradient L1-norm used in the original PICCS model, which leads to step artifacts and over-smoothing of edges. To address this problem, this paper proposes a novel improved PICCS algorithm (NPICCS) for SVCT reconstruction. The proposed algorithm retains the advantages of PICCS, which can recover more details. Moreover, the algorithm introduces L0-norm regularization of the image gradient into the framework, which overcomes the disadvantage of conventional PICCS and enhances the capability to retain edges and fine image detail. The split Bregman method is used to solve the proposed mathematical model. To verify the effectiveness of the proposed method, a large number of experiments with different projection angles were conducted. The final experimental results show that the proposed algorithm has advantages in edge preservation, noise suppression, and image detail recovery. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Analyzing the Impact of Memristor Variability on Crossbar Implementation of Regression Algorithms With Smart Weight Update Pulsing Techniques.
- Author
-
Afshari, Sahra, Musisi-Nkambwe, Mirembe, and Sanchez Esqueda, Ivan
- Subjects
ALGORITHMS ,MEMRISTORS ,COMPUTER architecture ,MATHEMATICAL models ,INTEGRATING circuits - Abstract
This paper presents an extensive study of linear and logistic regression algorithms implemented with 1T1R memristor crossbars arrays. Using a sophisticated simulation platform that wraps circuit-level simulations of 1T1R crossbars and physics-based models of RRAM (memristors), we elucidate the impact of device variability on algorithm accuracy, convergence rate and precision. Moreover, a smart pulsing strategy is proposed for practical implementation of synaptic weight updates that can accelerate training in real crossbar architectures. Stochastic multi-variable linear regression shows robustness to memristor variability in terms of prediction accuracy but reveals impact on convergence rate and precision. Similarly, the stochastic logistic regression crossbar implementation reveals immunity to memristor variability as determined by negligible effects on image classification accuracy but indicates an impact on training performance manifested as reduced convergence rate and degraded precision. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
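The effect studied here, weight updates perturbed by device variability, can be mimicked in a few lines of plain SGD with multiplicative noise on each update; this is an illustrative stand-in, not the paper's 1T1R circuit-level simulation:

```python
# SGD linear regression where every weight update is perturbed by
# multiplicative noise, mimicking memristor conductance-update variability.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
for epoch in range(100):
    for xi, yi in zip(X, y):
        grad = (xi @ w - yi) * xi                 # squared-error gradient
        step = -0.01 * grad
        w += step * rng.normal(1.0, 0.1, size=3)  # ~10% update variability
```

Despite the noisy updates, the weights still converge close to the true values, mirroring the robustness of prediction accuracy reported in the abstract; convergence rate and precision are what the noise degrades.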
41. Research on Indoor 3D Positioning Model Using Improved Triangular Centroid Position Algorithm Based on UWB.
- Author
-
Fang, Yuan, Ma, Weihao, Chen, Mingzhang, Chai, Cong, and Zhang, Xuancheng
- Subjects
CENTROID ,ALGORITHMS ,ARTIFICIAL satellites in navigation ,MATHEMATICAL models ,DATA modeling - Abstract
Ultra-wideband (UWB) indoor positioning technology can play an excellent supplementary role to satellite navigation and has broad application prospects. However, under strong interference, UWB measurement data fluctuate abnormally, which seriously degrades positioning accuracy. In view of these problems, and based on mathematical modeling of the measured data, this paper proposes a positioning method suitable for both interference-free and interference conditions, together with an abnormal-data identification method, to improve positioning accuracy. The specific scheme includes the establishment and solution of a data preprocessing model, a positioning model, the migration and application of the positioning model, an interfering-data identification model, and a movement-track positioning model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. A New Reconstruction Algorithm for Geometric Shape of Static Overhead Transmission Lines.
- Author
-
Zhang, Zhijin, Li, Ran, Jiang, Xingliang, Liang, Tian, Qiao, Xinhan, and Pang, Guohui
- Subjects
ELECTRIC lines ,GEOMETRIC shapes ,ENERGY function ,EUCLIDEAN distance ,ALGORITHMS ,INTERPOLATION algorithms - Abstract
This paper proposes a reconstruction algorithm for the geometric shapes of static overhead transmission lines that is based on spatial coordinate data (obtained by field mapping) and the corresponding inclination data (obtained by attitude sensors). The algorithm takes the piecewise cubic Hermite interpolation function as its basic framework and corrects the two types of data based on the “minimum energy principle” of the transmission line system. The introduction of a new energy function in the modified energy method (MEM) overcame the shortcomings of the conventional energy method (CEM) in the fairing strategy, and the new fairing criterion made the resulting curves more consistent with the mechanical properties of transmission lines. In numerical simulation tests, the two energy methods were compared in terms of load type, horizontal span, elevation difference, and anti-disturbance ability, which showed that MEM has higher reconstruction accuracy (average Euclidean distance less than 1 cm) and better robustness than CEM. Meanwhile, the key parameters involved in the algorithm were discussed to guide application in practical engineering. In addition, measurement tests under non-uniform icing were carried out at the field test station, and the results showed that the maximum relative error is less than 0.1%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
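The basic framework named in the abstract, piecewise cubic Hermite interpolation through sampled positions with sensor-measured slopes, can be sketched with SciPy (all numbers are invented, and the paper's MEM-based data correction and fairing steps are not shown):

```python
# Piecewise cubic Hermite interpolation of a sagging conductor from sampled
# heights and measured inclinations. Illustrative values only.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

x = np.array([0.0, 50.0, 100.0])      # positions along the span (m)
y = np.array([30.0, 25.0, 30.0])      # mapped conductor heights (m)
dydx = np.array([-0.2, 0.0, 0.2])     # inclinations from attitude sensors

curve = CubicHermiteSpline(x, y, dydx)
mid_height = float(curve(50.0))       # matches the knot value and slope exactly
```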
43. Compensation Network Optimal Design Based on Evolutionary Algorithm for Inductive Power Transfer System.
- Author
-
Chen, Weiming, Lu, Weiguo, Iu, Herbert Ho-Ching, and Fernando, Tyrone
- Subjects
EVOLUTIONARY algorithms, CURRENT fluctuations, EVOLUTIONARY computation, ALGORITHMS, MATHEMATICAL models, EXPERIMENTAL design - Abstract
Conventional design and optimization of the passive compensation network (PCN) for an inductive power transfer (IPT) system are based on specific topologies. The demerits of this design method are: i) the topology is mostly chosen by experience; ii) the design parameters are not multi-objective optimal. To address these issues, this paper proposes an optimal PCN design scheme based on an evolutionary algorithm (EA) to simultaneously optimize the topology and parameters of the PCN for an IPT system. First, a unified mathematical model of the PCN is derived via the transmission matrix. Then, based on this model, the multi-objective functions (such as output fluctuation and efficiency) and the constraints (such as load and coupling coefficient) for optimal PCN design are established, and the EA-based multi-objective optimal PCN design algorithm is constructed. Six optimal results are obtained with the algorithm, and one optimized PCN with minimum output-current fluctuation and high efficiency is chosen to validate the effectiveness of the proposed design scheme experimentally. For the given IPT system with the optimized PCN, the maximum fluctuation of the output current is no more than 11% over a 200% load variation and about a 77% coupling variation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
44. Enforcing Passivity of Parameterized LTI Macromodels via Hamiltonian-Driven Multivariate Adaptive Sampling.
- Author
-
Zanco, Alessandro, Grivet-Talocia, Stefano, Bradde, Tommaso, and De Stefano, Marco
- Subjects
ALGORITHMS, EIGENVALUES, DESCRIPTOR systems, MATHEMATICAL models, PASSIVHAUS - Abstract
We present an algorithm for passivity verification and enforcement of multivariate macromodels whose state-space matrices depend in closed form on a set of external or design parameters. Uniform passivity throughout the parameter space is a fundamental requirement for parameterized macromodels of physically passive structures and must be guaranteed during model generation; otherwise, numerical instabilities may occur, due to the ability of nonpassive models to generate energy. In this paper, we propose the first available algorithm that, starting from a generic parameter-dependent state-space model, identifies the regions in the frequency-parameter space where the model behaves locally as a nonpassive system. Our approach is based on an adaptive sampling scheme in the parameter space, which iteratively constructs and perturbs the eigenvalue spectrum of suitable skew-Hamiltonian/Hamiltonian pencils, with the objective of identifying the regions where some of these eigenvalues become purely imaginary, thus pinpointing local passivity violations. The proposed scheme is able to detect all relevant violations. An outer iterative perturbation method is then applied to the model coefficients to remove such violations and achieve uniform passivity. Although a formal proof of global convergence is not available, the effectiveness of the proposed passivity verification and enforcement schemes is demonstrated on several examples. [ABSTRACT FROM AUTHOR]
- Published
- 2020
45. Path Planning for the Rapid Reconfiguration of a Multi-Robot Formation Using an Integrated Algorithm.
- Author
-
Zhao, Dewei, Zhang, Sheng, Shao, Faming, Yang, Li, Liu, Qiang, Zhang, Heng, and Zhang, Zihan
- Subjects
ANT algorithms, POTENTIAL field method (Robotics), ALGORITHMS, ROBOTIC path planning, GENETIC algorithms, COMPUTATIONAL complexity, MATHEMATICAL models - Abstract
Path planning is crucial in the scheduling and motion planning of multiple robots. However, solving multi-robot path-planning problems efficiently and quickly is challenging due to their high complexity and long computational time, especially when many robots are involved. This paper presents a unified mathematical model and algorithm for the path planning of multiple robots moving from one formation to another in an area with obstacles. The problem is first simplified by constructing a cost matrix, and route planning is then achieved by integrating an elite-enhanced multi-population genetic algorithm with an ant colony algorithm. The performance of the proposed planning method was verified through numerical simulations in various scenarios. The findings indicate that this method exhibits high computational efficiency and yields a minimal overall path distance when addressing the path-planning problem of multi-robot formation reconfiguration, and it therefore holds promising potential for practical application. [ABSTRACT FROM AUTHOR]
- Published
- 2023
46. Downlink Power Allocation for CR-NOMA-Based Femtocell D2D Using Greedy Asynchronous Distributed Interference Avoidance Algorithm.
- Author
-
Elmadina, Nahla Nur, Saeed, Rashid, Saeid, Elsadig, Ali, Elmustafa Sayed, Abdelhaq, Maha, Alsaqour, Raed, and Alharbe, Nawaf
- Subjects
FEMTOCELLS, COGNITIVE radio, ALGORITHMS, MATHEMATICAL models, FAIRNESS - Abstract
This paper focuses on downlink power allocation for a cognitive radio-based non-orthogonal multiple access (CR-NOMA) system in a femtocell environment involving device-to-device (D2D) communication. The proposed power allocation scheme employs the greedy asynchronous distributed interference avoidance (GADIA) algorithm. This research aims to optimize power allocation in the downlink transmission, considering the unique characteristics of the CR-NOMA-based femtocell D2D system; the GADIA algorithm is used to mitigate interference and effectively optimize power allocation across the network. Using a fairness index, this research presents a novel fairness-constrained power allocation algorithm for a downlink non-orthogonal multiple access (NOMA) system. Through extensive simulations, the maximum rate under fairness (MRF) algorithm is shown to effectively optimize system performance while maintaining fairness among users. The fairness index is demonstrated to be adaptable to various user counts, offering a specified range with excellent responsiveness. The implementation of the GADIA algorithm exhibits promising results for sub-optimal frequency-band distribution within the network. Mathematical models evaluated in MATLAB further confirm the superiority of CR-NOMA over optimum power allocation NOMA (OPA) and fixed power allocation NOMA (FPA) techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
47. A Decomposition Algorithm for Dynamic Car Sequencing Problems with Buffers.
- Author
-
Zhang, Haida and Ding, Wensi
- Subjects
GREEDY algorithms, ALGORITHMS, GENETIC algorithms, TRAFFIC violations, MATHEMATICAL models, AUTOMOBILES - Abstract
In this paper, we study the dynamic car sequencing problem with car body buffers (DCSPwB) in automotive mixed-flow assembly. The objective is to reorder the sequence of cars leaving the paint shop, using the post-paint body buffers, so as to minimize the violation of constraint rules and the time cost of sequencing in the general assembly shop. We establish a mathematical model of DCSPwB and propose a decomposition-based algorithm combining a dynamic genetic algorithm (DGA) with a greedy algorithm for delayed car release (PGDA). Experiments are conducted on production orders from actual companies, and the results are compared with those of the underlying genetic algorithm (GA) and greedy algorithm (GDA) to verify the effectiveness of the proposed algorithm. In addition, the effect of buffer capacity on DCSPwB is investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
48. A Global Structure and Adaptive Weight Aware ICP Algorithm for Image Registration.
- Author
-
Cao, Lin, Zhuang, Shengbin, Tian, Shu, Zhao, Zongmin, Fu, Chong, Guo, Yanan, and Wang, Dongfeng
- Subjects
SMART structures, ALGORITHMS, POINT cloud, REMOTE sensing, MATHEMATICAL models, IMAGE registration - Abstract
As an important technology in 3D vision, point-cloud registration has broad development prospects in space-based remote sensing, photogrammetry, robotics, and other fields. Among the available algorithms, the Iterative Closest Point (ICP) algorithm is the classic solution for point-cloud registration. However, when the point cloud data are affected by noise, outliers, overlapping values, and other issues, the performance of the ICP algorithm degrades to varying degrees. This paper proposes a global structure and adaptive weight aware ICP algorithm (GSAW-ICP) for image registration. Specifically, we first propose a global-structure mathematical model based on the reconstruction of local surfaces using both the rotation of normal vectors and the change in curvature, so as to better describe the deformation of the object. The model's convergence strategy is optimized so that it has a wider convergence domain and a better convergence effect than either the original point-to-point or point-to-plane constrained models. Secondly, for outliers and overlapping values, the GSAW-ICP algorithm assigns appropriate weights, so as to suppress both noise and outlier interference in the overall system. The proposed algorithm was extensively tested on noisy, anomalous, and real datasets and shown to outperform other state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
49. Design of Semantic Matching Model of Folk Music in Occupational Therapy Based on Audio Emotion Analysis.
- Author
-
Ouyang, Wensi
- Subjects
SEMANTICS, MATHEMATICAL models, LANGUAGE & languages, OCCUPATIONAL therapy, CONCEPTUAL structures, THEORY, SOUND recordings, DESCRIPTIVE statistics, MUSIC, EMOTIONS, ALGORITHMS - Abstract
The main semantic symbol systems by which people express their emotions are natural language and music. Analyzing and establishing the semantic association between language and music helps provide more accurate retrieval and recommendation services for text and music. Existing research mainly focuses on the surface symbolic features and associations of natural language and music, which limits the performance and interpretability of applications based on this semantic association. Emotion is the main meaning of musical expression, and the semantic range of text expression includes emotion. In this paper, the semantic features of music are extracted from audio features, and a semantic matching model for audio emotion analysis is constructed to analyze the emotion of folk music audio through the feature-extraction ability of a deep structure. The model is based on the framework of emotional semantic matching technology and realizes the emotional semantic matching of music fragments and words through a semantic emotion recognition algorithm. Multiple experiments show that when W = 0.65, the recognition rate of the multichannel fusion model is 88.42%, and the model can reasonably realize audio emotion analysis. When the spatial dimension of the music data is varied, classification accuracy peaks at a spatial dimension of 25. Analyzing the semantic associations of audio promotes the application of folk music in occupational therapy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
50. Analysis of the Association between Teachers' Classroom Teaching Behaviors and Students' Knowledge Acceptance Based on Psychological Data Analysis.
- Author
-
Yao, Tianjin and Yang, Xiuye
- Subjects
SCHOOL environment, DECISION trees, TEACHER-student relationships, TEACHING methods, PROFESSIONS, HEALTH occupations students, MATHEMATICAL models, MOTIVATION (Psychology), LEARNING, TEACHERS, THEORY, DESCRIPTIVE statistics, RESEARCH funding, STUDENT attitudes, ALGORITHMS, VIDEO recording - Abstract
This paper adopts psychological data analysis to conduct in-depth research on the correlation between teachers' classroom teaching behaviors and students' knowledge acceptance. First, the paper proposes a health-factor prediction model in two variants: a cluster-then-classify model and an integrated clustering-classification model. These two models use clustering algorithms to partition the dataset at fine granularity from the perspectives of subject users and mental-health factors, respectively, and then use decision-tree algorithms to classify and predict mental-health factor information from the subject users' basic information. The classroom learning process is coded, sampled, and quantified to obtain data on students' learning behaviors, and a visualization system based on students' classroom learning behaviors is designed and developed to record and analyze their behavior during classroom learning. Second, based on the collected datasets, we designed comparison experiments to validate the cluster-then-classify model and the integrated clustering-classification model and selected the better-performing one. The analysis suggests that teachers should increase effective praise and encouragement, increase meaningful teacher-student interactions, and become proficient in teaching media technology to reduce unnecessary time wastage. Strategies to enhance teachers' TPACK include enriching their knowledge bases of CK, TK, and PK; developing integration thinking; and enriching the types of activities for integrating technology. [ABSTRACT FROM AUTHOR]
- Published
- 2022