Search Results (303 results)
2. A study on multi-channel active sound profiling algorithm for hybrid control of broadband and narrowband noise inside vehicles.
- Author
- Liu, Xuexian, Zheng, Xu, Jia, Zibin, Li, Rubin, Wan, Bo, Liu, Chi, and Qiu, Yi
- Subjects
- ACTIVE noise control, NOISE control, NOISE, ALGORITHMS, TRAFFIC noise
- Abstract
• A hybrid broadband and narrowband active sound profiling algorithm is proposed for active hybrid noise control.
• Simulations of stationary and non-stationary hybrid noise sources prove the stability and robustness of the algorithm.
• The proposed algorithm can selectively control the desired narrowband noise components with specific magnitudes in hybrid noise.
• Real vehicle experiments, conducted using headrests, validate the theoretical analysis.
Vehicle interior noise consists of both broadband road noise and narrowband engine order noise. Currently, the road noise control (RNC) system and the engine order control (EOC) system operate independently in engineering applications. This paper introduces a novel multi-channel hybrid broadband and narrowband active noise control (HBNANC) algorithm designed to simultaneously attenuate road noise and engine order noise inside vehicles. On this basis, by integrating an error signal design (ESD) subsystem and a sinusoidal error separation (SES) subsystem, a multi-channel hybrid broadband and narrowband active sound profiling (HBNASP) algorithm is further proposed, enabling specific attenuation or amplification of individual narrowband components to meet subjective acoustic comfort requirements. The paper validates the algorithms' effectiveness and robustness through simulations and real vehicle experimental measurements under various conditions. The research offers a comprehensive approach to enhancing the auditory experience inside vehicles, contributing to the advancement of active noise control technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
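The narrowband side of such a system reduces, in its simplest textbook form, to a filtered-x LMS (FxLMS) loop on quadrature references. The sketch below is a hypothetical single-tone, single-channel toy (the tone frequency, the pure-delay secondary path, the primary-noise amplitude and phase, and the step size are all assumptions), not the paper's multi-channel HBNANC/HBNASP algorithm:

```python
import math

def narrowband_fxlms(freq=50.0, fs=1000.0, delay=3, mu=0.05, n_iters=4000):
    """Cancel a single tone with a 2-weight sin/cos adaptive filter.
    The secondary path is modeled as a pure 3-sample delay, assumed
    perfectly known (real systems identify an impulse response).
    Returns the mean absolute residual over the last 200 samples."""
    w = [0.0, 0.0]             # adaptive weights on the quadrature references
    ybuf = [0.0] * delay       # anti-noise traveling through the secondary path
    errs = []
    for n in range(n_iters):
        t = 2 * math.pi * freq * n / fs
        d = 1.5 * math.sin(t + 0.3)              # primary noise at the error mic
        x = (math.sin(t), math.cos(t))           # quadrature reference signals
        y = w[0] * x[0] + w[1] * x[1]            # controller output
        ybuf.append(y)
        ys = ybuf.pop(0)                         # output after secondary path
        e = d - ys                               # residual at the error mic
        td = 2 * math.pi * freq * (n - delay) / fs
        xf = (math.sin(td), math.cos(td))        # filtered (delayed) references
        w[0] += mu * e * xf[0]                   # FxLMS weight update
        w[1] += mu * e * xf[1]
        errs.append(abs(e))
    return sum(errs[-200:]) / 200
```

With an exact secondary-path model and a stationary tone, the residual converges to essentially zero; the paper's contribution is coordinating many such narrowband controllers with a broadband controller in one multi-channel system.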
3. Joint learning framework of superpixel generation and fuzzy sparse subspace clustering for color image segmentation.
- Author
- Wu, Chengmao and Zhao, Jingtian
- Subjects
- PIXELS, FUZZY algorithms, IMAGE segmentation, COMPUTATIONAL complexity, RESEARCH personnel, ALGORITHMS
- Abstract
• A joint learning model for superpixel-based fuzzy sparse subspace clustering is established.
• A four-level alternating iterative algorithm for superpixel-based image segmentation is proposed.
• A centroid shift strategy suited to image content is used to generate superpixels.
• Experimental results indicate that the proposed algorithm performs very well.
Sparse subspace clustering (SSC) is an important image segmentation method that constructs a self-representation coefficient matrix to capture the relationships between pixels and then applies spectral clustering. Compared with other unsupervised segmentation algorithms, SSC has good segmentation performance. However, SSC has high computational complexity when processing large images. To improve computational efficiency, many researchers have proposed superpixel-based SSC algorithms, which process superpixels instead of pixels and gain efficiency through preprocessing. Because superpixel generation is sensitive to noise, superpixel-based SSC algorithms still have poor robustness; in addition, the preprocessing increases the complexity of the algorithm. To address these issues, this paper proposes a robust superpixel-based fuzzy sparse subspace clustering algorithm. The algorithm combines fuzzy sparse subspace clustering with superpixel generation and constructs a unified optimization learning framework through fuzzy C-multiple-means clustering to improve segmentation performance and reduce complexity. This paper also introduces additional superpixel features into sparse subspace clustering to further enhance segmentation performance. Experimental results indicate that the proposed algorithm not only outperforms existing state-of-the-art robust segmentation algorithms that do not use superpixels, but is also superior to the latest superpixel-based segmentation algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. YOLO-TSL: A lightweight target detection algorithm for UAV infrared images based on Triplet attention and Slim-neck.
- Author
- Cao, Lei, Wang, Qing, Luo, Yunhui, Hou, Yongjie, Cao, Jun, and Zheng, Wanglin
- Subjects
- DRONE aircraft, INFRARED imaging, COMPUTATIONAL complexity, SPINE, ALGORITHMS
- Abstract
Infrared target detection from the Unmanned Aerial Vehicle (UAV) perspective often suffers from low accuracy, excessive model parameters, and slow processing speed. To address these challenges, this paper proposes a lightweight infrared target detection algorithm, named YOLO-TSL, based on an improved version of YOLOv8n. Building on an in-depth analysis of UAV infrared image characteristics, the paper introduces Triplet Attention into the model backbone to enhance detection accuracy and effectively suppress background interference. Moreover, YOLO-TSL incorporates the Slim-Neck architecture, which features GSConv and GSbottleneck in the neck structure and employs a one-time aggregation method to design the cross-level partial network known as the VoV-GSCSP module. This architecture significantly reduces the model's computational complexity while maintaining high detection accuracy. Furthermore, the paper introduces an innovative inner-MPDIoU loss function that optimizes the IoU loss computation through enhanced bounding box similarity and adaptive auxiliary bounding box scale adjustment, building on both inner-IoU and MPDIoU. In experimental validation, YOLO-TSL demonstrates significant improvements over YOLOv8n, including a 3.9% increase in mAP50 and a 3% increase in recall. Additionally, YOLO-TSL has 15.7% fewer parameters, requires 11% less computation, and offers 26% faster inference. Comparative experiments on the FLIR dataset show that the algorithm not only outperforms YOLOv8n by 1.4% in mAP50 but also offers significant advantages over other algorithms in parameter count and computational efficiency, demonstrating YOLO-TSL's superiority in accuracy, efficiency, and speed.
• Triplet Attention boosts UAV detection accuracy.
• Slim-Neck reduces complexity while maintaining accuracy.
• The new inner-MPDIoU enhances bounding box accuracy.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Gradient descent algorithm for the optimization of fixed priorities in real-time systems.
- Author
- Rivas, Juan M., Gutiérrez, J. Javier, Guasque, Ana, and Balbastre, Patricia
- Subjects
- COMPUTATIONAL complexity, MACHINE learning, PROBLEM solving, ALGORITHMS
- Abstract
This paper considers the offline assignment of fixed priorities in partitioned preemptive real-time systems where tasks have precedence constraints. The problem is crucial in this type of system, as a good fixed-priority assignment allows efficient use of the processing resources while meeting all deadlines. The literature offers several proposals to solve this problem, with varying trade-offs between the quality of their results and their computational complexity. In this paper, we propose a new approach that leverages algorithms widely used in the field of machine learning: gradient descent, the Adam optimizer, and gradient noise. We show how to adapt these algorithms to the problem of fixed-priority assignment in conjunction with existing worst-case response time analyses. We demonstrate the performance of our proposal on synthetic task sets of different sizes. The evaluation shows that our proposal finds more schedulable solutions than previous heuristics, approximating optimal but intractable approaches such as MILP or brute force, while requiring reasonable execution times. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
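The optimizers named in the abstract are standard. A generic Adam loop with additive gradient noise looks as follows; the toy quadratic cost and the 1/sqrt(t) step decay are stand-ins chosen for testability, not the paper's response-time-analysis objective over priority values:

```python
import math, random

def adam_with_noise(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                    eps=1e-8, noise_std=0.01, n_iters=2000, seed=0):
    """Adam with additive Gaussian gradient noise and a 1/sqrt(t)
    step-size decay; `grad` stands in for the gradient of whatever
    (non-)schedulability cost is being minimized."""
    rng = random.Random(seed)
    x = list(x0)
    m = [0.0] * len(x)
    v = [0.0] * len(x)
    for t in range(1, n_iters + 1):
        g = [gi + rng.gauss(0.0, noise_std) for gi in grad(x)]
        step = lr / math.sqrt(t)
        for i in range(len(x)):
            m[i] = beta1 * m[i] + (1 - beta1) * g[i]
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i]
            mhat = m[i] / (1 - beta1 ** t)   # bias-corrected first moment
            vhat = v[i] / (1 - beta2 ** t)   # bias-corrected second moment
            x[i] -= step * mhat / (math.sqrt(vhat) + eps)
    return x

# Toy cost f(x) = sum_i (x_i - i)^2, minimized at x = (0, 1, 2).
x = adam_with_noise(lambda x: [2.0 * (xi - i) for i, xi in enumerate(x)],
                    [5.0, 5.0, 5.0])
```

In the paper's setting the continuous values would be mapped back to a priority ordering and scored by response-time analysis; the loop structure is unchanged.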
6. Generalized black hole clustering algorithm.
- Author
- Saltos, Ramiro and Weber, Richard
- Subjects
- METRIC spaces, PATTERN recognition systems, ALGORITHMS, COMPUTATIONAL complexity
- Abstract
The Black Hole Clustering (BHC) algorithm is a density-based partitional clustering method inspired by Density-Based Spatial Clustering of Applications with Noise (DBSCAN). It requires neither the number of clusters nor the computation of the pairwise distance matrix between data points, making it faster than DBSCAN, and its single parameter is intuitively easier to set than DBSCAN's epsilon. However, BHC needs the allocation of so-called black holes that have to be linearly independent, making the algorithm in its current version suitable only for two- or three-dimensional data sets. In this paper, we propose a generalized version of the black hole clustering algorithm (GBHC) by introducing a novel black hole allocation procedure for higher-dimensional data spaces. Furthermore, the proposed method is data-independent, so it only has to be run once to obtain the black hole positions for all finite-dimensional metric spaces. We performed extensive computational experiments to compare GBHC with DBSCAN. The results show that both algorithms obtain comparable clustering solutions; GBHC, however, outperforms DBSCAN in computational complexity and explainability.
• We propose a novel method to place the black holes for high-dimensional data sets.
• This method, called Generalized Black Hole Clustering, is data-independent.
• We run the new method once to get black hole positions in all finite metric spaces.
• We compare the GBHC and DBSCAN algorithms using several validation measures.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. SparseShift-GCN: High precision skeleton-based action recognition.
- Author
- Zang, Ying, Yang, Dongsheng, Liu, Tianjiao, Li, Hui, Zhao, Shuguang, and Liu, Qingshan
- Subjects
- CONVOLUTIONAL neural networks, NETWORK performance, COMPUTATIONAL complexity, ALGORITHMS
- Abstract
• Accuracy improves markedly after replacing the SCS module with the CSC module in Shift-GCN.
• Inspired by shift CNNs, we replace the shift spatial convolution in Shift-GCN with a sparse shift, naming the result SparseShift-GCN.
• Accuracy improves further after introducing OHEM into SparseShift-GCN.
Skeleton-based action recognition is widely used thanks to its light weight and strong resistance to interference. Recently, graph convolutional networks (GCNs) have been applied to action recognition and have made breakthrough progress. The shift convolution operator can effectively replace spatial convolution and greatly reduce the computational complexity of the algorithm. This article first applies the Conv-Shift-Conv (CSC) module and the Shift-Conv-Shift-Conv (SC2) module, respectively, to replace the Shift-Conv-Shift (SCS) module in the spatial graph convolution of Shift-GCN. This design reorders the shifted channels more effectively. The experimental results show that the CSC module works best and effectively improves the accuracy of the model. The article then proposes replacing the shift module in the original Shift-GCN with a sparse shift module, yielding SparseShift-GCN. This structure reduces feature redundancy, prevents overfitting, and improves the generality of the model, achieving better results on top of the previous improvement. Finally, this paper carefully designs the model's loss function using OHEM Loss and Weighted Loss. Experimental results show that OHEM Loss further improves the accuracy of the algorithm. After this series of improvements, the proposed model improves the accuracy of 4 different streams to varying degrees, improving the overall performance of the network. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. The fewest clues problem.
- Author
- Demaine, Erik D., Ma, Fermi, Schvartzman, Ariel, Waingarten, Erik, and Aaronson, Scott
- Subjects
- PUZZLES, COMPUTATIONAL complexity, ALGORITHMS, THEORY of distributions (Functional analysis), TRIANGLES
- Abstract
When analyzing the computational complexity of well-known puzzles, most papers consider the algorithmic challenge of solving a given instance of (a generalized form of) the puzzle. We take a different approach by analyzing the computational complexity of designing a "good" puzzle. We assume a puzzle maker designs part of an instance but, before publishing it, wants to ensure that the puzzle has a unique solution. Given a puzzle, we introduce its FCP (fewest clues problem) version: given an instance of the puzzle, what is the minimum number of clues we must add to make the instance uniquely solvable? We analyze this question for the Nikoli puzzles Sudoku, Shakashaka, and Akari. Solving these puzzles is NP-complete, and we show that their FCP versions are Σ2P-complete. Along the way, we show that the FCP versions of Triangle Partition, Planar 1-in-3 SAT, and Latin Square are all Σ2P-complete. We show that even problems in P have difficult FCP versions, sometimes even Σ2P-complete, though "closed under cluing" problems are in the (presumably) smaller class NP; for example, FCP 2SAT is NP-complete. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
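For intuition, the FCP can be brute-forced on a toy instance. The sketch below searches for the fewest clues that make a 3x3 Latin square uniquely completable (the target square and the exhaustive subset search are illustrative; the paper's results concern hardness, not practical algorithms):

```python
from itertools import combinations

def count_completions(grid, n=3):
    """Backtracking count of the Latin-square completions of a partial
    n x n grid (None marks an empty cell)."""
    empty = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
    def rec(k):
        if k == len(empty):
            return 1
        r, c = empty[k]
        total = 0
        for v in range(n):
            if all(grid[r][j] != v for j in range(n)) and \
               all(grid[i][c] != v for i in range(n)):
                grid[r][c] = v
                total += rec(k + 1)
                grid[r][c] = None    # undo before trying the next value
        return total
    return rec(0)

def fewest_clues(target):
    """Smallest set of cells of `target` whose values force a unique
    completion: a brute-force FCP solver, feasible only at toy sizes."""
    n = len(target)
    cells = [(r, c) for r in range(n) for c in range(n)]
    for k in range(n * n + 1):
        for subset in combinations(cells, k):
            grid = [[None] * n for _ in range(n)]
            for r, c in subset:
                grid[r][c] = target[r][c]
            if count_completions(grid, n) == 1:
                return k, subset
```

On the cyclic square [[0,1,2],[1,2,0],[2,0,1]] the empty grid has 12 completions, any single clue leaves 4, and two clues (for example (0,0)=0 together with (1,1)=2) suffice for uniqueness. The Σ2P-completeness results say that for the puzzles studied, no approach fundamentally better than this exists-or-check pattern is expected.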
9. Deep Reinforcement Learning for Crowdsourced Urban Delivery.
- Author
- Ahamed, Tanvir, Zou, Bo, Farazi, Nahid Parvez, and Tulabandhula, Theja
- Subjects
- DEEP learning, REINFORCEMENT learning, AD hoc computer networks, ASSIGNMENT problems (Programming), ALGORITHMS, NUMERICAL analysis, COMPUTATIONAL complexity
- Abstract
• Investigate assigning shipping requests to crowdsourcees with time and capacity constraints.
• Propose a centralized, deep reinforcement learning-based approach.
• Present a new state space representation encompassing spatial-temporal and capacity information.
• Embed heuristics-guided action choice in DRL to preserve tractability and enhance efficiency.
• Integrate rule-interposing into DRL to further enhance training and implementation efficiency.
This paper investigates the problem of assigning shipping requests to ad hoc couriers in the context of crowdsourced urban delivery. The shipping requests are spatially distributed, each with a limited time window between the earliest time for pickup and the latest time for delivery. The ad hoc couriers, termed crowdsourcees, also have limited time availability and carrying capacity. We propose a new deep reinforcement learning (DRL)-based approach to this assignment problem. A deep Q network (DQN) algorithm is trained that incorporates two salient features, experience replay and a target network, which enhance the efficiency, convergence, and stability of DRL training. More importantly, this paper makes three methodological contributions: 1) presenting a comprehensive and novel characterization of crowdshipping system states that encompasses spatial-temporal and capacity information of crowdsourcees and requests; 2) embedding heuristics that leverage the information offered by the state representation and are based on intuitive reasoning to guide specific actions, preserving tractability and enhancing training efficiency; and 3) integrating rule-interposing to prevent repeated visits to the same routes and node sequences during routing improvement, further enhancing training efficiency by accelerating learning. The computational complexities of the heuristics and of the overall DQN training are investigated. The effectiveness of the proposed approach is demonstrated through extensive numerical analysis. The results show the benefits brought by the heuristics-guided action choice, rule-interposing, and time-related information in the state space during DRL training; the near-optimality of the solutions obtained; and the superiority of the proposed approach over existing methods in solution quality, computation time, and scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Measurement based dimension descent association algorithm for OTHR multi-detection multi-target tracking.
- Author
- Zhang, Yujie, Zhang, Zheng, Yu, Jianglong, Li, Qingdong, Dong, Xiwang, and Ren, Zhang
- Subjects
- TRACKING radar, CLUTTER (Radar), TRACKING algorithms, ALGORITHMS, COMPUTATIONAL complexity, MEASUREMENT
- Abstract
The problem of multi-detection multi-target tracking (MDMTT) using over-the-horizon radar (OTHR) in dense clutter environments is studied in this paper. The biggest challenge of MDMTT is the 3-dimensional multipath data association among measurements, detection models, and targets. In particular, many clutter measurements are generated in dense clutter environments, which greatly increases the computational burden of the 3-dimensional multipath data association. A measurement-based dimension descent association (DDA) algorithm is proposed that decomposes the 3-dimensional multipath data association into two 2-dimensional data associations. The proposed algorithm reduces the computational burden compared with the optimal 3-dimensional multipath data association, and its computational complexity is analyzed. In addition, a time extension method based on sequential measurements is designed to detect new-born targets that appear in the tracking scene. The convergence of the proposed measurement-based DDA algorithm is analyzed: the estimation error converges to 0 as the number of Gaussian mixtures tends to infinity. The effectiveness and rapidity of the measurement-based DDA algorithm are demonstrated by comparative simulations with previously proposed algorithms.
• Multi-detection multi-target tracking using OTHR is studied via dimension descent.
• The detection of new-born targets with unknown prior knowledge is considered.
• The convergence of the novel multi-target tracking algorithm is analyzed.
• The labeled GMPHD is proposed to reduce the computational burden.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. A hierarchical framework for distributed resilient control of islanded AC microgrids under false data injection attacks.
- Author
- Zarei, Mahdi Sadegh and Atrianfar, Hajar
- Subjects
- MICROGRIDS, DISTRIBUTED power generation, EXTREME value theory, COMPUTATIONAL complexity, ALGORITHMS
- Abstract
In this paper, we propose a control scheme to make the microgrid control layers resilient to cyberattacks. The studied microgrid consists of several distributed generation (DG) units, and we consider the hierarchical control structure that is common for microgrids. The use of communication channels among DGs has made microgrids more vulnerable, and this is where cybersecurity issues arise. In this work, we add three algorithms (reputation-based, Weighted Mean Subsequence Reduced (W-MSR), and Resilient Consensus Algorithm with Trusted Nodes (RCA-T)) to the secondary control layer of the microgrid and make it resilient to false data injection (FDI) attacks. In reputation-based control, dedicated procedures detect the attacked DGs and isolate them from the others. W-MSR and RCA-T are Mean Subsequence Reduced (MSR)-based algorithms that fade the effect of attacks without locating them: they use a simple strategy that ignores some extreme values of neighboring agents, so an attacker can simply be ignored. Our analysis of the reputation-based algorithm is based on scrambling matrices, so the communication graph can switch within a prescribed set. In each of the above cases, to evaluate the performance of the designed controllers, we complement the theoretical analysis with simulation-based evaluation and comparison.
• Reputation-based control has a fault detection and isolation mechanism.
• W-MSR and RCA-T use a fault-resilient control that makes microgrids flexible.
• The reputation-based algorithm tolerates switching topologies of the communication graph.
• The W-MSR and RCA-T algorithms add no computational burden at the DG level.
• Performance is evaluated via simulations in addition to theoretical analysis.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
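W-MSR itself is simple to state: each normal node sorts its neighbors' values, discards up to f of them above its own value and up to f below, and averages what remains with its own value. A minimal sketch follows; the complete graph, the single attacker, and the constant false value 100 are assumptions made for the demo, not details from the paper:

```python
def wmsr_step(values, adjacency, f):
    """One W-MSR round for the normal nodes listed in `adjacency`:
    discard up to f neighbor values above one's own and up to f below,
    then average the rest together with one's own value."""
    new = {}
    for i, nbrs in adjacency.items():
        xi = values[i]
        nb = sorted(values[j] for j in nbrs)
        low = [v for v in nb if v < xi]
        high = [v for v in nb if v > xi]
        eq = [v for v in nb if v == xi]
        kept = low[min(f, len(low)):] + eq + high[:max(0, len(high) - f)]
        new[i] = (xi + sum(kept)) / (1 + len(kept))
    return new

# Demo: 5 normal nodes on a complete graph plus one attacker 'a' that
# keeps injecting the false value 100 (hypothetical FDI scenario).
vals = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0, 'a': 100.0}
adj = {i: [j for j in range(5) if j != i] + ['a'] for i in range(5)}
for _ in range(100):
    vals = wmsr_step(vals, adj, f=1)
    vals['a'] = 100.0   # the attacker is not constrained by the protocol
```

The normal nodes reach consensus on a value inside their initial range, because the attacker's extreme value is among the f discarded neighbors every round; this is the "fade the effect of attacks without locating them" behavior the abstract describes.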
12. A New Efficient Multi-Channel Fast NLMS (MC-FNLMS) Adaptive Algorithm for Audio Teleconferencing systems.
- Author
- Zerouali, Mohamed and Djendi, Mohamed
- Subjects
- TELECONFERENCING, SOUND systems, NOISE control, ALGORITHMS, ADAPTIVE filters, COMPUTATIONAL complexity
- Abstract
In this paper, we propose a new multichannel fast normalized least mean square (MC-FNLMS) algorithm for audio teleconferencing systems. The main idea behind the proposed MC-FNLMS algorithm is the introduction of a first-order prediction process on the noisy input signals before the adaptive filtering process of each channel. The proposed algorithm is an alternative to existing algorithms in multi-channel applications. A full stability analysis of the proposed algorithm is derived in this paper. Simulation results comparing the proposed MC-FNLMS algorithm with the two classical Multichannel Normalized Least Mean Square (MC-NLMS) and Multichannel Affine Projection (MC-APA) algorithms, in terms of convergence speed, noise reduction, and computational complexity, show the superiority of the proposed algorithm for multichannel adaptive noise reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
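As a baseline for what the NLMS family does, the sketch below is a plain single-channel NLMS filter identifying an unknown FIR response; the paper's MC-FNLMS additionally applies a first-order prediction stage to each channel's input and runs one such filter per channel. The filter length, step size, and noise-free white input here are assumptions for the demo:

```python
import random

def nlms_identify(h_true, n_iters=5000, mu=0.5, eps=1e-6, seed=1):
    """Plain single-channel NLMS identifying the FIR response `h_true`
    from a white-noise input (noise-free desired signal for clarity)."""
    rng = random.Random(seed)
    L = len(h_true)
    w = [0.0] * L                    # adaptive filter weights
    x = [0.0] * L                    # input delay line, newest sample first
    for _ in range(n_iters):
        x.insert(0, rng.gauss(0.0, 1.0))
        x.pop()
        d = sum(h * xi for h, xi in zip(h_true, x))   # desired signal
        y = sum(wi * xi for wi, xi in zip(w, x))      # filter output
        e = d - y                                     # a priori error
        norm = sum(xi * xi for xi in x) + eps         # input energy
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]  # NLMS update
    return w

w = nlms_identify([0.8, -0.4, 0.2, 0.1])
```

The normalization by input energy is what makes the step size insensitive to signal level; the "fast" variants reduce the cost of maintaining that normalization, which matters when one filter runs per channel.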
13. Hardness results for three kinds of colored connections of graphs.
- Author
- Huang, Zhong and Li, Xueliang
- Subjects
- GRAPH connectivity, HARDNESS, COMPUTATIONAL complexity, CHROMATIC polynomial, ALGORITHMS
- Abstract
The concept of the rainbow connection number of a graph was introduced by Chartrand et al. in 2008. Inspired by this concept, other colored versions of connectivity in graphs were introduced, such as the monochromatic connection number by Caro and Yuster in 2011, the proper connection number by Borozan et al. in 2012, and the conflict-free connection number by Czap et al. in 2018, as well as other variants of connection numbers later on. Chakraborty et al. proved that computing the rainbow connection number of a graph is NP-hard. For a long time, attempts were made to settle the computational complexity of the monochromatic connection number, the proper connection number, and the conflict-free connection number of a graph, but the question remained open; only the strong versions, i.e., the strong proper connection number and the strong conflict-free connection number, were determined to be NP-hard. In this paper, we prove that computing each of the monochromatic connection number, the proper connection number, and the conflict-free connection number of a graph is NP-hard. This solves a long-standing problem in this field, posed in many workshop talks and papers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. A class of augmented complex-value FLANN adaptive algorithms for nonlinear systems.
- Author
- Luo, Zheng-Yan, Zhou, Ji-Liu, Pu, Yi-Fei, and Li, Lei
- Subjects
- NONLINEAR systems, SQUARE root, RANDOM variables, ALGORITHMS, COMPLEX variables, COMPUTATIONAL complexity
- Abstract
Until recently, few studies have addressed stereophonic acoustic echo cancellation (SAEC) with nonlinear systems. To identify such a nonlinear model, the functional link artificial neural network (FLANN) and the widely linear model provide an approach to exploring SAEC with complex random variables. In this paper, a class of augmented complex-value functional link network (ACFLN) adaptive algorithms is developed. Building on the augmented complex-value functional least-mean-square (ACFLMS) algorithm, we propose the recursive augmented complex-value functional least-mean-square (RACFLMS) algorithm, designed with a recursive structure. To further reduce its computational complexity and enhance its performance, a novel inverse square root function is employed in the structure of the RACFLMS algorithm. The results of several experiments demonstrate that our approach can effectively model nonlinear systems and verify the improvement of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. A fractional gradient descent algorithm robust to the initial weights of multilayer perceptron.
- Author
- Xie, Xuetao, Pu, Yi-Fei, and Wang, Jian
- Subjects
- FRACTIONAL calculus, ALGORITHMS, COMPUTATIONAL complexity
- Abstract
For a multilayer perceptron (MLP), the initial weights significantly influence its performance. Based on an enhanced fractional derivative extended from convex optimization, this paper proposes a fractional gradient descent (RFGD) algorithm robust to the initial weights of the MLP. We analyze the effectiveness of the RFGD algorithm, and its convergence is also analyzed. The computational complexity of the RFGD algorithm is generally larger than that of the gradient descent (GD) algorithm but smaller than that of the Adam, Padam, AdaBelief, and AdaDiff algorithms. Numerical experiments show that the RFGD algorithm is strongly robust to the order of the fractional calculus, which is the only added parameter compared to the GD algorithm. More importantly, compared to the GD, Adam, Padam, AdaBelief, and AdaDiff algorithms, the experimental results show that the RFGD algorithm has the most robust performance with respect to the initial weights of the MLP. Meanwhile, the correctness of the theoretical analysis is verified. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
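Fractional gradient descent replaces the integer-order gradient step with one scaled by a fractional power of the distance to a terminal point. The sketch below uses one common Caputo-type form from the fractional-calculus literature on a 1-D quadratic; it illustrates the general idea only, and the order alpha, the terminal point, and the step size are all assumptions, not the paper's RFGD update:

```python
import math

def fractional_gd(grad, w0, alpha=0.9, lr=0.05, n_iters=800, terminal=0.0):
    """Caputo-type fractional gradient step (a common form, not
    necessarily the paper's RFGD):
    w <- w - lr * grad(w) * |w - terminal|**(1 - alpha) / Gamma(2 - alpha)."""
    w = w0
    scale = math.gamma(2.0 - alpha)
    for _ in range(n_iters):
        w -= lr * grad(w) * abs(w - terminal) ** (1.0 - alpha) / scale
    return w

# Toy objective f(w) = (w - 3)^2, gradient 2(w - 3); start away from the
# terminal point so the fractional factor is nonzero.
w = fractional_gd(lambda w: 2.0 * (w - 3.0), w0=0.5)
```

As alpha approaches 1 the extra factor tends to 1 and the update reduces to ordinary gradient descent, which is why the fractional order can be treated as a single added tuning parameter.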
16. Comparison of scalar point multiplication algorithms in a low resource device.
- Author
- Ramdani, Mohamed, Benmohammed, Mohamed, and Benblidia, Nadjia
- Subjects
- MATHEMATICAL formulas, PUBLIC key cryptography, ELLIPTIC curve cryptography, MULTIPLICATION, ALGORITHMS, CRYPTOGRAPHY
- Abstract
Since its invention, Elliptic Curve Cryptography (ECC) has been considered an ideal choice for implementing public key cryptography in resource-constrained devices, thanks to its small keys. Scalar point multiplication is the central and most complex operation in ECC's cryptographic computations, and it requires extensive optimization of execution time and energy consumption, especially in low-computing-power devices such as embedded systems. Thus, many scalar point multiplication algorithms have been proposed, each using its own computational techniques and mathematical formulas. In this paper, we combine these computational techniques with several optimized mathematical formulas and implement them in elliptic curve scalar point multiplication algorithms over a finite field. The aim of this work is to identify the most efficient algorithm, i.e., the one that combines the best computational technique and mathematical formulas and consequently offers lower memory requirements and faster field arithmetic operations. The results show that the Montgomery ladder algorithm with co-Z addition and update formulas gives better results than the other algorithms implemented in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
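The Montgomery ladder itself is independent of the coordinate formulas being compared: every scanned bit triggers the same add-then-double pattern, which is what makes it attractive against simple power analysis on constrained devices. A sketch with the group operation abstracted out follows; the demo group (integers under addition mod a prime) is chosen only so the result is easy to check, and a real ECC implementation would supply point addition, e.g. the co-Z formulas, in its place:

```python
def montgomery_ladder(k, P, op, identity):
    """Montgomery ladder for k*P: both branches perform one addition and
    one doubling, keeping the invariant R1 = R0 + P at every step."""
    R0, R1 = identity, P
    for bit in bin(k)[2:]:          # most significant bit first
        if bit == '0':
            R1 = op(R0, R1)         # R1 <- R0 + R1
            R0 = op(R0, R0)         # R0 <- 2*R0
        else:
            R0 = op(R0, R1)         # R0 <- R0 + R1
            R1 = op(R1, R1)         # R1 <- 2*R1
    return R0

# Demo group: integers under addition modulo a prime, so the ladder's
# result can be checked against plain multiplication.
p = 10007
add = lambda a, b: (a + b) % p
```

Because the operation sequence does not depend on the bit values, the power trace of each iteration looks the same, unlike plain double-and-add where a set bit costs an extra addition.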
17. Quantized synchronization of memristive neural networks with time-varying delays via super-twisting algorithm.
- Author
- Sun, Bo, Wen, Shiping, Wang, Shengbo, Huang, Tingwen, Chen, Yiran, and Li, Peng
- Subjects
- TIME-varying networks, SYNCHRONIZATION, CARDIAC pacing, COMPUTATIONAL complexity, ALGORITHMS
- Abstract
In this paper, we investigate the quantized synchronization control problem of memristive neural networks (MNNs) with time-varying delays via the super-twisting algorithm. A feedback controller with a quantization method is introduced. To greatly reduce the computational complexity of the controller under the super-twisting algorithm, two quantized control schemes are proposed, using a uniform quantizer and a logarithmic quantizer. We obtain sufficient conditions for the specific control plans to guarantee that the driving MNNs synchronize with the response MNNs. A novel Lyapunov functional is designed to analyze the synchronization problem. Finally, at the end of the paper, illustrative examples are given in support of our results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
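The super-twisting algorithm referenced here is the standard second-order sliding-mode law u = -k1·|s|^(1/2)·sign(s) + v with v' = -k2·sign(s). A toy Euler simulation is sketched below; the scalar plant s' = u + d, the disturbance, the gains, and the step size are all assumptions for illustration, and the paper applies the law to memristive neural networks with quantized feedback rather than to this toy:

```python
import math

def super_twisting(s0=1.0, k1=2.0, k2=2.0, dt=1e-3, n_steps=5000):
    """Euler simulation of super-twisting control on ds/dt = u + d(t),
    with the derivative of the matched disturbance bounded and the
    gains chosen above that bound."""
    sign = lambda x: (x > 0) - (x < 0)
    s, v, t = s0, 0.0, 0.0
    for _ in range(n_steps):
        d = 0.5 * math.sin(2.0 * t)                  # matched disturbance
        u = -k1 * math.sqrt(abs(s)) * sign(s) + v    # continuous part
        v += -k2 * sign(s) * dt                      # integral (twisting) part
        s += (u + d) * dt
        t += dt
    return s
```

The appeal of the law is that the discontinuity is hidden inside the integrator, so the applied control u is continuous and chattering is much milder than in first-order sliding mode.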
18. How hard is safe bribery?
- Author
- Karia, Neel, Mallick, Faraaz, and Dey, Palash
- Subjects
- BRIBERY, SOCIAL choice, SOCIAL problems
- Abstract
Bribery in an election is one of the well-studied control problems in computational social choice. In this paper, we propose and study the safe bribery problem. Here the goal of the briber is to ask the bribed voters to vote in such a way that the briber never prefers the original winner (of the unbribed election) to the new winner, even if the bribed voters do not fully follow the briber's advice. Indeed, in many applications of bribery, campaigning for example, the briber often has limited control over whether the bribed voters eventually follow her recommendation, and thus it is conceivable that the bribed voters partially or fully ignore it. We provide a comprehensive complexity-theoretic landscape of the safe bribery problem for many common voting rules. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. A survey on adaptive active noise control algorithms overcoming the output saturation effect.
- Author
- Guo, Yu, Shi, Dongyuan, Shen, Xiaoyi, Ji, Junwei, and Gan, Woon-Seng
- Subjects
- ACTIVE noise control, ALGORITHMS, COMPUTATIONAL complexity, CONSTRAINT algorithms
- Abstract
This paper presents a comparison of contemporary algorithms aimed at mitigating saturation-induced challenges in active noise control (ANC) systems. The saturation effect introduces nonlinear elements into the adaptive algorithm, impacting the ANC system's performance and degrading its stability. Detailed theoretical analysis indicates that the cause of the output saturation issue is the control signal's output power exceeding its limit. Recently, two categories of adaptive algorithms have been developed to address this issue. The first category effectively constrains the output signal to manage the saturation effect, exhibiting notable practical efficacy. The second category employs nonlinear ANC (NANC) algorithms to model the inherent signal nonlinearity, controlling the harmonic distortion caused by saturation. This work summarizes the key results in the literature and demonstrates that output constraint algorithms outperform NANC algorithms in computational efficiency and robustness; hence, they are a more practical choice for coping with the output saturation issue in ANC systems.
• Output saturation deforms the control signal and degrades the stability of adaptive algorithms.
• Output-constraint algorithms constrain output saturation with low computational complexity.
• Nonlinear algorithms use nonlinear models to counteract output saturation.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. City services provision assessment algorithm.
- Author
- Khrulkov, Aleksandr, Mishina, Margarita E., and Sobolevsky, Stanislav L.
- Subjects
- MUNICIPAL services, CITIZEN satisfaction, CITY dwellers, ALGORITHMS, COMPUTATIONAL complexity, NONLINEAR programming
- Abstract
The paper is dedicated to a problem of computational urban science: estimating the availability of urban services with limited capacity. The applied side of the problem is the need to assess whether citizens living in the city's residential buildings are provided with a sufficient number of places in city services such as kindergartens, schools, and clinics. This information is necessary for sound management of the socio-economic and spatial development of the city. The computational complexity of the problem is associated with the great variety of factors that determine the variability of citizens' preference functions when accessing a particular service. It is further complicated by self-organization processes within this system, which lead to actual final satisfaction of the citizens' service demand but with significant deviations from the availability parameters established in regulatory documents or from the optimal state of the system. Since validation of such models for large cities is difficult or impossible, new computational approaches are required for assessing the provision of the population with urban services based on the optimal state of the system, so as to improve the efficiency of city management. The proposed algorithm is based on sequential aggregation of the demand from buildings for each service object and redistribution of unserved demand among other service objects with free capacity. This approach avoids the high computational complexity that arises when nonlinear programming is used to calculate buildings' provision of services in large cities. The article presents the algorithm for calculating the provision of the population and demonstrates its application on data from the city of St. Petersburg. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
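The aggregate-and-redistribute idea in entry 20 can be sketched in a few lines; all building, facility, and distance values here are hypothetical, not the paper's actual model. Each building claims places at facilities in order of preference (distance here), and demand that no facility can absorb is recorded as unserved.

```python
def assign_demand(buildings, facilities, dist):
    """buildings: {id: demand}; facilities: {id: capacity};
    dist[b][f]: distance from building b to facility f."""
    free = dict(facilities)              # remaining capacity per facility
    served = {b: {} for b in buildings}
    for b, demand in buildings.items():
        # facilities ordered by the building's preference (nearest first)
        for fac in sorted(free, key=lambda fac: dist[b][fac]):
            if demand == 0:
                break
            take = min(demand, free[fac])   # claim what fits
            if take:
                served[b][fac] = take
                free[fac] -= take
                demand -= take
        # demand left over is unserved: an under-provision signal
        served[b]["unserved"] = demand
    return served, free

buildings = {"b1": 30, "b2": 50}
facilities = {"f1": 40, "f2": 25}
dist = {"b1": {"f1": 1, "f2": 5}, "b2": {"f1": 2, "f2": 3}}
served, free = assign_demand(buildings, facilities, dist)
```

Here b1 is fully served by f1, while b2 takes f1's remainder, fills f2, and still has 15 places of unserved demand, which is exactly the provision-deficit information the assessment needs.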
21. Divide-and-conquer based large-scale spectral clustering.
- Author
-
Li, Hongmin, Ye, Xiucai, Imakura, Akira, and Sakurai, Tetsuya
- Subjects
- *
SPARSE matrices , *COMPUTATIONAL complexity , *COMPUTER workstation clusters , *BIPARTITE graphs , *ALGORITHMS - Abstract
Spectral clustering is one of the most popular clustering methods. However, balancing the efficiency and effectiveness of large-scale spectral clustering under limited computing resources has long remained an open problem. In this paper, we propose a divide-and-conquer based large-scale spectral clustering method that strikes a good balance between efficiency and effectiveness. In the proposed method, a divide-and-conquer based landmark selection algorithm and a novel approximate similarity matrix approach are designed to construct a sparse similarity matrix with low computational complexity. Clustering results can then be computed quickly through a bipartite graph partition process. The proposed method achieves lower computational complexity than most existing large-scale spectral clustering methods. Experimental results on ten large-scale datasets demonstrate the efficiency and effectiveness of the proposed method. The MATLAB code of the proposed method and the experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
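The core of the landmark construction in entry 21 is an n-by-p point-to-landmark similarity with only r nonzeros per row, in place of a dense n-by-n graph. The toy sketch below (random blobs, hand-picked landmarks, Gaussian weights) illustrates the idea under its own assumptions rather than reproducing the authors' exact design.

```python
import numpy as np

def sparse_landmark_affinity(X, landmark_idx, r=3):
    # Each point is linked only to its r nearest landmarks, giving an
    # n-by-p row-stochastic matrix with r nonzeros per row instead of a
    # dense n-by-n similarity graph.
    L = X[landmark_idx]
    d2 = ((X[:, None, :] - L[None, :, :]) ** 2).sum(-1)   # squared distances
    Z = np.zeros_like(d2)
    for i in range(len(X)):
        near = np.argsort(d2[i])[:r]                      # r nearest landmarks
        w = np.exp(-d2[i, near] / (d2[i, near].mean() + 1e-12))
        Z[i, near] = w / w.sum()                          # normalize the row
    return Z

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),               # blob A
               rng.normal(5, 0.3, (50, 2))])              # blob B
landmark_idx = [0, 1, 2, 3, 4, 50, 51, 52, 53, 54]        # 5 landmarks per blob
Z = sparse_landmark_affinity(X, landmark_idx)
```

With well-separated blobs, Z becomes block-structured: points attach only to landmarks from their own cluster, which is what makes the subsequent bipartite-graph spectral step cheap and accurate.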
22. Accelerating spiking neural networks using quantum algorithm with high success probability and high calculation accuracy.
- Author
-
Chen, Yanhu, Wang, Cen, Guo, Hongxiang, Gao, Xiong, and Wu, Jian
- Subjects
- *
ARTIFICIAL neural networks , *ALGORITHMS , *BIOLOGICAL systems , *COMPUTATIONAL complexity , *INTELLIGENCE levels - Abstract
• We are the first to propose QSNN, a quantum algorithm to speed up the classic spiking neural network. • We prove the minimum success probability and calculation accuracy of QSNN. • We verify that the complexity of QSNN is log-polynomial, better than that of the SNN. • We validate the feasibility and robustness of QSNN on real-world datasets. Spiking neural networks (SNNs) are a kind of neuromorphic computing that meticulously imitates the operation of biological nervous systems. The SNN is thus seen as a promising approach to further improving the level of intelligence achieved by legacy artificial neural networks. In a spiking neuron of an SNN, finding the moment at which the output stimuli occur is the key step with the highest computational complexity. Classic (i.e., electrical) computer-based acceleration cannot reduce the computational complexity of the operations in the spiking neurons; as a result, the SNN suffers from scaling and deeper-emulation problems. Considering these issues, in this paper we propose a quantum algorithm to reduce the complexity of the key steps in the SNN and use this algorithm to build a quantum spiking neural network (QSNN). More specifically, first, we give mathematical proof that the problem of finding the output stimuli is approximately equal to the problem of calculating unsigned vector inner products, which can be transferred to a quantum operation. Second, we design a scalable quantum circuit of the QSNN for data of any dimension and evaluate its basic success probability and calculation accuracy. Third, to improve the QSNN performance, we propose a method to improve the minimum success probability to 99.8% by repeatedly performing the quantum circuit only 11 times. Fourth, we prove that the computational complexity of the QSNN is log-polynomial in the data dimension, which is much lower than the linear complexity of the classic SNN. Finally, we apply the QSNN to classification tasks on two real-world datasets, MNIST and Fashion-MNIST, each at two noise levels (no noise and 50% noise). The experimental results show that, in addition to the acceleration, the QSNN and SNN have equal classification accuracy, which suggests the feasibility and robustness of the QSNN. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
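Entry 22's repetition claim (99.8% minimum success after 11 circuit runs) is consistent with the standard amplification bound for independent trials: k repetitions succeed at least once with probability 1 - (1 - p)^k. The single-shot probability p below is a back-calculated assumption, not a figure stated in the paper.

```python
import math

def min_repetitions(p_single, p_target):
    # Smallest k with 1 - (1 - p)^k >= p_target for independent runs.
    return math.ceil(math.log(1 - p_target) / math.log(1 - p_single))

def overall_success(p_single, k):
    # Probability of at least one success in k independent runs.
    return 1 - (1 - p_single) ** k

# hypothetical single-shot success probability; the paper's 99.8% after
# 11 repetitions corresponds to p_single of roughly 0.43 under this model
p = 0.432
k = min_repetitions(p, 0.998)
```

Under this model the eleventh repetition is exactly what pushes the overall success probability past the 99.8% target, while ten repetitions fall short.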
23. KAB: A new k-anonymity approach based on black hole algorithm.
- Author
-
Kacha, Lynda, Zitouni, Abdelhafid, and Djoudi, Mahieddine
- Subjects
ALGORITHMS ,METAHEURISTIC algorithms ,COMPUTATIONAL complexity ,MATHEMATICAL optimization ,DATA quality ,PRIVACY - Abstract
K-anonymity is the most widely used approach to privacy-preserving microdata publication and is mainly based on generalization. Although generalization-based k-anonymity approaches can achieve the privacy-protection objective, they suffer from information loss. Clustering-based approaches have been successfully adapted for k-anonymization as they enhance data quality; however, finding an optimal solution has been shown to be NP-hard. Nature-inspired optimization algorithms are effective in finding solutions to complex problems. In this paper, we propose a novel algorithm based on a simple nature-inspired metaheuristic, the Black Hole Algorithm (BHA), to address these limitations. Experiments on a real data set show that our approach improves data utility compared to k-anonymity, BHA-based k-anonymity, and clustering-based k-anonymity approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
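The Black Hole Algorithm behind entry 23 is a simple population metaheuristic: the best candidate becomes the black hole, the remaining "stars" drift toward it, and stars crossing the event horizon are swallowed and respawned. The sketch below runs it on a toy sphere objective; in KAB the candidates would encode anonymization groupings and the fitness would measure information loss, which is beyond this snippet, so the objective, bounds, and population size are all assumptions.

```python
import random

def black_hole_optimize(f, dim, bounds, n_stars=20, iters=200, seed=0):
    # Generic Black Hole Algorithm sketch for minimizing f over a box.
    rng = random.Random(seed)
    lo, hi = bounds
    stars = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_stars)]
    for _ in range(iters):
        fitness = [f(s) for s in stars]
        bh = min(range(n_stars), key=fitness.__getitem__)  # black hole = best star
        # event horizon radius shrinks as the black hole dominates
        radius = fitness[bh] / (sum(fitness) + 1e-12)
        for i in range(n_stars):
            if i == bh:
                continue
            # pull each star toward the black hole
            stars[i] = [x + rng.random() * (b - x)
                        for x, b in zip(stars[i], stars[bh])]
            # stars inside the event horizon are swallowed and respawned
            dist = sum((x - b) ** 2 for x, b in zip(stars[i], stars[bh])) ** 0.5
            if dist < radius:
                stars[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return min(stars, key=f)

best = black_hole_optimize(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

The respawn step is what keeps the search from stalling once the population collapses onto the current best solution.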
24. Robust adaptive unscented Kalman filter for bearings-only tracking in three dimensional case.
- Author
-
Mehrjouyan, Ali and Alfi, Alireza
- Subjects
- *
COVARIANCE matrices , *NOISE measurement , *KALMAN filtering , *COMPUTATIONAL complexity , *CRITICAL point (Thermodynamics) , *ALGORITHMS - Abstract
This paper proposes an improved version of the Unscented Kalman Filter (UKF), namely the Robust Adaptive UKF (RAUKF), with a special focus on bearings-only target tracking in the three-dimensional case (3DBOT). Automatic tuning of the noise covariance matrices and robust estimation of the target states are critical to the performance of Kalman-type filtering algorithms, especially under the variable environmental conditions encountered underwater. The key idea of the proposed filter is to combine the robust aspects of the UKF with adaptation of the process and measurement noise covariance matrices at low computational complexity. The main contribution of this paper is to adjust these matrices by means of the steepest descent algorithm; the H ∞ technique is embedded to achieve superior performance in terms of accuracy and robustness against initial conditions and model uncertainties. Different experiments are performed to evaluate the performance of the proposed algorithm on the 3DBOT problem with a single moving observer. Simulations demonstrate that the proposed filter produces more accurate results with satisfactory computational burden in comparison with other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. Subspace segmentation with a large number of subspaces using infinity norm minimization.
- Author
-
Tang, Kewei, Su, Zhixun, Liu, Yang, Jiang, Wei, Zhang, Jie, and Sun, Xiyan
- Subjects
- *
IMAGE segmentation , *COMPUTATIONAL complexity , *ALGORITHMS , *SUBSPACES (Mathematics) , *STATISTICS - Abstract
Highlights • We report an observation important to LSN subspace segmentation. • We provide theoretical support for this observation. • Our paper is the first to adopt the infinity norm in subspace segmentation. • The computational complexity of our ADM algorithm is lower than O(n^3). • Our method achieves state-of-the-art results in extensive experiments. Abstract Spectral-clustering based methods have recently attracted considerable attention in the field of subspace segmentation. The approximately block-diagonal graphs achieved by such methods usually contain some noise, i.e., nonzero elements in the off-diagonal region, due to outlier contamination or the complex intrinsic structure of the dataset. In the experiments of most previous work, the number of subspaces is often no more than 10; in this situation, such noise has almost no influence on the segmentation results. However, segmentation performance can be negatively affected by the noise when the number of subspaces is large, which is quite common in real-world applications. In this paper, we address the problem of LSN subspace segmentation, i.e., large-subspace-number subspace segmentation. We first show that an approximately block-diagonal graph with a smaller difference among its diagonal blocks is more robust to the off-diagonal noise mentioned above. Then, by using the infinity norm to bound the difference among the diagonal blocks, we propose infinity norm minimization for LSN subspace segmentation. Experimental results demonstrate the effectiveness of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Generalized predecessor existence problems for Boolean finite dynamical systems on directed graphs.
- Author
-
Kawachi, Akinori, Ogihara, Mitsunori, and Uchizawa, Kei
- Subjects
- *
BOOLEAN algebra , *FINITE element method , *ALGORITHMS , *NUMERICAL analysis , *DYNAMICAL systems - Abstract
Abstract A Boolean Finite Synchronous Dynamical System (BFDS, for short) consists of a finite number of objects, each maintaining a boolean state; after individually receiving state assignments, the objects update their states according to object-specific, time-independent boolean functions, synchronously in discrete time steps. The present paper studies the computational complexity of determining, given a BFDS, a configuration (a boolean vector representing the states of the objects), and a positive integer t, whether there exists another configuration from which the given configuration can be reached in t steps. It was previously shown that this problem, which we call the t-Predecessor Problem, is NP-complete even for t = 1 if the update function of an object is either the conjunction of arbitrary fan-in or the disjunction of arbitrary fan-in. This paper studies the computational complexity of the t-Predecessor Problem for a variety of sets of permissible update functions as well as for polynomially bounded t. It also studies the t-Garden-Of-Eden Problem, a variant of the t-Predecessor Problem that asks whether a configuration has a t-predecessor which itself has no predecessor. The paper obtains complexity-theoretic characterizations of all but one of these problems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
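The t-Predecessor Problem from entry 26 is easy to state operationally. A brute-force checker over all 2^n configurations, feasible only for tiny n in line with the problem's NP-completeness, might look like the following; the three-object AND/OR/copy system is a made-up example.

```python
from itertools import product

def step(config, funcs):
    # one synchronous update: every object applies its boolean function
    # to the full current configuration
    return tuple(f(config) for f in funcs)

def has_t_predecessor(target, funcs, t, n):
    # exhaustive search over all 2^n candidate configurations
    for cand in product((0, 1), repeat=n):
        cur = cand
        for _ in range(t):
            cur = step(cur, funcs)
        if cur == target:
            return True
    return False

# toy BFDS: three objects with AND / OR / copy update rules
funcs = [lambda c: c[0] & c[1],   # conjunction of objects 0 and 1
         lambda c: c[1] | c[2],   # disjunction of objects 1 and 2
         lambda c: c[0]]          # copy of object 0
```

For this system, (1,1,1) has both a 1-predecessor and a 2-predecessor (it is a fixed point), while (1,0,0) is a garden-of-eden configuration: f(0)=1 forces objects 0 and 1 on, which forces f(1)=1.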
27. On real structured controllability/stabilizability/stability radius: Complexity and unified rank-relaxation based methods.
- Author
-
Zhang, Yuan, Xia, Yuanqing, and Zhan, Yufeng
- Subjects
- *
CONTROLLABILITY in systems engineering , *MATRIX norms , *LINEAR systems , *NP-hard problems , *RADIUS (Geometry) , *PARAMETERIZATION , *ALGORITHMS - Abstract
This paper addresses the real structured controllability, stabilizability, and stability radii (RSCR, RSSZR, and RSSR, respectively) of linear systems, which involve determining the distance (in terms of matrix norms) between a (possibly large-scale) system and its nearest uncontrollable, unstabilizable, and unstable systems, respectively, with a prescribed affine structure. This paper makes two main contributions. First, by demonstrating that determining the feasibilities of RSCR and RSSZR is NP-hard when the perturbations have a general affine parameterization, we prove that computing these radii is NP-hard. Additionally, we prove the NP-hardness of a problem related to the RSSR. These hardness results are independent of the matrix norm used. Second, we develop unified rank-relaxation based algorithms for these problems, which can handle both the Frobenius norm and the 2-norm based problems and share the same framework for the RSCR, RSSZR, and RSSR problems. These algorithms utilize the low-rank structure of the original problems and relax the corresponding rank constraints with a regularized truncated nuclear norm term. Moreover, a modified version of these algorithms can find local optima with performance specifications on the perturbations, under appropriate conditions. Finally, simulations suggest that the proposed methods, despite being in a simple framework, can find local optima as good as several existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. A high speed and memory efficient algorithm for perceptually-lossless volumetric medical image compression.
- Author
-
Lone, Mohd Rafi
- Subjects
IMAGE compression ,DIAGNOSTIC imaging ,MEDICAL imaging systems ,IMAGING systems ,ALGORITHMS ,COMPUTATIONAL complexity - Abstract
With the advancements in modern medical imaging systems, diagnostic image data has increased exponentially, and future medical applications require imaging devices to be portable. Image quality and real-time processing are of prime importance in medical image compression; therefore volumetric medical images need to be compressed in a perceptually lossless manner, and the time taken to compress images before transmission (or storage) should be small. In this paper, an algorithm for lossless and perceptually lossless medical image compression is proposed. The proposed algorithm uses two small lists and two small state tables to encode an image. Its compression efficiency is comparable to state-of-the-art lossless compression techniques, and its computational complexity and memory requirement are realistic for portable medical imaging devices. Combining these three features, the proposed algorithm is a strong candidate for image compression in volumetric medical imaging systems in comparison with the state-of-the-art compression algorithms known to us. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Algorithm and hardness results in double Roman domination of graphs.
- Author
-
Poureidi, Abolfazl
- Subjects
- *
DOMINATING set , *CHARTS, diagrams, etc. , *ALGORITHMS , *STATISTICAL decision making , *HARDNESS , *ROMANS , *COMPUTATIONAL complexity - Abstract
Let G = (V, E) be a graph. A double Roman dominating function (DRDF) of G is a function f : V → {0, 1, 2, 3} such that (i) each vertex v with f(v) = 0 is adjacent to either a vertex u with f(u) = 3 or two vertices u1 and u2 with f(u1) = f(u2) = 2, and (ii) each vertex v with f(v) = 1 is adjacent to a vertex u with f(u) > 1. The double Roman domination number of G is the minimum weight of a DRDF over all DRDFs on G, where the weight of a DRDF f on G is f(V) = ∑_{v ∈ V} f(v). In this paper, we first propose an algorithm to compute the double Roman domination number of an interval graph G = (V, E) in O(|V| + |E|) time, answering a problem posed in Banerjee et al. (2020) [2]. Next, we show that the decision problem associated with double Roman domination is NP-complete for split graphs. Finally, we show that the computational complexities of the Roman domination problem and the double Roman domination problem are different. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
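The two DRDF conditions in entry 29 translate directly into a checker. The path graph on four vertices and the two labelings below are illustrative assumptions; the second labeling fails because vertex 0 sees only one 2-labeled neighbor.

```python
def is_drdf(adj, f):
    # Check the two conditions from the DRDF definition:
    # (i)  every vertex with f(v)=0 sees a 3-neighbor or two 2-neighbors
    # (ii) every vertex with f(v)=1 sees a neighbor with f(u) > 1
    for v, nbrs in adj.items():
        vals = [f[u] for u in nbrs]
        if f[v] == 0:
            if 3 not in vals and vals.count(2) < 2:
                return False
        elif f[v] == 1:
            if max(vals, default=0) < 2:
                return False
    return True

def weight(f):
    # weight of a DRDF: sum of labels over all vertices
    return sum(f.values())

# path on 4 vertices: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
f_good = {0: 0, 1: 3, 2: 0, 3: 3}   # both 0-vertices see a 3-neighbor
f_bad  = {0: 0, 1: 2, 2: 0, 3: 2}   # vertex 0 sees only a single 2
```

Minimizing `weight` over all labelings that pass `is_drdf` is exactly the double Roman domination number, which is what the paper computes in linear time for interval graphs.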
30. Computational Complexity Relationship between Compaction, Vertex-Compaction, and Retraction.
- Author
-
Vikas, Narayan
- Abstract
Abstract In this paper, we show a very close relationship between the compaction, vertex-compaction, and retraction problems for reflexive and bipartite graphs. Similar to a long-standing open problem concerning whether the compaction and retraction problems are polynomially equivalent, the relationships that we present relate to our problems concerning whether the compaction and vertex-compaction problems are polynomially equivalent, and whether the vertex-compaction and retraction problems are polynomially equivalent. The relationships that we present also relate to the constraint satisfaction problem, providing evidence that similar to the compaction and retraction problems, it is also likely to be difficult to give a complete computational complexity classification of the vertex-compaction problem for every fixed reflexive or bipartite graph. In this paper, we however give a complete computational complexity classification of the vertex-compaction problem for all graphs, including even partially reflexive graphs, with four or fewer vertices, by giving proofs based on mostly just knowing the computational complexity classification results of the compaction problem for such graphs determined earlier by the author. Our results show that the compaction, vertex-compaction, and retraction problems are polynomially equivalent for every graph with four or fewer vertices. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. Computation of weighted Moore–Penrose inverse through Gauss–Jordan elimination on bordered matrices.
- Author
-
Sheng, Xingping
- Subjects
- *
GAUSSIAN elimination , *MATRICES (Mathematics) , *COMPUTATIONAL complexity , *ALGORITHMS , *STATISTICAL weighting - Abstract
In this paper, two new algorithms for computing the weighted Moore–Penrose inverse A†_{M,N} of a general matrix A with weights M and N are introduced and investigated; both are based on elementary row and column operations on two appropriately block-partitioned matrices. The computational complexity of the two algorithms is analyzed in detail. Comparison of computational complexities shows that the two proposed algorithms are always faster than those in Sheng and Chen (2013) and Ji (2014), respectively. Finally, an example is presented to demonstrate the two new algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
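Entry 31's bordered-matrix elimination is not reproduced here. Instead, the sketch below uses the standard square-root characterization of the weighted Moore–Penrose inverse, A†_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})† M^{1/2}, which is handy for verifying any algorithm's output against the four defining equations; the matrices are arbitrary examples.

```python
import numpy as np

def weighted_mp_inverse(A, M, N):
    # Square-root characterization (not the paper's Gauss-Jordan scheme):
    # A+_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})+ M^{1/2}
    def sqrtm_spd(S):
        # symmetric positive-definite square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return (V * np.sqrt(w)) @ V.T
    Mh, Nh = sqrtm_spd(M), sqrtm_spd(N)
    Nh_inv = np.linalg.inv(Nh)
    return Nh_inv @ np.linalg.pinv(Mh @ A @ Nh_inv) @ Mh

A = np.array([[1.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
M = np.diag([1.0, 2.0, 3.0])        # weight on the range space
N = np.diag([2.0, 1.0])             # weight on the domain space
X = weighted_mp_inverse(A, M, N)
# the four defining equations: AXA=A, XAX=X, MAX and NXA Hermitian
ok = (np.allclose(A @ X @ A, A) and np.allclose(X @ A @ X, X)
      and np.allclose((M @ A @ X).T, M @ A @ X)
      and np.allclose((N @ X @ A).T, N @ X @ A))
```

Any correct algorithm for A†_{M,N}, including the paper's elimination-based ones, must produce an X passing these four checks.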
32. Intrusion Detection in Computer Networks using Lazy Learning Algorithm.
- Author
-
Chellam, Aditya, L, Ramanathan, and S, Ramani
- Subjects
COMPUTER network security ,MACHINE learning ,INTRUSION detection systems (Computer security) ,ALGORITHMS ,QUERYING (Computer science) ,COMPUTATIONAL complexity - Abstract
Intrusion Detection Systems (IDS) are used in computer networks to safeguard the integrity and confidentiality of sensitive data. In recent years, network traffic has become sizeable enough to be considered under the big data domain. Current machine learning techniques used in IDS are largely built on eager learning paradigms, which lose performance efficiency by trying to generalize training data before receiving queries, thereby incurring overheads for trivial computations. This paper proposes the use of lazy learning methodologies to improve the overall performance of IDS. A novel heuristic weight-based indexing technique is used to overcome the high search complexity inherent in lazy learning. IBk and LWL, two popular lazy learning algorithms, are compared and applied to the NSL-KDD dataset to simulate a realistic scenario, and their performance is compared with that of hw-IBk. The results of this paper clearly indicate that lazy algorithms are a viable solution for real-world network intrusion detection. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
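IBk from entry 32 is plain k-nearest-neighbors classification: there is no training phase, and all work is deferred to query time, which is the lazy-learning property the paper exploits. A tiny sketch on made-up two-feature records follows; the NSL-KDD feature set and the paper's hw-IBk indexing are out of scope here.

```python
from collections import Counter

def ibk_predict(train, query, k=3):
    # Lazy (instance-based) classification: scan the stored instances at
    # query time, take the k nearest by squared Euclidean distance, and
    # return the majority label.
    ranked = sorted(train, key=lambda rec: sum((a - b) ** 2
                    for a, b in zip(rec[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# hypothetical 2-feature projection of traffic records
train = [((0.1, 0.2), "normal"), ((0.2, 0.1), "normal"),
         ((0.15, 0.25), "normal"), ((0.9, 0.8), "attack"),
         ((0.85, 0.9), "attack"), ((0.95, 0.85), "attack")]
```

The linear scan per query is exactly the search cost that a weight-based index like the paper's hw-IBk aims to reduce.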
33. Robust augmented space recursive least-constrained-squares algorithms.
- Author
-
Zhang, Qiangqiang, Wang, Shiyuan, Lin, Dongyuan, Zheng, Yunfei, and Tse, Chi K.
- Subjects
- *
ADAPTIVE filters , *ADAPTIVE reuse of buildings , *ALGORITHMS , *COMPUTATIONAL complexity - Abstract
This paper proposes a novel augmented space robust adaptive filter that reuses the errors for online applications. First, a batched augmented space constrained model (ASCM) is constructed to combat non-Gaussian noise; in the ASCM, the errors are reused through k-nearest-neighbors (k-NN) estimation. Then, an augmented space recursive least-constrained-squares algorithm integrating the distance-based k-NN method (ARLCS-dk) is developed within the ASCM framework for adaptive filtering. Finally, to curb the size of the ever-growing error network, a sliding-window ARLCS-dk (SW-ARLCS-dk) is proposed to reduce the computational burden. Theoretical analyses of the excess mean square error (EMSE) and testing mean square error (TMSE) are carried out for performance evaluation. Examples of time-series prediction on simulated and real-world data illustrate the advantages of the proposed algorithms in robustness and prediction accuracy. • A batched augmented space constrained model (ASCM) with the k-nearest-neighbors (k-NN) method is constructed to combat impulsive noise. • An online augmented recursive least-constrained-squares algorithm using the distance-based k-NN method (ARLCS-dk) is developed for adaptive filtering. • A sliding window is designed for ARLCS-dk to reduce its computational complexity. • Theoretical mean square error results illustrate the effectiveness of ARLCS-dk in time-series prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. A measurement fusion algorithm of active and passive sensors based on angle association for multi-target tracking.
- Author
-
Zhang, Yongquan, Shang, Aomen, Zhang, Wenbo, Liu, Zekun, Li, Zhibin, Ji, Hongbing, and Su, Zhenzhen
- Subjects
- *
TRACKING algorithms , *DETECTORS , *ANGLES , *ALGORITHMS , *LEAST squares , *COMPUTATIONAL complexity , *ARTIFICIAL satellite tracking - Abstract
• An effective screening algorithm is derived by extracting angle measurements. • An exclusion strategy for association results is proposed by building statistics. • Angle association is solved by least squares to obtain coordinates of fused measurements. • Another exclusion strategy is developed using unique position measurements. Multi-target tracking across different types of sensors faces the great challenge of fully utilizing the various types of measurements. To this end, this paper presents a measurement fusion algorithm for single active and multi-passive sensors (SAMPS) based on angle association (AA), named the SAMPS-AA algorithm, for multi-target tracking. First, to narrow down the association range, the common angle measurements of the two sensor types are extracted by the proposed screening algorithm. Then, an exclusion strategy for wrong association groups is developed by building statistics based on the angle measurements. Subsequently, coordinates of fused measurements are obtained by angle association based on least squares (LS). Finally, another exclusion strategy for wrong measurement points is proposed using the measurement characteristics of the active sensor. Experimental results indicate that the proposed SAMPS-AA algorithm fully combines the advantages of the two sensor types, excludes as many wrong association groups as possible, efficiently reduces the computational complexity, and clearly improves the tracking accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
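The least-squares angle-association step in entry 34 can be sketched for the noise-free 2-D case: each passive bearing measurement contributes one linear equation, and stacking them recovers the fused target position. The sensor layout and target position are assumptions for illustration.

```python
import math
import numpy as np

def fuse_bearings(sensors, angles):
    # Least-squares intersection of bearing lines: a sensor at (xi, yi)
    # seeing the target under angle t gives the linear equation
    #   sin(t)*x - cos(t)*y = sin(t)*xi - cos(t)*yi.
    H, z = [], []
    for (xi, yi), th in zip(sensors, angles):
        H.append([math.sin(th), -math.cos(th)])
        z.append(math.sin(th) * xi - math.cos(th) * yi)
    sol, *_ = np.linalg.lstsq(np.array(H), np.array(z), rcond=None)
    return sol

target = (4.0, 3.0)
sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
angles = [math.atan2(target[1] - y, target[0] - x) for x, y in sensors]
est = fuse_bearings(sensors, angles)
```

With exact bearings all lines pass through the target, so least squares recovers it exactly; with noisy bearings the same stacked system yields the minimum-residual fused position.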
35. Joint subchannel power allocation for downlink NOMA systems based on quantum carnivorous plant algorithm.
- Author
-
Gao, Hongyuan, Di, Yanqi, Guo, Lantu, and Zhao, Lishuai
- Subjects
CARNIVOROUS plants ,ALGORITHMS ,ENERGY consumption ,COMPUTATIONAL complexity ,COMPUTER performance - Abstract
Currently, most existing research on non-orthogonal multiple access (NOMA) systems splits power allocation into two independent procedures, inter-subchannel and intra-subchannel power allocation. However, improper inter-subchannel power allocation adversely affects the intra-subchannel allocation process, while optimizing the two simultaneously usually incurs high computational complexity and makes an optimal solution hard to obtain. To tackle this issue, this paper studies joint subchannel power allocation for NOMA systems and proposes a new intelligent algorithm, the quantum carnivorous plant algorithm (QCPA), for simultaneously optimizing inter-subchannel and intra-subchannel power allocation. Convergence analysis of QCPA verified its superior performance on the test functions. Using the optimal solution obtained by QCPA as the power allocation scheme, energy efficiency and sum transmission rate in NOMA systems are significantly increased compared with existing traditional power allocation methods. These results demonstrate that QCPA effectively addresses both the test functions and the NOMA power allocation problem, and that it converges favorably in comparison with the other algorithms and methods evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A multi-rank two-dimensional CCA based on PDEs for multi-view feature extraction.
- Author
-
Yang, Jing, Fan, Liya, and Sun, Quansen
- Subjects
- *
PATTERN recognition systems , *FEATURE extraction , *IMAGE recognition (Computer vision) , *COMPUTATIONAL complexity , *STATISTICAL correlation , *ALGORITHMS - Abstract
Feature extraction is one of the fundamental problems in pattern recognition research. For image recognition, extracting effective image features is the key to accomplishing the recognition task. In this paper, a partial differential equations-based multi-rank two-dimensional canonical correlation analysis (PDEs-MR2DC2A) is proposed for multi-view feature extraction and pattern classification. Unlike most previous research on multi-view algorithms, which works directly on the original 2D representation, our approach first utilizes the evolution process of PDEs to extract the feature matrix of each view. In addition, we employ multi-rank left and right projection matrices to maximize the correlation. The computational complexity of PDEs-MR2DC2A is also analyzed. To evaluate the effectiveness of the proposed algorithm, we conducted a series of performance comparisons with existing methods on several popular datasets. The experimental results show that the proposed algorithm performs very well on these datasets and outperforms the existing related methods on some metrics. • A novel multi-rank multi-view algorithm based on PDEs is proposed. • The approach can extract rotation- and translation-invariant features. • Extensive experimental results demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Random sample consensus algorithm for the hyperbolic frequency modulated signals parameters estimation.
- Author
-
Djurović, Igor
- Subjects
- *
STATISTICAL sampling , *ALGORITHMS , *SIGNAL-to-noise ratio , *COMPUTATIONAL complexity , *PARAMETER estimation - Abstract
The random sample consensus (RANSAC) algorithm has been successfully employed for parameter estimation of polynomial phase signals, where it reduces the signal-to-noise ratio threshold by 3 dB compared with current state-of-the-art algorithms. In this paper, we develop a RANSAC-based estimator for the parameters of hyperbolic frequency modulated signals. The proposed parameter and instantaneous frequency estimators demonstrate exceptional accuracy, surpassing existing estimators with a signal-to-noise ratio threshold improvement of up to 7 dB. Furthermore, we examine the limits of applicability of the proposed approach, particularly in terms of computational complexity. • The RANSAC algorithm is applied to HFM signal parameter estimation. • The algorithm setup in both the fine and the rough stage is considered in detail. • A computational complexity analysis is provided. • Simulations are performed on aliased signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
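A RANSAC sketch in the spirit of entry 37: an HFM signal's instantaneous frequency obeys f(t) = 1/(a + b*t), so the reciprocal 1/f is linear in t, two (t, f) samples determine a candidate (a, b), and inliers are counted against each candidate. The synthetic IF track and the gross-outlier model below are assumptions, not the paper's setup.

```python
import random

def ransac_hfm_if(points, trials=200, tol=0.05, seed=0):
    # RANSAC over instantaneous-frequency samples (t, f): sample two
    # points, fit (a, b) from the linear model 1/f = a + b*t, and keep
    # the candidate with the largest consensus set.
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(trials):
        (t1, f1), (t2, f2) = rng.sample(points, 2)
        if t1 == t2 or f1 <= 0 or f2 <= 0:
            continue
        b = (1 / f2 - 1 / f1) / (t2 - t1)
        a = 1 / f1 - b * t1
        inliers = sum(1 for t, f in points
                      if f > 0 and abs(1 / f - (a + b * t)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# synthetic IF track: true (a, b) = (0.5, 0.1); every third sample is a
# gross outlier, mimicking a contaminated time-frequency ridge
random.seed(1)
pts = []
for i in range(100):
    t = i * 0.05
    f = 1 / (0.5 + 0.1 * t)
    if i % 3 == 0:
        f += random.uniform(0.5, 2.0)
    pts.append((t, f))
(a, b), n_in = ransac_hfm_if(pts)
```

Because a single clean pair reproduces the true line exactly, the consensus set collects all 66 clean samples while every outlier stays outside the tolerance band.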
38. Two-stage dynamic aggregation involving flexible resource composition and coordination based on submodular optimization.
- Author
-
Ding, Zhetong, Li, Yaping, Zhang, Kaifeng, and Peng, Jimmy Chih-Hsien
- Subjects
- *
AGGREGATION operators , *COMPUTATIONAL complexity , *DYNAMIC models , *POWER plants , *COORDINATES , *ALGORITHMS - Abstract
Traditional virtual power plants (VPPs) with fixed resource composition and coordination strategies struggle to cost-effectively exploit the flexibility of large-scale resources to adapt to variable regulation requirements and resource characteristics. To this end, this paper proposes a dynamic aggregation mechanism that flexibly selects and coordinates individual resources to form aggregators according to the grid's regulation requirements and the resources' characteristics. The proposed mechanism operates through a two-stage dynamic aggregation model comprising resource selection and coordination. Since this model is a combinatorial optimization problem with high computational complexity, a submodular optimization method is utilized to address it swiftly. First, the complementarity and submodularity of the dynamic aggregation process are formulated to elaborate how the aggregation regulation characteristics (ARCs) evolve with flexible resource composition and coordination. Next, a submodularity-based algorithm is developed to promptly solve the dynamic aggregation model under three scenarios, in which aggregation operators focus on resource quantity, quality, and cost-effectiveness, respectively. The polynomial computational complexity of the proposed algorithm is also evaluated. Simulations using the IEEE 39-bus (New England) system with 10,000 flexible resources were executed to assess the submodular approach. The proposed algorithm demonstrates superior computing speed and better performance-guaranteed results (90%, 97%, and 90% in the three scenarios) compared with other methods, making it more suitable for implementation in practice. • The complementarity and submodularity of the dynamic aggregation process are revealed and proved. • A two-stage dynamic aggregation model consisting of resource selection and coordination is established. • A dynamic aggregation algorithm is proposed to deal with different aggregation scenarios. • The approximation guarantees and polynomial computational complexity of the algorithm are proved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Sequential fusion filtering based on minimum error entropy criterion.
- Author
-
Feng, Xiaoliang, Wu, Changsheng, and Ge, Quanbo
- Subjects
- *
KALMAN filtering , *COST functions , *INFORMATION theory , *COMPUTATIONAL complexity , *ALGORITHMS - Abstract
• The sequential fusion issue of non-Gaussian multi-sensor systems is studied. • Global-iteration SFF and independent-iteration SFF algorithms are designed under the MEE criterion. • The homology and convergence of the two proposed SFF algorithms are analyzed. Based on the minimum error entropy (MEE) criterion from information theoretic learning (ITL), the fusion filtering problem for non-Gaussian systems is studied in this paper. Exploiting the advantages of sequential fusion filtering (SFF) in dealing with asynchronous sampling and communication delay, two SFF algorithms are designed under the MEE criterion. First, the non-Gaussian multi-sensor system is transformed into a group of non-Gaussian single-sensor subsystems. Then, by solving for the optimum of the cost function corresponding to each subsystem, a set of fixed-point equations relating to the subsystems' states is obtained. By using global-iteration and independent-iteration strategies to solve the fixed-point equations, two SFF algorithms based on the MEE criterion are designed, respectively. In addition, performance is analyzed in terms of computational complexity, the correlation between the two iteration algorithms, and their convergence. Finally, simulation results indicate that the proposed SFF methods can effectively deal with the state estimation problem of non-Gaussian multi-sensor systems and achieve similar fusion filtering accuracy. [ABSTRACT FROM AUTHOR]
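The fixed-point iteration style used under entropy-type criteria can be illustrated with the closely related maximum-correntropy location estimate, whose fixed-point equation has a particularly compact form. This is a stand-in sketch under that substituted criterion, not the paper's SFF algorithm:

```python
import math

def correntropy_fixed_point(y, sigma=1.0, iters=50):
    """Fixed-point iteration under the maximum-correntropy criterion
    (a close relative of MEE from the same ITL family).  Each sample is
    weighted by a Gaussian kernel of its current residual, so gross
    outliers from non-Gaussian noise are automatically down-weighted."""
    theta = sum(y) / len(y)  # start from the sample mean
    for _ in range(iters):
        w = [math.exp(-(v - theta) ** 2 / (2 * sigma ** 2)) for v in y]
        theta = sum(wi * v for wi, v in zip(w, y)) / sum(w)
    return theta

data = [1.0, 1.1, 0.9, 1.05, 8.0]        # one gross outlier
est = correntropy_fixed_point(data)       # ≈ 1.01 (the plain mean is 2.41)
```

The same pattern, solve a kernel-weighted fixed-point equation by repeated substitution, underlies the global- and independent-iteration strategies described in the abstract.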
- Published
- 2024
- Full Text
- View/download PDF
40. Efficient traffic-based IoT device identification using a feature selection approach with Lévy flight-based sine chaotic sub-swarm binary honey badger algorithm.
- Author
-
Wang, Boxiong, Kang, Hui, Sun, Geng, and Li, Jiahui
- Subjects
FEATURE selection ,INTERNET of things ,ALGORITHMS ,DECOMPOSITION method ,SWARM intelligence ,COMPUTATIONAL complexity - Abstract
Internet of Things (IoT) refers to the various devices connected to the Internet, enabling them to communicate and transmit data with each other. The rapid development of the IoT also brings security and other problems in cyberspace. In this context, device identification is a crucial tool for IoT security, as it can help detect and prevent cyber-attacks. However, device identification faces several challenges on IoT traffic datasets, such as large-scale and high-dimensional sparse datasets, features prone to security vulnerabilities, and the identification of both IP and non-IP devices, all of which affect classifier performance. Feature selection is an effective data preprocessing technique for IoT device identification that can improve classification performance and reduce computational complexity. In this paper, we propose a traffic-based IoT device identification model using a novel wrapper feature selection approach based on an improved and efficient method, which we call the Lévy flight-based sine chaotic sub-swarm binary honey badger algorithm (LS2-BHBA). Specifically, four improvement factors are employed in LS2-BHBA to expand the search scope, balance the exploration and exploitation phases, and enhance the search capability. In addition, a binary mechanism is implemented to make the proposed LS2-BHBA suitable for feature selection in IoT device identification. Experimental results on several real IoT traffic datasets show that LS2-BHBA can reduce the number of features to 10% while reaching a classification accuracy of 98%, outperforming several classical and recent comparison algorithms in feature selection for IoT device identification. • A traffic-based IoT device identification model including wrapper feature selection is proposed. • An improved binary honey badger algorithm for IoT traffic datasets (denoted LS2-BHBA) is designed. • A wrapper feature selection method based on LS2-BHBA is designed. • Extensive experiments are conducted on three real-world datasets to demonstrate the effectiveness and generalization of the proposed method. • The results are tested statistically and the feasibility of using decomposition methods is analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Robust recoverable and two-stage selection problems.
- Author
-
Kasperski, Adam and Zieliński, Paweł
- Subjects
- *
SET theory , *ROBUST control , *ALGORITHMS , *UNCERTAINTY (Information theory) , *COMPUTATIONAL complexity - Abstract
In this paper the following selection problem is discussed. A set of n items is given and we wish to choose a subset of exactly p items of minimum total cost. This problem is a special case of 0–1 knapsack in which all item weights are equal to 1. Its deterministic version has an O(n)-time algorithm, which consists in choosing the p items of smallest cost. In this paper it is assumed that the item costs are uncertain. Two robust models, namely the two-stage and recoverable ones, under discrete and interval uncertainty representations, are discussed. Several positive and negative complexity results for both of them are provided. [ABSTRACT FROM AUTHOR]
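The deterministic version of the problem is easy to make concrete. The sketch below uses a heap, so it runs in O(n log p) rather than the optimal O(n), which would require a linear-time selection routine (quickselect/median-of-medians); the function name is illustrative:

```python
import heapq

def select_min_cost(costs, p):
    """Deterministic selection problem: pick exactly p of the n items
    with the smallest total cost.  heapq.nsmallest is O(n log p); the
    O(n) bound in the abstract needs a linear-time selection instead."""
    assert 0 <= p <= len(costs)
    chosen = heapq.nsmallest(p, range(len(costs)), key=lambda i: costs[i])
    return chosen, sum(costs[i] for i in chosen)

items, total = select_min_cost([5, 1, 4, 2, 8], 3)
# items == [1, 3, 2] (costs 1, 2, 4), total == 7
```

The robust two-stage and recoverable models studied in the paper replace the fixed `costs` vector with an uncertainty set, which is where the hardness results arise.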
- Published
- 2017
- Full Text
- View/download PDF
42. One optimized LMF algorithm in low SNR.
- Author
-
Guan, Sihai, Cheng, Qing, and Liu, Fangyao
- Subjects
TIME-varying systems ,ALGORITHMS ,SYSTEM identification ,COMPUTATIONAL complexity ,ADAPTIVE filters - Abstract
Adaptive filtering algorithms need improved robustness against Gaussian and various types of non-Gaussian noise, time-varying systems, and systems with low signal-to-noise ratio (SNR). In this paper, we propose an optimized least mean absolute fourth (Optimized-LMF) algorithm, designed especially for time-varying unknown systems with low SNR. The optimal step size of the LMF is obtained by minimizing the mean-square deviation (MSD) at each moment in time. In addition, the mean convergence and steady-state error of Optimized-LMF are derived, and its theoretical computational complexity is analyzed. Furthermore, simulation experiments on system identification illustrate the principle and efficiency of the Optimized-LMF algorithm. The performance of the algorithm is analyzed mathematically and validated experimentally. Simulation results demonstrate that the proposed Optimized-LMF is superior to the normalized LMF (NLMF) and the variable step-size LMF using quotient form (VSSLMFQ) algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Statistical generalization performance guarantee for meta-learning with data dependent prior.
- Author
-
Liu, Tianyu, Lu, Jie, Yan, Zheng, and Zhang, Guangquan
- Subjects
- *
GENERALIZATION , *COMPUTATIONAL complexity , *ALGORITHMS , *STATISTICAL learning - Abstract
• To improve generalization performance, three novel PAC-Bayes meta-learning bounds are proposed. • Based on the ERM method, a PAC-Bayes meta-learning bound with a data-dependent prior is developed. • Its computational complexity is analyzed and experiments illustrate its effectiveness. Meta-learning aims to leverage experience from previous tasks to achieve effective and fast adaptation when encountering new tasks. However, it is unclear how the generalization property carries over to new tasks. Probably approximately correct (PAC) Bayes bound theory provides a theoretical framework for analyzing the generalization performance of meta-learning, with an explicit numerical upper bound on the generalization error. A tighter upper bound may yield better generalization performance. However, in existing PAC-Bayes meta-learning bounds, the prior distribution is selected arbitrarily, which results in poor generalization performance. In this paper, we derive three novel generalization error upper bounds for meta-learning based on the PAC-Bayes relative entropy bound. Furthermore, to avoid an arbitrarily chosen prior distribution, a data-dependent prior for the PAC-Bayes meta-learning bound algorithm is developed based on the empirical risk minimization (ERM) method, and the sample complexity and computational complexity are analyzed. The experiments illustrate that the three proposed PAC-Bayes bounds for meta-learning achieve a competitive generalization guarantee, and that the extended PAC-Bayes bound with a data-dependent prior achieves rapid convergence. [ABSTRACT FROM AUTHOR]
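For context, the standard single-task PAC-Bayes-kl bound (Seeger/Maurer form) that relative-entropy meta-learning bounds build on can be written as follows; this is the textbook form, not one of the paper's three new bounds:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for every posterior Q over hypotheses:
\[
  \mathrm{kl}\!\left(\widehat{L}(Q)\,\middle\|\,L(Q)\right)
  \;\le\;
  \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\]
% where \widehat{L}(Q) and L(Q) are the empirical and true Q-weighted
% risks, P is a prior fixed before seeing the data, and
% \mathrm{kl}(q \,\|\, p) is the binary KL divergence.
```

The role of a data-dependent prior is visible directly in the bound: the KL(Q ∥ P) term shrinks when P is already close to good posteriors, which is what tightens the guarantee.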
- Published
- 2021
- Full Text
- View/download PDF
44. An LSH-based k-representatives clustering method for large categorical data.
- Author
-
Mau, Toan Nguyen and Huynh, Van-Nam
- Subjects
- *
K-means clustering , *DATA mining , *ALGORITHMS , *BIG data , *COMPUTATIONAL complexity , *PROTHROMBIN - Abstract
• A new method based on Locality-Sensitive Hashing (LSH) for cluster initialization in k-means-like clustering is proposed. • A new clustering algorithm called LSH-k-representatives for handling big categorical data is developed. • Extensive experiments conducted on multiple real-world and synthetic datasets demonstrate the effectiveness of the proposed clustering method. • The newly developed algorithm achieves clustering performance comparable to or better than existing well-known k-means-like algorithms, while being significantly more efficient by a factor of between 2× and 32×. Clustering categorical data remains a challenging problem in the era of big data, due to the difficulty of measuring dis/similarity meaningfully for categorical data and the high computational complexity of existing clustering algorithms, which makes them difficult to apply in practical big data mining applications. In this paper, we propose an integrated approach that incorporates the Locality-Sensitive Hashing (LSH) technique into k-means-like clustering so as to predict better initial clusters and thereby boost clustering effectiveness. To this end, we first utilize a data-driven dissimilarity measure for categorical data to construct a family of binary hash functions that are then used to generate the initial clusters. We also propose using a nearest neighbor search at each iteration for cluster reassignment of data objects to improve the clustering complexity. These solutions are incorporated into the k-representatives algorithm, resulting in the so-called LSH-k-representatives algorithm. Extensive experiments conducted on multiple real-world and synthetic datasets demonstrate the effectiveness of the proposed method. It is shown that the newly developed algorithm yields comparable or better clustering results than existing closely related works, yet is significantly more efficient by a factor of between 2× and 32×. [ABSTRACT FROM AUTHOR]
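The bucketing idea behind LSH-based cluster initialization can be sketched for categorical data as follows. Plain attribute sampling is used as the hash family here purely for illustration; the paper derives its binary hash functions from a data-driven dissimilarity measure:

```python
import random
from collections import defaultdict

def lsh_initial_clusters(objects, n_hash_attrs=2, seed=0):
    """Toy LSH-style initialization for categorical clustering: hash
    each object by its values on a few randomly chosen attributes, so
    objects agreeing on those attributes land in the same bucket, and
    use the buckets as initial clusters."""
    rng = random.Random(seed)
    n_attrs = len(objects[0])
    attrs = rng.sample(range(n_attrs), n_hash_attrs)
    buckets = defaultdict(list)
    for idx, obj in enumerate(objects):
        signature = tuple(obj[a] for a in attrs)
        buckets[signature].append(idx)
    return list(buckets.values())

data = [("red", "s", "x"), ("red", "s", "y"), ("blue", "m", "x"), ("blue", "m", "z")]
clusters = lsh_initial_clusters(data)  # a partition of indices {0, 1, 2, 3}
```

Seeding a k-representatives-style iteration from such buckets, instead of from random objects, is what lets the overall algorithm converge in fewer, cheaper passes.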
- Published
- 2021
- Full Text
- View/download PDF
45. A new algorithmic framework for basic problems on binary images.
- Author
-
Asano, T., Buzer, L., and Bereg, S.
- Subjects
- *
MATHEMATICAL connectedness , *COMPUTATIONAL complexity , *IMAGE processing , *ALGORITHMS , *GRAPH theory , *GRAPH connectivity - Abstract
This paper presents a new algorithmic framework for some basic problems on binary images. Algorithms for binary images, such as extracting the connected component containing a query pixel and connected-components labeling, play basic roles in image processing. Such algorithms usually use linear work space for efficient implementation. In this paper we propose algorithms for several basic problems on binary images that are efficient in both time and space, using space-efficient algorithms for grid graphs. More precisely, some of them run in O(n log n) time using O(1) work space, and the others run in O(n) or O(n log n) time using O(n) work space, for a binary image of n pixels stored in a read-only array. [ABSTRACT FROM AUTHOR]
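The query-pixel connected-component problem mentioned above, in its standard linear-work-space form, is a plain BFS with an explicit visited set; the paper's point is precisely to shrink this extra space. A minimal 4-connectivity sketch:

```python
from collections import deque

def component_of(image, start):
    """Extract the 4-connected component of 1-pixels containing `start`
    in a binary image (list of rows of 0/1).  The `seen` set is the
    O(n) work space that space-efficient variants avoid."""
    rows, cols = len(image), len(image[0])
    r0, c0 = start
    if image[r0][c0] == 0:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and image[nr][nc] == 1 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
# component_of(img, (0, 0)) -> {(0, 0), (0, 1), (1, 1)}
```

Viewing the image as a grid graph (pixels as vertices, 4-adjacency as edges) is what lets the paper substitute space-efficient graph traversals for this BFS.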
- Published
- 2017
- Full Text
- View/download PDF
46. A method for sub-sample computation of time displacements between discrete signals based only on discrete correlation sequences.
- Author
-
Teixeira, César A., Mendes, Luís, Ruano, Maria Graça, and Pereira, Wagner C.A.
- Subjects
STATISTICAL correlation ,SAMPLING (Process) ,COMPUTATIONAL complexity ,COMPARATIVE studies ,ALGORITHMS - Abstract
In this paper, we propose a new method for sub-sample computation of time displacements between two sampled signals. The new algorithm is based on sampled auto- and cross-correlation sequences and uses only the sampled signals, without the customary interpolation and fitting procedures. The proposed method was evaluated and compared with other methods on simulated and real signals. Four other methods were used for comparison: two based on cross-correlation plus fitting, one based on spline fitting over the input signals, and another based on phase demodulation. With simulated signals, the proposed approach presented similar or better performance, in terms of bias and variance, in almost all the tested conditions. The exception was signals with very low SNRs (<10 dB), for which the methods based on phase demodulation and spline fitting presented lower variances. Considering only the two methods based on cross-correlation, our approach presented improved results for signals with high and moderate noise levels. The proposed approach and three of the four comparison methods are robust on real data. The exception is the phase demodulation method, which may fail when applied to signals collected from real-world scenarios because it is very sensitive to phase changes caused by oscillations not related to the main echoes. This paper introduces a new class of methods, demonstrating that it is possible to estimate sub-sample delays based on discrete cross-correlation sequences without interpolation or fitting over the original sampled signals. The proposed approach was robust when applied to real-world signals and presented moderate computational complexity compared to the other tested algorithms. Although the new method was tested using ultrasound signals, it can be applied to any time series with observable events. [ABSTRACT FROM AUTHOR]
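The coarse, integer-sample stage of correlation-based delay estimation can be sketched as follows; the refinement to sub-sample accuracy using only the auto- and cross-correlation sequences, which is the paper's actual contribution, is omitted:

```python
def integer_delay(x, y):
    """Integer-sample delay estimate between two equal-length signals:
    return the lag that maximizes the discrete cross-correlation.
    If y[i] ~ x[i - d], the peak occurs at lag == d."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # sum x[i - lag] * y[i] over the overlapping index range
        val = sum(x[i - lag] * y[i]
                  for i in range(max(0, lag), min(n, n + lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

x = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]   # a short pulse
y = [0, 0, 0, 0, 1, 2, 3, 2, 1, 0]   # the same pulse delayed by 3 samples
# integer_delay(x, y) -> 3
```

Conventional sub-sample methods then interpolate or fit a parabola around this correlation peak; the abstract's claim is that the refinement can instead be computed directly from the discrete correlation samples.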
- Published
- 2017
- Full Text
- View/download PDF
47. System reliability evaluation of 12-pulse series converter station based on improved Dijkstra algorithm.
- Author
-
Zhao, Jie, Li, Songhuan, Wu, Fangjie, Zhang, Huaixun, Shen, Xiaolin, Dou, Jinqiu, and Liang, Yilin
- Subjects
- *
RELIABILITY in engineering , *ALGORITHMS , *POWER transmission , *COMPUTATIONAL complexity - Abstract
• Simplicity of principle improves computational efficiency. • A method to effectively assess reliability. • Engineering realities are incorporated to ensure accuracy. With the rise of large-scale energy DC transmission systems, more and more projects use UHVDC transmission for long-distance, large-capacity power transmission, in which the converter station is the core of the whole UHVDC transmission system. In this paper, a reliability assessment method based on an improved Dijkstra algorithm is proposed for the main wiring system of the converter station of a UHVDC transmission system and for the components in the main wiring. To fully account for the different states of components in actual operation, a four-state model is adopted. The model improves accuracy by comprehensively considering the components' state characteristics, switching sequence, operation mode and other factors. Based on the state model of the components, the main wiring system is calculated in accordance with the theory of complex distribution networks in order to reduce the computational complexity. Finally, the main wiring system of the double 12-pulse converter station is traversed for minimal paths using the improved Dijkstra search method, and the reliability indexes of the obtained minimal paths are calculated according to minimal path cut-set theory. A typical bipolar dual 12-pulse series connection is used to verify the validity of the method, and some guidance is provided for practical engineering. [ABSTRACT FROM AUTHOR]
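The path search underlying the method is a variant of Dijkstra's algorithm. A standard baseline version, with a predecessor map from which paths can be read off, looks like this; the graph and its weights are illustrative, not the converter-station wiring:

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra shortest-path search on a weighted digraph
    given as {node: [(neighbor, weight), ...]}.  Returns distances and
    a predecessor map for reconstructing minimal paths."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 1)]}
dist, prev = dijkstra(g, "s")
# dist["t"] == 4, along s -> a -> b -> t
```

Once the minimal paths between source and load are enumerated, minimal path/cut-set formulas combine the per-component availabilities into the system reliability indexes described in the abstract.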
- Published
- 2024
- Full Text
- View/download PDF
48. An algorithm for extracting similar segments of moving target trajectories based on shape matching.
- Author
-
Ouyang, Zhihong, Xue, Lei, Ding, Feng, and Li, Da
- Subjects
- *
PARTICLE swarm optimization , *ALGORITHMS , *CENTROID , *COMPUTATIONAL complexity , *MATCHING theory - Abstract
Trajectory similarity analysis of moving targets is the foundation for mining high-value, regular behavioral information such as motion preferences, activity hotspots and frequent paths. Unlike most trajectory similarity analysis methods, which aim to discover correlations of target activities in the time, space or spatio-temporal domains, this paper focuses on shape matching of target trajectories. If specific shapes frequently appear in historical trajectories, extracting these local shapes is beneficial for analyzing the target's motion templates and behavior modes. Trajectory segments with similar shapes may have no spatio-temporal correlation, and the shapes may undergo geometric transformations such as rotation, scaling and translation. Since existing trajectory similarity analysis methods cannot be directly applied, an algorithm for extracting similar segments based on shape matching is proposed. First, a new shape descriptor based on the signed barycenter distance (SBD) is established. It describes a trajectory as a one-dimensional shape feature sequence, which has the advantage of low computational complexity. Then, a distributed nearest neighbor search strategy is used in the particle swarm optimization (PSO) method to accelerate the retrieval of trajectory segments with similar shapes and improve matching accuracy. Experiments on MPEG-7, handwritten character and maneuvering-target simulation trajectory data sets show that, compared with existing typical shape descriptors, the SBD shape descriptor has advantages in accuracy and noise insensitivity, and the improved PSO method can efficiently and accurately obtain local shape matching results. [ABSTRACT FROM AUTHOR]
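A simplified, unsigned barycenter-distance descriptor can be sketched as follows to show why this family of descriptors is invariant to the geometric transformations mentioned above. The paper's SBD is signed; this toy variant keeps only the distance magnitude:

```python
def barycenter_distance_descriptor(traj):
    """Describe a 2-D trajectory by each point's distance to the
    trajectory barycenter (an unsigned, simplified variant of SBD).
    Distances are invariant to rotation and translation; dividing by
    the maximum also makes the sequence scale-invariant."""
    cx = sum(p[0] for p in traj) / len(traj)
    cy = sum(p[1] for p in traj) / len(traj)
    d = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in traj]
    m = max(d) or 1.0  # guard against a degenerate single-point trajectory
    return [v / m for v in d]

square = [(0, 0), (0, 2), (2, 2), (2, 0)]
# every corner is equidistant from the barycenter -> [1.0, 1.0, 1.0, 1.0]
```

Because the descriptor is a one-dimensional sequence, matching two trajectory segments reduces to comparing two scalar sequences, which is what keeps the shape-matching search cheap enough for PSO-based retrieval.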
- Published
- 2024
- Full Text
- View/download PDF
49. TSVM-M3: Twin support vector machine based on multi-order moment matching for large-scale multi-class classification.
- Author
-
Qiang, Wenwen, Zhang, Hongjie, Zhang, Jingxing, and Jing, Ling
- Subjects
SUPPORT vector machines ,ALGORITHMS ,CLASSIFICATION algorithms ,MOMENTS method (Statistics) ,OUTLIER detection ,CLASSIFICATION ,PROCESS optimization ,COMPUTATIONAL complexity - Abstract
For multi-class classification, many existing methods, such as the multiple weighted linear loss twin support vector machine (MWLTSVM), construct multiple decision hyperplanes by minimizing the first-order moment (mean) of the positive points' loss, which may lead to sensitivity to outliers. Moreover, when faced with a large-scale classification problem, speeding up the solution of the optimization model is also a challenge. One alternative is to use rectangular kernel technology (RKT) to reduce computational complexity. However, RKT is based on a uniform point selection method, which can be proven ineffective at improving classifier performance. To address these problems, a novel classifier under the "one-versus-rest" structure for multi-class classification is proposed in this paper, named the twin support vector machine based on multi-order moment matching (TSVM-M3). When constructing the decision hyperplanes, TSVM-M3 takes the first-order and second-order moments (mean and variance) of the positive points' loss into consideration and implements this by introducing an adjusting factor into the objective function. A theoretical analysis of the robustness of the proposed TSVM-M3 is also provided. Meanwhile, a novel RKT based on a density-dependent data selection method is proposed for large-scale classification. We demonstrate that the proposed RKT helps reduce modeling error. Experimental results show the effectiveness of the proposed TSVM-M3. • A new method called TSVM-M3 is proposed to reduce sensitivity to outliers in large-scale multi-class classification. First, TSVM-M3 takes into account the multi-order moment information of the positive samples and implements it simply by introducing an additional adjustable factor. Then, TSVM-M3 uses the weighted linear loss to penalize the negative points. • This paper demonstrates that minimizing the variance can lead the hyperplanes modeled by TSVM-M3 to approximate real systems well even under noise, and that TSVM-M3 gives smaller weight to large-error training samples and larger weight to small-error training samples compared with MWLTSVM. • To speed up the solution of the optimization model in large-scale classification, this paper proposes a novel rectangular kernel technology based on a proposed density-dependent multi-class data selection algorithm (DDDS). DDDS constructs the Gram matrix from points selected by a density-based point selection method. • This paper demonstrates that the newly proposed rectangular kernel technology is beneficial in reducing modeling errors. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
50. A multiple model filtering approach to transmission line fault diagnosis.
- Author
-
Qin, Qiu and Wu, N. Eva
- Subjects
- *
ELECTRIC lines , *ELECTRIC fault location , *ELECTRIC networks , *COMPUTATIONAL complexity , *ALGORITHMS - Abstract
This paper provides justification and an implementation for a multiple model filtering approach to the diagnosis of transmission line three-phase short-to-ground faults in the presence of protection misoperations. The approach utilizes the electric network dynamics and wide-area measurements to provide diagnosis outcomes. A second focus of the paper is the reduction of the computational complexity of the diagnosis algorithm, which is addressed by a two-step heuristic: the first step designs subsystem models through measurement selection, and the second step reduces the dynamic model order. The performance of the diagnosis algorithms is evaluated on a simulated WSCC 9-bus system. [ABSTRACT FROM AUTHOR]
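The core Bayesian re-weighting step of a multiple-model filter bank can be sketched as follows. This is a static, scalar-residual toy with Gaussian residual likelihoods; the paper runs full filters on the network dynamics, and the variable names are illustrative:

```python
import math

def update_model_probs(probs, residuals, variances):
    """One multiple-model diagnosis step: re-weight each candidate
    model (e.g. nominal line vs. faulted line) by the Gaussian
    likelihood of its filter residual, then renormalize."""
    likes = [math.exp(-r * r / (2 * s)) / math.sqrt(2 * math.pi * s)
             for r, s in zip(residuals, variances)]
    post = [p * l for p, l in zip(probs, likes)]
    z = sum(post)
    return [v / z for v in post]

# Model 0 (no fault) explains the residual poorly; model 1 (fault) well.
p = update_model_probs([0.5, 0.5], residuals=[3.0, 0.1], variances=[1.0, 1.0])
# -> p[1] ≈ 0.99: the fault hypothesis dominates after one update
```

Iterating this update as new wide-area measurements arrive concentrates the probability mass on the model whose filter best predicts the observations, which is the diagnosis outcome.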
- Published
- 2016
- Full Text
- View/download PDF