27,640 results for "COMPUTATIONAL complexity"
Search Results
2. A Unifying Framework for Incompleteness, Inconsistency, and Uncertainty in Databases.
- Author
-
Kimelfeld, Benny and Kolaitis, Phokion G.
- Subjects
- *
DATABASES , *QUERYING (Computer science) , *SEMANTICS , *PROBABILISTIC databases , *COMPUTATIONAL complexity , *RELATIONAL databases - Abstract
This article details a framework for database deficiencies utilizing possible world semantics. Topics include database rectification, database querying, intractability and tractability. The article explores possible world semantics in data exchange, inconsistent databases, probabilistic databases, tuple-independent databases, and election databases.
- Published
- 2024
- Full Text
- View/download PDF
3. Random permutation-based mixed-double scrambling technique for encrypting MQIR image.
- Author
-
Zhu, Hai-hua, Chen, Zi-gang, and Leng, Tao
- Subjects
- *
IMAGE encryption , *RANDOM number generators , *PERMUTATIONS , *COMPUTATIONAL complexity , *PUBLIC key cryptography , *IMAGE representation , *CLOUD computing - Abstract
The dual-scrambling scheme that combines position transformation and bit-plane transformation is a popular image encryption approach. However, such schemes require more key information, and their encryption and decryption processes are complicated. In addition, the existing quantum image dual-scrambling schemes mainly deal with square images. In this paper, we propose a hybrid scrambling encryption scheme for multi-mode quantum image representation (MQIR) images based on random permutation, in which the H × W quantum image is represented in MQIR. A random number generator factor s uniquely selects one of the random permutations of the integers from 1 to a given positive integer, which is used to hybrid-scramble both the pixel position and the binarized position of each pixel value. Meanwhile, the quantum circuits and some examples of scrambling are given. Furthermore, various analyses of the performance of this scheme were conducted, including effectiveness, key space, and computational complexity. By modifying the random generation factor to construct multiple binary grayscale images, the simulated results on the IBM Quantum Cloud platform demonstrate that the proposed quantum image encryption scheme is effective. In comparison to existing quantum image dual-scrambling schemes, it is both simple and effective, offering a large key space, lower computational complexity, and applicability to non-square quantum images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
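As an aside, a classical (non-quantum) analogue of the dual-scrambling idea above — seeded random permutations applied to both pixel positions and bit planes — can be sketched in NumPy. The function names and the use of `default_rng` as the keyed permutation source are illustrative assumptions, not the paper's quantum circuits:

```python
import numpy as np

def scramble(img: np.ndarray, seed: int) -> np.ndarray:
    """Scramble pixel positions and bit planes of an 8-bit image
    using permutations derived from a seed (the 'key')."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    pos_perm = rng.permutation(h * w)            # position scrambling
    bit_perm = rng.permutation(8)                # bit-plane scrambling
    flat = img.astype(np.uint8).flatten()[pos_perm]
    bits = np.unpackbits(flat[:, None], axis=1)  # (N, 8) bit planes
    bits = bits[:, bit_perm]
    return np.packbits(bits, axis=1).reshape(h, w)

def unscramble(img: np.ndarray, seed: int) -> np.ndarray:
    """Invert scramble() by regenerating and inverting both permutations."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    pos_perm = rng.permutation(h * w)            # same call order as scramble
    bit_perm = rng.permutation(8)
    bits = np.unpackbits(img.astype(np.uint8).flatten()[:, None], axis=1)
    inv_bits = np.empty(8, dtype=int)
    inv_bits[bit_perm] = np.arange(8)            # inverse bit permutation
    flat = np.packbits(bits[:, inv_bits], axis=1).flatten()
    out = np.empty_like(flat)
    out[pos_perm] = flat                         # inverse position permutation
    return out.reshape(h, w)
```

Because both permutations are regenerated from the seed, the seed alone plays the role of the key, mirroring the paper's single random-generator factor s.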
4. Pondering the Ugly Underbelly, and Whether Images Are Real.
- Author
-
Hill, Robin K. and Baquero, Carlos
- Subjects
- *
MATHEMATICAL proofs , *DIGITAL images , *COMPUTATIONAL complexity , *DIGITAL image watermarking , *ARTIFICIAL intelligence - Abstract
Two blogs on different topics are presented, including one on the importance of showing how a proof can lead to the truth using the example of the Cook-Levin Theorem and one about genuine versus fake photos and using watermarking technology to annotate artificial intelligence (AI) generated images.
- Published
- 2024
- Full Text
- View/download PDF
5. Performance improvement methods of sphere decoding in MIMO systems: A technical review.
- Author
-
Girija, M. G. and Sudha, T.
- Subjects
- *
MIMO systems , *SPHERES , *DECODING algorithms , *COMPUTATIONAL complexity - Abstract
One of the most effective nonlinear detection techniques utilized in Multiple-Input Multiple-Output (MIMO) systems is sphere decoding. It consists of a group of very effective algorithms that achieve low average computational complexity. The realization and deployment of Massive MIMO (M-MIMO) as well as MIMO networks depend greatly on data detection techniques. Different MIMO detectors have been suggested in the literature, based on various principles and approaches. In this paper, various sphere decoding algorithms and their performance metrics are illustrated. Also, various sphere decoding methods applied in MIMO systems are compared, and the pros and cons of each method are presented. Finally, future directions to enhance the effectiveness of sphere decoding algorithms are discussed. We conclude by highlighting a few research challenges and further research directions in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
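For readers unfamiliar with the technique surveyed above, a minimal depth-first sphere decoder for a real-valued MIMO model with a BPSK alphabet can be sketched as follows. This is a toy illustration of the generic QR-based tree search with radius pruning, not any particular algorithm from the review:

```python
import numpy as np

def sphere_decode(H, y, symbols=(-1.0, 1.0), radius=np.inf):
    """Find s minimizing ||y - H s||^2 over a finite alphabet by
    depth-first tree search, pruning branches whose partial distance
    already exceeds the best radius found so far."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)        # H = QR, R upper triangular
    z = Q.T @ y                   # equivalent problem: min ||z - R s||^2
    best = {"d": radius, "s": None}

    def search(level, s, partial):
        if partial >= best["d"]:
            return                # prune: outside the current sphere
        if level < 0:
            best["d"], best["s"] = partial, s.copy()
            return
        for sym in symbols:
            s[level] = sym
            # row `level` of R only touches s[level:], so the distance
            # increment at this tree level is a single squared residual
            resid = z[level] - R[level, level:] @ s[level:]
            search(level - 1, s, partial + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"]
```

The shrinking radius (`best["d"]`) is what gives sphere decoding its low average complexity relative to exhaustive maximum-likelihood search.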
6. A new accurate estimator of the frequency using the three-point interpolation of DFT samples.
- Author
-
Alrubei, Mohammed A. T., Al-Chlaihawi, Sarab, Pozdnyakov, A. D., and Al-Saadi, Mohammed
- Subjects
- *
FAST Fourier transforms , *SIGNAL frequency estimation , *INTERPOLATION , *SIGNAL processing , *ERROR rates , *COMPUTATIONAL complexity - Abstract
Frequency estimation of a sinusoidal signal is a fundamental task in signal processing in many applications such as radar, radio channels, sonar, and others. Since the frequency is the main parameter of the signal, it must be detected accurately in order to design more accurate measurement equipment. The fast Fourier transform (FFT) is widely used to analyze sinusoidal signals, but it suffers from spectral leakage. To reduce the effect of this problem, time windows are used. The frequency estimation accuracy can be improved by using an appropriate window together with an accurate frequency correction formula. A new frequency estimation algorithm based on three-point interpolation of DFT spectral lines is proposed. A simulated signal was analyzed and a number of windows applied to it were compared, such as Chebyshev, Blackman, and Kaiser (β=8); finally, to test the feasibility of the proposed algorithm, a comparison was made with the Jain algorithm. The simulation results showed that the proposed algorithm has a lower frequency estimation error. The maximum frequency estimation error was 0.002 for the proposed algorithm versus 0.01 for the Jain algorithm; in addition, the proposed algorithm has more stable performance and lower computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
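The general three-point idea — refine a coarse FFT peak using the two neighbouring bins — can be sketched with generic parabolic interpolation. The abstract does not give the authors' correction formula, so the `delta` expression below is the standard parabolic-vertex estimator, used here purely as an illustration:

```python
import numpy as np

def estimate_freq(x, fs):
    """Coarse FFT peak plus three-point parabolic interpolation of the
    neighbouring bin magnitudes to refine the frequency estimate."""
    n = len(x)
    X = np.abs(np.fft.rfft(x * np.hanning(n)))   # windowed spectrum
    k = int(np.argmax(X[1:-1])) + 1              # coarse peak (skip DC/Nyquist)
    a, b, c = X[k - 1], X[k], X[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)      # parabola vertex, in bins
    return (k + delta) * fs / n
```

The interpolation recovers sub-bin accuracy at negligible cost: one argmax and one closed-form correction instead of a zero-padded, longer FFT.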
7. Generalization of parallel performance for multidimensional finite difference method of PDE problem.
- Author
-
Saipol, Hafizah Farhah Saipan, Alias, Norma, and Nordin, Syarifah Zyurina
- Subjects
- *
FINITE difference method , *PARALLEL algorithms , *PARTIAL differential equations , *COMPUTATIONAL complexity , *GENERALIZATION , *LINEAR equations - Abstract
The partial differential equation (PDE) has been used widely in the development of mathematical models to predict, design, and perform optimal strategies for process control. The PDE model is formulated in one, two, and three dimensions and discretized using the finite difference method (FDM) with the central difference formula. To solve the resulting system of linear equations, numerical methods such as the Alternating Group Explicit method with Brian (AGEB) and Douglas-Rachford (AGED) variants, as well as the Jacobi (JB) method, are used. The grid decomposition process involves fine-grained large sparse data, obtained by minimizing the interval size and increasing the dimension of the model and the number of time steps. To improve execution time, implementing the parallel algorithm on the Matlab Distributed Computing Server (MDCS) is significant. Furthermore, the parallel algorithm helps to increase the computational speedup and to reduce the computational complexity problem. Inappropriate directive selection and unnecessary data distribution can lead to load imbalance, unnecessary communication, or processes going idle. Thus, data partitioning for multidimensional problems is critical for optimal performance. Both AGE methods have the potential for parallelization because they are based on domain decomposition, which is independent between processors. The computational complexity of the AGEB and AGED methods per iteration is found to be greater than that of the JB method. The computational time for JB should therefore be shorter than for AGED and AGEB, but this is offset by the fact that JB requires more iterations than AGED and AGEB. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
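The JB baseline referred to above is the classic Jacobi iteration. A minimal serial dense-matrix sketch (not the paper's parallel MDCS implementation) looks like this:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k),
    convergent for strictly diagonally dominant A."""
    d = np.diag(A)
    R = A - np.diag(d)                   # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d          # every component updated from x_k only
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```

Because each component of `x_new` depends only on the previous iterate, the update is embarrassingly parallel — which is why JB has low per-iteration cost but, as the abstract notes, may need more iterations than the AGE variants.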
8. Design and implementation in an Altera's cyclone IV EP4CE6E22C8 FPGA board of a fast and robust cipher using combined 1D maps.
- Author
-
Djomo, Alain Fanda, Tiedeu, Alain, and Fotsing, Janvier
- Subjects
- *
IMAGE encryption , *CYCLONES , *ARCHITECTURAL design , *CIPHERS , *COMPUTER hardware description languages , *SOFTWARE development tools - Abstract
This paper proposes an image encryption algorithm based on combined 1D chaotic maps. First, a permutation technique was applied: the image was reorganized into 1D matrices along the rows and columns, respectively, which were then shuffled by computing the substituted position indices to obtain the scrambled image. Subsequently, the scrambled image was confused through another generated data map, combined with random sub-matrices for diffusion, resulting in an encrypted image. Finally, the proposed cryptosystem was implemented on a single-core platform developed with the Nios II processor using the Software Build Tools for Eclipse. A hardware architecture was designed using the Qsys build tool available in the Quartus II 13.0sp1 environment. The developed single-core system was implemented on the Cyclone IV EP4CE6E22C8. Robustness of the cryptosystem was evaluated through security analysis tests such as histogram analysis, correlation coefficient, differential analysis, and key space analysis, showing that it is of good quality, efficient, fast, and successfully resists brute-force attacks. A hardware performance analysis was also carried out. The cryptosystem was then compared with those in the literature in both the hardware and security performance aspects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Asymptotic performance of reconfigurable intelligent surface assisted MIMO communication for large systems using random matrix theory.
- Author
-
Hu, Feng, Zhang, Hongliu, Chen, ShuTing, Jin, Libiao, Zhang, Jinhao, and Feng, Yunfei
- Subjects
- *
MIMO systems , *RANDOM matrices , *ELECTROMAGNETIC waves , *COMPUTATIONAL complexity , *RANDOM graphs , *BEAMFORMING - Abstract
Reconfigurable intelligent surfaces (RIS) can provide unprecedented spectral efficiency gains and an excellent ability to manipulate electromagnetic waves. This article considers a RIS-assisted multiuser multiple-input multiple-output (MIMO) downlink system, where the beamforming at the base station and the RIS are jointly designed to maximize the sum-rate. For the large-dimension scenario with a high-rank beamforming matrix, accurate deterministic approximations from random matrix theory are utilized to simplify the RIS-assisted MIMO system. The asymptotic signal-to-interference-plus-noise ratio values obtained through random matrix theory are asymptotically close to the theoretical limits calculated by exact iteration. Moreover, the performance of the proposed algorithm, computed by sharing only second-order channel statistics, asymptotically matches that of the RIS algorithm that shares full channel state information. The deterministic approximations are instrumental in gaining insight into the structure of the optimal beamforming and in reducing the implementation complexity in large-scale MIMO systems. Numerical simulation results are provided to evaluate and verify the accuracy of the asymptotic results obtained from the proposed algorithm in the finite system regime. By reducing the complex operations on large-dimension matrices to deterministic approximations, a lower computational complexity can be obtained compared with other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Visual Re-Initialization Model Development Methodology for Solving Problems Regarding Metaheuristic Algorithm-Based MPPT Applications.
- Author
-
Sezen, Serkan and Kılıç, Fuat
- Subjects
- *
METAHEURISTIC algorithms , *PROBLEM solving , *EMULATION software , *SEARCH algorithms , *COMPUTATIONAL complexity , *PHOTOVOLTAIC power systems , *ALGORITHMS - Abstract
Metaheuristic algorithms are particularly useful for maximum power point tracking (MPPT) applications because they can adapt to changes in operating conditions and effectively handle partial shading conditions. However, metaheuristic algorithms also have some limitations that need to be addressed to make them suitable for MPPT applications. The problems associated with metaheuristic algorithm-based MPPT applications include being trapped in local optima, slow convergence speed, shading condition variability, computational complexity, and robustness. These problems lead to reduced efficiency in MPPT applications. In the literature, these problems are only partially addressed, and some of them are solved via an additional irradiation sensor. The motivation of this study is to develop a control algorithm that covers all problems that have been partially solved in the literature and includes an original re-initialization modeling method in accordance with the visual programming concept, without using any additional radiation sensor. The proposed control algorithm has the flexibility to be easily adapted to other metaheuristic algorithms and does not require any radiation sensors. The re-initialization model, created via Matlab/Simulink and the "Embedded Coder Support Package for TI C2000 Processors", allows easy tracking of the global maximum power point (GMPP) by detecting variable radiation conditions. The proposed model was implemented on the Cuckoo Search Algorithm (CSA) and verified through experimental studies carried out with a TI-TMS320f28069 microcontroller and a PV emulator. The experimental results confirm that issue 1 is solved with 100%, issue 2 with 99.5%, issue 3 with 99.84%, and issue 4 with 100% MPPT efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Efficient Mass Spectrometry Peak Detection by Combining Resolution Enhancement and Image Segmentation.
- Author
-
Zhang, Weiyang, Zhou, Jun, Yang, Mingguang, Feng, Jiayong, Bao, Miaoqing, Gao, Wenqing, Han, Renlu, Hu, Lingxiao, Tang, Keqi, and Yu, Jiancheng
- Subjects
- *
IMAGE intensifiers , *MASS spectrometry , *IMAGE segmentation , *MATRIX-assisted laser desorption-ionization , *RECEIVER operating characteristic curves , *WAVELET transforms , *COMPUTATIONAL complexity - Abstract
Mass spectrometry data may be affected by random noise and baseline drift due to experimental instruments and conditions, posing significant challenges for detecting spectral peaks, particularly when identifying weak peaks and separating overlapping ones. To increase the sensitivity and enhance the resolution, we propose a mass spectral peak detection algorithm that integrates resolution enhancement and image segmentation. Initially, an extended Mexican hat wavelet is proposed by integrating the peak-sharpening method into the wavelet. This approach accurately transforms mass spectra into wavelet space using the continuous wavelet transform. Subsequently, the triangular single-peak thresholding method, a threshold segmentation approach better suited to spectral analysis, is introduced to identify ridges in the two-dimensional wavelet space. Compared to traditional Otsu and its improved variants, long-tailed single-peaked histograms are processed more effectively by this method with lower computational complexity, enabling faster identification of segmentation thresholds and image segmentation. Ultimately, peak positions are determined by utilizing ridge and valley lines in wavelet space along with the original spectrum. To evaluate the performance of the peak recognition algorithm, two metrics are introduced: the receiver operating characteristic (ROC) curve and the balanced F score (F1 score). Compared to multi-scale peak detection (MSPD) and continuous wavelet transform with image segmentation (CWT-IS), the developed approach is more suitable for weak and highly overlapping peaks. The robustness and practicality of the method are verified through peak detection on matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectra. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
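The underlying CWT-based peak-detection idea can be sketched with a plain Mexican hat wavelet in NumPy. Note this reproduces neither the authors' extended wavelet nor their triangular thresholding; the width grid and the median-based threshold are illustrative assumptions:

```python
import numpy as np

def ricker(points, a):
    """Mexican hat (Ricker) wavelet sampled at `points` positions, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_peaks(signal, widths, min_snr=3.0):
    """Flag local maxima of the summed multi-scale wavelet response."""
    resp = np.zeros(len(signal))
    for a in widths:
        w = ricker(10 * a + 1, a)     # odd length keeps the kernel centred
        resp += np.convolve(signal, w, mode="same")
    thresh = min_snr * np.median(np.abs(resp))
    is_max = (resp[1:-1] > resp[:-2]) & (resp[1:-1] > resp[2:])
    return np.where(is_max & (resp[1:-1] > thresh))[0] + 1
```

Summing responses across several widths is what makes wavelet-space methods robust to baseline drift: the Ricker wavelet has zero mean, so slowly varying baselines contribute almost nothing to `resp`.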
12. Dimension results for extremal-generic polynomial systems over complete toric varieties.
- Author
-
Bender, Matías and Spaenlehauer, Pierre-Jean
- Subjects
- *
TORIC varieties , *HOMOGENEOUS polynomials , *GROBNER bases , *POLYNOMIALS , *COMPUTATIONAL complexity , *ALGEBRA - Abstract
We study polynomial systems with prescribed monomial supports in the Cox ring of a toric variety built from a complete polyhedral fan. We present combinatorial formulas for the dimension of their associated subvarieties under genericity assumptions on the coefficients of the polynomials. Using these formulas, we identify at which degrees generic systems in polytopal algebras form regular sequences. Our motivation comes from sparse elimination theory, where knowing the expected dimension of these subvarieties leads to specialized algorithms and to large speed-ups for solving sparse polynomial systems. As a special case, we classify the degrees at which regular sequences defined by weighted homogeneous polynomials can be found, answering an open question in the Gröbner bases literature. We also show that deciding whether a sparse system is generically a regular sequence in a polytopal algebra is hard from the point of view of theoretical computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Efficient simulation of potential energy operators on quantum hardware: a study on sodium iodide (NaI).
- Author
-
Laskar, Mostafizur Rahaman, Bhattacharya, Atanu, and Dasgputa, Kalyan
- Subjects
- *
SODIUM iodide , *QUANTUM operators , *POTENTIAL energy , *HAMILTONIAN operator , *QUANTUM computing , *QUANTUM computers , *COMPUTATIONAL complexity - Abstract
This study introduces a conceptually novel polynomial encoding algorithm for simulating potential energy operators encoded in diagonal unitary form on a quantum computing machine. The current trend in quantum computational chemistry is effective experimentation to achieve a high-precision quantum computational advantage. However, high computational gate complexity and fidelity loss are some of the impediments to realizing this advantage on real quantum hardware. In this study, we address the challenges of building a diagonal Hamiltonian operator having exponential functional form, and its implementation in the context of the time evolution problem (Hamiltonian simulation and encoding). Potential energy operators represented in first-quantization form are an example of such operators. Through systematic decomposition and construction, we demonstrate the efficacy of the proposed polynomial encoding method in reducing gate complexity from O(2^n) to O(∑_{i=1}^{r} C(n, i)) for some r ≪ n. This offers a solution with lower complexity in comparison to the conventional Hadamard basis encoding approach. The effectiveness of the proposed algorithm was validated by its implementation on the IBM quantum simulator and IBM quantum hardware. This study demonstrates the proposed approach by taking the example of the potential energy operator of the sodium iodide molecule (NaI) in first-quantization form. The numerical results demonstrate the potential applicability of the proposed method in quantum chemistry problems, while the analytical bounds for error analysis and computational gate complexity discussed throw light on issues regarding its implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
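The size of the claimed gate-count reduction — an exponential 2^n shrinking to a truncated binomial sum ∑_{i=1}^{r} C(n, i) for r ≪ n — is easy to sanity-check numerically. The helper below is illustrative only, a counting proxy rather than anything from the paper:

```python
from math import comb

def poly_encoding_cost(n: int, r: int) -> int:
    """Gate-count proxy: sum of binomial coefficients C(n, i) for i = 1..r."""
    return sum(comb(n, i) for i in range(1, r + 1))

n, r = 20, 3
print(poly_encoding_cost(n, r), 2 ** n)   # 1350 vs 1048576
```

Even for modest n = 20 qubits and r = 3, the truncated sum is three orders of magnitude below the exponential count, which is the point of the polynomial encoding.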
14. Improving convergence of generalised Rosenbluth sampling for branched polymer models by uniform sampling.
- Author
-
Roberts, T and Prellberg, T
- Subjects
- *
BRANCHED polymers , *UNIFORM polymers , *ESTIMATION theory , *COMPUTATIONAL complexity , *LINEAR polymers , *POLYMERS - Abstract
Sampling with the generalised atmospheric Rosenbluth method (GARM) is a technique for estimating the distributions of lattice polymer models that has had some success in the study of linear polymers and lattice polygons. In this paper we will explain how and why such sampling appears not to be effective for many models of branched polymers. Analysing the algorithm on a simple binary tree, we argue that the fundamental issue is an inherent bias towards extreme configurations that is costly to correct with reweighting techniques. We provide a solution to this by applying uniform sampling methods to the atmospheres that are central to GARM. We caution that the ensuing computational complexity often outweighs the improvements gained. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Tensor structure-based ground clutter suppression approaches for pulse doppler radar.
- Author
-
Xu, Bangzhen, Lu, Xingyu, Gu, Hong, and Su, Weimin
- Subjects
- *
DOPPLER radar , *SINGULAR value decomposition , *BISTATIC radar , *PRINCIPAL components analysis , *COMPUTATIONAL complexity - Abstract
For pulse Doppler (PD) radar, two tensor structure-based methods are proposed to improve clutter suppression performance: tensor singular value decomposition (TSVD) and tensor robust principal component analysis (TRPCA). Both algorithms stack the range-Doppler matrices of multiple coherent processing intervals (CPI) into a tensor structure, which helps make the most of the temporal information across CPIs. TSVD utilizes the strong temporal correlation of ground clutter data to estimate the clutter subspace, while TRPCA exactly separates the low-rank clutter component and the sparse target components from the time-series range-Doppler tensor. Experimental results indicate that, compared with conventional matrix-based schemes such as singular value decomposition (SVD) or robust principal component analysis (RPCA), the proposed methods have significant advantages in keeping the target echo while suppressing the clutter. In addition, the computational complexity of each algorithm is analysed in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
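The matrix-SVD baseline that the tensor methods are compared against works by projecting out the dominant singular components, which capture the low-rank clutter. A minimal sketch on toy data (not radar echoes):

```python
import numpy as np

def svd_clutter_suppress(X, rank):
    """Remove the dominant (low-rank clutter) subspace from a
    range-Doppler matrix by subtracting the top `rank` singular
    components; what remains holds the sparse target returns."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    clutter = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X - clutter
```

On synthetic data with a strong rank-1 "clutter" component plus a single weak "target" cell, the residual retains the target while the clutter is removed — the behaviour the tensor variants extend across multiple CPIs.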
16. RadarTCN: Lightweight Online Classification Network for Automotive Radar Targets Based on TCN.
- Author
-
Li, Yuan, Zhang, Mengmeng, Jing, Hongyuan, and Liu, Zhi
- Subjects
- *
ROAD vehicle radar , *INTELLIGENT sensors , *CLASSIFICATION , *FEATURE extraction , *COMPUTATIONAL complexity , *RADAR targets , *MULTISPECTRAL imaging - Abstract
Automotive radar is one of the key sensors for intelligent driving. Radar image sequences contain abundant spatial and temporal information, enabling target classification. For existing radar spatiotemporal classifiers, multi-view radar images are usually employed to enhance the information of the target and 3D convolution is employed for spatiotemporal feature extraction. These models consume significant hardware resources and are not applicable to real-time applications. In this paper, RadarTCN, a novel lightweight network, is proposed that achieves high-accuracy online target classification using single-view radar image sequences only. In RadarTCN, 2D convolution and 3D-TCN are employed to extract spatiotemporal features sequentially. To reduce data dimensionality and computational complexity, a multi-layer max pooling down-sampling method is designed in a 2D convolution module. Meanwhile, the 3D-TCN module is improved through residual pruning and causal convolution is introduced for leveraging the performance of online target classification. The experimental results demonstrate that RadarTCN can achieve high-precision online target recognition for both range-angle and range-Doppler map sequences. Compared to the reference models on the CARRADA dataset, RadarTCN exhibits better classification performance, with fewer parameters and lower computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Low-Complexity 2D-DOD and 2D-DOA Estimation in Bistatic MIMO Radar Systems: A Reduced-Dimension MUSIC Algorithm Approach.
- Author
-
Ahmad, Mushtaq, Zhang, Xiaofei, Lai, Xin, Ali, Farman, and Shi, Xinlei
- Subjects
- *
MULTIPLE Signal Classification , *BISTATIC radar , *MIMO radar , *MIMO systems , *ESTIMATION theory , *COMPUTATIONAL complexity - Abstract
This paper presents a new technique for estimating the two-dimensional direction of departure (2D-DOD) and direction of arrival (2D-DOA) in bistatic uniform planar array Multiple-Input Multiple-Output (MIMO) radar systems. The method is based on the reduced-dimension (RD) MUSIC algorithm, aiming to achieve improved precision and computational efficiency. First, this approach efficiently transforms the four-dimensional (4D) estimation problem into two-dimensional (2D) searches, thus reducing the computational complexity typically associated with conventional MUSIC algorithms. Then, it exploits the spatial diversity of the array response vectors to construct a 4D spatial spectrum function, which is crucial in resolving the complex angular parameters of multiple simultaneous targets. Finally, the spatial spectrum is reduced to a 2D search within the 4D measurement space to achieve an optimal balance between efficiency and accuracy. Simulation results validate the effectiveness of the proposed algorithm compared to several existing approaches, demonstrating its robustness in accurately estimating the 2D-DOD and 2D-DOA across various scenarios. The proposed technique shows significant computational savings and high-resolution estimates while maintaining high precision, setting a new benchmark for future explorations in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
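The conventional full-search MUSIC spectrum that the RD approach reduces can be sketched for the simplest case, a 1D uniform linear array; the array geometry, element spacing, and grid below are illustrative assumptions, not the paper's bistatic planar-array setup:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum for a uniform linear array.
    X: snapshot matrix (n_antennas x n_snapshots); d: spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]          # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    p = []
    for theta in np.deg2rad(np.asarray(angles_deg, dtype=float)):
        a = np.exp(2j * np.pi * d * k * np.sin(theta))   # steering vector
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)
```

Peaks occur where the steering vector is (nearly) orthogonal to the noise subspace; the RD trick in the paper avoids evaluating such a spectrum over a full 4D grid.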
18. Urban Street Scene Instance Segmentation: An Integrated Hybrid Network Merging Top-Down and Bottom-Up Strategies.
- Author
-
Ruifa Zhou and Ji Zhao
- Subjects
- *
OBJECT recognition (Computer vision) , *FEATURE extraction , *COMPUTATIONAL complexity , *STREETS , *AUTONOMOUS vehicles , *PYRAMIDS - Abstract
There are two standard approaches to instance segmentation: top-down and bottom-up. The top-down approach performs object detection to generate candidate proposals and then performs pixel-level segmentation for each proposal. It is accurate and flexible, capable of handling objects of different sizes and shapes, but it is computationally complex and relies on object detection accuracy. The bottom-up approach first performs pixel-level clustering or segmentation and then combines candidate instances to obtain the final segmentation result. It can handle overlapping cases and has lower computational complexity, but it may fail to localize and segment instances accurately, and its segmentation granularity is coarser. In this paper, the Urban Street Scene Instance Segmentation (UISNet) algorithm is proposed. Firstly, the feature extraction network is the foundation of UISNet, which uses EfficientNet as the backbone network. Secondly, MPAFPN is the feature pyramid network part of UISNet, used for multi-scale feature fusion. By using EfficientNet and MPAFPN as the backbone network and bottleneck layers, the accuracy of UISNet is improved by 4% compared to ResNet and FPN. In the inference phase, this paper introduces an innovative dual-branch design that combines top-down and bottom-up strategies. One branch is the bounding box aggregation branch, which generates high-dimensional information such as the shape and orientation of bounding boxes based on the FCOS Head. The other branch is the mask decoding branch, which creates mask prediction results. These two branches are fused using the Mask FCN Header to obtain the final instance segmentation result. With this dual-branch design, the model can effectively utilize the information from both top-down and bottom-up approaches, thereby improving the accuracy and robustness of instance segmentation.
Through experimental comparisons, the proposed network model in this paper achieves the best performance in terms of accuracy compared to other instance segmentation networks, with an accuracy of 36.28%. Moreover, the proposed model performs better in urban street scenes, enhancing object detection and segmentation and offering more reliable and efficient solutions for applications such as autonomous driving and intelligent transportation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. A Lightweight Chip-Scale Chemical Mechanical Polishing Model Based on Polynomial Network.
- Author
-
Ji, Ruian, Chen, Rong, and Chen, Lan
- Subjects
- *
MECHANICAL models , *GRINDING & polishing , *POLYNOMIALS , *COMPUTATIONAL complexity , *CHEMICAL reactions , *SEMICONDUCTOR devices - Abstract
Chemical mechanical polishing/planarization (CMP) combines physical grinding and chemical reactions to planarize the wafer surface. The complex mechanism of CMP brings great challenges to the mechanism-based modeling process. The data-driven CMP modeling process is limited by insufficient datasets. At the same time, these two types of models generally have high computational complexity. In this paper, we introduce the group method of data handling (GMDH)-type polynomial network to build the CMP model to address the above challenges. We designed and manufactured the test chip using a 28nm process. The measurement data from the test chip shows that compared with the mechanism-based CMP model, the trained CMP model based on GMDH-type polynomial network has higher accuracy and lower computational complexity, with the average simulation speed being 115x faster. Experiments based on silicon data show that this modeling method has a small demand for data, and 20 randomly selected sets of data can meet the needs for modeling the current CMP process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Multi-layer encoder–decoder time-domain single channel speech separation.
- Author
-
Liu, Debang, Zhang, Tianqi, Christensen, Mads Græsbøll, Yi, Chen, and Wei, Ying
- Subjects
- *
SPEECH , *COMPUTATIONAL complexity , *VIDEO coding , *DEAF children - Abstract
With the emergence of more advanced separation networks, significant progress has been made in time-domain speech separation methods. These methods typically use a temporal encoder–decoder structure to encode speech feature sequences, thereby accomplishing the separation task. However, due to the limitations of the traditional encoder–decoder structure, the separation performance decreases sharply when the encoded sequence is short; when the encoded sequence is sufficiently long, the separation performance improves, but at the cost of increased computational complexity and training cost. Therefore, this paper compresses and reconstructs the speech feature sequence through a multi-layer convolution structure, and proposes a multi-layer encoder–decoder time-domain speech separation model (MLED). In this model, our encoder–decoder structure can compress the speech sequence to a short length while ensuring that the separation performance does not decrease. Combined with our multi-scale temporal attention (MSTA) separation network, MLED achieves efficient and precise separation of short encoded sequences. Compared to previous advanced time-domain separation methods, our experiments show that MLED achieves competitive separation performance with smaller model size, lower computational complexity, and lower training cost. • Our designed encoder–decoder network is more effective on shorter encoded sequences. • Since the encoded sequence is shorter, MLED performs the separation task efficiently. • MLED better balances performance, model size, and computational and training costs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Multispectral and hyperspectral images fusion based on subspace representation and nonlocal low-rank regularization.
- Author
-
Yang, Yiguo, Li, Dan, Lv, Yanyan, Kong, Fanqiang, and Wang, Qiang
- Subjects
- *
SPATIAL resolution , *COMPUTATIONAL complexity , *MULTISPECTRAL imaging , *IMAGE fusion , *HIGH resolution imaging - Abstract
Multispectral image (MSI) and hyperspectral image (HSI) fusion is a popular method for HSI super-resolution reconstruction (HSI-SR). The MSI-HSI fusion problem is ill-posed and demands several image priors or regularization terms to solve accurately, which is a challenging issue. In this paper, we propose an MSI-HSI fusion model via subspace representation and nonlocal low-rank regularization (SRNLRR). The SRNLRR model incorporates the global spectral correlations and spatial nonlocal similarities of HSI to improve the fusion results, where the priors complement each other. First, we use the mode-n tensor-matrix product to project the latent high spatial resolution HSI (HR-HSI) into a spectral subspace, which can capture spectral low-rankness and reduce computational complexity. Then, based on low-rank representation (LRR) and a nonlocal processing strategy, we design a spatial nonlocal LRR regularization (spa-NLRR) and a spectral global LRR regularization (spe-GLRR). These two regularizations analyze the spatial nonlocal similarities and spectral global correlations from intermediate-level vision. Finally, we use the residual regularization program to obtain more image information and input it into the fusion model. We use alternating minimization (AM) to optimize the SRNLRR model and employ the alternating direction method (ADM) for spatial/spectral LRR learning. Comparison experiments between the SRNLRR method and six advanced methods on three HSI datasets indicate the superiority of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Local and parallel multigrid method for semilinear Neumann problem with nonlinear boundary condition.
- Author
-
Xu, Fei, Wang, Bingyi, and Xie, Manting
- Subjects
- *
NEUMANN problem , *NONLINEAR equations , *SEMILINEAR elliptic equations , *BOUNDARY value problems , *MULTIGRID methods (Numerical analysis) , *COMPUTATIONAL complexity - Abstract
A novel local and parallel multigrid method is proposed in this study for solving the semilinear Neumann problem with nonlinear boundary condition. Instead of solving the semilinear Neumann problem directly in the fine finite element space, we transform it into a linear boundary value problem defined in each level of a multigrid sequence and a small-scale semilinear Neumann problem defined in a low-dimensional correction subspace. Furthermore, the linear boundary value problem can be efficiently solved using local and parallel methods. The proposed process derives an optimal error estimate with linear computational complexity. Additionally, compared with existing multigrid methods for semilinear Neumann problems that require bounded second order derivatives of nonlinear terms, ours only needs bounded first order derivatives. A rigorous theoretical analysis is proposed in this paper, which differs from the maturely developed theories for equations with Dirichlet boundary conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. An improved exponential metric space approach for C‐mean clustering analysing.
- Author
-
Kumar, Rakesh, Joshi, Varun, Dhiman, Gaurav, and Viriyasitavat, Wattana
- Subjects
- *
CENTROID , *GAUSSIAN function , *COMPUTATIONAL complexity , *METRIC spaces , *ALGORITHMS - Abstract
In this article, we present two resilient algorithms, the improved alternative hard c‐means (IAHCM) and the improved alternative fuzzy c‐means (IAFCM). We implement the Gaussian distance‐dependent function proposed by Zhang and Chen (D.‐Q. Zhang and Chen, 2004). In some cases, Zhang and Chen's metric distance does not account for the clustering centroid effect induced by large values. R* is employed in IAHCM and IAFCM to obtain robust results while minimizing sensitivity. Experiments are conducted using two- and three-dimensional data, including the Diamond and Iris real-world data sets. The results demonstrate the simplicity, robustness, and applicability of the proposed algorithms. The computational complexity is also assessed. [ABSTRACT FROM AUTHOR]
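The bounded Gaussian-style distance at the heart of the alternative c-means family can be sketched in a few lines of hard c-means. This is a simplified illustration, not the paper's IAHCM/IAFCM: the plain-mean centroid update, β = 1, and the synthetic blobs are assumptions.

```python
import math
import random

def dist(x, c, beta=1.0):
    # Gaussian distance-dependent function: bounded in [0, 1), so distant
    # outliers cannot dominate the objective the way squared error does
    return 1.0 - math.exp(-beta * sum((a - b) ** 2 for a, b in zip(x, c)))

def hard_c_means(points, centers, iters=20):
    # hard assignment under the alternative distance; the plain-mean
    # centroid update here is a simplification for illustration
    centers = [list(c) for c in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda j: dist(p, centers[j]))
            clusters[j].append(p)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers

random.seed(0)
pts  = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(20)]
pts += [(random.gauss(3, 0.1), random.gauss(3, 0.1)) for _ in range(20)]
centers = hard_c_means(pts, [pts[0], pts[20]])  # one seed point per blob
```

Because the distance saturates near 1, points far from every centroid contribute almost equally everywhere, which is what limits outlier influence on the centroids.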
- Published
- 2024
- Full Text
- View/download PDF
24. ATFTrans: attention-weighted token fusion transformer for robust and efficient object tracking.
- Author
-
Xu, Liang, Wang, Liejun, and Guo, Zhiqing
- Subjects
- *
OBJECT tracking (Computer vision) , *TRANSFORMER models , *COMPUTATIONAL complexity , *RESEARCH personnel - Abstract
Recently, fully transformer-based trackers have achieved impressive tracking results, but at the price of considerable computational complexity. Some researchers have applied token pruning techniques to fully transformer-based trackers to diminish the computational complexity, but this discards contextual information that is important for the regression task in the tracker. In response to this issue, this paper proposes a token fusion method that speeds up inference while avoiding information loss, thus improving the robustness of the tracker. Specifically, the input of the transformer's encoder contains search tokens and exemplar tokens, and the search tokens are divided into tracking object tokens and background tokens according to their similarity to the exemplar tokens. Tokens with greater similarity to the exemplar tokens are identified as tracking object tokens, and those with smaller similarity are identified as background tokens. The tracking object tokens carry the discriminative features of the tracking object, so the tracker pays more attention to them while the computational effort is reduced. All the tracking object tokens are kept, and the background tokens are then weighted and fused into new background tokens according to their attention weights, which prevents the loss of contextual information. The token fusion method presented in this paper not only enables efficient inference but also makes the tracker more robust. Extensive experiments are carried out on popular tracking benchmark datasets to verify the validity of the token fusion method. [ABSTRACT FROM AUTHOR]
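The keep-and-fuse idea can be sketched with plain similarity scores. This is a minimal sketch with hypothetical 2-D tokens; scoring by a dot product against the exemplar mean and the 0.5 keep ratio are assumptions, since the paper derives the weights from the encoder's attention:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fuse_tokens(search, exemplar_mean, keep_ratio=0.5):
    # score each search token by similarity to the exemplar; keep the
    # top tokens as object tokens, fuse the rest into one weighted token
    scores = [dot(t, exemplar_mean) for t in search]
    order = sorted(range(len(search)), key=lambda i: -scores[i])
    k = max(1, int(len(search) * keep_ratio))
    kept = [search[i] for i in order[:k]]
    rest = order[k:]
    if not rest:
        return kept
    w = [math.exp(scores[i]) for i in rest]   # softmax-style weights
    z = sum(w)
    fused = [sum(wi * search[i][d] for wi, i in zip(w, rest)) / z
             for d in range(len(search[0]))]
    return kept + [fused]

# four hypothetical search tokens; the first two resemble the exemplar
search = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
exemplar_mean = [1.0, 0.0]
out = fuse_tokens(search, exemplar_mean, keep_ratio=0.5)
```

Unlike pruning, the fused token retains a weighted summary of the background context: here four tokens become three (two kept object tokens plus one fused background token) without discarding the background entirely.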
- Published
- 2024
- Full Text
- View/download PDF
25. Supersaturated designs with less β-aberration.
- Author
-
Daraz, Umer, Chen, E, and Tang, Yu
- Subjects
- *
FACTORIAL experiment designs , *COMPUTATIONAL complexity - Abstract
Supersaturated designs are an important class of fractional factorial designs in which the number of experimental runs is insufficient to estimate all the main effects. These designs are widely used in screening experiments, where the primary goal is to find important active factors at low cost. The minimum β-aberration criterion is an appropriate criterion for measuring designs with quantitative factors. In this article, we first establish the explicit expression of β2 for three-level designs based on the relationship between the wordlength enumerator and the β-wordlength pattern. This reduces the computational complexity of the β-wordlength pattern and provides an effective way to find designs under the minimum β-aberration criterion. Moreover, a sharper lower bound on β2 is obtained, which can serve as a benchmark for constructing optimal supersaturated designs. We further provide a simulated annealing algorithm to construct three-level supersaturated uniform designs with smaller β2. Finally, numerical results verify that our lower bound is sharper than the existing one. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Unfitted generalized finite element methods for Dirichlet problems without penalty or stabilization.
- Author
-
Zhang, Qinghui
- Subjects
- *
DIRICHLET problem , *FINITE element method , *FUNCTION spaces , *ENERGY function , *COMPUTATIONAL complexity , *LAGRANGE multiplier - Abstract
Unfitted finite element methods (FEM) have attractive merits for problems with evolving or geometrically complex boundaries. Conventional unfitted FEMs incorporate penalty terms, parameters, or Lagrange multipliers to impose the Dirichlet boundary condition weakly, which to some extent increases the computational complexity of the implementation. In this article, we propose an unfitted generalized FEM (GFEM) for the Dirichlet problem that is free from any penalty or stabilization. This is achieved by means of the partition of unity framework of GFEM and by designing a set of new enrichments for the Dirichlet boundary. The enrichments are divided into two groups: one is used to impose the Dirichlet boundary condition strongly, and the other serves as the energy space of the variational formulation. The shape functions in the energy space vanish at the boundary, so that standard variational formulae like those in the conventional fitted FEM can be applied; thus the penalty and stabilization are not needed. The optimal convergence rate in the energy norm is proven rigorously. Numerical experiments and comparisons with other methods are executed to verify the theoretical result and the effectiveness of the algorithm. The conditioning of the new method is numerically shown to be of the same order as that of the standard FEM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. A sparse expansion for deep Gaussian processes.
- Author
-
Ding, Liang, Tuo, Rui, and Shahrampour, Shahin
- Subjects
- *
MARKOV processes , *STOCHASTIC processes , *COMPUTATIONAL complexity , *GAUSSIAN distribution , *GAUSSIAN processes - Abstract
In this work, we use Deep Gaussian Processes (DGPs) as statistical surrogates for stochastic processes with complex distributions. Conventional inferential methods for DGP models can suffer from high computational complexity, as they require large-scale operations with kernel matrices for training and inference. In this work, we propose an efficient scheme for accurate inference and efficient training based on a class of Gaussian processes called Tensor Markov Gaussian Processes (TMGP). We construct an induced approximation of TMGP referred to as the hierarchical expansion. Next, we develop a deep TMGP (DTMGP) model as the composition of multiple hierarchical expansions of TMGPs. The proposed DTMGP model has the following properties: (i) the outputs of each activation function are deterministic while the weights are chosen independently from the standard Gaussian distribution; (ii) in training or prediction, only O(polylog(M)) (out of M) activation functions have non-zero outputs, which significantly boosts the computational efficiency. Our numerical experiments on synthetic models and real datasets show the superior computational efficiency of DTMGP over existing DGP models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Single-metalens-assisted polarization imaging and edge detection for target recognition.
- Author
-
Fan, Yandong, Jin, Chunqi, Yang, Jiayu, Zhu, Fei, and Li, Wei
- Subjects
- *
COMPUTER vision , *IMAGE processing , *IMAGING systems , *IMAGE recognition (Computer vision) , *COMPUTATIONAL complexity - Abstract
Simultaneous capture of various light information, including polarization and edge information of the objects, has consistently been a fundamental concern within the field of target recognition. However, these tasks are typically accompanied by bulky optical components and active illumination methods, which significantly restricts their use in compact and lightweight applications. Here, we demonstrate a metalens-assisted imaging system that can simultaneously achieve polarization imaging and optoelectronic edge detection in a single shot with low consumption. The dielectric metalens is designed to achieve polarization imaging by dispersing the input polarized light into two orthogonal components, resulting in optoelectronic isotropic edge detection of two-dimensional images after digital post-processing. Compared with the algorithmic methods using a convolution kernel, the proposed system has a much lower computational complexity. The work presented in this study demonstrates the potential applications in machine vision and paves the way for the development of compact target recognition and real-time image processing systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Computational complexity in explainable decision support system: A review.
- Author
-
Ezeji, Ijeoma Noella, Adigun, Matthew, and Oki, Olukayode
- Abstract
The rise of decision processes in various sectors has led to the adoption of decision support systems (DSSs) to support human decision-makers but the lack of transparency and interpretability of these systems has led to concerns about their reliability, accountability and fairness. Explainable Decision Support Systems (XDSS) have emerged as a promising solution to address these issues by providing explanatory meaning and interpretation to users about their decisions. These XDSSs play an important role in increasing transparency and confidence in automated decision-making. However, the increasing complexity of data processing and decision models presents computational challenges that need to be investigated. This review, therefore, focuses on exploring the computational complexity challenges associated with implementing explainable AI models in decision support systems. The motivations behind explainable AI were discussed, explanation methods and their computational complexities were analyzed, and trade-offs between complexity and interpretability were highlighted. This review provides insights into the current state-of-the-art computational complexity within explainable decision support systems and future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Fire and smoke real-time detection algorithm for coal mines based on improved YOLOv8s.
- Author
-
Kong, Derui, Li, Yinfeng, and Duan, Manzhen
- Subjects
- *
COAL mining , *OBJECT recognition (Computer vision) , *FIRE detectors , *COMPUTATIONAL complexity , *ALGORITHMS , *SMOKE - Abstract
Fire and smoke detection is crucial for the safe mining of coal energy, but previous fire-smoke detection models did not strike a perfect balance between complexity and accuracy, which makes it difficult to deploy efficient fire-smoke detection in coal mines with limited computational resources. Therefore, we improve the current advanced object detection model YOLOv8s based on two core ideas: (1) we reduce the model computational complexity and ensure real-time detection by applying faster convolutions to the backbone and neck parts; (2) to strengthen the model's detection accuracy, we integrate attention mechanisms into both the backbone and head components. In addition, we improve the model's generalization capacity by augmenting the data. Our method has 23.0% and 26.4% fewer parameters and FLOPs (Floating-Point Operations) than YOLOv8s, which means that we have effectively reduced the computational complexity. Our model also achieves a mAP (mean Average Precision) of 91.0%, which is 2.5% higher than the baseline model. These results show that our method can improve the detection accuracy while reducing complexity, making it more suitable for real-time fire-smoke detection in resource-constrained environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. The concept of optimal planning of a linearly oriented segment of the 5G network.
- Author
-
Kovtun, Viacheslav, Grochla, Krzysztof, Zaitseva, Elena, and Levashenko, Vitaly
- Subjects
- *
5G networks , *COMPUTATIONAL complexity , *NUMERICAL analysis , *SPACE frame structures - Abstract
In the article, the extreme problem of finding the optimal placement plan of 5G base stations at certain points within a linear area of finite length is set. A fundamental feature of the author's formulation of the extreme problem is that it takes into account not only the points of potential placement of base stations but also the possibility of selecting instances of stations to be placed at a specific point from a defined excess set, as well as the aspect of inseparable interaction of placed 5G base stations within the framework of SON. The formulation of this extreme problem is brought to the form of a specific combinatorial model. The article proposes an adapted branch-and-bound method, which allows the process of synthesis of the architecture of a linearly oriented segment of a 5G network to select the best options for the placement of base stations for further evaluation of the received placement plans in the metric of defined performance indicators. As the final stage of the synthesis of the optimal plan of a linearly oriented wireless network segment based on the sequence of the best placements, it is proposed to expand the parametric space of the design task due to the specific technical parameters characteristic of the 5G platform. The article presents a numerical example of solving an instance of the corresponding extremal problem. It is shown that the presented mathematical apparatus allows for the formation of a set of optimal placements taking into account the size of the non-coverage of the target area. To calculate this characteristic parameter, both an exact and two approximate approaches are formalized. The results of the experiment showed that for high-dimensional problems, the approximate approach allows for reducing the computational complexity of implementing the adapted branch-and-bound method by more than six times, with a slight loss of accuracy of the optimal solution.
The structure of the article includes Section 1 (introduction and state-of-the-art), Section 2 (statement of the research, proposed models and methods devoted to the research topic), Section 3 (numerical experiment and analysis of results), and Section 4 (conclusions and further research). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A Training-Free Estimation Method for the State of Charge and State of Health of Series Battery Packs under Various Load Profiles.
- Author
-
Pei, Lei, Yu, Cheng, Wang, Tiansi, Yang, Jiawei, and Wang, Wanlin
- Subjects
- *
PARAMETER estimation , *COMPUTATIONAL complexity , *MODELS & modelmaking - Abstract
To ensure the accuracy of state of charge (SOC) and state of health (SOH) estimation for battery packs while minimizing the amount of pre-experiments required for aging modeling and the scale of computation for online management, a decisive-cell-based estimation method with training-free characteristic parameters and a dynamic-weighted estimation strategy is proposed in this paper. Firstly, to reduce the computational complexity, the state estimation of the battery pack is reduced to that of two decisive cells, and a new selection approach for the decisive cells is adopted based on the detection of steep voltage changes. Secondly, two novel ideas are implemented for the state estimation of the selected cells. On the one hand, a set of characteristic parameters that only exhibit local curve shrinkage with aging is chosen, which keeps the corresponding estimation approaches training-free. On the other hand, multiple basic estimation approaches are effectively combined through their respective dynamic weights, which ensures good estimation accuracy under various load profiles. Finally, the experimental results show that the new method can quickly correct initial setting deviations and achieves estimation errors within 2% for both SOC and SOH on a series battery pack consisting of cells with obvious inconsistency. [ABSTRACT FROM AUTHOR]
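The dynamic-weighting idea — combining several basic estimators so that whichever is currently more reliable dominates the fused estimate — can be sketched as follows. The inverse-error weighting rule and the numbers are assumptions for illustration, not the paper's exact strategy:

```python
def dynamic_weighted(estimates, recent_errors, eps=1e-6):
    # weight each basic estimator by the inverse of its recent error,
    # so the currently more reliable approach dominates the fusion
    inv = [1.0 / (e + eps) for e in recent_errors]
    z = sum(inv)
    return sum(w / z * x for w, x in zip(inv, estimates))

# three hypothetical basic SOC estimates (%) and their recent errors
soc = dynamic_weighted([80.0, 78.0, 90.0], [0.5, 0.4, 5.0])
```

With recent errors of 0.5, 0.4, and 5.0, the third (least reliable) estimate contributes only a few percent of the fused SOC, so the result stays close to the two agreeing estimators.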
- Published
- 2024
- Full Text
- View/download PDF
33. MRFA-Net: Multi-Scale Receptive Feature Aggregation Network for Cloud and Shadow Detection.
- Author
-
Wang, Jianxiang, Li, Yuanlu, Fan, Xiaoting, Zhou, Xin, and Wu, Mingxuan
- Subjects
- *
MATRIX decomposition , *REMOTE sensing , *FEATURE extraction , *IMAGE processing , *COMPUTATIONAL complexity - Abstract
The effective segmentation of clouds and cloud shadows is crucial for surface feature extraction, climate monitoring, and atmospheric correction, but it remains a critical challenge in remote sensing image processing. Cloud features are intricate, with varied distributions and unclear boundaries, making accurate extraction difficult, with only a few networks addressing this challenge. To tackle these issues, we introduce a multi-scale receptive field aggregation network (MRFA-Net). The MRFA-Net comprises an MRFA-Encoder and MRFA-Decoder. Within the encoder, the net includes the asymmetric feature extractor module (AFEM) and multi-scale attention, which capture diverse local features and enhance contextual semantic understanding, respectively. The MRFA-Decoder includes the multi-path decoder module (MDM) for blending features and the global feature refinement module (GFRM) for optimizing information via learnable matrix decomposition. Experimental results demonstrate that our model excelled in generalization and segmentation performance when addressing various complex backgrounds and different category detections, exhibiting advantages in terms of parameter efficiency and computational complexity, with the MRFA-Net achieving a mean intersection over union (MIoU) of 94.12% on our custom Cloud and Shadow dataset, and 87.54% on the open-source HRC_WHU dataset, outperforming other models by at least 0.53% and 0.62%. The proposed model demonstrates applicability in practical scenarios where features are difficult to distinguish. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. GLUENet: An Efficient Network for Remote Sensing Image Dehazing with Gated Linear Units and Efficient Channel Attention.
- Author
-
Fang, Jiahao, Wang, Xing, Li, Yujie, Zhang, Xuefeng, Zhang, Bingxian, and Gade, Martin
- Subjects
- *
REMOTE sensing , *IMAGE fusion , *COMPUTATIONAL complexity , *DATA mining , *OVERTRAINING - Abstract
Dehazing individual remote sensing (RS) images is an effective approach to enhance the quality of hazy remote sensing imagery. However, current dehazing methods exhibit substantial systemic and computational complexity. Such complexity not only hampers the straightforward analysis and comparison of these methods but also undermines their practical effectiveness on actual data, attributed to the overtraining and overfitting of model parameters. To mitigate these issues, we introduce a novel dehazing network for non-uniformly hazy RS images: GLUENet, designed to be both lightweight and computationally efficient. Our approach commences with the implementation of the classical U-Net, integrated with both local and global residuals, establishing a robust base for the extraction of multi-scale information. Subsequently, we construct basic convolutional blocks using gated linear units and efficient channel attention, incorporating depth-separable convolutional layers to efficiently aggregate spatial information and transform features. Additionally, we introduce a fusion block based on efficient channel attention, facilitating the fusion of information from different stages in both encoding and decoding to enhance the recovery of texture details. GLUENet's efficacy was evaluated using both synthetic and real remote sensing dehazing datasets, providing a comprehensive assessment of its performance. The experimental results demonstrate that GLUENet's performance is on par with state-of-the-art (SOTA) methods and surpasses the SOTA methods on our proposed real remote sensing dataset. On the real remote sensing dehazing dataset, our method improves the PSNR metric by 0.31 dB and the SSIM metric by 0.13, and the number of parameters and computations of the model are much lower than those of the best competing method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Long-Term Coherent Integration Algorithm for High-Speed Target Detection.
- Author
-
He, Yao, Zhao, Guanghui, and Xiong, Kai
- Subjects
- *
ALGORITHMS , *DOPPLER effect , *FOURIER transforms , *COMPUTATIONAL complexity , *RADON transforms , *RADON , *VELOCITY , *DISCRETE Fourier transforms - Abstract
Long-term coherent integration (CI) can effectively improve the radar detection capability for high-speed targets. However, the range walk (RW) effect caused by high-speed motion significantly degrades the detection performance. To improve detection performance, this study proposes an improved algorithm based on the modified Radon inverse Fourier transform (denoted as IMRIFT). The proposed algorithm uses parameter searching for velocity estimation, designs a compensation function based on the relationship between velocity and the range walk and Doppler ambiguity terms, and performs CI on the compensated signal. IMRIFT can achieve RW correction, avoid the blind-speed sidelobe (BSSL) effect caused by velocity mismatch, and improve detection performance, while ensuring low computational complexity. In addition, considering the relationship between energy concentration regions and bandwidth in the 2D frequency domain, a fast method based on IMRIFT is proposed, which can balance computational cost and detection capacity. Finally, a series of comparative experiments demonstrates the effectiveness of the proposed algorithm and the fast method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Deep Reinforcement Learning for Network Dismantling: A K-Core Based Approach.
- Author
-
Pu, Tianle, Zeng, Li, and Chen, Chao
- Subjects
- *
DEEP reinforcement learning , *GRAPH neural networks , *REINFORCEMENT (Psychology) , *REINFORCEMENT learning , *COMPUTATIONAL complexity - Abstract
Network dismantling is one of the most challenging problems in complex systems, encompassing a broad array of practical applications. Previous works mainly focus on metrics such as the number of nodes in the Giant Connected Component (GCC), average pairwise connectivity, etc. This paper introduces a novel metric, the accumulated 2-core size, for assessing network dismantling. Due to the NP-hard computational complexity of this problem, we propose SmartCore, an end-to-end model for minimizing the accumulated 2-core size by leveraging reinforcement learning and graph neural networks. Extensive experiments across synthetic and real-world datasets demonstrate SmartCore's superiority over existing methods in terms of both accuracy and speed, suggesting that SmartCore should be a better choice for the network dismantling problem in practice. [ABSTRACT FROM AUTHOR]
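The 2-core objective is easy to state concretely: repeatedly strip nodes of degree less than 2 and count what survives. Below is a pure-Python sketch with a brute-force greedy baseline for comparison — a hypothetical illustration of the metric, not SmartCore itself, which replaces the greedy choice with a learned GNN policy:

```python
from collections import defaultdict

def two_core_size(adj, removed):
    # iteratively strip nodes of degree < 2; what remains is the 2-core
    deg = {v: sum(1 for u in adj[v] if u not in removed)
           for v in adj if v not in removed}
    queue = [v for v, d in deg.items() if d < 2]
    dead = set(queue)
    while queue:
        v = queue.pop()
        for u in adj[v]:
            if u in deg and u not in dead:
                deg[u] -= 1
                if deg[u] < 2:
                    dead.add(u)
                    queue.append(u)
    return len(deg) - len(dead)

def greedy_dismantle(adj, budget):
    # greedy baseline: remove the node whose deletion shrinks the
    # 2-core the most at each step (exponentially far from optimal
    # in general, hence the learned policy in the paper)
    removed = set()
    for _ in range(budget):
        best = min((v for v in adj if v not in removed),
                   key=lambda v: two_core_size(adj, removed | {v}))
        removed.add(best)
    return removed

# a 5-cycle with one pendant node: the cycle is the 2-core
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (4, 5)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
size_before = two_core_size(adj, set())       # 5: the cycle nodes
rm = greedy_dismantle(adj, 1)
size_after = two_core_size(adj, rm)           # 0: the cycle unravels
```

Removing any single cycle node drops every remaining cycle node below degree 2, so the whole 2-core unravels — exactly the cascade the accumulated 2-core metric rewards.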
- Published
- 2024
- Full Text
- View/download PDF
37. Semantic segmentation for multisource remote sensing images incorporating feature slice reconstruction and attention upsampling.
- Author
-
Lang, FengKai, Zhang, Ming, Zhao, JinQi, Zheng, NanShan, and Shi, Hongtao
- Subjects
- *
COMPUTATIONAL complexity , *PROBLEM solving , *IMAGE segmentation - Abstract
Multisource remote sensing images have rich features and high interpretability and are widely employed in many applications. However, highly unbalanced category distributions and complex backgrounds have created some difficulties in the application of remote sensing image semantic segmentation tasks, such as low accuracy of small target segmentation and inaccurate edge extraction. To solve these problems, in this paper, a feature map segmentation reconstruction module and an attention upsampling module are proposed. In the encoder part, the input feature map is equally segmented, and the segmented feature map is enlarged to effectively improve the small target feature information expression ability in the model. In the decoder part, the key segmentation and location information of shallow features are obtained using the global view. The deep semantic information and shallow spatial location information are fully combined to achieve a more refined upsampling operation. In addition, the attention mechanism of the spatial and channel squeeze and excitation block (scSE) is applied to pay more attention to important features and to suppress irrelevant background and redundant information. To verify the effectiveness of the proposed method, the WHU-OPT-SAR dataset and six state-of-the-art algorithms are utilized in comparative experiments. The experimental results show that our model has demonstrated the best performance and low computational complexity. With only approximately half the floating-point operation count and the number of model parameters of the MCANet model, which is specially designed for the dataset, our model surpasses MCANet by 1.52% and 1.53% in terms of mean intersection over union (mIoU) and F1 score, respectively. In particular, for small object regions such as roads and other categories, compared to the baseline model, the IoU and F1 score of our model are improved by 5.27% and 3.99% and by 5.68% and 5.65%, respectively. 
These results demonstrate the superior performance of our model in terms of accuracy and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. MFI-CD: a lightweight siamese network with multidimensional feature interaction for change detection.
- Author
-
Zhang, Xingpeng, Li, Yuru, Wang, Qiuli, and Wu, Sijing
- Subjects
- *
SURFACE of the earth , *SPINE , *REMOTE sensing , *COMPUTATIONAL complexity - Abstract
Change detection has always played a crucial role in observing the Earth's surface. Although vision Transformers (ViTs) have achieved excellent performance in change detection, they incur high computational complexity. In addition, they do not explicitly address the differences caused by factors such as climate, lighting, and shooting angle in remote sensing (RS) image pairs. Therefore, we propose an efficient, lightweight Siamese network that considers multi-dimensional feature interaction. First, we use a lightweight Transformer-based headless backbone network to extract feature information at each stage for the bi-temporal images. To better capture the details and structure of images and eliminate the adverse effects of climate, lighting, and shooting angle, we design a multidimensional feature interaction method that applies spatial, channel, and amplitude dimension interactions after the feature extraction operations at different stages. Furthermore, this approach achieves domain adaptation between the bi-temporal domains to a certain extent while preserving the original semantic correspondence. Comprehensive experiments and extensive ablation studies on two common datasets, LEVIR-CD and S2Looking, show that our method achieves better performance with fewer parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Optimal virtual tube planning and control for swarm robotics.
- Author
-
Mao, Pengda, Fu, Rao, and Quan, Quan
- Subjects
- *
AGGREGATION (Robotics) , *TUBES , *COMPUTATIONAL complexity , *PREDICTION models , *ROBOTICS , *ROBOTS - Abstract
This paper presents a novel method for efficiently solving a trajectory planning problem for swarm robotics in cluttered environments. Recent research has demonstrated high success rates in real-time local trajectory planning for swarm robotics in cluttered environments, but optimizing trajectories for each robot is still computationally expensive, with a computational complexity ranging from O(k(n_t, ε)·n_t^2) to O(k(n_t, ε)·n_t^3), where n_t is the number of parameters in the parameterized trajectory, ε is the precision, and k(n_t, ε) is the number of iterations with respect to n_t and ε. Furthermore, it is difficult to move the swarm as a group. To address this issue, we define and then construct the optimal virtual tube, which includes infinitely many optimal trajectories. Under certain conditions, any optimal trajectory in the optimal virtual tube can be expressed as a convex combination of a finite number of optimal trajectories, with a computational complexity of O(n_t). Afterward, a hierarchical approach is proposed, including a planning method for the energy-minimizing optimal virtual tube and distributed model predictive control. In simulations and experiments, the proposed approach is validated and its effectiveness over other methods is demonstrated through comparison. [ABSTRACT FROM AUTHOR]
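The O(n_t) convex-combination property is straightforward to illustrate: once a finite set of optimal trajectories is stored, any convex mix of their parameter vectors is produced in time linear in n_t. A minimal sketch with hypothetical parameter vectors (the tube construction itself is not shown):

```python
def convex_combination(trajectories, weights):
    # any convex combination of the stored optimal trajectories is again
    # a trajectory in the tube; the cost is linear in the n_t parameters
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n_t = len(trajectories[0])
    return [sum(w * traj[i] for w, traj in zip(weights, trajectories))
            for i in range(n_t)]

# two hypothetical boundary trajectories of a tube, n_t = 4 parameters each
lower = [0.0, 1.0, 2.0, 3.0]
upper = [1.0, 2.0, 3.0, 4.0]
mid = convex_combination([lower, upper], [0.5, 0.5])  # midline trajectory
```

Each robot in the swarm can thus pick its own weights and obtain a feasible trajectory without re-running the expensive per-robot optimization.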
- Published
- 2024
- Full Text
- View/download PDF
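The abstract's O(n_t) claim can be made concrete: once a finite set of optimal trajectories spanning the tube is available, generating a new optimal trajectory is just a weighted average of their parameter vectors. A minimal sketch under that reading (the function name and array layout are illustrative, not the authors' API):

```python
import numpy as np

def convex_combination_trajectory(trajectories, weights):
    """Blend a finite set of parameterized trajectories.

    trajectories: shape (m, n_t) -- m optimal trajectories, each
    described by n_t parameters. weights: m nonnegative numbers
    summing to 1. Evaluating the blend costs O(n_t) per basis
    trajectory, matching the complexity quoted in the abstract.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9, "weights must be convex"
    return w @ np.asarray(trajectories, dtype=float)

# Midpoint of two boundary trajectories (n_t = 4 parameters each)
mid = convex_combination_trajectory([[0, 1, 2, 3], [0, 2, 4, 6]], [0.5, 0.5])
# mid == [0.0, 1.5, 3.0, 4.5]
```

Any point "inside" the tube is reachable this way without re-solving the optimization, which is where the claimed speed-up over per-robot trajectory optimization comes from.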
40. The complexity of spanning tree problems involving graphical indices.
- Author
-
Dong, Yanni, Broersma, Hajo, Bai, Yuhang, and Zhang, Shenggui
- Subjects
- *
SPANNING trees , *MOLECULAR connectivity index , *COMPUTATIONAL complexity - Abstract
We consider the computational complexity of spanning tree problems involving the graphical function-index. This index was recently introduced by Li and Peng as a unification of a long list of chemical and topological indices. We present a number of unified approaches to determine the NP-completeness and APX-completeness of maximum and minimum spanning tree problems involving this index. We give many examples of well-studied topological indices for which the associated complexity questions are covered by our results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
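For context, the graphical function-index these complexity results concern is a sum over the edges of a function of the endpoint degrees; classical indices arise from particular choices of the function. A small sketch of computing it on an edge list (the representation is assumed for illustration):

```python
from collections import Counter

def graphical_function_index(edges, f):
    """I_f(G) = sum over edges uv of f(d_u, d_v), where d_u is the
    degree of vertex u. Choosing f(x, y) = (x * y) ** -0.5 recovers
    the Randic index; f(x, y) = x + y gives the first Zagreb index.
    """
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(f(deg[u], deg[v]) for u, v in edges)

# Path on three vertices: degrees 1, 2, 1
path = [(0, 1), (1, 2)]
# With f(x, y) = x + y: (1 + 2) + (2 + 1) = 6
```

Computing the index of a fixed tree is easy; the hardness results in the paper concern finding the spanning tree that maximizes or minimizes it.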
41. Attention 3D central difference convolutional dense network for hyperspectral image classification.
- Author
-
Ashraf, Mahmood, Alharthi, Raed, Chen, Lihui, Umer, Muhammad, Alsubai, Shtwai, and Eshmawi, Ala Abdulmajid
- Subjects
- *
CONVOLUTIONAL neural networks , *FREQUENCY tuning , *IMAGE recognition (Computer vision) , *REMOTE sensing , *COMPUTATIONAL complexity - Abstract
Hyperspectral image (HSI) classification is a challenging task due to the large number of spatial-spectral bands with high inter-similarity, extra variability of classes, and complex region relationships, including overlapping and nested regions. Classification becomes a complex problem in remote sensing images such as HSIs. Convolutional Neural Networks (CNNs) have gained popularity in addressing this challenge by focusing on HSI data classification. However, the performance of 2D-CNN methods relies heavily on spatial information, while 3D-CNN methods offer an alternative approach by considering both spectral and spatial information. Nonetheless, the computational complexity of 3D-CNN methods increases significantly due to the large model capacity and high spectral dimensionality. These methods also face difficulties in manipulating information from local intrinsic detailed patterns of feature maps and in low-rank frequency feature tuning. To overcome these challenges and improve HSI classification performance, we propose an innovative approach called the Attention 3D Central Difference Convolutional Dense Network (3D-CDC Attention DenseNet). Our 3D-CDC method leverages the manipulation of local intrinsic detailed patterns in the spatial-spectral feature maps, utilizing pixel-wise concatenation and a spatial attention mechanism within a dense strategy to incorporate low-rank frequency features and guide the feature tuning. Experimental results on benchmark datasets such as Pavia University, Houston 2018, and Indian Pines demonstrate the superiority of our method compared to other HSI classification methods, including state-of-the-art techniques. The proposed method achieved 97.93% overall accuracy on Houston 2018, 99.89% on Pavia University, and 99.38% on Indian Pines with a 25 × 25 window size. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Time difference detection based on sliding window all-phase FFT and Kalman filtering for precise flow measurement.
- Author
-
Jing, Jiaqi, Zheng, Dezhi, and Fan, Shangchun
- Subjects
- *
KALMAN filtering , *FLOW measurement , *FAST Fourier transforms , *COMPUTATIONAL complexity , *SLIDING mode control - Abstract
The Coriolis mass flowmeter (CMF) measures the mass flow rate by detecting a time difference, typically using frequency-domain methods, for which spectral leakage is the primary challenge. To address this issue, a new time difference detection method is proposed utilizing a sliding window and the all-phase fast Fourier transform. The computational complexity is reduced by accounting for changes in the signal frequency. To further improve the stability and response speed of the measured value, a Kalman filtering algorithm based on variance detection is also proposed for post-processing. A transmitter system is developed to validate the proposed methods. The results demonstrate that, for single-phase fluids, the accuracy is better than 0.5‰ and the repeatability is better than 0.2‰, thereby improving the accuracy of the CMF and supporting industrial applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
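The core of time-difference detection in a CMF is converting the phase difference between two pickoff signals into a delay at the drive frequency, Δt = −Δφ / (2πf). A simplified single-frame sketch using a plain FFT (the paper's sliding-window all-phase FFT suppresses the spectral leakage that this naive version does not handle; all names are illustrative):

```python
import numpy as np

def time_difference(sig_a, sig_b, fs):
    """Estimate the delay of sig_b relative to sig_a from the phase
    difference at the dominant FFT bin."""
    spec_a = np.fft.rfft(sig_a)
    spec_b = np.fft.rfft(sig_b)
    k = int(np.argmax(np.abs(spec_a)))           # dominant bin of signal a
    freq = k * fs / len(sig_a)                   # its frequency in Hz
    dphi = np.angle(spec_b[k]) - np.angle(spec_a[k])
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return -dphi / (2 * np.pi * freq)

# Two 50 Hz tones sampled at 1 kHz, the second delayed by 100 microseconds
fs, f, tau = 1000.0, 50.0, 1e-4
t = np.arange(1000) / fs
est = time_difference(np.sin(2 * np.pi * f * t),
                      np.sin(2 * np.pi * f * (t - tau)), fs)
```

This sketch is exact only when the frame contains an integer number of signal cycles; off-bin frequencies leak energy into neighboring bins and bias the phase, which is precisely the problem the all-phase FFT formulation targets.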
43. Sliding limited penetrable visibility graph for establishing complex network from time series.
- Author
-
Wang, Shilin, Li, Peng, Chen, Guangwu, and Bao, Chengqi
- Subjects
- *
COMPUTATIONAL complexity , *CAPACITORS , *TIME series analysis - Abstract
This study proposes a novel network modeling approach, called sliding window limited penetrable visibility graph (SLPVG), for transforming time series into networks. SLPVG takes into account the dynamic nature of time series, which is often affected by noise disturbances, and the fact that most nodes are not directly connected to distant nodes. By analyzing the degree distribution of different types of time series, SLPVG accurately captures the dynamic characteristics of time series with low computational complexity. In this study, the authors apply SLPVG for the first time to diagnose compensation capacitor faults in jointless track circuits. By combining the fault characteristics of compensation capacitors with network topological indicators, the authors find that the betweenness centrality reflects the fault status of the compensation capacitors clearly and accurately. Experimental results demonstrate that the proposed model achieves a high accuracy rate of 99.1% in identifying compensation capacitor faults. The SLPVG model provides a simple and efficient tool for studying the dynamics of long time series and offers a new perspective for diagnosing compensation capacitor faults in jointless track circuits. It holds practical significance in advancing related research fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
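The limited penetrable visibility criterion underlying SLPVG links two samples when at most a fixed number of intermediate samples blocks the straight line of sight between them; the sliding window restricts candidate pairs to nearby samples, which is what keeps the computational complexity low. A minimal sketch of that construction (parameter names are illustrative, and the paper's exact windowing may differ):

```python
def slpvg_edges(series, window=5, penetrable=1):
    """Edges of a sliding-window limited penetrable visibility graph.

    Samples i < j (with j - i <= window) are linked if at most
    `penetrable` intermediate samples lie on or above the chord
    joining (i, series[i]) and (j, series[j]).
    """
    edges = []
    n = len(series)
    for i in range(n):
        for j in range(i + 1, min(i + window + 1, n)):
            blocked = 0
            for k in range(i + 1, j):
                # sample k blocks visibility if it reaches the chord i-j
                chord = series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                if series[k] >= chord:
                    blocked += 1
            if blocked <= penetrable:
                edges.append((i, j))
    return edges

# Adjacent samples always see each other; the spike at index 1 blocks
# the chord from sample 0 to sample 2 unless one penetration is allowed
strict = slpvg_edges([1, 5, 2], window=2, penetrable=0)
loose = slpvg_edges([1, 5, 2], window=2, penetrable=1)
```

Allowing a small penetrable distance is what makes the graph robust to noise spikes, and network measures such as betweenness centrality are then computed on the resulting edge set.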
44. Reduced complexity for sound zones with subband block adaptive filters and a loudspeaker line array.
- Author
-
Møller, Martin B., Martinez, Jorge, and Østergaard, Jan
- Subjects
- *
LOUDSPEAKERS , *ADAPTIVE filters , *LOCKER rooms , *TRANSFER functions , *COMPUTATIONAL complexity - Abstract
Sound zones are used to reproduce individual audio content to multiple people in a room using a set of loudspeakers with controllable input signals. To allow the reproduction of individual audio to change dynamically, e.g., due to moving listeners, changes in the number of listeners, or changing room transfer functions, an adaptive formulation is proposed. This formulation is based on frequency-domain block adaptive filters and given room transfer functions. To reduce computational complexity, the system is extended to subband processing without cross-adaptive filters. The computational savings come from recognizing that sound zones consist of part-solutions that are inherently band-limited; hence, several subbands can be ignored. To validate the theoretical findings, a 27-channel loudspeaker array was constructed, and measurements were performed in anechoic and reflective environments. The results show that the subband solution performs identically to a full-rate solution but at a reduced computational complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. New improved model for joint segmentation and registration of multi-modality images: application to medical images.
- Author
-
Badshah, Noor, Begum, Nasra, Rada, Lavdie, Ashfaq, Muniba, and Atta, Hadia
- Subjects
- *
IMAGE registration , *DIAGNOSTIC imaging , *COMPUTATIONAL complexity , *IMAGE processing , *IMAGE segmentation , *CURVATURE - Abstract
Joint segmentation and registration of images is a focused area of research nowadays. Jointly segmenting and registering noisy images, or images with weak boundaries or intensity inhomogeneity, is a challenging task. In medical image processing, joint segmentation and registration are essential methods that aid in distinguishing structures and aligning images for precise diagnosis and therapy. However, these methods encounter challenges, such as computational complexity and sensitivity to variations in image quality, which may reduce their effectiveness in real-world applications. Another major issue is attaining effective joint segmentation and registration in the presence of artifacts or anatomical deformations. In this paper, a new nonparametric joint model is proposed for the segmentation and registration of multi-modality images with weak boundaries or noise. For segmentation, the model utilizes a local binary fitting data term, and for registration it utilizes conditional mutual information. For regularization of the model, we use linear curvature. The proposed model is more efficient at segmenting and registering multi-modality images having intensity inhomogeneity, noise, and/or weak boundaries. The proposed model is also tested on images obtained from the freely available CHAOS dataset, and its results are compared with those of other existing models using statistical measures such as the Jaccard similarity index, relative reduction, Dice similarity coefficient, and Hausdorff distance. The proposed model outperforms the other existing models both quantitatively and qualitatively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. UViT: Efficient and lightweight U-shaped hybrid vision transformer for human pose estimation.
- Author
-
Li, Biao, Tang, Shoufeng, and Li, Wenyi
- Subjects
- *
TRANSFORMER models , *POSE estimation (Computer vision) , *MULTISCALE modeling , *FEATURE extraction , *STRUCTURAL design , *COMPUTATIONAL complexity - Abstract
Pose estimation plays a crucial role in human-centered vision applications and has advanced significantly in recent years. However, prevailing approaches use extremely complex structural designs to obtain high scores on benchmark datasets, hampering edge-device applications. In this study, the problem of efficient and lightweight human pose estimation is investigated. Enhancements are made to the context enhancement module of the U-shaped structure to improve the multi-scale local modeling capability. Within a transformer structure, a lightweight transformer block was designed to enhance local feature extraction and global modeling ability. Finally, a lightweight pose estimation network, the U-shaped Hybrid Vision Transformer (UViT), was developed. The minimal network, UViT-T, achieved a 3.9% improvement in AP score on the COCO validation set with fewer model parameters and lower computational complexity compared with the best-performing V2 version of the MobileNet series. Specifically, with an input size of 384×288, UViT-T achieves an impressive AP score of 70.2 on the COCO test-dev set, with only 1.52 M parameters and 2.32 GFLOPs. The inference speed is approximately twice that of general-purpose networks. This study provides an efficient and lightweight design idea and method for the human pose estimation task and provides theoretical support for its deployment on edge devices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Earlier laryngeal cancer detection using hybrid M-RCNN technique.
- Author
-
Sharmila Joseph, J. and Vidyarthi, Abhay
- Subjects
- *
EARLY detection of cancer , *LARYNGEAL cancer , *DATA augmentation , *SQUAMOUS cell carcinoma , *COMPUTATIONAL complexity - Abstract
Laryngeal cancer is one of the most common types of cancer and has a high mortality rate. The primary malignant tumor responsible for this disease is squamous cell carcinoma (SCC). Early diagnosis is very important to avoid morbidity and mortality. Various tools and techniques are used to detect and monitor laryngeal cancers. Unfortunately, these tools and techniques have various limitations; for example, existing Mask R-CNN approaches for identifying laryngeal cancer suffer from performance limitations, including the inability to accurately identify the disease in its early stages, high computational complexity, the time-consuming process of screening patients across diverse image datasets, and poor scalability to large datasets. In this paper, we present a hybrid deep-learning model that can be used to analyze and monitor the different symptoms of laryngeal cancers. The proposed model takes a laryngeal cancer dataset as input; preprocessing is done using a median filter, data augmentation is then applied to increase data diversity, feature extraction is performed using LBP-KNN, and finally cancer identification/classification is done using Mask R-CNN. The proposed model attains an accuracy of 99.3%, precision of 97.99%, recall of 98.09%, and F-measure of 97.01%. This method could be useful in providing clinical support to radiologists and doctors. The proposed model can detect minor malignancies in patients quickly and accurately. It can also help improve the efficiency of the clinical process by allowing clinicians to screen more patients. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. YOLO-RDP: Lightweight Steel Defect Detection through Improved YOLOv7-Tiny and Model Pruning.
- Author
-
Zhang, Guiheng, Liu, Shuxian, Nie, Shuaiqi, and Yun, Libo
- Subjects
- *
LIGHTWEIGHT steel , *FEATURE extraction , *STEEL manufacture , *SURFACE defects , *COMPUTATIONAL complexity - Abstract
During steel manufacturing, surface defects such as scratches, scale, and oxidation can compromise product quality and safety. Detecting these defects accurately is critical for production efficiency and product integrity. However, current target detection algorithms are often too resource-intensive for deployment on edge devices with limited computing resources. To address this challenge, we propose YOLO-RDP, an enhanced YOLOv7-tiny model. YOLO-RDP integrates RexNet, a lightweight network, for feature extraction, and employs GSConv and VOV-GSCSP modules to enhance the network's neck layer, reducing parameter count and computational complexity. Additionally, we designed a dual-headed object detection head called DdyHead with a symmetric structure, composed of two complementary object detection heads, greatly enhancing the model's ability to recognize minor defects. Further model optimization through pruning achieves additional lightweighting. Experimental results demonstrate the superiority of our model, with improvements in mAP values of 3.7% and 3.5% on the NEU-DET and GC10-DET datasets, respectively, alongside reductions in parameter count and computation by 40% and 30%, and 25% and 24%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Research on Genetic Algorithm Optimization with Fusion Tabu Search Strategy and Its Application in Solving Three-Dimensional Packing Problems.
- Author
-
Kang, Zhenjia, Guan, Yong, Wang, Jiake, and Chen, Pengzhan
- Subjects
- *
TABU search algorithm , *GENETIC algorithms , *HEURISTIC algorithms , *COMBINATORIAL optimization , *NP-hard problems , *MATHEMATICAL models , *COMPUTATIONAL complexity - Abstract
Symmetry is an important principle and characteristic that is prevalent in nature and artificial environments. In the three-dimensional packing problem, leveraging the inherent symmetry of goods and of the packing space can enhance packing efficiency and utilization. The three-dimensional packing problem is an NP-hard combinatorial optimization problem in the field of modern logistics, with high computational complexity. This paper proposes an improved genetic algorithm that incorporates a fused tabu search strategy to address this problem. The algorithm employs a three-dimensional loading mathematical model and utilizes a wall-building method under residual space constraints for stacking goods. Furthermore, an adaptive fitness variation strategy, chromosome adjustment, and a tabu search algorithm are introduced to balance the algorithm's global and local search capabilities, as well as to enhance population diversity and convergence speed. Through testing on benchmark cases such as those of Bischoff and Ratcliff, the improved algorithm demonstrates an average increase of over 3% in packing space utilization compared to traditional genetic algorithms and other heuristic algorithms, validating its feasibility and effectiveness. The proposed improved genetic algorithm provides new insights for solving three-dimensional packing problems and optimizing logistics loading schedules, offering promising prospects for various applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
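The general shape of fusing a tabu list into a genetic loop can be illustrated on a toy maximization problem. The paper's packing-specific components (wall-building placement, the adaptive fitness variation strategy, chromosome adjustment) are not reproduced, so this is only a structural sketch with illustrative names:

```python
import random

def ga_with_tabu(fitness, init_pop, mutate, iters=100, tabu_len=20):
    """Steady-state genetic loop with a tabu list: recently generated
    solutions are tabu, steering the search away from revisits."""
    pop = list(init_pop)
    tabu = []
    best = max(pop, key=fitness)
    for _ in range(iters):
        parent = max(random.sample(pop, 2), key=fitness)  # tournament select
        child = mutate(parent)
        if child in tabu:                                 # skip tabu moves
            continue
        tabu.append(child)
        if len(tabu) > tabu_len:                          # bounded memory
            tabu.pop(0)
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        pop[worst] = child                                # replace the worst
        if fitness(child) > fitness(best):
            best = child
    return best

# Toy problem: maximize -(x - 3)^2 over integers by unit mutations
random.seed(1)
best = ga_with_tabu(lambda x: -(x - 3) ** 2, [0, 10],
                    lambda x: x + random.choice((-1, 1)), iters=200)
```

The tabu list trades a little memory for diversity: moves the population has just tried are blocked for `tabu_len` steps, which is the mechanism the paper uses to escape the local optima that plague plain genetic search.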
50. Direction of Arrival Estimation of Coherent Sources via a Signal Space Deep Convolution Network.
- Author
-
Zhao, Jun, Gui, Renzhou, Dong, Xudong, and Zhao, Yufei
- Subjects
- *
DIRECTION of arrival estimation , *DEEP learning , *COVARIANCE matrices , *COMPUTATIONAL complexity - Abstract
In the field of direction of arrival (DOA) estimation for coherent sources, subspace-based model-driven methods exhibit increased computational complexity due to the requirement for eigenvalue decomposition. In this paper, we propose a new neural network, i.e., the signal space deep convolution (SSDC) network, which employs the signal space covariance matrix as the input and performs independent two-dimensional convolution operations on the symmetric real and imaginary parts of the input signal space covariance matrix. The proposed SSDC network is designed to address the challenging task of DOA estimation for coherent sources. Furthermore, we leverage the spatial sparsity of the output from the proposed SSDC network to conduct a spectral peak search for obtaining the associated DOAs. Simulations demonstrate that, compared to existing state-of-the-art deep learning-based DOA estimation methods for coherent sources, the proposed SSDC network achieves excellent results in both matching and mismatching scenarios between the training and test sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
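The input construction described in the abstract, a covariance matrix split into real and imaginary parts for independent 2-D convolutions, can be sketched with a sample covariance (the helper name is illustrative, and any signal-space projection the authors apply before this step is not reproduced here):

```python
import numpy as np

def covariance_channels(snapshots):
    """Build a two-channel network input from array snapshots:
    the sample covariance matrix of the received signal, split
    into its (symmetric) real and (antisymmetric) imaginary parts.
    """
    X = np.asarray(snapshots)            # shape (sensors, snapshots)
    R = X @ X.conj().T / X.shape[1]      # sample covariance, Hermitian
    return np.stack([R.real, R.imag])    # shape (2, sensors, sensors)

# Example: 4-sensor array, 64 snapshots of complex noise
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 64)) + 1j * rng.normal(size=(4, 64))
ch = covariance_channels(X)              # shape (2, 4, 4)
```

Because the covariance matrix is Hermitian, the two channels carry complementary symmetric/antisymmetric structure, which is what makes independent convolutions over each part a natural design.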