42,326 results
Search Results
2. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
- Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
- DEEP learning, ELECTROCARDIOGRAPHY, ELECTRONIC paper, ATRIAL fibrillation, ALGORITHMS, HEART failure, HEART rate monitors
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs are currently only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation ranged from 90% to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70%, and the average correlation of 3-by-1 ECGs reached 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, it does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs so that they can be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
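The validation metric in the abstract above is per-lead correlation between the digitised signal and the ground-truth ECG. A minimal sketch of such a metric is shown below; this is an illustration of the measure, not the authors' code, and the function names are hypothetical.

```python
import numpy as np

def lead_correlation(digitised, ground_truth):
    """Pearson correlation between one digitised lead and its ground truth."""
    return float(np.corrcoef(np.asarray(digitised, dtype=float),
                             np.asarray(ground_truth, dtype=float))[0, 1])

def mean_correlation(digitised_leads, ground_truth_leads):
    """Average the per-lead correlations, e.g. over the 12 leads of a 3-by-4 ECG."""
    return float(np.mean([lead_correlation(d, g)
                          for d, g in zip(digitised_leads, ground_truth_leads)]))
```

A perfectly digitised lead scores 1.0, and Pearson correlation is invariant to the gain and offset introduced by printing and scanning, which is one reason it suits this kind of comparison.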
3. Tools and algorithms for the construction and analysis of systems: a special issue on tool papers for TACAS 2021.
- Author
- Jensen, Peter Gjøl and Neele, Thomas
- Subjects
- ALGORITHMS, SOFTWARE verification, INTEGRATED circuit verification, SYSTEMS software, CONFERENCES & conventions
- Abstract
This special issue contains six revised and extended versions of tool papers that appeared in the proceedings of TACAS 2021, the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems. The issue is dedicated to the realization of algorithms in tools and to studies of the application of these tools to the analysis of hardware and software systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Revealing trajectories of the mind via non-linear manifolds of brain activity.
- Subjects
- Brain diagnostic imaging, Algorithms, Nervous System Physiological Phenomena
- Published
- 2023
- Full Text
- View/download PDF
5. Single-cell-specific drug activities are revealed by a tensor imputation algorithm.
- Subjects
- Oligonucleotide Array Sequence Analysis, Algorithms
- Published
- 2022
- Full Text
- View/download PDF
6. Selected Papers of the 32nd International Workshop on Combinatorial Algorithms, IWOCA 2021.
- Author
- Flocchini, Paola and Moura, Lucia
- Subjects
- EULERIAN graphs, ALGORITHMS, APPROXIMATION algorithms, WEB hosting
- Abstract
They give fixed-parameter tractable algorithms for the problem parameterized by various structural parameters. The authors give a greedy loop-free algorithm for the exhaustive generation, a successor algorithm that runs in constant amortized time, among other algorithms, as well as results for the fixed-spin generalization of this problem. IWOCA (International Workshop on Combinatorial Algorithms) is an annual conference series covering all aspects of combinatorial algorithms. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
7. Machine learning reveals how complex molecules bind to catalyst surfaces.
- Subjects
- Machine Learning, Algorithms
- Published
- 2022
- Full Text
- View/download PDF
8. Selected Papers of the 31st International Workshop on Combinatorial Algorithms, IWOCA 2020.
- Author
- Gąsieniec, Leszek, Klasing, Ralf, and Radzik, Tomasz
- Subjects
- MATHEMATICAL proofs, ALGORITHMS, CHARTS, diagrams, etc., POLYNOMIAL time algorithms, ONLINE algorithms, GRAPH labelings, HAMMING distance
- Published
- 2022
- Full Text
- View/download PDF
9. Edge computing-enabled green multisource fusion indoor positioning algorithm based on adaptive particle filter.
- Author
- Li, Mengyao, Zhu, Rongbo, Ding, Qianao, Wang, Jun, Wan, Shaohua, and Ma, Maode
- Subjects
- ADAPTIVE filters, PROBABILITY density function, ALGORITHMS, EDGE computing, FILTER paper, PARTICLE swarm optimization
- Abstract
Edge computing enables portable devices to provide smart applications, and indoor positioning techniques offer accurate location-based indoor navigation and personalized smart services. To achieve high positioning accuracy, an indoor positioning algorithm based on a particle filter requires a large number of sample particles to approximate the probability density function, which leads to additional computational cost and high fusion delay. Focusing on real-time and accurate positioning, this paper proposes an edge computing-enabled green multi-source fusion indoor positioning algorithm, called APFP, based on an adaptive particle filter. APFP considers both pedestrian dead reckoning (PDR) signals from mobile terminals and the received signal strength indication (RSSI) of Bluetooth, and effectively merges the error-free accumulation of trilateral positioning with the accurate short-range positioning of PDR, which enables mobile terminals to adaptively perform particle filtering, reducing computing time and power consumption while ensuring positioning accuracy. Detailed experimental results show that, compared with the traditional particle filter algorithm and the map-constrained algorithm, the proposed APFP reduces fusion computing cost by 59.89% and 54.37%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
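The predict-update-resample cycle that APFP adapts can be sketched as follows. The motion and measurement models here (Gaussian process noise, a single RSSI position fix with a Gaussian likelihood) are simplified assumptions for illustration, not the paper's exact formulation; the effective-sample-size resampling trigger is a standard choice, not necessarily the one APFP uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion, rssi_pos, sigma=1.0):
    """One predict-update-resample cycle of a basic particle filter.

    particles: (N, 2) candidate positions; motion: PDR displacement estimate;
    rssi_pos: position fix from Bluetooth RSSI trilateration (assumed given).
    """
    # Predict: apply the PDR motion estimate plus process noise.
    particles = particles + motion + rng.normal(0.0, 0.1, particles.shape)
    # Update: reweight by closeness to the RSSI fix (Gaussian likelihood).
    d2 = np.sum((particles - rssi_pos) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * sigma ** 2))
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses below N/2.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```

The position estimate at each step is the weighted mean of the particles; APFP's "adaptive" contribution is in varying the particle budget, which this fixed-size sketch does not model.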
10. Paper currency defect detection algorithm using quaternion uniform strength.
- Author
- Gai, Shan, Xu, Xiaolin, and Xiong, Bangshu
- Subjects
- ALGORITHMS, QUATERNIONS, MONEY, IMAGE registration, MATHEMATICAL convolutions
- Abstract
In this paper, we propose a novel paper currency defect detection algorithm using quaternion uniform strength. We first build a paper currency image preprocessing framework that integrates intensity balancing, paper currency location, and geometric correction. We then propose a global–local paper currency image registration algorithm that moves key areas within a certain range, which effectively eliminates false differences. Finally, the quaternion uniform strength is calculated using a quaternion convolution edge detector, and the defect degree of the paper currency is determined using the quaternion uniform color difference. The proposed algorithm is tested on datasets of currencies from five countries: CNY, USD, EUR, VND, and RUB. The experimental results demonstrate that the proposed algorithm yields better results than existing state-of-the-art paper currency defect detection techniques. A demo of the proposed paper currency defect detection algorithm will be made publicly available. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
11. Paper-based 3D printing of anthropomorphic CT phantoms: Feasibility of two construction techniques.
- Author
- Jahnke, Paul, Schwarz, Stephan, Ziegert, Marco, Schwarz, Felix Benjamin, Hamm, Bernd, and Scheel, Michael
- Subjects
- ALGORITHMS, COMPUTED tomography, DIAGNOSTIC imaging, HEAD, COMPUTERS in medicine, IMAGING phantoms, RESEARCH funding, PILOT projects, THREE-dimensional printing, MEDICAL artifacts
- Abstract
Objectives: To develop and evaluate methods for assembling radiopaque printed paper sheets into realistic patient phantoms for CT dose and image quality testing. Methods: CT images of two patients were radiopaque-printed with aqueous potassium iodide solution (0.6 g/ml) on paper. Two methods were developed for assembling the paper sheets into head and neck phantoms. (1) Printed sheets were fed to a paper-based 3D printer along with corresponding 3D-printable STL files. (2) Paper stacks of 5-mm thickness were glued with toner, cut to the patient shape, and assembled into a phantom. In a sample application study, both phantoms were examined with five different tube current settings. Images were reconstructed using filtered-back projection (FBP) and iterative reconstruction (AIDR 3D) with three strength levels. Dose length product (DLP), signal-to-noise ratios (SNR) and contrast-to-noise ratios (CNR) were analysed. Data were analysed using 2-way analysis of variance (ANOVA). Results: Both methods achieved anthropomorphic phantoms with detailed patient anatomy. The 3D printer yielded a precise reproduction of the external patient shape, but caused visible glue artefacts. Gluing with toner avoided these artefacts and yielded more flexibility with regard to phantom size. In the sample application study, non-inferior SNR and CNR and up to 83.7% lower DLP were achieved on the phantoms with AIDR 3D compared with FBP. Conclusions: Two methods for assembling radiopaque printed paper sheets into phantoms of individual patients are presented. The sample application demonstrates potential for simulation of patient imaging and systematic CT dose and image quality assessment. Key Points: • Two methods were developed to create realistic CT phantoms of individual patients from radiopaque printed paper sheets.
• Analysis of five tube current and four reconstruction settings on two radiopaque 3D-printed patient phantoms yielded non-inferior SNR and CNR and up to 83.7% lower dose with iterative reconstruction in comparison with filtered-back projection. • Radiopaque 3D-printed phantoms can simulate patients and allow systematic analysis of CT dose and image quality parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
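The SNR and CNR figures in the abstract above are standard region-of-interest statistics. One common definition is sketched below (ROI mean over ROI standard deviation for SNR; mean difference over background standard deviation for CNR). Definitions vary between studies, so treat this as an assumption rather than the authors' exact formulas.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean over noise."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between an ROI and a background region."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()
```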
12. COAP 2019 Best Paper Prize: Paper of S. Gratton, C. W. Royer, L. N. Vicente, and Z. Zhang.
- Subjects
- APPLIED mathematics, ALGORITHMS, SCIENTIFIC computing, PRIZES (Contests & competitions), ALGORITHMIC randomness
- Abstract
Each year, the editorial board of Computational Optimization and Applications selects a paper from the preceding year's publications for the Best Paper Award. This derivative-free algorithm relies on randomly generated directions and is analyzed from a probabilistic viewpoint, leading to complexity guarantees for both deterministic and probabilistic versions of the method. First, following recent developments in nonconvex optimization, complexity results have become increasingly popular in derivative-free optimization [[8]]. [Extracted from the article]
- Published
- 2020
- Full Text
- View/download PDF
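The prize-winning paper above analyzes a derivative-free method that relies on randomly generated directions. The sketch below conveys the general flavor of such a scheme, using a sufficient-decrease acceptance test and expanding/shrinking step sizes; it is a generic directional direct-search illustration under those assumptions, not the authors' algorithm or its complexity analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_direct_search(f, x0, alpha=1.0, c=1e-4, iters=300):
    """Directional direct search with one random direction per iteration.

    A step is accepted only under a sufficient-decrease condition
    f(x + s) < f(x) - c * alpha**2; the step size expands on success
    and shrinks on failure.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = rng.normal(size=x.shape)
        d = d / np.linalg.norm(d)          # random unit direction
        for step in (alpha * d, -alpha * d):  # try the direction and its opposite
            if f(x + step) < f(x) - c * alpha ** 2:
                x = x + step
                alpha *= 2.0
                break
        else:
            alpha *= 0.5                   # no acceptable step: shrink
    return x
```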
13. On calibration of orthotropic elastic-plastic constitutive models for paper foils by biaxial tests and inverse analyses.
- Author
- Garbowski, Tomasz, Maier, Giulio, and Novati, Giorgio
- Subjects
- ARTIFICIAL neural networks, PAPER products, PAPER product manufacturing, CARDBOARD, DIGITAL image processing, MATHEMATICAL programming, ALGORITHMS
- Abstract
In this paper two procedures are developed for the identification of the parameters contained in an orthotropic elastic-plastic-hardening model for free standing foils, particularly of paper and paperboard. The experimental data considered are provided by cruciform tests and digital image correlation. A simplified version of the constitutive model proposed by Xia et al. (Int J Solids Struct 39:4053-4071, ) is adopted. The inverse analysis is comparatively performed by the following alternative computational methodologies: (a) mathematical programming by a trust-region algorithm; (b) proper orthogonal decomposition and artificial neural network. The second procedure rests on preparatory once-for-all computations and turns out to be applicable economically and routinely in industrial environments. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
14. A software resource for large graph processing and analysis.
- Subjects
- Software, Algorithms
- Published
- 2023
- Full Text
- View/download PDF
15. An FPGA Implementation of the Log-MAP Algorithm for a Dirty Paper Coding CODEC.
- Author
- Lopes, Paulo A. C. and Gerald, José A. B.
- Subjects
- BIT rate, VIDEO coding, ALGORITHMS, GATE array circuits, CODECS, DECODING algorithms
- Abstract
This work describes the log-MAP (BCJR) algorithm implementation of a close to capacity dirty paper coding CODEC. The CODEC consists of eight deep pipeline processors. It decodes blocks of 975 bits in 26.9 ms using less than 9.7% of low-cost FPGA (and no DSP blocks). Two pipelines, for alpha and beta, calculate the values of gamma (of the BCJR) to reduce the storage requirements. The final log-likelihood ratio (LLR) is calculated together with alpha, reusing intermediate results. The number of bits used by the different signals of the processor is easily configurable. It was set to six bits to the channel measure signals and eight bits to log of probability signals like alpha, beta, and others. The CODEC clock was 100 MHz. The achieved bit rate is 36.2 Kbps per CODEC, but multiple CODECs can be fit into a single chip. The CODEC is 3.49 dB from the channel capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
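The log-MAP (BCJR) recursions for alpha, beta, and the LLRs in the abstract above are all built from the Jacobian logarithm, usually written max*. A one-function sketch:

```python
import math

def max_star(a, b):
    """Jacobian logarithm at the heart of the log-MAP (BCJR) algorithm:

        max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))

    The correction term ln(1 + e^(-|a-b|)) is what distinguishes log-MAP
    from max-log-MAP, which drops it to save hardware at some accuracy cost.
    """
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))
```

Because max* is associative, folding it over a list of branch metrics computes an exact log-sum-exp, which is how the alpha/beta recursions accumulate path probabilities in the log domain without overflow.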
16. Evaluation of fog application placement algorithms: a survey.
- Author
- Smolka, Sven and Mann, Zoltán Ádám
- Subjects
- ALGORITHMS, EDGE computing, ENERGY consumption, FOG, SERVER farms (Computer network management)
- Abstract
Recently, the concept of cloud computing has been extended towards the network edge. Devices near the network edge, called fog nodes, offer computing capabilities with low latency to nearby end devices. In the resulting fog computing paradigm (also called edge computing), application components can be deployed to a distributed infrastructure, comprising both cloud data centers and fog nodes. The decision of which infrastructure nodes should host which application components has a large impact on important system parameters like performance and energy consumption. Several algorithms have been proposed to find a good placement of applications on a fog infrastructure. In most cases, the proposed algorithms were evaluated experimentally by the respective authors. In the absence of a theoretical analysis, a thorough and systematic empirical evaluation is of key importance for making sound conclusions about the suitability of the algorithms. The aim of this paper is to survey how application placement algorithms for fog computing are evaluated in the literature. In particular, we identify good and bad practices that should be utilized or avoided, respectively, when evaluating such algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Brief communication: Three errors and two problems in a recent paper: gazeNet: End-to-end eye-movement event detection with deep neural networks (Zemblys, Niehorster, and Holmqvist, 2019).
- Author
- Friedman, Lee
- Subjects
- AUTOMATIC classification, ALGORITHMS, FILES (Records), BEHAVIORAL research
- Abstract
Zemblys et al. (Behavior Research Methods, 51(2), 840–864, 2019) reported on a method for the classification of eye movements ("gazeNet"). I have found three errors and two problems with that paper that are explained herein. Error 1: The gazeNet classification method was built assuming that a hand-scored dataset from Lund University was all collected at 500 Hz, but in fact, six of the 34 recording files were actually collected at 200 Hz. Of the six datasets that were used as the training set for the gazeNet algorithm, two were actually collected at 200 Hz. Problem 1 has to do with the fact that even among the 500 Hz data, the inter-timestamp intervals varied widely. Problem 2 is that there are many unusual discontinuities in the saccade trajectories from the Lund University dataset that make it a very poor choice for the construction of an automatic classification method. Error 2: The gazeNet algorithm was trained on the Lund dataset, and then compared to other methods, not trained on this dataset, in terms of performance on this dataset. This is an inherently unfair comparison, and yet nowhere in the gazeNet paper is this unfairness mentioned. Error 3 arises out of the novel event-related agreement analysis employed by the gazeNet authors. Although the authors intended to classify unmatched events as either false positives or false negatives, many are actually being classified as true negatives. True negatives are not errors, and any unmatched event misclassified as a true negative is actually driving kappa higher, whereas unmatched events should be driving kappa lower. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
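Error 3 in the abstract above turns on how unmatched events enter Cohen's kappa. The standard kappa computation makes the point concrete: with hits, misses, and false alarms held fixed, adding true negatives raises kappa, which is why misclassifying unmatched events as true negatives inflates apparent agreement. (The event-matching scheme is the paper's own analysis; this is only the textbook kappa formula.)

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                           # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)    # chance agreement on "event"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)     # chance agreement on "non-event"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)
```

With tp=40, fp=10, fn=10, kappa is 0.6 at tn=40 but climbs above 0.73 at tn=140, even though not a single event was matched any better.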
18. Special Issue on papers from the 2019 Workshop on Models and Algorithms for Planning and Scheduling Problems.
- Author
- Khuller, Samir
- Subjects
- SCHEDULING, ALGORITHMS, ONLINE algorithms
- Abstract
The paper "Well-behaved Online Load Balancing Against Strategic Jobs" by Li, Li and Wu considers a truthful online load-balancing problem with the objective of makespan minimization on related machines. The 2019 Workshop on Models and Algorithms for Planning and Scheduling Problems was held in Renesse (The Netherlands). [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
19. COAP 2019 Best Paper Prize: Paper of Andreas Tillmann.
- Subjects
- MATROIDS, RESTRICTED isometry property, ALGORITHMS, MATHEMATICAL optimization, HAMMING distance, COMPRESSED sensing, CUTTING stock problem, OPERATIONS research
- Abstract
Thus, (3) treats the constraints HT ht and HT ht implicitly, and works solely with the auxiliary support variables I y i . [Extracted from the article]
- Published
- 2020
- Full Text
- View/download PDF
20. Research-paper recommender systems: a literature survey.
- Author
- Beel, Joeran, Gipp, Bela, Langer, Stefan, and Breitinger, Corinna
- Subjects
- RESEARCH management, LITERATURE reviews, DESCRIPTIVE statistics, ALGORITHMS, RUN time systems (Computer science)
- Abstract
In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion about the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55 %). Collaborative filtering was applied by only 18 % of the reviewed approaches, and graph-based recommendations by 16 %. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering. Sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations. They were based on strongly pruned datasets, few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches. Consequently, researchers use different implementations of the same recommendations approaches, which might lead to variations in the results. 
(C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81 %) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10 % of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73 % of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
21. Single-sequence protein structure prediction using supervised transformer protein language models.
- Author
- Wang W, Peng Z, and Yang J
- Subjects
- Humans, Distillation, Electric Power Supplies, Language, Algorithms, Benchmarking
- Abstract
Significant progress has been made in protein structure prediction in recent years. However, it remains challenging for AlphaFold2 and other deep learning-based methods to predict protein structure with single-sequence input. Here we introduce trRosettaX-Single, an automated algorithm for single-sequence protein structure prediction. It incorporates the sequence embedding from a supervised transformer protein language model into a multi-scale network enhanced by knowledge distillation to predict inter-residue two-dimensional geometry, which is then used to reconstruct three-dimensional structures via energy minimization. Benchmark tests show that trRosettaX-Single outperforms AlphaFold2 and RoseTTAFold on orphan proteins and works well on human-designed proteins (with an average template modeling score (TM-score) of 0.79). An experimental test shows that the full trRosettaX-Single pipeline is two times faster than AlphaFold2, using much fewer computing resources (<10%). On 2,000 designed proteins from network hallucination, trRosettaX-Single generates structure models with high confidence. As a demonstration, trRosettaX-Single is applied to missense mutation analysis. These data suggest that trRosettaX-Single may find potential applications in protein design and related studies., (© 2022. The Author(s), under exclusive licence to Springer Nature America, Inc.)
- Published
- 2022
- Full Text
- View/download PDF
22. Infrared image enhancement algorithm based on detail enhancement guided image filtering.
- Author
- Tan, Ailing, Liao, Hongping, Zhang, Bozhi, Gao, Meijing, Li, Shiyu, Bai, Yang, and Liu, Zehao
- Subjects
- IMAGE intensifiers, INFRARED imaging, COST functions, ENTROPY (Information theory), ALGORITHMS, ENTROPY, SIGNAL-to-noise ratio, QUANTUM noise, QUANTUM entropy
- Abstract
Because of the unique imaging mechanism of infrared (IR) sensors, IR images commonly suffer from blurred edge details, low contrast, and poor signal-to-noise ratio. A new method is proposed in this paper to enhance IR image details while effectively suppressing image noise and improving image contrast. First, because the traditional guided image filter (GIF) is prone to halo artifacts when applied to IR image enhancement, this paper proposes a detail enhancement guided filter (DGIF), which mainly adds the constructed edge-perception and detail-regulation factors to the cost function of the GIF. Then, according to the visual characteristics of human eyes, this paper applies the detail-regulation factor to the detail-layer enhancement, which avoids the noise amplification caused by enhancement with a fixed gain coefficient. Finally, the enhanced detail layer is directly fused with the base layer so that the enhanced image has rich detail information. We first compare the DGIF with four guided image filters, and then compare the algorithm of this paper with three traditional IR image enhancement algorithms and two IR image enhancement algorithms based on the GIF on 20 IR images. The experimental results show that the DGIF has better edge-preserving and smoothing characteristics than the four guided image filters. The mean values of quantitative evaluation of information entropy, average gradient, edge intensity, figure definition, and root-mean-square contrast of the enhanced images achieved improvements of about 0.23%, 3.4%, 4.3%, 2.1%, and 0.17%, respectively, over the optimal parameter. This shows that the algorithm in this paper can effectively suppress the image noise in the detail layer while enhancing the detail information, improving the image contrast, and achieving a better visual effect. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
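The pipeline described above splits the image into a base layer (via guided filtering), amplifies the detail layer, and fuses the two back together. A stripped-down sketch of that base/detail scheme follows, using a plain box filter as a stand-in for the paper's DGIF and a fixed gain in place of its adaptive detail-regulation factor; both substitutions are assumptions for illustration.

```python
import numpy as np

def box_blur(img, r=2):
    """Box filter via edge padding; stands in for the guided filter here."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy: r + dy + img.shape[0],
                       r + dx: r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def enhance(img, gain=2.0):
    """Base/detail decomposition with detail-layer amplification."""
    img = np.asarray(img, dtype=float)
    base = box_blur(img)          # smooth base layer
    detail = img - base           # high-frequency detail layer
    return base + gain * detail   # fuse boosted detail back onto the base
```

The paper's contribution is precisely what this sketch omits: an edge-aware base layer that avoids halos, and a spatially varying gain that does not amplify noise along with detail.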
23. Intelligent algorithms and complex system for a smart parking for vaccine delivery center of COVID-19.
- Author
- Jemmali, Mahdi
- Subjects
- COVID-19, INTELLIGENT buildings, ALGORITHMS, HERD immunity, SMART cities, NP-hard problems, ELECTRONIC paper
- Abstract
Achieving community immunity against the coronavirus disease 2019 (COVID-19) depends on vaccinating the largest number of people within a specific period while taking all precautionary measures. To address this problem, this paper presents a smart parking system that will help a health crisis management committee vaccinate the largest number of people within the minimum period of time while ensuring that all precautionary measures are followed, through a set of algorithms. These algorithms seek to ensure a uniform distribution of persons in the parking area. This paper proposes a novel complex system for smart parking and nine algorithms to address the NP-hard problem. The experimental results demonstrate the performance of the proposed algorithms in terms of gap and time. Applying these algorithms in smart cities to enforce precautionary measures against COVID-19 can help fight this pandemic. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
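Distributing persons uniformly across parking zones is an instance of NP-hard load balancing, which is why the paper resorts to heuristics. As a generic illustration of that problem class (not one of the paper's nine algorithms), the classic longest-processing-time greedy heuristic assigns each group to the currently least-loaded zone:

```python
import heapq

def greedy_balance(group_sizes, k):
    """LPT heuristic: place the next-largest group into the least-loaded
    of k zones. Returns {zone_id: [group sizes assigned to it]}."""
    heap = [(0, zone) for zone in range(k)]          # (current load, zone id)
    heapq.heapify(heap)
    assignment = {zone: [] for zone in range(k)}
    for size in sorted(group_sizes, reverse=True):   # largest groups first
        load, zone = heapq.heappop(heap)
        assignment[zone].append(size)
        heapq.heappush(heap, (load + size, zone))
    return assignment
```

LPT guarantees a maximum load within 4/3 of optimal, which is the kind of gap-versus-time trade-off the paper's experiments measure.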
24. Current concepts on bibliometrics: a brief review about impact factor, Eigenfactor score, CiteScore, SCImago Journal Rank, Source-Normalised Impact per Paper, H-index, and alternative metrics.
- Author
- Roldan-Valadez, Ernesto, Salazar-Ruiz, Shirley Yoselin, Ibarra-Contreras, Rafael, and Rios, Camilo
- Abstract
Background: Understanding the impact of a publication by using bibliometric indices is an essential activity not only for universities and research institutes but also for individual academicians. This paper aims to provide a brief review of the current bibliometric tools used by authors and editors, and proposes an algorithm to assess the relevance of the most common bibliometric tools, to help researchers select the best-fitting journal and understand the trends of published submissions through self-evaluation. Methods: We present a narrative review answering at least two related consecutive questions triggered by the topics mentioned above. How prestigious is a journal, based on its most recent bibliometrics, so that authors may choose it for their next manuscript? And how can authors self-evaluate and understand the impact of their whole publishing scientific life? Results: We present the most relevant definitions of each bibliometric and group them into those oriented to evaluating journals and those oriented to individuals. We also share with our readers our algorithm to assess journals before manuscript submission. Conclusions: Since there is a journal performance market and an article performance market, each with its own patterns, an integrative use of these metrics, rather than the impact factor alone, might represent the fairest and most legitimate approach to assessing the influence and importance of a research issue, and not only of a sound journal, in the respective disciplines. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. A taxonomy of load balancing algorithms and approaches in fog computing: a survey.
- Author
- Ebneyousef, Sepideh and Shirmarz, Alireza
- Subjects
- ALGORITHMS, COMPUTER systems, QUALITY of service, CLOUD computing, INTERNET of things, TAXONOMY, LOAD balancing (Computer networks)
- Abstract
These days, cloud computing usage has been increasing with the rapid growth of Internet coverage all over the world, serving as a pay-per-use model with shared computing resources. The Internet of Things (IoT) is a growing technology used in different applications, and it relies on cloud computing; however, the distance between cloud computing resources and the end systems in IoT can cause a delay that is intolerable for delay-sensitive applications. Fog computing places computing resources between the cloud and end systems to reduce the delay for delay-sensitive IoT applications. Load balancing therefore plays a significant role in providing the required quality of service (QoS), quality of experience (QoE), and performance. Load balancing can be done based on response time, throughput, energy consumption, and utilization metrics. In this paper, papers published by Elsevier, ACM, IEEE, Springer and Wiley between 2018 and 2022 have been examined to extract the load-balancing algorithms, system architectures, tools and applications, and their advantages and disadvantages. This review is useful for those working on load-balancing performance improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Tools and algorithms for the construction and analysis of systems: a special issue for TACAS 2020.
- Author
- Biere, Armin and Parker, David
- Subjects
- ALGORITHMS, SOFTWARE verification, TECHNOLOGY transfer, SOFTWARE maintenance, SOFTWARE engineering
- Abstract
This special issue of Software Tools for Technology Transfer comprises extended versions of selected papers from the 26th edition of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2020). The focus of this conference series is tools and algorithms for the rigorous analysis of software and hardware systems, and the papers in this special issue cover the spectrum of current work in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Autonomous localized path planning algorithm for UAVs based on TD3 strategy.
- Author
- Feiyu, Zhao, Dayan, Li, Zhengxu, Wang, Jianlin, Mao, and Niya, Wang
- Subjects
- DRONE aircraft, ALGORITHMS, PROBLEM solving
- Abstract
Unmanned Aerial Vehicles (UAVs) are useful tools for many applications. However, autonomous path planning for UAVs in unfamiliar environments is challenging, facing problems such as poor consistency and strong dependence on the UAV's native controller. In this paper, we investigate reinforcement learning-based autonomous local path planning methods for UAVs with high autonomous decision-making capability and high local portability. We propose an autonomous local path planning algorithm based on the TD3 strategy to solve local obstacle avoidance and path planning in unfamiliar environments through the UAV's autonomous decision-making. Simulation results in Gazebo show that our method effectively realizes the autonomous local path planning task: the success rate of path planning reaches 93% in obstacle-free environments and 92% in environments with obstacles. Our method can therefore be used for autonomous path planning of UAVs in unfamiliar environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
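The TD3 strategy the abstract above builds on centres on a clipped double-Q target with target-policy smoothing. A hedged sketch of those two pieces, with stubbed critic values rather than real networks; all numbers are illustrative:

```python
import random

def clipped_noise(sigma, clip, rnd=random.random):
    """Target-policy smoothing: zero-mean noise clipped to [-clip, clip],
    added to the target action before querying the target critics."""
    eps = (rnd() * 2.0 - 1.0) * sigma
    return max(-clip, min(clip, eps))

def td3_target(reward, gamma, q1_next, q2_next):
    """Clipped double-Q target: y = r + gamma * min(Q1', Q2'),
    taking the pessimistic critic to curb overestimation."""
    return reward + gamma * min(q1_next, q2_next)

# Target for a step with reward 1.0 and twin critic estimates 10.0 / 9.0.
print(td3_target(1.0, 0.99, 10.0, 9.0))  # 1.0 + 0.99 * 9.0 = 9.91
```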
28. Weakly-supervised structural surface crack detection algorithm based on class activation map and superpixel segmentation.
- Author
-
Liu, Chao and Xu, Boqiang
- Subjects
SURFACE cracks ,OPTIMIZATION algorithms ,PIXELS ,CONVOLUTIONAL neural networks ,ALGORITHMS - Abstract
This paper proposes a weakly-supervised structural surface crack detection algorithm that can detect the crack area in an image with low data labeling cost. The algorithm consists of a convolutional neural networks Vgg16-Crack for classification, an improved and optimized class activation map (CAM) algorithm for accurately reflecting the position and distribution of cracks in the image, and a method that combines superpixel segmentation algorithm simple linear iterative clustering (SLIC) with CAM for more accurate semantic segmentation of cracks. In addition, this paper uses Bayesian optimization algorithm to obtain the optimal parameter combination that maximizes the performance of the model. The test results show that the algorithm only requires image-level labeling, which can effectively reduce the labor and material consumption brought by pixel-level labeling while ensuring accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
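The CAM-plus-superpixel combination described above can be sketched as averaging the class activation map inside each superpixel and thresholding the mean. The tiny arrays and the 0.5 threshold below are invented for illustration, not the paper's data or parameters.

```python
def refine_cam(cam, labels, thresh=0.5):
    """cam, labels: equal-shaped 2D lists; labels holds superpixel ids.
    Returns the set of superpixel ids classified as crack regions."""
    sums, counts = {}, {}
    for row_c, row_l in zip(cam, labels):
        for v, sp in zip(row_c, row_l):
            sums[sp] = sums.get(sp, 0.0) + v
            counts[sp] = counts.get(sp, 0) + 1
    # keep superpixels whose mean activation clears the threshold
    return {sp for sp in sums if sums[sp] / counts[sp] > thresh}

cam    = [[0.9, 0.8, 0.1],
          [0.7, 0.2, 0.0]]
labels = [[0, 0, 1],
          [0, 1, 1]]
print(refine_cam(cam, labels))  # {0}
```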
29. Research on fabric surface defect detection algorithm based on improved Yolo_v4.
- Author
-
Li, Yuanyuan, Song, Liyuan, Cai, Yin, Fang, Zhijun, and Tang, Ming
- Subjects
SURFACE defects ,FEATURE extraction ,ALGORITHMS ,INDUSTRIAL sites ,TEXTILES ,PROBLEM solving - Abstract
In industry, defect classification and defect localization are important parts of a defect detection system. However, existing studies focus on only one of these tasks, making it difficult to ensure the accuracy of both. This paper proposes a defect detection system based on an improved Yolo_v4, which greatly improves the detection of minor defects. To address the strong subjectivity of the K-Means algorithm when clustering prior anchors, the paper applies the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to determine the number of anchors. To solve the low detection rate of small targets caused by insufficient reuse of low-level features in the CSPDarknet53 feature extraction network, this paper proposes an ECA-DenseNet-BC-121 feature extraction network to improve it. A Dual Channel Feature Enhancement (DCFE) module is also proposed to mitigate the local information loss and gradient propagation obstruction caused by quad chain convolution in PANet, improving the robustness of the model. Experimental results on fabric surface defect detection datasets show that the mAP of the improved Yolo_v4 is 98.97%, which is 7.67% higher than SSD, 3.75% higher than Faster_RCNN, 10.82% higher than Yolo_v4 tiny, and 5.35% higher than Yolo_v4, with a detection speed of 39.4 fps. It can meet the real-time monitoring needs of industrial sites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
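The DBSCAN-for-anchors idea above, letting density clustering decide the anchor count instead of fixing k for K-Means, can be sketched with a minimal stdlib DBSCAN over (width, height) box sizes. The synthetic boxes and the eps/min_pts values are assumptions for illustration.

```python
def dbscan(points, eps, min_pts):
    """Return cluster labels for 2D points; -1 marks noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if (points[i][0] - q[0]) ** 2 + (points[i][1] - q[1]) ** 2 <= eps ** 2]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise; may become a border point
            continue
        labels[i] = cid
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid     # noise reclassified as border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nj = neighbors(j)
            if len(nj) >= min_pts:  # core point: expand the cluster
                queue.extend(nj)
        cid += 1
    return labels

# Two synthetic size clusters: small boxes and large boxes.
boxes = [(10, 12), (11, 11), (9, 13), (50, 48), (52, 51), (49, 50)]
cluster_labels = dbscan(boxes, eps=5.0, min_pts=2)
n_anchors = len({l for l in cluster_labels if l != -1})
print(n_anchors)  # 2
```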
30. Visual recognition and location algorithm based on optimized YOLOv3 detector and RGB depth camera.
- Author
-
He, Bin, Qian, Shusheng, and Niu, Yongchao
- Subjects
DETECTORS ,DIAMETER ,TOMATOES ,TRACKING algorithms ,CAMERAS ,ALGORITHMS - Abstract
Fruit recognition and location are prerequisites for automatic robotic picking. YOLOv3 has been used to detect different fruits in complex environments. However, for objects with distinct features, a complex network structure increases computing time and may cause overfitting. Therefore, this paper presents a lightweight redesign of YOLOv3, an improved T-Net, to detect tomato images. First, T-Net reduces the residual network layers: the number of cycles in each group of residual units is changed to 1, 2, 2, 1, and 1. Second, two feature layers with different scales are selected according to the features of tomatoes, and the convolutional layers at the neck are reduced by two. Finally, the location and approximate diameter of the ripe tomato are obtained by combining the node information of the Intel D435i camera and T-Net in the Robot Operating System. T-Net obtains a mean average precision (mAP) of 99.2%, an F1-score of 98.9%, a precision of 99.0%, and a recall of 98.8% at a detection rate of 104.2 FPS. The proposed T-Net outperforms YOLOv3 with increases of 0.4% in precision, 0.1% in mAP, and 0.2% in F1-score, and its detection speed is 1.8 times faster than YOLOv3. The mean errors of the tomato's center coordinates and diameter are 8.5 mm and 2.5 mm, respectively. This model provides a method for efficient real-time detection and location of tomatoes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
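The final localization step described above, combining a detected pixel with the D435i's depth reading, amounts to pinhole back-projection. A sketch with made-up intrinsics (fx, fy, cx, cy are illustrative values, not the camera's calibrated parameters):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) to camera coordinates
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A tomato detected at pixel (420, 260), 0.5 m from the camera.
x, y, z = pixel_to_3d(420, 260, 0.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(round(x, 4), round(y, 4), z)  # 0.0833 0.0167 0.5
```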
31. Moth-flame optimization algorithm based on diversity and mutation strategy.
- Author
-
Ma, Lei, Wang, Chao, Xie, Neng-gang, Shi, Miao, Ye, Ye, and Wang, Lu
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,CONSTRAINED optimization ,MAXIMA & minima - Abstract
In this work, an improved moth-flame optimization algorithm is proposed to alleviate premature convergence and convergence to local minima. From the perspective of diversity, an inertia weight with diversity feedback control is introduced into the moth-flame optimization to balance the algorithm's exploitation and global search abilities. Furthermore, a small-probability mutation after the position update stage is added to improve optimization performance. The performance of the proposed algorithm is extensively evaluated on the CEC'2014 suite of benchmark functions and four constrained engineering optimization problems, and the results are compared with those of other improved algorithms reported in the literature. The proposed method shows superior convergence ability and assists in escaping local minima. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
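The diversity-feedback inertia weight and small-probability mutation described above can be sketched as follows. The diversity measure (mean distance to the centroid), the linear weight mapping, and all constants are illustrative assumptions, not the paper's formulas.

```python
import random

def diversity(pop):
    """Mean Euclidean distance of population members to their centroid."""
    dim = len(pop[0])
    centroid = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    return sum(sum((x[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5
               for x in pop) / len(pop)

def inertia_weight(div, div_max, w_min=0.4, w_max=0.9):
    """Feedback: low diversity -> large weight (explore more);
    high diversity -> small weight (exploit more)."""
    ratio = min(div / div_max, 1.0) if div_max > 0 else 0.0
    return w_max - (w_max - w_min) * ratio

def mutate(x, rate=0.05, scale=0.1, rnd=random.random):
    """Small-probability perturbation applied after the position update."""
    return [xi + (rnd() * 2 - 1) * scale if rnd() < rate else xi for xi in x]

pop = [[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]]
print(round(inertia_weight(diversity(pop), div_max=5.0), 3))  # 0.775
```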
32. Assigning Papers to Referees.
- Author
-
Garg, Naveen, Kavitha, Telikepalli, Kumar, Amit, Mehlhorn, Kurt, and Mestre, Julián
- Subjects
CONFERENCES & conventions ,COMPUTER software ,PREFERENCES (Philosophy) ,POLYNOMIALS ,ALGORITHMS - Abstract
Refereed conferences require every submission to be reviewed by members of a program committee (PC) in charge of selecting the conference program. There are many software packages available to manage the review process. Typically, in a bidding phase PC members express their personal preferences by ranking the submissions. This information is used by the system to compute an assignment of the papers to referees (PC members). We study the problem of assigning papers to referees. We propose to optimize a number of criteria that aim at achieving fairness among referees/papers. Some of these variants can be solved optimally in polynomial time, while others are NP-hard, in which case we design approximation algorithms. Experimental results strongly suggest that the assignments computed by our algorithms are considerably better than those computed by popular conference management software. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
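A greedy stand-in for the fairness-aware assignment studied above: give each paper to the best-ranking referee with spare capacity, breaking ties by current load. The names, ranks, and capacity are toy data; the paper's own algorithms optimize fairness criteria exactly or via approximation rather than greedily.

```python
def assign(papers, prefs, capacity):
    """prefs[ref][paper] = rank (lower is better). Returns paper -> referee."""
    load = {r: 0 for r in prefs}
    out = {}
    for p in papers:
        candidates = [r for r in prefs if load[r] < capacity]
        # prefer the best rank; break ties by current load to stay fair
        best = min(candidates, key=lambda r: (prefs[r].get(p, 99), load[r]))
        out[p] = best
        load[best] += 1
    return out

prefs = {"alice": {"p1": 1, "p2": 3}, "bob": {"p1": 2, "p2": 1}}
print(assign(["p1", "p2"], prefs, capacity=1))  # {'p1': 'alice', 'p2': 'bob'}
```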
33. Nested Codes for Constrained Memory and for Dirty Paper.
- Author
-
Fossorier, Marc, Imai, Hideki, Lin, Shu, Poli, Alain, Schaathun, Hans Georg, and Cohen, Gérard D.
- Abstract
Dirty paper coding is relevant for wireless networks, multiuser channels, and digital watermarking. We show that the dirty paper problem is essentially equivalent to some classes of constrained memories, and we explore the binary so-called nested codes, which are used for efficient coding and error correction on such channels and memories. Keywords: dirty paper, constrained memory, nested codes, covering codes. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
34. Special Issue Dedicated to 16th International Conference and Workshops on Algorithms and Computation, WALCOM 2022.
- Author
-
Rahman, Md. Saidur, Mutzel, Petra, and Slamin
- Subjects
CONFERENCES & conventions ,ALGORITHMS ,BIPARTITE graphs ,DOMINATING set - Published
- 2023
- Full Text
- View/download PDF
35. Reviewing Machine Learning and Image Processing Based Decision-Making Systems for Breast Cancer Imaging.
- Author
-
Zerouaoui, Hasnae and Idri, Ali
- Subjects
BREAST tumor diagnosis ,ALGORITHMS ,MAMMOGRAMS ,BREAST tumors ,DECISION support systems ,DECISION trees ,DIAGNOSTIC imaging ,DIGITAL image processing ,MACHINE learning ,MAGNETIC resonance imaging ,MEDLINE ,ARTIFICIAL neural networks ,ONLINE information services ,RESEARCH funding ,SYSTEMATIC reviews ,RESEARCH bias ,SUPPORT vector machines ,DESCRIPTIVE statistics ,COMPUTER-aided diagnosis ,DEEP learning - Abstract
Breast cancer (BC) is the leading cause of death among women worldwide, generally affecting women older than 40. Medical image analysis is one of the most promising research areas, since it provides facilities for the diagnosis and decision-making of several diseases such as BC. This paper conducts a Structured Literature Review (SLR) of the use of Machine Learning (ML) and Image Processing (IP) techniques for BC imaging. A set of 530 papers published between 2000 and August 2019 were selected and analyzed according to ten criteria: year and publication channel, empirical type, research type, medical task, machine learning techniques, datasets used, validation methods, performance measures, and image processing techniques, the last including image pre-processing, segmentation, feature extraction, and feature selection. Results showed that diagnosis was the most common medical task and that Deep Learning (DL) techniques were largely used to perform classification. Furthermore, classification was the most investigated ML objective, followed by prediction and clustering. Most of the selected studies used mammograms as the imaging modality rather than ultrasound or magnetic resonance imaging, with public or private datasets, MIAS being the most frequently investigated public dataset. As for image processing techniques, the majority of the selected studies pre-processed their input images by reducing noise and normalizing colors, and some used segmentation to extract the region of interest with the thresholding method. For feature extraction, researchers extracted the relevant features using classical techniques (e.g. texture features, shape features) or DL techniques (e.g. VGG16, VGG19, ResNet), and finally a few papers used feature selection techniques, in particular filter methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Node similarity in the citation graph.
- Author
-
Wangzhong Lu, Janssen, J., Milios, E., Japkowicz, N., and Yongzheng Zhang
- Subjects
GRAPHIC methods ,ALGORITHMS ,COMPUTER science ,INFORMATION retrieval ,BIBLIOGRAPHICAL citations - Abstract
Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
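The separate-local-neighborhood variant described above can be approximated by the Jaccard overlap of each node's combined in- and out-neighborhood. The toy citation graph is invented, and the paper's actual metrics are more elaborate than this sketch.

```python
def neighborhood(graph, node):
    """Out-links plus in-links of `node` in an adjacency-list digraph."""
    out = set(graph.get(node, ()))
    inc = {u for u, vs in graph.items() if node in vs}
    return out | inc

def link_similarity(graph, a, b):
    """Jaccard overlap of the two nodes' local neighborhoods."""
    na, nb = neighborhood(graph, a), neighborhood(graph, b)
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0

cites = {
    "A": {"C", "D"},   # A cites C and D
    "B": {"C", "E"},   # B cites C and E
    "F": {"A", "B"},   # F cites both A and B
}
print(link_similarity(cites, "A", "B"))  # 0.5
```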
37. Optimization of table tennis target detection algorithm guided by multi-scale feature fusion of deep learning.
- Author
-
Rong, Zhang
- Subjects
DEEP learning ,TABLE tennis ,CONVOLUTIONAL neural networks ,TENNIS tournaments ,ATHLETE training ,ALGORITHMS - Abstract
This paper aims to propose a table tennis target detection (TD) method based on deep learning (DL) and multi-scale feature fusion (MFF) to improve the detection accuracy of the ball in table tennis competition, optimize the training process of athletes, and improve the technical level. In this paper, DL technology is used to improve the accuracy of table tennis TD through MFF guidance. Initially, based on the FAST Region-based Convolutional Neural Network (FAST R-CNN), the TD is carried out in the table tennis match. Then, through the method of MFF guidance, different levels of feature information are fused, which improves the accuracy of TD. Through the experimental verification on the test set, it is found that the mean Average Precision (mAP) value of the target detection algorithm (TDA) proposed here reaches 87.3%, which is obviously superior to other TDAs and has higher robustness. The DL TDA combined with the proposed MFF can be applied to various detection fields and can help the application of TD in real life. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Characterization of Exact One-Query Quantum Algorithms for Partial Boolean Functions.
- Author
-
Ye, Ze-Kun and Li, Lv-Zhou
- Subjects
BOOLEAN functions ,SYMMETRIC functions ,ALGORITHMS ,QUANTUM computing - Abstract
The query model (or black-box model) has attracted much attention from the communities of both classical and quantum computing. Usually, quantum advantages are revealed by presenting a quantum algorithm that has a better query complexity than its classical counterpart. In the history of quantum algorithms, the Deutsch algorithm and the Deutsch-Jozsa algorithm play a fundamental role, and both are exact one-query quantum algorithms. This leads us to consider the problem: what functions can be computed by exact one-query quantum algorithms? This problem has been addressed in the literature for total Boolean functions and symmetric partial Boolean functions, but is still open for general partial Boolean functions. Thus, in this paper, we continue to characterize the computational power of exact one-query quantum algorithms for general partial Boolean functions. First, we present several necessary and sufficient conditions for a partial Boolean function to be computable by exact one-query quantum algorithms. Second, inspired by these conditions, we discover some new representative functions that can be computed by exact one-query quantum algorithms but differ essentially from the already known ones. In particular, it is worth pointing out that before our work, the known functions computable by exact one-query quantum algorithms were all symmetric, and the quantum algorithm used was essentially the Deutsch-Jozsa algorithm, whereas the functions discovered in this paper are generally asymmetric and require new algorithms. Thus, this work expands the class of functions that can be computed by exact one-query quantum algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
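The baseline this paper generalizes, the Deutsch algorithm, is the simplest exact one-query quantum algorithm. A small amplitude-level simulation (phase-oracle form) showing that a single query suffices to separate constant from balanced f: {0,1} → {0,1}:

```python
def deutsch(f):
    """Simulate H -> phase oracle -> H on one qubit. Measuring |0> means
    f is constant; measuring |1> means f is balanced. One oracle use."""
    h = 2 ** -0.5
    # state after the first Hadamard: (|0> + |1>) / sqrt(2)
    amp = [h, h]
    # phase oracle: |x> -> (-1)^f(x) |x>
    amp = [(-1) ** f(x) * a for x, a in enumerate(amp)]
    # final Hadamard
    out0 = h * (amp[0] + amp[1])
    out1 = h * (amp[0] - amp[1])
    return "constant" if abs(out0) > abs(out1) else "balanced"

print(deutsch(lambda x: 0), deutsch(lambda x: x))  # constant balanced
```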
39. Fast privacy-preserving utility mining algorithm based on utility-list dictionary.
- Author
-
Yin, Chunyong and Li, Ying
- Subjects
ENCYCLOPEDIAS & dictionaries ,ALGORITHMS ,COMPUTATIONAL complexity ,DATABASES ,RESEARCH personnel - Abstract
Privacy-preserving utility mining (PPUM) aims to solve the problem of sensitive information leakage in utility pattern mining. In recent years, researchers have proposed algorithms to address this privacy-preserving problem, but these algorithms suffer from high side effects, long sanitization times, and high computational complexity. Although the FPUTT algorithm reduces the number of database scans, tree construction and traversal still take much time. This paper proposes a fast utility-list dictionary algorithm (FULD). The utility-list dictionary consists of all sensitive items; through dictionary lookup, sensitive items can be found and modified. In addition, the novel concepts of SINS and tns are proposed to reduce the algorithm's side effects. Experiments show that FULD performs well in terms of running time and side effects: its running time is 15–20 times shorter than that of FPUTT, and it performs well on both sparse and dense datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
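The utility-list-dictionary idea above can be sketched as an inverted index from each sensitive item to the transactions containing it, so sanitization is a lookup rather than a database rescan. The toy database and field layout are assumptions, not FULD's actual structures.

```python
def build_dictionary(db, sensitive):
    """item -> list of (tid, utility) entries, for sensitive items only."""
    index = {s: [] for s in sensitive}
    for tid, items in db.items():
        for item, util in items.items():
            if item in index:
                index[item].append((tid, util))
    return index

def sanitize(db, index, item):
    """Remove `item` from every transaction the dictionary points at,
    without scanning transactions that never contained it."""
    for tid, _ in index[item]:
        del db[tid][item]

db = {"t1": {"a": 5, "b": 2}, "t2": {"b": 4}, "t3": {"a": 1, "c": 3}}
idx = build_dictionary(db, sensitive={"a"})
sanitize(db, idx, "a")
print(db)  # {'t1': {'b': 2}, 't2': {'b': 4}, 't3': {'c': 3}}
```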
40. Path planning and collision avoidance for autonomous surface vehicles II: a comparative study of algorithms.
- Author
-
Vagale, Anete, Bye, Robin T., Oucheikh, Rachid, Osen, Ottar L., and Fossen, Thor I.
- Subjects
PROBLEM solving ,ALGORITHMS ,COLLISIONS at sea ,AUTONOMOUS vehicles ,COMPARATIVE studies ,ARTIFICIAL intelligence ,EVOLUTIONARY algorithms - Abstract
Artificial intelligence is an enabling technology for autonomous surface vehicles, with methods such as evolutionary algorithms, artificial potential fields, fast marching methods, and many others becoming increasingly popular for solving problems such as path planning and collision avoidance. However, there currently is no unified way to evaluate the performance of different algorithms, for example with regard to safety or risk. This paper is a step in that direction and offers a comparative study of current state-of-the art path planning and collision avoidance algorithms for autonomous surface vehicles. Across 45 selected papers, we compare important performance properties of the proposed algorithms related to the vessel and the environment it is operating in. We also analyse how safety is incorporated, and what components constitute the objective function in these algorithms. Finally, we focus on comparing advantages and limitations of the 45 analysed papers. A key finding is the need for a unified platform for evaluating and comparing the performance of algorithms under a large set of possible real-world scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
41. An efficient image encryption algorithm based on multi chaotic system and random DNA coding.
- Author
-
Zheng, Jiming, Luo, Zheng, and Zeng, Qingxia
- Subjects
IMAGE encryption ,ALGORITHMS ,ELECTRONIC paper ,NUCLEOTIDE sequence - Abstract
This paper presents a digital image encryption scheme based on a multi-chaotic system and random DNA coding. First, the initial values and parameters of the 2D Logistic-adjusted-Sine map (2D-LASM) and the Logistic-Sine system (LSS) are obtained from the SHA-256 hash of the original image. In the scrambling stage, the chaotic sequences generated by 2D-LASM are used to build two column scrambling matrices and a row scrambling matrix; the elements of the second column scrambling matrix and the row scrambling matrix serve as row and column coordinates in the scrambling process. This is then applied to scramble the DNA-encoded image, completing row and column scrambling at the same time. In the diffusion stage, we propose a new diffusion method: a DNA coding sequence is obtained from the chaotic sequence generated by the LSS, a DNA XOR operation is carried out on the central point and horizontal centre line of the image, and diffusion then spreads from the centre line toward the top and bottom of the matrix, achieving multi-directional diffusion and improving encryption efficiency. Experimental results and security analysis show that the algorithm executes quickly and is highly secure, resisting many attacks such as statistical attacks, brute-force attacks, and known-plaintext/chosen-plaintext attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
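A minimal sketch of the keystream side of such a scheme: iterate the Logistic-Sine system (a commonly used form of LSS; the paper's exact variant and parameters may differ) and XOR the resulting bytes with pixel values. The paper performs the XOR in the DNA-coded domain and diffuses from the centre line outward; plain byte-level XOR stands in for that step here, and the seed/parameter values are invented.

```python
import math

def lss_keystream(x0, r, n):
    """x_{k+1} = (r*x*(1-x) + (4-r)*sin(pi*x)/4) mod 1, quantized to bytes."""
    x, out = x0, []
    for _ in range(n):
        x = (r * x * (1 - x) + (4 - r) * math.sin(math.pi * x) / 4) % 1
        out.append(int(x * 256) % 256)
    return out

def xor_bytes(data, key):
    """XOR is its own inverse, so the same call encrypts and decrypts."""
    return [d ^ k for d, k in zip(data, key)]

pixels = [12, 200, 97, 54]
key = lss_keystream(x0=0.4123, r=3.77, n=len(pixels))
cipher = xor_bytes(pixels, key)
print(xor_bytes(cipher, key) == pixels)  # True
```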
42. Outer-product-free sets for polynomial optimization and oracle-based cuts.
- Author
-
Bienstock, Daniel, Chen, Chen, and Muñoz, Gonzalo
- Subjects
POLYNOMIALS ,ALGORITHMS ,LINEAR programming ,PAPER arts ,SET theory ,SYMMETRIC matrices - Abstract
This paper introduces cutting planes that involve minimal structural assumptions, enabling the generation of strong polyhedral relaxations for a broad class of problems. We consider valid inequalities for the set S ∩ P, where S is a closed set, and P is a polyhedron. Given an oracle that provides the distance from a point to S, we construct a pure cutting plane algorithm which is shown to converge if the initial relaxation is a polyhedron. These cuts are generated from convex forbidden zones, or S-free sets, derived from the oracle. We also consider the special case of polynomial optimization. Accordingly we develop a theory of outer-product-free sets, where S is the set of real, symmetric matrices of the form xx^T. All maximal outer-product-free sets of full dimension are shown to be convex cones and we identify several families of such sets. These families are used to generate strengthened intersection cuts that can separate any infeasible extreme point of a linear programming relaxation efficiently. Computational experiments demonstrate the promise of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. High performance feature selection algorithms using filter method for cloud-based recommendation system.
- Author
-
Muthusankar, D., Kalaavathi, B., and Kaladevi, P.
- Subjects
FEATURE selection ,NAIVE Bayes classification ,OUTSIDER art ,CLASSIFICATION algorithms ,ALGORITHMS ,HIGH-dimensional model representation ,FILTER paper - Abstract
In a cloud-based recommendation system, feature selection is implemented to reduce the large dimensionality of cloud data. Feature selection increases the performance of the recommendation system without affecting its accuracy. In this paper, two filter-model-based algorithms, SFS and MSFS, are proposed to extract the necessary features for the recommendation system. The state-of-the-art Naive Bayes classification algorithm is used to evaluate the performance of the feature selection algorithms. The benchmark datasets Newsgroups, WebKB, and Book Crossing are used for performance evaluation. The experimental results show that the proposed algorithms are superior to the existing feature selection algorithms T-Score, Information Gain, and Chi-squared. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
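One of the filter-method baselines named above, Chi-squared scoring, can be sketched directly: score each feature's 2x2 contingency with the class label and keep the top-k. The toy counts are invented, and the proposed SFS/MSFS algorithms themselves are not reproduced here.

```python
def chi2_feature(n11, n10, n01, n00):
    """2x2 chi-square statistic: n11 = in-class docs containing the term,
    n10 = out-of-class docs containing it, n01/n00 = the complements."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0

def select(features, k):
    """features: name -> (n11, n10, n01, n00). Return top-k names by score."""
    ranked = sorted(features, key=lambda f: chi2_feature(*features[f]),
                    reverse=True)
    return ranked[:k]

feats = {"goal": (40, 10, 5, 45), "the": (25, 25, 25, 25)}
print(select(feats, k=1))  # ['goal']
```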
44. A Reply to a Note on the Paper 'A Simplified Novel Technique for Solving Fully Fuzzy Linear Programming Problems'.
- Author
-
Khan, Izaz, Ahmad, Tahir, and Maan, Normah
- Subjects
- *
LINEAR programming , *FUZZY algorithms , *ALGORITHMS , *MATHEMATICS , *MATHEMATICAL programming - Abstract
This note tries to answer issues raised in Bhardwaj and Kumar (J Optim Theory Appl 163(2): 685-696, 2014). The research summarizes that the results obtained in Khan et al. (J Optim Theory Appl 159: 536-546, 2013) are sound and correct and it fulfills all the necessary requirements of its scope and objectives. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
45. Paper check image quality enhancement with Moire reduction.
- Author
-
Ok, Jiheon, Youn, Sungwook, Seo, Guiwon, Choi, Euisun, Baek, Yoonkil, and Lee, Chulhee
- Subjects
IMAGE ,MOIRE effects ,PANTOGRAPH ,AUTOMATED teller machines ,ALGORITHMS - Abstract
Paper checks may have complex background features including fine lines and patterns, which make forgery more difficult. Also, halftoning techniques are used to produce continuous tones and to prevent copies with void pantograph features. When these kinds of checks are scanned, Moire patterns may occur. These patterns make it difficult for customers to examine the scanned check images on ATM (Automated Teller Machine) displays. They also can decrease the classification accuracy of check recognition systems. In this paper, we propose an algorithm to enhance the perceptual quality of scanned check images by reducing the Moire patterns. The proposed algorithm consists of foreground extraction, Moire detection and Moire removal. Subjective image quality assessment was performed to evaluate the degree of improvement. Experimental results show that the proposed algorithm improves perceptual quality while maintaining check recognition accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
46. A scalable blockchain based framework for efficient IoT data management using lightweight consensus.
- Author
-
Haque, Ehtisham Ul, Shah, Adil, Iqbal, Jawaid, Ullah, Syed Sajid, Alroobaea, Roobaea, and Hussain, Saddam
- Subjects
DATA management ,INTERNET of things ,NETWORK performance ,BLOCKCHAINS ,SCALABILITY ,ALGORITHMS - Abstract
Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. Therefore, in this paper, we use a lightweight consensus algorithm to address these problems. We propose a scalable blockchain-based framework for managing IoT data that caters to a large number of devices. The framework utilizes the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. DPoS, being a lightweight consensus algorithm, leverages a selected number of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation of blockchain-based IoT networks. We implemented the InterPlanetary File System (IPFS) for distributed storage and used Docker to evaluate network performance in terms of throughput, latency, and resource utilization. Our analysis covers four parts: latency, throughput, resource utilization, and file upload time and speed in the distributed storage evaluation. Our empirical findings demonstrate that the framework exhibits low latency, measuring less than 0.976 ms, and outperforms Proof of Stake (PoS), a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
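The DPoS mechanics the framework above relies on can be sketched in two steps: elect the top-N delegates by stake-weighted votes, then rotate block production round-robin among them. Candidate names, stakes, and N are invented for illustration.

```python
def elect_delegates(votes, n):
    """votes: candidate -> total stake voted for it. Return top-n by stake."""
    return sorted(votes, key=votes.get, reverse=True)[:n]

def producer_for_slot(delegates, slot):
    """Round-robin schedule: slot t is produced by delegates[t mod n]."""
    return delegates[slot % len(delegates)]

votes = {"d1": 900, "d2": 400, "d3": 700, "d4": 100}
active = elect_delegates(votes, n=3)
print(active, producer_for_slot(active, slot=4))  # ['d1', 'd3', 'd2'] d3
```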
47. Image convolution techniques integrated with YOLOv3 algorithm in motion object data filtering and detection.
- Author
-
Cheng, Mai and Liu, Mengyuan
- Subjects
TRACKING algorithms ,FILTERS & filtration ,VIDEO surveillance ,ALGORITHMS ,IMAGE segmentation ,RESEARCH personnel ,JOGGING - Abstract
In order to address the challenges of identifying, detecting, and tracking moving objects in video surveillance, this paper emphasizes image-based dynamic entity detection. It delves into the complexities of numerous moving objects, dense targets, and intricate backgrounds. Leveraging the You Only Look Once (YOLOv3) algorithm framework, this paper proposes improvements in image segmentation and data filtering to address these challenges. These enhancements form a novel multi-object detection algorithm based on an improved YOLOv3 framework, specifically designed for video applications. Experimental validation demonstrates the feasibility of this algorithm, with success rates exceeding 60% for videos such as "jogging", "subway", "video 1", and "video 2". Notably, the detection success rates for "jogging" and "video 1" consistently surpass 80%, indicating outstanding detection performance. Although the accuracy slightly decreases for "Bolt" and "Walking2", success rates still hover around 70%. Comparative analysis with other algorithms reveals that this method's tracking accuracy surpasses that of particle filters, Discriminative Scale Space Tracker (DSST), and Scale Adaptive Multiple Features (SAMF) algorithms, with an accuracy of 0.822. This indicates superior overall performance in target tracking. Therefore, the improved YOLOv3-based multi-object detection and tracking algorithm demonstrates robust filtering and detection capabilities in noise-resistant experiments, making it highly suitable for various detection tasks in practical applications. It can address inherent limitations such as missed detections, false positives, and imprecise localization. These improvements significantly enhance the efficiency and accuracy of target detection, providing valuable insights for researchers in the field of object detection, tracking, and recognition in video surveillance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. A flocking control algorithm of multi-agent systems based on cohesion of the potential function.
- Author
-
Li, Chenyang, Yang, Yonghui, Jiang, Guanjie, and Chen, Xue-Bo
- Subjects
COHESION ,POTENTIAL functions ,MULTIAGENT systems ,SOCIAL distance ,SOCIAL cohesion ,ALGORITHMS ,CHANGE agents - Abstract
Flocking cohesion is critical for maintaining a group's aggregation and integrity. Designing a potential function that keeps flocking cohesion unaffected by social distance is challenging, because real-world conditions and environments are uncertain and cause the agents' social distances to change. Previous flocking research based on potential functions has primarily focused on agents with identical social distances and on the attraction–repulsion property of the potential function, ignoring another property affecting flocking cohesion, well depth, as well as the effect of changes in agents' social distance on well depth. This paper investigates the effect of potential-function well depths and agents' social distances on multi-agent flocking cohesion. Through the analysis, proofs, and classification of these potential functions, we find that the potential-function well depth is proportional to flocking cohesion, and we observe that the well depth varies as the agents' social distances change. We therefore design a segmented potential function and combine it with a flocking control algorithm. It enhances flocking cohesion significantly and is robust enough to keep flocking cohesion unaffected by variations in the agents' social distances, while also reducing the time required for flocking formation. The Lyapunov theorem and the LaSalle invariance principle prove the stability and convergence of the proposed control algorithm. Finally, this paper simulates the encounter of two subgroups with different potential-function well depths and social distances for verification; the simulation results demonstrate the effectiveness of the flocking control algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
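The abstract above claims that a deeper potential well strengthens cohesion, with the well minimum placed at the agents' social distance. A minimal sketch of that idea (the Lennard-Jones-style form, parameter names, and values are illustrative assumptions, not the paper's actual potential):

```python
import numpy as np

def pairwise_potential(r, d=1.0, depth=2.0):
    """Attraction-repulsion potential with a tunable well depth.

    Hypothetical form: the minimum sits at the social distance r = d,
    and `depth` sets how deep the well is (a deeper well implies
    stronger cohesion, per the abstract's claim).
    """
    x = d / r
    return depth * (x**2 - 2.0 * x)

# The well minimum is at r = d with value -depth:
r = np.linspace(0.5, 3.0, 251)
u = pairwise_potential(r, d=1.0, depth=2.0)
print(round(float(r[np.argmin(u)]), 2), round(float(u.min()), 2))  # -> 1.0 -2.0
```

Varying `d` per subgroup, as in the paper's encounter simulation, shifts the well minimum, while `depth` independently controls how strongly agents are bound to that distance.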
49. Performance analysis of deep learning-based object detection algorithms on COCO benchmark: a comparative study.
- Author
-
Tian, Jiya, Jin, Qiangshan, Wang, Yizong, Yang, Jie, Zhang, Shuping, and Sun, Dengxun
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,MACHINE learning ,ALGORITHMS ,SMART cities ,URBAN renewal - Abstract
This paper thoroughly explores the role of object detection in smart cities, focusing on advances in deep learning-based methods. Deep learning models have gained popularity for their autonomous feature learning, surpassing traditional approaches; even so, challenges remain, such as achieving high accuracy in urban scenes and meeting real-time requirements. The study contributes by analyzing state-of-the-art deep learning algorithms, identifying accurate models for smart cities, and evaluating real-time performance using the Average Precision at Medium Intersection over Union (IoU) metric. Among the evaluated algorithms, Dynamic Head (DyHead) emerges as the top scorer, excelling at accurately localizing and classifying objects; its high precision and recall at medium IoU thresholds indicate robustness. The paper suggests also considering the mean Average Precision (mAP) metric, where available, for a comprehensive evaluation across IoU thresholds. Nonetheless, DyHead stands out as the superior algorithm, particularly at medium IoU thresholds, making it suitable for precise object detection in smart city applications. The analysis based on Average Precision at Medium IoU is reinforced by Average Precision at Low IoU (APL), which consistently shows DyHead's superiority. These findings offer valuable guidance for researchers and practitioners, pointing toward DyHead for tasks that prioritize accurate object localization and classification in smart cities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
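The Average Precision metrics discussed above are all thresholded on Intersection over Union, the overlap ratio between a predicted box and a ground-truth box. A self-contained sketch of that core quantity (standard `(x1, y1, x2, y2)` box convention assumed):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap of the two intervals on each axis (zero if disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two unit-offset 2x2 boxes overlap in a 1x1 square: IoU = 1 / (4 + 4 - 1)
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # -> 0.1429
```

A detection counts as a true positive only when its IoU with a ground-truth box exceeds the chosen threshold; sweeping that threshold (e.g. 0.50 to 0.95 in COCO) is what distinguishes the per-threshold AP figures from the averaged mAP the abstract recommends.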
50. Research on WSN reliable ranging and positioning algorithm for forest environment.
- Author
-
Wu, Peng, Yu, Le, Yi, Xiaomei, Xu, Liang, Liu, LiJuan, Yi, YuTong, Jiang, Tengteng, and Tao, Chunling
- Subjects
WIRELESS sensor networks ,ALGORITHMS - Abstract
Wireless sensor network (WSN) localization is a significant research area. In complex environments such as forests, inaccurate signal-strength ranging is a major challenge. To address this issue, this paper presents a reliable WSN ranging and positioning algorithm for forest environments. The algorithm divides the positioning area into several sub-regions based on the discrete coefficient of the collected signal strength. It then fits the signal-strength values of each sub-region to derive the reference-point parameters of the log-distance path loss model and the path loss exponent, and locates target nodes using the anchor nodes of the different regions. Additionally, to enhance positioning accuracy, the positioning results are weighted according to the discrete coefficient of the signal strength in each sub-region. Experimental results demonstrate that the proposed WSN algorithm achieves high precision in forest environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
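The ranging step described above rests on the log-distance path loss model, and the accuracy step weights sub-region estimates by their signal-strength dispersion. A minimal sketch of both ideas (the parameter values, function names, and the inverse-dispersion weighting scheme are illustrative assumptions, not the paper's fitted values):

```python
def rssi_to_distance(rssi_dbm, a_ref=-40.0, n=2.7):
    """Invert the log-distance path loss model (illustrative parameters):
        RSSI = A - 10 * n * log10(d)   =>   d = 10 ** ((A - RSSI) / (10 * n))
    a_ref -- fitted RSSI at the 1 m reference distance for a sub-region
    n     -- fitted path loss exponent (forests attenuate more than free space)
    """
    return 10 ** ((a_ref - rssi_dbm) / (10.0 * n))

def fuse_positions(estimates, discrete_coeffs):
    """Combine per-sub-region (x, y) estimates, weighting each inversely
    to its discrete coefficient (dispersion) of signal strength, so that
    noisier sub-regions contribute less to the final fix."""
    weights = [1.0 / c for c in discrete_coeffs]
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, estimates)) / total
    y = sum(w * p[1] for w, p in zip(weights, estimates)) / total
    return (x, y)

print(round(rssi_to_distance(-67.0), 2))  # -> 10.0 with the defaults above
```

Fitting `a_ref` and `n` separately per sub-region, as the abstract describes, lets each region's model absorb its own foliage-dependent attenuation instead of forcing one global path loss exponent.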