231,912 results
Search Results
2. Dirty Paper Coding Based on Polar Codes and Probabilistic Shaping
- Author
-
Wen Xu, Gerhard Kramer, M. Yusuf Sener, and Ronald Böhnke
- Subjects
Computer science, Frame (networking), Probabilistic logic, List decoding, Precoding, Amplitude-shift keying, Computer Science Applications, Dimension (vector space), Modeling and Simulation, Dirty paper coding, Electrical and Electronic Engineering, Algorithm, Block (data storage) - Abstract
A precoding technique based on polar codes and probabilistic shaping is introduced for dirty paper coding. Two variants of the precoding use multi-level shaping and sign-bit shaping in one dimension. The decoder uses multi-stage successive-cancellation list decoding with list-passing across the bit levels. The approach achieves approximately the same frame error rates as polar codes with multi-level shaping over standard additive white Gaussian noise channels at a block length of 256 symbols and with different amplitude shift keying (ASK) constellations.
- Published
- 2021
3. Design and Construction of Zana Robot for Modeling Human Player in Rock-paper-scissors Game using Multilayer Perceptron, Radial basis Functions and Markov Algorithms
- Author
-
Peshawa Jammal Muhammad Ali, Abdolreza Roshani, Maryam Ghasemi, Ehsan Nazemi, Farhad F. Nia, and Gholam Hossein Roshani
- Subjects
Paper, Technology, Computer science, Science, Markov model, Upgraded Markov model, Radial basis functions, Software, Multilayer perceptron, Scissors game, MATLAB, General Environmental Science, Graphical user interface, Computer. Automation, Artificial neural network, Markov chain, Agriculture, Rock, General Earth and Planetary Sciences, Robot, Engineering sciences. Technology, Algorithm - Abstract
In this paper, the implementation of artificial neural networks (multilayer perceptron [MLP] and radial basis functions [RBF]) and an upgraded Markov chain model is studied and used to identify human behavior patterns during the rock-paper-scissors game. The main motivation of this research is the design and construction of an intelligent robot able to defeat a human opponent. MATLAB software was used to implement the intelligent algorithms. After implementing the algorithms, their effectiveness in detecting human behavior patterns was investigated. To ensure the ideal performance of the implemented models, each player played against each algorithm in three different stages. The results showed that the computer's average winning percentage against men and women was 59% with the MLP network, 76.66% with the RBF network, and 75% with the upgraded Markov model. These results clearly indicate very good performance of the RBF neural network and the upgraded Markov model in modeling the mind of a human opponent in the game of rock, paper, and scissors. Finally, the designed game was deployed in both hardware and software, comprising the Zana intelligent robot and a digital version with a graphical user interface on a stand. To the best of the authors' knowledge, the presented method determines human behavior patterns with the highest precision among all previous studies.
- Published
- 2021
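The Markov-chain approach above can be illustrated with a minimal sketch: a first-order predictor that counts the opponent's move transitions and plays the counter to the most likely next move. This is a simplified stand-in, not the authors' upgraded Markov model, and the move sequence below is invented for illustration.

```python
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

class MarkovPredictor:
    """First-order Markov model: predict the opponent's next move from their last one."""
    def __init__(self):
        self.counts = defaultdict(lambda: {m: 0 for m in MOVES})
        self.last = None

    def observe(self, move):
        if self.last is not None:
            self.counts[self.last][move] += 1
        self.last = move

    def counter_move(self):
        if self.last is None:
            return "rock"                       # arbitrary opening move
        row = self.counts[self.last]
        predicted = max(MOVES, key=lambda m: row[m])
        return BEATS[predicted]                 # play whatever beats the prediction

p = MarkovPredictor()
for move in ["rock", "paper", "rock", "paper", "rock"]:
    p.observe(move)
# after "rock" this opponent has always played "paper", so the robot plays scissors
print(p.counter_move())  # -> scissors
```

A stronger opponent model (as in the paper) would also handle ties, higher-order histories, and decay of old observations.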
4. Gaussian Process Regression for Quantitative DP Analysis of Oil-paper Insulation by NIRS Detection
- Author
-
Chen Wang, Han Li, Guan-Jun Zhang, Wen-Bo Zhang, and Yuan Li
- Subjects
Training set, Computer science, Electrical insulation paper, Kriging, Statistical analysis, Transformer, Gaussian process, Algorithm, Reliability (statistics) - Abstract
Oil-paper insulation is the key insulation structure of transformers, and its aging condition is closely related to the operation of the equipment. The degree of polymerization (DP) is the direct parameter characterizing the aging condition of oil-paper insulation. Recently, near-infrared spectroscopy (NIRS) measurements combined with quantitative analysis have mainly been used to evaluate the DP of insulating papers. However, the application of NIRS for DP evaluation is constrained because present spectral quantitative analysis methods are not accurate and stable enough, especially for on-site tests. In this paper, we propose a Gaussian process regression (GPR) method to predict the DP of oil-paper insulation in the laboratory as well as in the field. First, the basic principles of the GPR algorithm are illustrated. A GPR model for DP prediction is then established based on the spectra of differently aged insulating paper samples prepared in the laboratory. The GPR model shows high prediction accuracy for both the training set and the testing set. The established GPR model is finally applied to the DP prediction of insulating papers originating from a de-tanked transformer. The accurate predicted DP and the reliable aging assessment results indicate that the established model can be implemented on site.
- Published
- 2021
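A minimal sketch of the GPR idea behind the abstract, reduced to two training points so the kernel matrix can be inverted by hand. The spectra-feature and DP values are hypothetical; a real model would use full NIRS spectra, a tuned kernel, and many samples.

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) kernel between two scalar inputs."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def gpr_predict(xs, ys, x_star, noise=1e-6):
    """GP regression mean at x_star with exactly two training points.

    Uses the analytic inverse of the 2x2 kernel matrix K + noise*I.
    """
    k11 = rbf(xs[0], xs[0]) + noise
    k12 = rbf(xs[0], xs[1])
    k22 = rbf(xs[1], xs[1]) + noise
    det = k11 * k22 - k12 * k12
    # alpha = K^{-1} y
    a0 = (k22 * ys[0] - k12 * ys[1]) / det
    a1 = (-k12 * ys[0] + k11 * ys[1]) / det
    # predictive mean: k(x*, X) @ alpha
    return rbf(x_star, xs[0]) * a0 + rbf(x_star, xs[1]) * a1

# toy "spectral feature -> DP" pairs (hypothetical numbers)
xs, ys = [0.0, 2.0], [1000.0, 400.0]
print(gpr_predict(xs, ys, 0.0))   # near 1000: interpolates the training point
print(gpr_predict(xs, ys, 1.0))   # a smooth value between the two DP readings
```

The GP also provides a predictive variance (omitted here), which is one reason it suits stability-critical on-site estimation.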
5. Overlap Detection in 2D Amorphous Shapes for Paper Optimization in Digital Printing Presses
- Author
-
Rafael Rivera-López, Juan Manuel Rendón-Mancha, Marco Antonio Cruz-Chávez, Yainier Labrada-Nueva, Marta Lilia Eraña-Díaz, and Martín H. Cruz-Rosales
- Subjects
Computer science, Iterated local search, General Mathematics, Resource allocation, Neighborhood structure, Reduction (complexity), Overlaps, Perturbations, Computer Science (miscellaneous), Engineering (miscellaneous), Structure (mathematical logic), Paper waste, Amorphous shapes, Amorphous solid, Digital printing, Mathematics - Abstract
Paper waste in mockup designs with regular, irregular, and amorphous patterns is a critical problem in digital printing presses. Reducing paper waste directly impacts production costs, generating business and environmental benefits. This problem can be mapped to the two-dimensional irregular bin-packing problem. In this paper, an iterated local search algorithm using a novel neighborhood structure to detect overlaps between amorphous shapes is introduced. This algorithm is used to solve the paper waste problem, modeled as a 2D irregular bin-packing problem. The experimental results show that this approach detects and corrects overlaps between regular, irregular, and amorphous figures efficiently and effectively.
- Published
- 2021
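Overlap detection for irregular shapes typically starts with a cheap axis-aligned bounding-box pre-test before any exact geometry is computed. A minimal sketch of that pre-test (not the paper's neighborhood structure, which handles the amorphous outlines themselves):

```python
def rects_overlap(a, b):
    """Axis-aligned boxes given as (x, y, w, h); True if their interiors intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def overlap_area(a, b):
    """Area of the intersection of two axis-aligned boxes (0 if disjoint)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0, w) * max(0, h)

print(rects_overlap((0, 0, 4, 4), (2, 2, 4, 4)))   # True
print(overlap_area((0, 0, 4, 4), (2, 2, 4, 4)))    # 4
print(rects_overlap((0, 0, 1, 1), (5, 5, 1, 1)))   # False
```

In a bin-packing local search, a positive overlap area like this would be penalized in the objective, and perturbations would try to drive it to zero.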
6. The Development of Autonomous Examination Paper Application: A Case Study in UiTM Cawangan Perlis
- Author
-
Izzati Farzana Ibni Amin, Noorfaizalfarid Mohd Noor, and Nadhirah Mohd Napi
- Subjects
Measure (data warehouse), Algorithm, Computer science, Usability, Workload, General Medicine, Fisher-Yates, Randomization, Educational institution, Automated question paper, Engineering management, Examination, Construct (philosophy) - Abstract
Examinations play a vital role in measuring students' learning capabilities. Hence, generating question papers effectively is a decisive job for educators in educational institutions. Traditional methods are monotonous and time-consuming. Today, Autonomous Examination Paper (AEP) systems are used to produce exam papers, and many researchers have proposed effective AEPs for educators. This paper aims to investigate AEP development and to construct an AEP at UiTM Cawangan Perlis. As a result, the Ad-Hoc Question Paper Application (AQPA) has been developed, using the Fisher-Yates algorithm to generate questions for exam papers at the university. Evaluation based on Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) reveals that lecturers at the university can interact with AQPA and are willing to use it as a tool to reduce their workload. However, further improvement is needed for AQPA to become a fully effective AEP. To conclude, AEP brings significant benefits to educators and can be improved with the latest technology.
- Published
- 2019
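The Fisher-Yates algorithm named in the abstract is the standard unbiased shuffle. A minimal sketch of how it could randomize a question bank (the bank itself is hypothetical, not taken from AQPA):

```python
import random

def fisher_yates(items, seed=None):
    """Fisher-Yates shuffle: every permutation of the input is equally likely."""
    rng = random.Random(seed)
    items = list(items)                      # copy; leave the caller's list intact
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)                # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items

question_bank = [f"Q{n}" for n in range(1, 11)]
exam = fisher_yates(question_bank, seed=42)[:5]   # draw 5 questions for one paper
print(exam)
```

Taking a prefix of the shuffled bank yields a uniformly random subset without repeats, which is exactly what an exam generator needs.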
7. A Multi-Beam Forward Link Precoding Algorithm for Dirty Paper Zero-Forcing
- Author
-
Yumeng Zhang, Lei Bian, Gan Wang, and Jinliang Dong
- Subjects
Spot beam, Computer science, Co-channel interference, Dirty paper coding, Interference (wave propagation), Algorithm, Noise (electronics), Multiplexing, Precoding, Throughput (business) - Abstract
This article considers the severe co-channel interference caused by the spectrally efficient combination of multiple beams with full-frequency multiplexing. After establishing a forward link model that accounts for severe rainfall attenuation in higher frequency bands such as Ka, the classic low-complexity zero-forcing precoding algorithm is improved, and a regularized zero-forcing precoding algorithm that considers the influence of system noise is proposed. Based on the dirty paper coding idea, a low-complexity dirty paper regularized zero-forcing precoding algorithm is then proposed, which maximizes the signal-to-interference-plus-noise ratio and thereby increases throughput.
- Published
- 2021
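Regularized zero-forcing, as described above, computes a precoder of the form W = Hᵀ(HHᵀ + αI)⁻¹, so that with α = 0 the effective channel HW reduces to the identity (plain zero-forcing), while α > 0 trades residual interference for noise robustness. A minimal real-valued 2×2 sketch (real systems use complex channel gains, and the satellite-specific modeling is omitted):

```python
def rzf_precoder(H, alpha):
    """Regularized zero-forcing for a 2x2 real channel: W = H^T (H H^T + alpha*I)^{-1}."""
    (h11, h12), (h21, h22) = H
    # G = H H^T + alpha * I  (2x2, symmetric)
    g11 = h11 * h11 + h12 * h12 + alpha
    g12 = h11 * h21 + h12 * h22
    g22 = h21 * h21 + h22 * h22 + alpha
    det = g11 * g22 - g12 * g12
    inv = ((g22 / det, -g12 / det), (-g12 / det, g11 / det))
    # W[r][c] = sum_k H^T[r][k] * inv[k][c] = sum_k H[k][r] * inv[k][c]
    return tuple(tuple(sum(H[k][r] * inv[k][c] for k in range(2)) for c in range(2))
                 for r in range(2))

H = ((1.0, 0.2), (0.3, 1.0))
W = rzf_precoder(H, alpha=0.0)   # alpha = 0 reduces to plain zero-forcing
prod = [[sum(H[i][k] * W[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(prod)   # approximately the 2x2 identity: interference fully pre-cancelled
```

Choosing α proportional to the noise variance is the usual regularized (MMSE-like) variant the abstract refers to.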
8. Channel Capacity Analysis for Dirty Paper Coding With the Binary Codeword and Interference
- Author
-
Yongbiao Xie and Zhengguang Xu
- Subjects
Computer science, Code word, Watermark, Channel capacity, Computer Science Applications, Modeling and Simulation, Dirty paper coding, Electrical and Electronic Engineering, Encoder, Digital watermarking, Algorithm, Phase-shift keying - Abstract
Dirty paper coding is an interference pre-cancellation method for interference known at the transmitter and serves as a basic building block in digital watermarking systems. In this letter, we investigate the dirty paper model in the simplest digital communication system, where both the codeword and the interference are binary. For watermark embedding, we derive the relevant coding, the constant coding, and the symmetric relevant coding when the encoder focuses on the binary codeword and interference. The channel capacity is analyzed and the optimal parameter is discussed for each case.
- Published
- 2018
9. Error diffusion algorithm based on neighborhood gray information in electrowetting electronic paper research
- Author
-
Guo Tai-liang, Lin Zhi-xian, Tang Biao, Lin Shan-ling, and Zeng Su-yun
- Subjects
Error diffusion, Computer science, Signal Processing, Electrowetting, Electronic paper, Instrumentation, Algorithm, Electronic, Optical and Magnetic Materials - Published
- 2019
10. A Method of Analyzing Data Linearly Plotted on 2D Hybrid Scale Graph Paper
- Author
-
Shigeru Kumazawa
- Subjects
Scale (ratio), Computer science, Graph paper, Algorithm - Published
- 2019
11. A Method for Diagnosing the State of Insulation Paper in Traction Transformer Based on FDS Test and CS-DQ Algorithm
- Author
-
Liqing Zhang, Lijun Zhou, Dongyang Wang, Lei Guo, Yi Cui, and Lujia Wang
- Subjects
Computer science, Energy Engineering and Power Technology, Transportation, Field (computer science), Support vector machine, Set (abstract data type), Nonlinear system, Insulation system, Automotive Engineering, State (computer science), Electrical and Electronic Engineering, Cuckoo search, Algorithm, Reliability (statistics) - Abstract
Traction transformers are vital equipment in high-speed railways. Insulation status determines their safety and reliability, and the frequency-domain dielectric spectrum (FDS) test is one of the most effective methods for reflecting changes in insulation status. For field applications, the following problems should be addressed: 1) how to obtain the result for the paper insulation from the combined result of the oil-paper insulation system and 2) how to discriminate between aging and damp defects of insulation paper. In this article, the first problem was transformed into a nonlinear equation set, and a cuckoo search algorithm optimized by the differential evolution algorithm and the quadratic interpolation (QI) method (the CS-DQ algorithm) was proposed to solve it. Then, the insulation states were discriminated by establishing a multiclass least-squares support vector machine (LS-SVM) model, in which the CS-DQ algorithm was also used. Finally, a diagnostic approach for the insulation paper in the traction transformer was proposed. Laboratory results show that the pure result for the insulation paper can be obtained and that the insulation states can be discriminated effectively using the proposed approach. Meanwhile, the proposed CS-DQ algorithm performs better than the conventional CS algorithm. The results of field testing also verify the proposed approach.
- Published
- 2021
12. Beyond Dirty Paper Coding for Multi-Antenna Broadcast Channel with Partial CSIT: A Rate-Splitting Approach
- Author
-
Bruno Clerckx and Yijie Mao
- Subjects
Signal Processing (eess.SP), Computer science, Information Theory (cs.IT), Transmitter, Interference (wave propagation), Precoding, Noise (electronics), Transmission (telecommunications), Single antenna interference cancellation, Wireless, Dirty paper coding, Electrical and Electronic Engineering - Abstract
Imperfect Channel State Information at the Transmitter (CSIT) is inevitable in modern wireless communication networks and results in severe multi-user interference in the multi-antenna Broadcast Channel (BC). While the capacity of the multi-antenna (Gaussian) BC with perfect CSIT is known and achieved by Dirty Paper Coding (DPC), the capacity and the capacity-achieving strategy of the multi-antenna BC with imperfect CSIT remain unknown. Conventional approaches therefore rely on applying communication strategies designed for perfect CSIT to the imperfect CSIT setting. In this work, we break this conventional routine and make two major contributions. First, we show that linearly precoded Rate-Splitting (RS), relying on the split of messages into common and private parts, linear precoding at the transmitter, and successive interference cancellation at the receivers, can achieve larger rate regions than DPC in the multi-antenna BC with partial CSIT. Second, we propose a novel achievable scheme, denoted Dirty Paper Coded Rate-Splitting (DPCRS), that relies on RS to split the user messages into common and private parts and on DPC to encode the private parts. We show that the rate region achieved by DPCRS in the Multiple-Input Single-Output (MISO) BC with partial CSIT is enlarged beyond that of conventional DPC and that of linearly precoded RS. Benefiting from the capability of RS to partially decode interference and partially treat interference as noise, DPCRS is less sensitive to CSIT inaccuracies, network loads, and user deployments than DPC and other existing transmission strategies. The proposed DPCRS acts as a new benchmark and the best-known achievable strategy for the multi-antenna BC with partial CSIT. Accepted by IEEE Transactions on Communications.
- Published
- 2019
13. Three machine learning algorithms and their utility in exploring risk factors associated with primary cesarean section in low‐risk women: A methods paper
- Author
-
Jintong Hou and Rebecca R. S. Clark
- Subjects
Adult, Adolescent, Computer science, Oxytocin, Machine learning, Article, Terminology, Young Adult, Pregnancy, Risk Factors, Oxytocics, Medicine, Humans, Obesity, General Nursing, Cesarean Section, Regression, Random forest, Cross-Sectional Studies, Female, Artificial intelligence, Outcomes research - Abstract
Machine learning, a branch of artificial intelligence, is increasingly used in health research, including nursing and maternal outcomes research. Machine learning algorithms are complex and involve statistics and terminology that are not common in health research. The purpose of this methods paper is to describe three machine learning algorithms in detail and provide an example of their use in maternal outcomes research. The three algorithms, classification and regression trees, least absolute shrinkage and selection operator, and random forest, may be used to understand risk groups, select variables for a model, and rank variables’ contribution to an outcome, respectively. While machine learning has plenty to contribute to health research, it also has some drawbacks, and these are discussed as well. In order to provide an example of the different algorithms’ function, they were used on a completed cross-sectional study examining the association of oxytocin total dose exposure with primary cesarean section. The results of the algorithms are compared to what was done or found using more traditional methods.
- Published
- 2021
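Of the three algorithms named above, the least absolute shrinkage and selection operator (LASSO) is the easiest to show in miniature: its core is a soft-thresholding step that shrinks coefficients toward zero and can zero them out entirely, which is what drives variable selection. The toy numbers below are invented and unrelated to the study's clinical data:

```python
def soft_threshold(z, lam):
    """LASSO's shrinkage operator: pulls z toward zero, clipping small values to 0."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# univariate example: compare the ordinary least squares slope with the LASSO update
xs = [-1.0, 0.0, 1.0]
ys = [-2.1, 0.1, 1.9]
n = len(xs)
sxy = sum(x * y for x, y in zip(xs, ys))    # 4.0
sxx = sum(x * x for x in xs)                # 2.0
beta_ols = sxy / sxx                        # OLS slope: 2.0
beta_lasso = soft_threshold(sxy / n, 1 / 3) / (sxx / n)   # shrunk slope: 1.5
print(beta_ols, beta_lasso)
```

With a large enough penalty, `soft_threshold` returns exactly 0, dropping the variable from the model; repeating this update over all predictors is coordinate-descent LASSO.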
14. Development of a morphological color image processing algorithm for paper-based analytical devices
- Author
-
Tsuyoshi Minami, Cristina Malegori, Paolo Oliveri, and Vahid Hamedpour
- Subjects
Fabrication, Computer science, Mathematical morphology, Signal, Square (algebra), Image (mathematics), Automatic signal readout, Software, Digital image processing, Materials Chemistry, Isoniazid, Electrical and Electronic Engineering, Paper-based analytical devices, Instrumentation, Mathematical morphological image processing algorithm, Metals and Alloys, Condensed Matter Physics, Surfaces, Coatings and Films, Electronic, Optical and Magnetic Materials, Outlier, Algorithm - Abstract
Although the fabrication of colorimetric paper-based analytical devices (PADs) has drawn increasing attention recently, the signal readout method is still a crucial challenge on the way to practical exploitation. We herein introduce an integration of digital image processing with a model PAD to ease and improve the signal readout procedure. The colorimetric detection mechanism of the PAD relies on yellowish silver nanoparticles induced in situ via the interaction of silver ions, poly(vinyl alcohol), and ammonia with isoniazid. The observed color value is related to the concentration of isoniazid. The proposed algorithm is based on mathematical morphology recognition and minimizes the errors arising from manual area selection. In addition, it allows the recognition of both circle and square shapes in 96-well plate and A4-size array designs. Since this algorithm automatically provides blank-corrected numerical matrices and image profiles of the red, green, and blue channels, further investigations such as outlier classification, construction of regression/prediction models, and calculation of detection limits can be easily performed. Comparison of the signal readout results of the developed algorithm with the ImageJ software demonstrates significant improvements in analysis speed, reproducibility, accuracy, and color values. Therefore, the proposed algorithm is promising as a robust technique for practical applications.
- Published
- 2020
15. Practical Dirty Paper Coding Schemes Using One Error Correction Code With Syndrome
- Author
-
Kyunghoon Kwon, Jun Heo, and Taehyun Kim
- Subjects
Theoretical computer science, Computer science, Concatenated error correction code, Variable-length code, Coding gain, Computer Science Applications, Transmission (telecommunications), Single antenna interference cancellation, Modeling and Simulation, Bit error rate, Dirty paper coding, Constant-weight code, Electrical and Electronic Engineering, Error detection and correction, Algorithm, Decoding methods - Abstract
Dirty paper coding (DPC) offers an information-theoretic result for pre-cancellation of interference known at the transmitter. In this letter, we propose practical DPC schemes that use only one error correction code. Our designs focus on practical use from the viewpoint of complexity. For fair comparison with previous schemes, we compute the complexity of the proposed schemes by the number of operations used. Simulation results show that, compared to previous DPC schemes, the proposed schemes require lower transmission power to maintain a bit error rate within $10^{-5}$.
- Published
- 2017
16. Short Paper: Identification by Recursive Least Squares with RMO Applied to a Robotic Manipulator
- Author
-
Rui Araújo, Laurinda L. N. dos Reis, Josias G. Batista, Antonio B. S. Junior, Killdary Aguiar de Santana, José Eduardo Ribeiro Honório Júnior, Darielson A. Souza, and José Nunes da Silva Júnior
- Subjects
Recursive least squares filter, Mean squared error, Computer science, Covariance matrix, Short paper, Robot manipulator, Kalman filter, Identification (information), Metaheuristic, Algorithm - Abstract
This work presents a hybrid identification scheme combining recursive least squares with a metaheuristic called radial movement optimization (RMO), applied to the joints of a cylindrical robotic manipulator. The main contribution of the research is considering the covariance matrix together with RMO. Finally, a comparative analysis is made with some classic methods. The identification results were evaluated using the RMSE, and a covariance matrix was also generated from the manipulator identifications.
- Published
- 2019
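Recursive least squares itself (without the RMO metaheuristic) can be sketched for a two-parameter linear model y = a·u + b; the manipulator-joint data are replaced here by synthetic noiseless samples:

```python
def rls_identify(samples, lam=1.0, delta=1000.0):
    """Recursive least squares for y = a*u + b, tracking a 2x2 covariance matrix P.

    lam is the forgetting factor (1.0 = no forgetting); delta sets the initial
    covariance (large = weak prior on the parameters).
    """
    theta = [0.0, 0.0]                        # parameter estimate [a, b]
    P = [[delta, 0.0], [0.0, delta]]          # covariance matrix
    for u, y in samples:
        phi = [u, 1.0]                        # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]              # gain vector
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])   # prediction error
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # P <- (P - K * phi^T * P) / lam  (phi^T P equals Pphi since P is symmetric)
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta

# synthetic data from the "true" system y = 2*u + 0.5
data = [(u, 2.0 * u + 0.5) for u in [0.0, 1.0, 2.0, 3.0, 4.0]]
a, b = rls_identify(data)
print(a, b)   # close to 2.0 and 0.5
```

The matrix P here is the covariance matrix the abstract mentions; combining its tuning with a metaheuristic such as RMO is the paper's contribution, not shown in this sketch.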
17. An algorithm for automatic assignment of reviewers to papers
- Author
-
Yordan Kalmukov
- Subjects
Matching (statistics), Computer science, General Social Sciences, Workload, Subject (documents), Library and Information Sciences, Digital library, Limited resources, Time complexity, Algorithm, Computer Science Applications, Task (project management), Domain (software engineering) - Abstract
The assignment of reviewers to papers is one of the most important and challenging tasks in organizing scientific conferences and in the peer review process in general. It is a typical example of an optimization task in which limited resources (reviewers) must be assigned to a number of consumers (papers), so that every paper is evaluated by reviewers who are highly competent in its subject domain while the reviewers' workload remains balanced. This article suggests a heuristic algorithm for the automatic assignment of reviewers to papers that achieves accuracy of about 98–99% relative to the maximum-weighted matching (the most accurate) algorithms, but has a better time complexity of Θ(n²). The algorithm provides a uniform distribution of papers to reviewers (i.e. all reviewers evaluate roughly the same number of papers); guarantees that if there is at least one reviewer competent to evaluate a paper, then the paper will have a reviewer assigned to it; and allows iterative and interactive execution that could further increase accuracy and enables subsequent reassignments. Both accuracy and time complexity are experimentally confirmed by a large number of experiments and proper statistical analyses. Although it was initially designed to assign reviewers to papers, the algorithm is universal and could be successfully implemented in other subject domains where assignment or matching is necessary, for example assigning resources to consumers or tasks to persons, matching men and women on dating web sites, and grouping documents in digital libraries.
- Published
- 2020
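A greatly simplified greedy sketch of the assignment idea (not Kalmukov's actual heuristic): process papers with the fewest competent reviewers first, then pick the most competent reviewers, breaking ties by current workload. The competence scores below are invented:

```python
def assign_reviewers(competence, reviewers_per_paper=2):
    """Greedy assignment balancing competence and workload.

    competence[paper][reviewer] is a score in [0, 1]; 0 means 'not competent'.
    """
    load = {r: 0 for scores in competence.values() for r in scores}
    assignment = {}
    # papers with the fewest competent reviewers are the most constrained: do them first
    for paper in sorted(competence,
                        key=lambda p: sum(1 for s in competence[p].values() if s > 0)):
        scores = competence[paper]
        # prefer high competence; among equals, prefer the less-loaded reviewer
        ranked = sorted((r for r in scores if scores[r] > 0),
                        key=lambda r: (-scores[r], load[r]))
        chosen = ranked[:reviewers_per_paper]
        for r in chosen:
            load[r] += 1
        assignment[paper] = chosen
    return assignment

competence = {
    "paper1": {"alice": 0.9, "bob": 0.4, "carol": 0.0},
    "paper2": {"alice": 0.8, "bob": 0.0, "carol": 0.7},
    "paper3": {"alice": 0.5, "bob": 0.6, "carol": 0.9},
}
print(assign_reviewers(competence))
```

This preserves the guarantee highlighted in the abstract: any paper with at least one competent reviewer gets an assignment. The real algorithm additionally enforces the uniform per-reviewer quota.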
18. [Paper] Efficient Decoding Method for Holographic Data Storage Combining Convolutional Neural Network and Spatially Coupled Low-Density Parity-Check Code
- Author
-
Ishii Norihiko, Yutaro Katano, Nobuhiro Kinoshita, Teruyoshi Nobukawa, and Tetsuhiko Muroi
- Subjects
Computer science, Signal Processing, Media Technology, Code (cryptography), Low-density parity-check code, Holographic data storage, Computer Graphics and Computer-Aided Design, Convolutional neural network, Algorithm, Decoding methods - Published
- 2021
19. Reproducibility Companion Paper
- Author
-
Haoliang Wang, Dingquan Li, Ming Jiang, Vajira Thambawita, and Tingting Jiang
- Subjects
Experimental Replication, Image quality, Computer science, Norm (mathematics), Convergence, Normalization, Software package, Algorithm - Abstract
This companion paper supports the experimental replication of the paper "Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment" presented at ACM Multimedia 2020. We provide the software package for replicating the implementation of the "Norm-in-Norm" loss and the corresponding "LinearityIQA" model used in the original paper. This paper contains the guidelines to reproduce all the experimental results of the original paper.
- Published
- 2021
20. Performance of two multiscale texture algorithms in classifying silver gelatine paper via k-nearest neighbors
- Author
-
Herwig Wendt, Andrew G. Klein, Kirsten R. Basinet, Patrice Abry, Paul Messier, and Stéphane Roux
- Subjects
Photographic paper, Computer science, Context (language use), Artificial intelligence, k-nearest neighbors algorithm, Visualization, Data set, Texture similarity, Statistical classification, Similarity (network science), Crowd-sourcing, Algorithm, Multiscale analysis - Abstract
As part of the Historic Photographic Paper Classification Challenge, a multitude of approaches to quantifying paper texture similarity have been developed. These approaches have yielded encouraging results when applied to very controlled datasets containing photomicrographs of familiar specimens. In this paper, we report on the k-nearest neighbors classification performance of two multiscale analysis-based texture similarity approaches when applied to a much larger reference collection of silver gelatin photographic papers. The clusters for this data set were derived from a visual sorting experiment conducted by art conservators and paper experts and later extended through crowd-sourcing. The results show that these texture similarity approaches, when combined with a simple k-nearest neighbors classification algorithm, yield workable performance with accuracy of up to 69%. We discuss this outcome in the context of the available data and the cross-validation procedure used, then provide suggestions for improvement.
- Published
- 2018
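The k-nearest neighbors step in the study above is the standard algorithm; a minimal sketch with invented 2-D texture features standing in for the multiscale descriptors:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest neighbors with Euclidean distance over feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]        # majority label among the k nearest

# hypothetical 2-D texture features (e.g. roughness, directionality) with labels
train = [((0.10, 0.20), "matte"),  ((0.15, 0.25), "matte"),  ((0.12, 0.18), "matte"),
         ((0.80, 0.90), "glossy"), ((0.85, 0.80), "glossy"), ((0.90, 0.95), "glossy")]
print(knn_classify(train, (0.2, 0.2)))   # nearest cluster: "matte"
```

In the paper, the distances come from the multiscale texture similarity measures rather than raw Euclidean distance, but the voting step is the same.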
21. Paper currency defect detection algorithm using quaternion uniform strength
- Author
-
Bangshu Xiong, Xiaolin Xu, and Shan Gai
- Subjects
Color difference, Computer science, Image (mathematics), Convolution, Artificial Intelligence, Currency, Key (cryptography), Range (statistics), Quaternion, Algorithm, Software - Abstract
In this paper, we propose a novel paper currency defect detection algorithm using quaternion uniform strength. We first build a paper currency image preprocessing framework that integrates intensity balancing, paper currency location, and geometric correction. We then propose a global-local paper currency image registration algorithm that moves key areas within a certain range, which effectively eliminates false differences. Finally, the quaternion uniform strength is calculated using a quaternion convolution edge detector, and the defect degree of the paper currency is determined using the quaternion uniform color difference. The proposed algorithm is tested on datasets of five currencies: CNY, USD, EUR, VND, and RUB. The experimental results demonstrate that the proposed algorithm yields better results than existing state-of-the-art paper currency defect detection techniques. A demo of the proposed algorithm will be made publicly available.
- Published
- 2020
22. Feature-based Online Segmentation Algorithm for Streaming Time Series (Short Paper)
- Author
-
Yang Xu, Qi Zhang, Peng Zhan, Wei Luo, Yupeng Hu, and Xueqing Li
- Subjects
Signal processing ,Series (mathematics) ,Computer science ,Dimensionality reduction ,Short paper ,02 engineering and technology ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Feature based ,020201 artificial intelligence & image processing ,Segmentation ,Time series ,Baseline (configuration management) ,Algorithm - Abstract
Over the last decade, huge volumes of streaming time series data have been continuously produced in diverse fields, including finance, signal processing, industry, and astronomy. Since time series data is high-dimensional, real-valued, and continuous, dimensionality reduction is an important preliminary step. In this paper, we propose a novel online segmentation algorithm based on the importance of turning points (TPs), which represents a time series as a set of continuous subsequences and maintains the corresponding local temporal features of the raw time series data. To demonstrate the advantage of our proposed algorithm, we provide extensive experimental results on different kinds of time series datasets, validating our algorithm and comparing it with other baseline online segmentation methods.
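The TP-based segmentation idea can be sketched as follows. This is a generic illustration, not the authors' algorithm; in particular, the "importance" measure used here (the smaller vertical distance from a local extremum to its two neighbors) is an assumption:

```python
# Sketch of turning-point (TP) based time series segmentation.
def turning_points(series, min_importance=0.0):
    """Indices of local extrema whose 'importance' (the smaller vertical
    distance to a neighboring value) meets a threshold."""
    tps = []
    for i in range(1, len(series) - 1):
        prev, cur, nxt = series[i - 1], series[i], series[i + 1]
        is_extremum = (cur > prev and cur > nxt) or (cur < prev and cur < nxt)
        importance = min(abs(cur - prev), abs(cur - nxt))
        if is_extremum and importance >= min_importance:
            tps.append(i)
    return tps

def segment(series, min_importance=0.0):
    """Split the series into contiguous subsequences at turning points."""
    cuts = [0] + turning_points(series, min_importance) + [len(series)]
    return [series[cuts[k]:cuts[k + 1]] for k in range(len(cuts) - 1)]

series = [1, 3, 2, 4, 8, 7, 5, 6]
print(turning_points(series))  # [1, 2, 4, 6]
```

Raising `min_importance` drops minor extrema and yields a coarser segmentation, which is the dimensionality-reduction effect the abstract refers to.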
- Published
- 2019
23. Short Paper: An Empirical Analysis of Blockchain Forks in Bitcoin
- Author
-
Till Neudecker and Hannes Hartenstein
- Subjects
050101 languages & linguistics ,Blockchain ,Computer science ,05 social sciences ,Short paper ,DATA processing & computer science ,Process (computing) ,02 engineering and technology ,Propagation delay ,Order (exchange) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Fork (file system) ,Latency (engineering) ,ddc:004 ,Algorithm ,Block (data storage) - Abstract
Temporary blockchain forks are part of the regular consensus process in permissionless blockchains such as Bitcoin. As forks can be caused by numerous factors such as latency and miner behavior, their analysis provides insights into these factors, which are otherwise unknown. In this paper we provide an empirical analysis of the announcement and propagation of blocks that led to forks of the Bitcoin blockchain. By analyzing the time differences in the publication of competing blocks, we show that the block propagation delay between miners can be of a similar order as the block propagation delay of the average Bitcoin peer. Furthermore, we show that the probability of a block becoming part of the main chain increases roughly linearly with the time by which it was published before the competing block. Additionally, we show that the observed frequency of short block intervals between two consecutive blocks mined by the same miner after a fork is conspicuously large. While selfish mining can be a cause of this observation, other causes are also possible. Finally, we show that not only the time differences of the publication of competing blocks but also their propagation speeds vary greatly.
- Published
- 2019
24. (Short Paper) A Faster Constant-Time Algorithm of CSIDH Keeping Two Points
- Author
-
Tsuyoshi Takagi, Tsutomu Yamazaki, Hiroshi Onuki, and Yusuke Aikawa
- Subjects
Post-quantum cryptography ,Computer science ,business.industry ,Short paper ,Zero (complex analysis) ,Cryptography ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Elliptic curve ,010201 computation theory & mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Torsion (algebra) ,020201 artificial intelligence & image processing ,business ,Constant (mathematics) ,Algorithm - Abstract
At ASIACRYPT 2018, Castryck, Lange, Martindale, Panny and Renes proposed CSIDH, a key-exchange protocol based on isogenies between elliptic curves and a candidate for post-quantum cryptography. However, the implementation by Castryck et al. is not constant-time; specifically, part of the secret key could be recovered by side-channel attacks. Recently, Meyer, Campos, and Reith proposed a constant-time implementation of CSIDH by introducing dummy isogenies and taking secret exponents only from intervals of non-negative integers. These non-negative intervals make the calculation cost of their implementation of CSIDH twice that of the worst case of the standard (variable-time) implementation. In this paper, we propose a more efficient constant-time algorithm that takes secret exponents from intervals symmetric about zero. To use these intervals, we need to keep two torsion points on an elliptic curve and perform calculations on both points. We implemented our algorithm by extending the C implementation of Meyer et al. (originally from Castryck et al.). Our implementation runs in 152.8 million clock cycles, which is about 29.03% faster than that of Meyer et al.
- Published
- 2019
25. Reconfigurable Intelligent Surface-Aided MISO Systems with Statistical CSI: Channel Estimation, Analysis and Optimization : (Invited Paper)
- Author
-
Kezhi Wang, Maged Elkashlan, Kangda Zhi, Hong Ren, and Cunhua Pan
- Subjects
Minimum mean square error ,Mean squared error ,Computer science ,Rician fading ,Estimator ,Overhead (computing) ,Maximal-ratio combining ,Transmitter power output ,Algorithm ,Computer Science::Information Theory ,Communication channel - Abstract
This paper investigates reconfigurable intelligent surface (RIS)-aided multiple-input single-output (MISO) systems with imperfect channel state information (CSI), where the RIS-related channels are modeled by Rician fading. Considering the overhead and complexity in practical systems, we employ low-complexity maximum ratio combining (MRC) at the base station (BS) and configure the phase shifts of the RIS based on long-term statistical CSI. Specifically, we first estimate the overall channel matrix with the linear minimum mean square error (LMMSE) estimator and evaluate the mean square error (MSE) and normalized MSE (NMSE). Then, with the estimated channel, we derive closed-form expressions for the ergodic rate. The derived expressions show that with Rician RIS-related channels, the rate remains non-zero when the transmit power is scaled down proportionally to $1/M$ or $1/N^2$, where $M$ and $N$ are the numbers of antennas and reflecting elements, respectively. However, if all the RIS-related channels are purely Rayleigh, the transmit power of each user can only be scaled down proportionally to $1/\sqrt{M}$ or $1/N$. Finally, numerical results verify the promising benefits of the RIS to traditional MISO systems.
- Published
- 2021
26. Tensor-based reliability analysis of complex static fault trees: Regular paper
- Author
-
Daniel Szekeres, István Majzik, and Kristóf Marussy
- Subjects
Fault tree analysis ,Computer science ,Binary decision diagram ,Mean time to first failure ,Linear system ,State space ,Tensor ,System of linear equations ,Algorithm ,State space enumeration - Abstract
Fault Tree Analysis is widely used in the reliability evaluation of critical systems, such as railway and automotive systems, power grids, and nuclear power plants. While there are efficient algorithms for calculating the probability of failure in static fault trees, Mean Time to First Failure (MTFF) evaluation remains challenging due to state space explosion. Recently, structural and symmetry reduction methods were proposed to counteract this phenomenon. However, systems with a large number of different and highly interconnected components preclude the use of reduction techniques; their MTFF analysis requires the solution of a system of linear equations whose size is exponential in the number of components in the system. In this paper, we propose a solution leveraging Tensor Trains as a compressed vector and matrix representation. We build upon Binary Decision Diagram-based techniques to avoid explicit state space enumeration and use linear equation solvers developed specifically for Tensor Trains to efficiently solve the arising linear systems. As a result, our novel approach complements the existing reduction-based techniques and makes it possible to analyze some previously intractable models. We evaluate our approach on an industrial case study adapted from a railway system and on other openly available benchmark models.
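For context, the baseline computation that MTFF analysis builds on, the failure probability of a static fault tree with independent basic events, can be sketched in a few lines; the tensor train machinery itself is beyond a short example, and the tree below is a made-up toy:

```python
import math

# Minimal static fault tree evaluation with independent basic events.
# Node forms: ('basic', p) | ('and', [children]) | ('or', [children])
def failure_probability(node):
    kind = node[0]
    if kind == 'basic':
        return node[1]
    child_probs = [failure_probability(c) for c in node[1]]
    if kind == 'and':  # the gate fails only if all children fail
        return math.prod(child_probs)
    if kind == 'or':   # the gate fails if any child fails
        return 1.0 - math.prod(1.0 - p for p in child_probs)
    raise ValueError(f"unknown gate: {kind}")

tree = ('or', [('and', [('basic', 0.1), ('basic', 0.2)]), ('basic', 0.05)])
print(failure_probability(tree))  # 1 - (1 - 0.02) * (1 - 0.05) ≈ 0.069
```

MTFF, in contrast, requires reasoning over the full state space of component up/down combinations, which is why the compressed representations discussed in the abstract are needed.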
- Published
- 2021
27. Reduced-Order Zero-Forcing Beamforming vs Optimal Beamforming and Dirty Paper Coding and Massive MIMO Analysis
- Author
-
Christo Kurisummoottil Thomas and Dirk Slock
- Subjects
Beamforming ,Minimum mean square error ,060102 archaeology ,Computer science ,Matched filter ,MIMO ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,06 humanities and the arts ,02 engineering and technology ,Interference (wave propagation) ,Signal-to-noise ratio ,Channel state information ,0202 electrical engineering, electronic engineering, information engineering ,0601 history and archaeology ,Dirty paper coding ,Algorithm ,Computer Science::Information Theory - Abstract
Optimal linear transmit beamformers in multi-antenna multi-user systems are of the Minimum Mean Squared Error (MMSE) type (dual uplink MMSE receivers). MMSE designs make an optimal compromise between noise enhancement and interference suppression, reducing to matched filters at low SNR and to zero-forcing at high SNR. We consider a realistic scenario of user channels of varying attenuation and constrain the beamformers to either zero-force or ignore each interference term. This leads to a reduced-order zero-forcing (RO-ZF) design in which the number of interference sources being zero-forced increases with SNR. We apply a simple large-system analysis (applicable to Massive MIMO) to determine the asymptotic performance of RO-ZF designs, determine the optimal ZF orders, and compare to optimal and ZF linear designs and to Dirty Paper Coding (DPC). RO-ZF designs lead to variable reductions of computational complexity and channel state information (CSI) requirements (especially in future multi-cell extensions), both important considerations in Massive MIMO systems.
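A minimal NumPy sketch of conventional zero-forcing precoding, the high-SNR limit described above, may help fix ideas; an RO-ZF design would instead null only a subset of the interference terms. The dimensions and channels here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 3  # transmit antennas, single-antenna users
# Random complex user channels (rows = users).
H = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))

# Full zero-forcing precoder: pseudo-inverse of the channel matrix.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0)  # unit-norm beamforming vectors

# Effective channel: diagonal entries carry the useful gains, while
# off-diagonal entries (inter-user interference) are nulled by construction.
G = H @ W
interference = np.abs(G - np.diag(np.diag(G))).max()
print(interference)  # numerically ~0
```

An MMSE design would regularize the inverse with a noise-dependent term (`H @ H.conj().T + sigma2 * np.eye(K)`), recovering the matched filter at low SNR and this ZF solution at high SNR.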
- Published
- 2018
28. DNA computer based algorithm for recyclable waste paper segregation
- Author
-
Hassan Basri, Mohammad Osiur Rahman, Aini Hussain, Mahammad A. Hannan, and Edgar Scavino
- Subjects
Paper recycling ,Computer science ,DNA computing ,law ,Sorting ,Waste paper ,Algorithm ,Massively parallel ,Software ,law.invention - Abstract
A DNA computer based algorithm is developed and evaluated for recyclable waste paper segregation. The concepts of replication and massive parallelism operations are used. The matching stage consists of Copy, Extract, Detect, and Merge (Union) operations. A gel electrophoresis operation is used to identify the candidate paper object grade. Success rates are 92% for WP, 90% for ONP, and 93% for OCC with a template size of 5×5. This article explores the application of DNA computing in recyclable waste paper sorting. The primary challenge in paper recycling is to obtain raw materials with the highest purity. In recycling, waste papers are segregated according to their various grades, and these are subjected to different recycling processes. Highly sorted paper streams facilitate high quality end products, while saving on processing chemicals and energy. In the industry, different sensors are used in paper sorting systems, namely, ultrasonic, lignin, gloss, stiffness, infrared, mid-infrared, and color sensors. Different mechanical and optical paper sorting systems have been developed based on these sensors. However, due to inadequate throughput and some major drawbacks of mechanical paper sorting systems, the popularity of optical paper sorting systems has increased. Automated paper sorting systems offer significant advantages over manual systems in terms of human fatigue, throughput, speed, and accuracy. This research has two objectives: (1) to use a web camera as an image sensor for the vision system in lieu of the different sensors; and (2) to develop a new DNA computing algorithm based on template matching techniques for segregating recyclable waste papers according to paper grades. Using the concepts of replication and massive parallelism operations, the DNA computing algorithm can efficiently reduce the computational time of the template matching method, which is its main strength in actual inspections. The algorithm is implemented on a silicon-based computer to verify the success rate in paper grade identification.
- Published
- 2015
29. A 3D Cluster-Based Channel Model with Time-Space Consistency (Invited Paper)
- Author
-
Ziwei Huang and Xiang Cheng
- Subjects
Consistency (statistics) ,Computer science ,MIMO ,Cluster (physics) ,Time domain ,Solid modeling ,Visibility polygon ,Algorithm ,Communication channel ,Domain (software engineering) - Abstract
This paper proposes a novel three-dimensional (3D) cluster-based channel model for sixth-generation (6G) millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) channels. In the proposed model, a novel method based on the visibility region (VR) is developed, which successfully captures the smooth and consistent birth and death of clusters in both the time and space domains. Using the developed method, the space-time non-stationarity and time-space consistency of channels can be modeled for the first time. Based on the proposed model, important channel statistical properties are derived and thoroughly investigated. Simulation results demonstrate that the smooth birth and death of clusters, time-space consistency, and space-time non-stationarity are modeled sufficiently.
- Published
- 2021
30. SPAA'21 Panel Paper: Architecture-Friendly Algorithms versus Algorithm-Friendly Architectures
- Author
-
Guy E. Blelloch, William J. Dally, Uzi Vishkin, Katherine Yelick, and Margaret Martonosi
- Subjects
Computer science ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Parallel algorithm ,02 engineering and technology ,Architecture ,Algorithm ,Panel discussion - Abstract
This paper provides preliminary statements from the panelists ahead of a panel discussion at the ACM SPAA 2021 conference on the topic: architecture-friendly algorithms versus algorithm-friendly architectures.
- Published
- 2021
31. Assessing Robustness of Hyperdimensional Computing Against Errors in Associative Memory : (Invited Paper)
- Author
-
Xun Jiao, Ruixuan Wang, Abbas Rahimi, Jeff Jun Zhang, and Sizhe Zhang
- Subjects
Reliability theory ,Dimension (vector space) ,Robustness (computer science) ,Computer science ,Computation ,Content-addressable memory ,Representation (mathematics) ,Algorithm ,Randomness ,Associative property - Abstract
Brain-inspired hyperdimensional computing (HDC) is an emerging computational paradigm that has achieved success in various domains. HDC mimics brain cognition and leverages hyperdimensional vectors with fully distributed holographic representation and (pseudo)randomness. Compared to traditional machine learning methods, HDC offers several critical advantages, including smaller model size, lower computation cost, and one-shot learning capability, making it a promising candidate for low-power platforms. Despite the growing popularity of HDC, the robustness of HDC models has not been systematically explored. This paper presents a study on the robustness of HDC to errors in associative memory, the key component storing the class representations in HDC. We perform extensive error injection experiments on the associative memory in a number of HDC models (and datasets), sweeping the error rates and varying HDC configurations (i.e., dimension and data width). Empirically, we observe that HDC is considerably robust to errors in the associative memory, opening up opportunities for further optimizations. Further, results show that HDC robustness varies significantly with different HDC configurations such as data width. Moreover, we explore a low-cost error masking mechanism in the associative memory to enhance its robustness.
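The error injection experiment can be mimicked with a toy HDC sketch in pure Python, using bipolar hypervectors and random class prototypes as the associative memory; the dimensionality, class count, and error rates below are arbitrary assumptions, not the paper's configurations:

```python
import random

random.seed(1)
D = 2000  # hypervector dimensionality

def rand_hv():
    """Random bipolar (+1/-1) hypervector."""
    return [random.choice((-1, 1)) for _ in range(D)]

def flip(hv, rate):
    """Inject errors: flip each component with probability `rate`."""
    return [-v if random.random() < rate else v for v in hv]

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b))

# Associative memory: one prototype hypervector per class.
memory = {label: rand_hv() for label in range(5)}

def accuracy(memory_error_rate, query_noise=0.1):
    """Classification accuracy when the associative memory is corrupted."""
    noisy = {lab: flip(hv, memory_error_rate) for lab, hv in memory.items()}
    hits = 0
    for label, proto in memory.items():
        query = flip(proto, query_noise)
        pred = max(noisy, key=lambda lab: similarity(noisy[lab], query))
        hits += pred == label
    return hits / len(memory)

for rate in (0.0, 0.2, 0.4):
    print(rate, accuracy(rate))  # accuracy stays high despite memory errors
```

Because the prototypes are near-orthogonal in high dimensions, even heavy component-wise corruption leaves a large similarity margin, which is the robustness effect the paper studies systematically.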
- Published
- 2021
32. On the Achievable Sum-rate of the RIS-aided MIMO Broadcast Channel : Invited Paper
- Author
-
Nemanja Stefan Perovic, Le-Nam Tran, Marco Di Renzo, and Mark F. Flanagan
- Subjects
[SPI]Engineering Sciences [physics] ,Radio propagation ,symbols.namesake ,Computational complexity theory ,Computer science ,Gaussian ,MIMO ,Duality (mathematics) ,symbols ,Decomposition method (constraint satisfaction) ,Covariance ,Algorithm ,Communication channel - Abstract
Reconfigurable intelligent surfaces (RISs) represent a new technology that can shape the radio wave propagation and thus offers a great variety of possible performance and implementation gains. Motivated by this, we investigate the achievable sum-rate optimization in a broadcast channel (BC) that is equipped with an RIS. We exploit the well-known duality between the Gaussian multiple-input multiple-output (MIMO) BC and multiple-access channel (MAC) to derive an alternating optimization (AO) algorithm which jointly optimizes the users’ covariance matrices and the RIS phase shifts in the dual MAC. The optimal users’ covariance matrices are obtained by a dual decomposition method in which each iteration is solved in closed-form. The optimal RIS phase shifts are also computed using a derived closed-form expression. Furthermore, we present a computational complexity analysis for the proposed AO algorithm. Simulation results show that the proposed AO algorithm can provide significant achievable sum-rate gains in a BC.
- Published
- 2021
33. To Jam or Not to Jam in Gaussian MIMO Wiretap Channels ?: Invited Paper
- Author
-
Sergey Loyka and Mahdi Khojastehnia
- Subjects
Cognitive radio ,Computer science ,Feasible region ,MIMO ,Artificial noise ,Jamming ,Fading ,Transmitter power output ,Precoding ,Algorithm ,Computer Science::Cryptography and Security ,Computer Science::Information Theory - Abstract
The popular technique of secure signaling over Gaussian MIMO wiretap channels, which makes use of artificial noise (AN) to increase secrecy rates, is considered. First, we briefly review the current state of affairs in this area and then provide new analytical results and insights on the usefulness of AN (jamming) to boost secrecy rates. The settings considered here go beyond the total transmit power constraint and include a number of additional constraints, such as interference (cognitive radio) and energy-harvesting constraints, for which the feasible set is not isotropic anymore so that the standard tools of the analysis cannot be used. By closely examining optimal precoding for the information-bearing and AN signals, we identify a number of cases where it is optimal to transmit no artificial noise at all (so that all the transmit power goes to the information-bearing signal). These cases include a fixed (no fading) MIMO WTC with a single eavesdropper (for which we give a direct matrix-theoretic proof using novel matrix inequalities), multi-eavesdropper (compound) degraded and reversely-degraded channels, and multi-eavesdropper channels where there exists a dominant eavesdropper (for which we give a precise definition) or where the eavesdroppers collude. To improve secrecy rates by using AN, one has to look elsewhere.
- Published
- 2021
34. Comparison of document similarity algorithms in extracting document keywords from an academic paper
- Author
-
M. Saef Ullah Miah, Junaida Sulaiman, Kamal Z. Zamli, Saiful Azad, and Rajan Jose
- Subjects
Information management ,Document similarity ,Similarity (network science) ,Computer science ,Calculation algorithm ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,A domain ,Scientific article ,Algorithm ,Domain (software engineering) - Abstract
This study validates a list of keywords, handcrafted from a scientific article by a domain expert with years of knowledge, against prominent document similarity algorithms. For this study, a list of handcrafted keywords generated by Electric Double Layer Capacitor (EDLC) experts is chosen, and documents relevant to EDLC are considered for the comparison. Different similarity calculation algorithms were then employed in different settings on the documents, such as using the whole texts of the documents, selecting the positive sentences of the documents, and generating similarity scores with automatically extracted keywords from the documents. The experimental results show that the machine-generated keywords are mostly similar to the list curated by the domain experts. This study also suggests the preferable algorithms for similarity calculation and automated key-phrase extraction for the EDLC domain.
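One of the comparison settings, scoring extracted keywords against an expert list, can be sketched with a simple bag-of-words cosine similarity; the keyword lists below are hypothetical examples, not the study's data, and the study's actual algorithms may differ:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical expert-curated vs. machine-extracted keyword bags.
expert = Counter("electric double layer capacitor electrode electrolyte".split())
machine = Counter("double layer capacitor electrode carbon".split())
print(round(cosine(expert, machine), 3))  # 4 shared terms out of 6 and 5
```

A high score here corresponds to the study's finding that machine-generated keywords largely agree with the expert list.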
- Published
- 2021
35. Image digitization of discontinuous and degraded electrocardiogram paper records using an entropy-based bit plane slicing algorithm
- Author
-
R. G. Karandikar and Rupali Patil
- Subjects
Paper ,Computer science ,Information Storage and Retrieval ,Graph paper ,02 engineering and technology ,030204 cardiovascular system & hematology ,Sensitivity and Specificity ,03 medical and health sciences ,Electrocardiography ,0302 clinical medicine ,Software ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,Humans ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,Digitization ,Ground truth ,business.industry ,Principle of maximum entropy ,Signal Processing, Computer-Assisted ,computer.file_format ,020201 artificial intelligence & image processing ,Image file formats ,Cardiology and Cardiovascular Medicine ,business ,Algorithm ,computer ,Algorithms ,Analog-Digital Conversion ,Bit plane - Abstract
Background: Electrocardiograms (ECGs) are routinely recorded and stored in a variety of paper or scanned image formats. Current ECG recording machines record the ECG on graph paper and also provide a digitized ECG signal along with an automated cardiovascular diagnosis (CVD). However, such machines cannot analyse preserved paper ECG records, as they require a digitized signal as input. It is therefore important to extract the ECG signal from these preserved paper ECG records using a digitization method. Various paper degradations adversely affect the digitization process. The purpose of this work is to perform image enhancement and digitization of degraded ECG images to extract a continuous ECG signal. Methods: In this paper, we propose an entropy-based bit plane slicing (EBPS) algorithm in which pre-processing is done using dominant color detection and local bit plane slicing. Maximum-entropy-based adaptive bit plane selection is applied to the pre-processed image. Discontinuous ECG correction (DECGC) is then performed to produce a continuous ECG signal. Results: The algorithm is tested on 836 different degraded paper ECG records obtained from various diagnostic centers. After analysis against 101 known ground truth ECG signals, the accuracy, sensitivity, specificity, and overall F-measure are 99.42%, 99.69%, 99.81%, and 99.26%, respectively. The RMS error and correlation between the extracted digitized signal and the ground truth for the 101 cases are 0.040 and 99.89%, respectively. Conclusions: The EBPS method is able to remove all types of degradation in paper ECG records and generate a uniform digitized signal. Instead of manual measurement and prediction from archived paper ECG records, automated prediction (using existing cardiovascular diagnosis software) is possible with the extracted digitized signal obtained using the proposed digitization method, which will also aid retrospective cardiovascular analysis.
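The maximum-entropy bit plane selection step can be illustrated in pure Python on a tiny hypothetical 8-bit image; this sketches only the plane selection idea, not the full EBPS pipeline (dominant color detection, local slicing, and DECGC are omitted):

```python
import math
from collections import Counter

def bit_plane(image, k):
    """Extract bit plane k (0 = least significant) as a binary image."""
    return [[(pix >> k) & 1 for pix in row] for row in image]

def entropy(plane):
    """Shannon entropy (bits) of a binary plane."""
    flat = [b for row in plane for b in row]
    counts = Counter(flat)
    n = len(flat)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def max_entropy_plane(image):
    """Index of the bit plane with maximum entropy."""
    return max(range(8), key=lambda k: entropy(bit_plane(image, k)))

# Tiny made-up 8-bit grayscale image.
image = [[12, 200, 45, 99], [180, 30, 77, 250]]
print(max_entropy_plane(image))  # 5 (tied with plane 6; max() keeps the first)
```

The intuition is that a maximally mixed (high-entropy) plane retains the most signal structure, which is why it is selected before trace extraction.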
- Published
- 2017
36. [Paper] Disparity Compensation Framework for Light-Field Coding Using Weighted Binary Patterns
- Author
-
Toshiaki Fujii, Kohei Isechi, and Keita Takahashi
- Subjects
Computer science ,Signal Processing ,Media Technology ,Binary number ,Computer Graphics and Computer-Aided Design ,Algorithm ,Light field ,Coding (social sciences) - Published
- 2020
37. [Paper] Lossless Color Image Coding Based on Probability Model Optimization Utilizing Example Search and Adaptive Prediction
- Author
-
Kyohei Unno, Sei Naito, Yusuke Kameda, Ichiro Matsuda, and Susumu Itoh
- Subjects
Lossless compression ,Lossless coding ,Computer science ,Signal Processing ,Media Technology ,Quasi-Newton method ,Color image coding ,Computer Graphics and Computer-Aided Design ,Algorithm ,Probability model - Published
- 2020
38. [Invited Paper] Digital RoHR with Lloyd-Max Quantization for Distributed Collaborative MIMO-OFDM Reception
- Author
-
Daisuke Umehara
- Subjects
Computer science ,Quantization (signal processing) ,Signal Processing ,Media Technology ,Spatial division multiplexing ,MIMO-OFDM ,Computer Graphics and Computer-Aided Design ,Algorithm - Published
- 2020
39. Analysis Paper on Different Algorithm, Dataset and Devices Used for Fundus Images
- Author
-
Babanpreet Singh and Priyanka Arora
- Subjects
business.industry ,Computer science ,Image quality ,Fundus image ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Vessel segmentation ,Fundus (eye) ,Segmentation ,Artificial intelligence ,Image extraction ,business ,Algorithm - Abstract
Image analysis with deep learning and machine learning has recently shown improved accuracy in medical image processing. This paper reviews the major machine learning methods used for the extraction of retinal fundus images. It describes how the hardware used for retinal segmentation has evolved over time and why such devices are useful. It also covers the algorithms used for image extraction, in which clear, noise-free images at different color scales are used to improve image quality. Finally, the paper reviews the different methods researchers and scientists have been trying in order to improve vessel segmentation in the future.
- Published
- 2021
40. Predicting Research Trend Based on Bibliometric Analysis and Paper Ranking Algorithm
- Author
-
Tu Q. H. Duong, Viet T. Nguyen, and Alla G. Kravets
- Subjects
Thematic map ,Bibliometric analysis ,Dataflow ,Computer science ,Algorithm ,Field (computer science) ,Ranking (information retrieval) - Abstract
One of the most essential questions in the computational evaluation of scientific publications is whether the immense collections of scientific papers hold significant indications about the dynamics involved in the development of science: signs that may help predict the growth and decline of scientific methods, ideas, and even fields. The research presented in this paper focuses on a general approach to analyzing and predicting the thematic evolution of a given research field by pointing out uptrending keywords. In particular, we propose a method dataflow and a paper ranking algorithm. The papers are ranked, the best 20 are selected, and meaningful keywords (concerning the research field/subfield, algorithms, methods, etc.) are extracted from them. After that, we compute a final score for each keyword by summing the scores of the papers containing it, and then group the results by year. We can therefore plot keyword scores over the years as time series and observe which keywords display an upward tendency. As a case study, the proposed approach is applied to analyze the thematic evolution of the Artificial Intelligence research field in the period 2005–2016 using the Web of Science database. Finally, the method is evaluated by checking the occurrences of predicted keywords among the true prominent keywords in the 2017–2019 timeframe, achieving a precision of 73.33%.
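The score aggregation described above (a keyword's score per year is the sum of the scores of the papers containing it) can be sketched as follows; the papers, scores, and keywords are made-up examples, and the ranking algorithm that produces the scores is not shown:

```python
from collections import defaultdict

# Each paper: (year, score from the ranking algorithm, extracted keywords).
papers = [
    (2005, 0.9, ["neural network", "svm"]),
    (2006, 0.7, ["neural network"]),
    (2006, 0.5, ["svm", "genetic algorithm"]),
    (2007, 0.8, ["neural network", "deep learning"]),
]

# keyword -> year -> summed score of papers containing that keyword.
trend = defaultdict(lambda: defaultdict(float))
for year, score, keywords in papers:
    for kw in keywords:
        trend[kw][year] += score

print(dict(trend["neural network"]))  # {2005: 0.9, 2006: 0.7, 2007: 0.8}
```

Plotting each keyword's yearly scores as a time series then reveals which keywords trend upward, as in the abstract.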
- Published
- 2021
41. Airsense-to-act: A concept paper for covid-19 countermeasures based on artificial intelligence algorithms and multi-source data processing
- Author
-
Gianluca Di Cosmo, Francesco Mauro, Fabrizio Passarini, Marco Carminati, Silvia Liberata Ullo, and Alessandro Sebastianelli
- Subjects
FOS: Computer and information sciences ,Decision support system ,Artificial intelligence ,Pollutants ,Sensor networks ,010504 meteorology & atmospheric sciences ,Computer science ,Process (engineering) ,Geography, Planning and Development ,lcsh:G1-922 ,Cloud computing ,010501 environmental sciences ,Safeguarding ,Pollutant ,01 natural sciences ,Microanalysi ,Computer Science - Computers and Society ,Microanalysis ,Satellite remote sensing ,Computers and Society (cs.CY) ,Earth and Planetary Sciences (miscellaneous) ,Computers in Earth Sciences ,Risk levels ,COVID-19 counteractions ,0105 earth and related environmental sciences ,Artificial neural network ,business.industry ,Mechanism (biology) ,COVID-19 counteraction ,Macroanalysis ,Long short term memory neural network ,Crowding ,Risk level ,Macroanalysi ,Air quality ,Environmental chemistry ,Anthropogenic activities ,business ,Anthropogenic activitie ,Wireless sensor network ,Algorithm ,lcsh:Geography (General) - Abstract
The aim of this concept paper is to describe a new tool to support institutions in the implementation of targeted countermeasures, based on quantitative and multi-scale elements, for the fight against and prevention of emergencies, such as the current COVID-19 pandemic. The tool is a cloud-based centralized system, a multi-user platform that relies on artificial intelligence (AI) algorithms to process heterogeneous data and produce as output a level of risk. The model includes a specific neural network that is first trained to learn the correlations between selected inputs related to the case of interest: environmental variables (chemical-physical, such as meteorological), human activity (such as traffic and crowding), level of pollution (in particular the concentration of particulate matter), and epidemiological variables related to the evolution of the contagion. The tool realized in the first phase of the project will later serve both as a decision support system (DSS) with predictive capacity, when fed with actual measured data, and as a simulation bench for tuning certain input values, to identify which of them lead to a decrease in the degree of risk. In this way, we aim to design different scenarios to compare different restrictive strategies and their actual expected benefits, to adopt measures sized to actual needs, adapted to the specific areas of analysis and useful for safeguarding human health, and to compare the economic and social impacts of the choices. Although this is a concept paper, some preliminary analyses are shown, and two case studies are presented whose results highlighted a correlation between NO2, mobility, and COVID-19 data. However, given the complexity of the virus diffusion mechanism, linked to air pollutants but also to many other factors, these preliminary studies confirm the need, on the one hand, to carry out more in-depth analyses and, on the other, to use AI algorithms to capture the hidden relationships among the huge amounts of data to be processed.
- Published
- 2021
42. Robust Neural Computation From Error Correcting Codes : An Invited Paper
- Author
-
Netanel Raviv
- Subjects
Models of neural computation ,Artificial neural network ,Robustness (computer science) ,Computer science ,Classifier (linguistics) ,Redundancy (engineering) ,Code (cryptography) ,Linear classifier ,Algorithm ,Decoding methods - Abstract
Neural networks (NNs) are a driving force behind the ongoing information revolution, with a broad spectrum of applications affecting most aspects of science and technology. Interest in robust neural computation under adversarial noise has increased lately, due to applications in sensitive tasks ranging from healthcare to finance and autonomous vehicles. This has ignited an influx of research on the topic, which for the most part focuses on obtaining robustness by altering the training process. In contrast, this paper surveys and develops a recently proposed novel approach that obtains robustness after training, by adding redundancy to the network and to the data in the form of error correcting codes. Since neural networks are essentially a concatenation of linear classifiers, we focus on obtaining robustness for a single linear classifier by coding the input and the classifier, and then apply the results to the network. We address two different types of adversaries: a worst-case one and an average-case one. For a worst-case adversary, which can choose the input to be attacked, we focus on binarized classifiers and show that the problem is related to the construction of certain linear codes with restricted weight patterns. As a result, it is shown that the parity code can obtain robustness against any 1-erasure in any binarized NN, and no decoding is required. For an average-case adversary, which is given a uniformly random input to attack, it is shown that the optimal weights for any classifier and any code are given by the Fourier coefficients of that classifier. We demonstrate the latter experimentally, exposing an improved accuracy-robustness tradeoff in neural classification of several popular datasets under state-of-the-art attacks.
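The parity-code result above rests on a simple fact: appending one parity coordinate to a bit vector lets any single erased coordinate be recovered uniquely. A minimal sketch of that erasure-recovery idea (for plain {0,1} vectors, not the paper's actual classifier coding) might look like:

```python
import numpy as np

def with_parity(x):
    """Append a parity bit so the XOR of all coordinates is 0 (inputs in {0,1})."""
    return np.append(x, x.sum() % 2)

def recover_erasure(x_coded, erased_idx):
    """Recover one erased coordinate: the parity constraint determines it
    as the XOR (sum mod 2) of all the surviving coordinates."""
    known = np.delete(x_coded, erased_idx)
    return known.sum() % 2

x = np.array([1, 0, 1, 1])
xc = with_parity(x)                     # parity bit makes the total XOR zero
assert recover_erasure(xc, 2) == x[2]   # any single coordinate is recoverable
```

No decoding step is needed beyond a mod-2 sum, which is why the paper's construction incurs essentially no inference cost.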
- Published
- 2021
43. Supplemental data for the paper 'low-complexity detection of small frequency deviations by the generalized LMPU test'
- Author
-
Eyal Levy and Tirza Routtenberg
- Subjects
General method ,Computer science ,Nuisance parameters ,Matlab code ,Notation ,lcsh:Computer applications to medicine. Medical informatics ,Low complexity ,Unbiased test ,03 medical and health sciences ,0302 clinical medicine ,Low-complexity methods ,Special case ,lcsh:Science (General) ,030304 developmental biology ,Data Article ,0303 health sciences ,Signal processing ,Multidisciplinary ,Frequency deviation ,Test (assessment) ,Locally most powerful ,lcsh:R858-859.7 ,Algorithm ,030217 neurology & neurosurgery ,lcsh:Q1-390 - Abstract
This document contains supplemental material for the paper [2]. The notation in this document is the same as in [2]. In particular, we first present the proof of Theorem 1 in [2]. This theorem expresses the locally most powerful unbiased (LMPU) test, which is a general method for local detection in the presence of known nuisance parameters. Second, we present the MATLAB code of the LMPU and generalized LMPU tests for the special case of detecting a small deviation in the frequency of sinusoidal signals, which arises in various signal processing applications.
- Published
- 2021
44. Selective path-sensitive interval analysis (WIP paper)
- Author
-
Bharti Chimdyalwar and Shrawan Kumar
- Subjects
Set (abstract data type) ,Computer science ,Path (graph theory) ,Join point ,Interval (graph theory) ,Point (geometry) ,Software system ,Algorithm ,Interval arithmetic ,Domain (software engineering) - Abstract
The K-limited path-sensitive interval domain is an abstract domain that has been proposed for precise and scalable analysis of large software systems. The domain maintains variables' value ranges as intervals along a configurable K subsets of paths at each program point, which implicitly provides co-relations among variables. When the number of paths at a join point exceeds K, the set of paths is partitioned into K subsets arbitrarily, which results in a loss of the precision required to verify program properties. To address this problem, we propose selective merging of paths: identify and merge paths in such a way that the computed intervals help verify more properties. Our selective path-sensitive approach is based on knowledge of the variables whose values influence the verification outcomes of program properties. We evaluated our approach on industrial automotive applications as well as academic benchmarks, and we show the benefit of selective path merging over arbitrary path selection by verifying 40% more properties.
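The core operation is the interval join: merging two path states takes the convex hull of their intervals, which is where precision is lost. A toy sketch of K-limiting with a hypothetical "smallest hull" merge heuristic (one interval per path state; the paper's actual selection is driven by property-relevant variables, not width) might be:

```python
def join(iv1, iv2):
    """Interval join (convex hull): the single interval covering both."""
    return (min(iv1[0], iv2[0]), max(iv1[1], iv2[1]))

def merge_to_k(paths, k):
    """Reduce a list of per-path intervals to at most k by repeatedly
    joining the pair whose hull is narrowest (a stand-in heuristic for
    the paper's selective, property-aware merging)."""
    paths = list(paths)
    while len(paths) > k:
        best = None
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                h = join(paths[i], paths[j])
                if best is None or h[1] - h[0] < best[0]:
                    best = (h[1] - h[0], i, j, h)
        _, i, j, h = best
        paths = [p for idx, p in enumerate(paths) if idx not in (i, j)] + [h]
    return paths
```

Joining nearby intervals first keeps distant value ranges separate, which is exactly the kind of co-relation an arbitrary partition would destroy.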
- Published
- 2021
45. Review Paper on 'Detailed Analysis of Bisection Method and Algorithm for Solving Electrical Circuits'
- Author
-
Vishal Vaman Mehtre
- Subjects
Computer science ,law ,Electrical network ,Bisection method ,Algorithm ,law.invention - Published
- 2019
46. Intrusion Detection System Classification Using Different Machine Learning Algorithms on KDD-99 and NSL-KDD Datasets - A Review Paper
- Author
-
Munther Abualkibash and Ravipati Rama Devi
- Subjects
Computer science ,business.industry ,020206 networking & telecommunications ,02 engineering and technology ,Intrusion detection system ,Machine learning ,computer.software_genre ,Term (time) ,Constant false alarm rate ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Anomaly detection ,Artificial intelligence ,False alarm ,Detection rate ,business ,Algorithm ,computer - Abstract
An Intrusion Detection System (IDS) has been an effective way to achieve higher security by detecting malicious activities for the past couple of years. Anomaly detection is one type of intrusion detection. Current anomaly detection is often associated with high false alarm rates and only moderate accuracy and detection rates, because it is unable to detect all types of attacks correctly. An experiment is carried out to evaluate the performance of different machine learning algorithms using the KDD-99 Cup and NSL-KDD datasets. Results show which approach performs better in terms of accuracy and detection rate with a reasonable false alarm rate.
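The three metrics named above have standard definitions over the confusion counts. A small sketch (labels assumed as 1 = attack, 0 = normal; the label names are ours, not the datasets'):

```python
import numpy as np

def ids_metrics(y_true, y_pred):
    """Accuracy, detection rate (recall on attacks), and false alarm rate
    (fraction of normal traffic incorrectly flagged), with 1 = attack."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "detection_rate": tp / (tp + fn),
        "false_alarm_rate": fp / (fp + tn),
    }
```

Reporting all three together matters here because, as the abstract notes, high accuracy alone can mask a high false alarm rate on imbalanced traffic.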
- Published
- 2019
47. Automated Test Paper Generation Using Utility Based Agent and Shuffling Algorithm
- Author
-
Ali Hussein Saleh Zolait and Sahar A. El-Rahman
- Subjects
Shuffling ,Computer science ,business.industry ,Process (engineering) ,Knowledge level ,05 social sciences ,050301 education ,Time efficient ,Computer Science Applications ,Education ,Test (assessment) ,Cost savings ,Knowledge base ,0502 economics and business ,ComputingMilieux_COMPUTERSANDEDUCATION ,050211 marketing ,business ,0503 education ,Algorithm - Abstract
This article describes how, with the advent of computer-based technology, different aspects of the education system are moving from manual to automated systems. Testing is an essential part of the teaching process that helps academics classify the level of students and evaluate the outcomes of their teaching. The testing process requires a large amount of attention and professionalism. Automated Test Paper Generation is a system that applies a shuffling algorithm to design different sets of questions without repetition or duplication. It helps faculty develop and design exams with the particular level of difficulty required to evaluate students, using a utility-based agent. The system includes a knowledge base of many question types linked to a test engine, where faculty can specify the type and difficulty level of the exam; the system then assembles the exam and produces the output in electronic or paper-based form. Questions are picked randomly from the knowledge base. This automated system provides a cost-saving and time-efficient solution.
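The shuffling step described above, picking distinct random questions at a requested difficulty, can be sketched as follows (the question-bank schema and function name are our assumptions, not the paper's design):

```python
import random

def generate_paper(question_bank, difficulty, n_questions, seed=None):
    """Draw n distinct questions of the requested difficulty in random order.
    random.sample performs a partial Fisher-Yates shuffle, so no question
    can appear twice within one generated paper."""
    rng = random.Random(seed)
    pool = [q for q in question_bank if q["difficulty"] == difficulty]
    if len(pool) < n_questions:
        raise ValueError("not enough questions at this difficulty level")
    return rng.sample(pool, n_questions)
```

Sampling without replacement is what guarantees the "no repetition and duplication" property within a paper; generating several papers with different seeds yields the different question sets.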
- Published
- 2019
48. Model Checking Algorithms for Hyperproperties (Invited Paper)
- Author
-
Bernd Finkbeiner
- Subjects
Model checking ,050101 languages & linguistics ,Reduction (recursion theory) ,Computer science ,05 social sciences ,Büchi automaton ,02 engineering and technology ,Extension (predicate logic) ,Predicate (mathematical logic) ,Decidability ,Undecidable problem ,Path (graph theory) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Algorithm - Abstract
Hyperproperties generalize trace properties by expressing relations between multiple computations. Hyperproperties include policies from information-flow security, like observational determinism or non-interference, and many other system properties including promptness and knowledge. In this paper, we give an overview of the model checking problem for temporal hyperlogics. Our starting point is the model checking algorithm for HyperLTL, a reduction to Büchi automaton emptiness. This basic construction can be extended with propositional quantification, resulting in an algorithm for HyperQPTL. It can also be extended with branching time, resulting in an algorithm for HyperCTL*. However, it is not possible to have both extensions at the same time: the model checking problem of HyperQCTL* is undecidable. An attractive compromise is offered by MPL[E], i.e., monadic path logic extended with the equal-level predicate. The expressiveness of MPL[E] falls strictly between that of HyperCTL* and HyperQCTL*. MPL[E] subsumes both HyperCTL* and HyperKCTL*, the extension of HyperCTL* with the knowledge operator. We show that the model checking problem for MPL[E] is still decidable.
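What makes a hyperproperty different from a trace property is that it quantifies over pairs (or tuples) of executions. A minimal finite-trace sketch of observational determinism, one of the policies named above (trace representation and variable partition are our assumptions, far from the automata-based algorithms the paper surveys):

```python
from itertools import product

def observational_determinism(traces, low_in, low_out):
    """Check a two-trace hyperproperty on finite traces: every pair of
    executions that agrees on all low-security inputs must also agree on
    all low-security outputs. Each trace maps variable -> value sequence."""
    for t1, t2 in product(traces, repeat=2):
        same_low_in = all(t1[v] == t2[v] for v in low_in)
        same_low_out = all(t1[v] == t2[v] for v in low_out)
        if same_low_in and not same_low_out:
            return False
    return True
```

No single trace can violate this property on its own, which is exactly why ordinary (single-trace) model checking algorithms need the extensions the paper describes.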
- Published
- 2021
- Full Text
- View/download PDF
49. (Short Paper) Simple Matrix Signature Scheme
- Author
-
Yacheng Wang, Changze Yin, and Tsuyoshi Takagi
- Subjects
Public-key cryptography ,Matrix (mathematics) ,Multivariate statistics ,Post-quantum cryptography ,Computer science ,business.industry ,Cryptography ,business ,Algorithm ,Matrix multiplication ,Signature (logic) ,Multivariate cryptography ,Computer Science::Cryptography and Security - Abstract
Multivariate cryptography plays an important role in post-quantum cryptography. Many signature schemes, such as Rainbow, remain secure despite the development of several attempted attack algorithms. However, most multivariate signature schemes use relatively large public keys compared with those of other post-quantum signature schemes. In this paper, we present an approach for constructing a multivariate signature scheme based on matrix multiplication. At the same security level, our proposed signature scheme has smaller public key and signature sizes than the Rainbow signature scheme.
- Published
- 2021
50. CAD: an algorithm for citation-anchors detection in research papers
- Author
-
Riaz Ahmad and Muhammad Afzal
- Subjects
Measure (data warehouse) ,Computer science ,05 social sciences ,String (computer science) ,General Social Sciences ,CAD ,Library and Information Sciences ,050905 science studies ,Digital library ,Computer Science Applications ,Ranking (information retrieval) ,IS-41 ,Identification (information) ,0509 other social sciences ,050904 information & library sciences ,Citation ,Algorithm - Abstract
Citations are very important parameters and are used to make many important decisions, such as ranking researchers, institutions, and countries, and measuring the relationships between research papers. All of these require accurate counting of citations and of their occurrences (in-text citation counts) within the citing papers. Citation anchors refer to the citations made within the full text of the citing paper, for example '[1]', '(Afzal et al, 2015)', or '[Afzal, 2015]'. Identifying citation anchors in plain text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems in automatically identifying in-text citation frequencies, such as commonality in content, wrong allotment, mathematical ambiguities, and string variations. This paper proposes an algorithm, CAD, for the identification of citation anchors and their in-text citation frequency based on different rules. For a comprehensive analysis, datasets of research papers were prepared from both the Journal of Universal Computer Science (J.UCS) and CiteSeer digital libraries. In the experimental study, we conducted two experiments. In the first, the proposed approach is compared with the state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references, while the CiteSeer dataset consists of 52 research papers with 1850 references; the total dataset thus comprises 1252 citing documents and 17,850 references. The experiments showed that the CAD algorithm improved the F-score by 44% and 37% on the J.UCS and CiteSeer datasets, respectively, over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014); the average improvement is 41% across both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools CERMINE and GROBID.
According to our results, the proposed approach performs best with an F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).
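The anchor styles quoted in the abstract ('[1]', '(Afzal et al, 2015)') can be illustrated with two hypothetical regular-expression rules; CAD's actual rule set is far richer, handling string variations and the ambiguities the abstract lists:

```python
import re

# Two illustrative anchor patterns (our simplification, not CAD's rules):
NUMERIC = re.compile(r"\[\d+(?:\s*,\s*\d+)*\]")  # e.g. [1] or [3, 7]
AUTHOR_YEAR = re.compile(
    r"\([A-Z][A-Za-z'-]+(?:\s+et\s+al\.?)?,\s*\d{4}\)"  # e.g. (Afzal et al., 2015)
)

def find_anchors(text):
    """Return each citation anchor found in the text with its in-text
    citation frequency (how often that same anchor occurs)."""
    freq = {}
    for anchor in NUMERIC.findall(text) + AUTHOR_YEAR.findall(text):
        freq[anchor] = freq.get(anchor, 0) + 1
    return freq
```

Counting repeated occurrences of the same anchor, rather than just references, is what distinguishes in-text citation frequency from a plain reference count.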
- Published
- 2018