1,860 results
Search Results
2. FDA Releases Two Discussion Papers to Spur Conversation about Artificial Intelligence and Machine Learning in Drug Development and Manufacturing.
- Subjects
ARTIFICIAL intelligence ,MACHINE learning ,DRUG factories ,DRUG development ,RECOMBINANT proteins - Abstract
The regulatory uses are real: In 2021, more than 100 drug and biologic applications submitted to the FDA included AI/ML components. Keywords: Algorithms; Artificial Intelligence; Bioengineering; Biologics; Biotechnology; Cybersecurity; Cyborgs; Drug Development; Drug Manufacturing; Drugs and Therapies; Emerging Technologies; FDA; Genetic Engineering; Genetically-Engineered Proteins; Government Agencies Offices and Entities; Health and Medicine; Machine Learning; Office of the FDA Commissioner; Public Health; Technology; U.S. Food and Drug Administration. 2023 MAY 22 (NewsRx) -- By a News Reporter-Staff News Editor at Clinical Trials Week -- By: Patrizia Cavazzoni, M.D., Director of the Center for Drug Evaluation and Research. Artificial intelligence (AI) and machine learning (ML) are no longer futuristic concepts; they are now part of how we live and work. [Extracted from the article]
- Published
- 2023
3. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
-
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA ,POLYNOMIALS ,CIRCUIT complexity ,ALGORITHMS ,DIRECTED acyclic graphs ,LOGIC circuits - Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F₂), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits, using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. New tool detects fake, AI-produced scientific articles.
- Subjects
GENERATIVE artificial intelligence ,ALZHEIMER'S disease ,COMPUTATIONAL intelligence ,SYSTEMS theory ,CHATGPT - Abstract
A new machine-learning algorithm called xFakeSci has been developed by Ahmed Abdeen Hamed, a visiting research fellow at Binghamton University, to detect fake scientific articles produced by artificial intelligence. The algorithm can detect up to 94% of bogus papers, which is nearly twice as successful as other data-mining techniques. Hamed and collaborator Xindong Wu created 50 fake articles for each of three medical topics and compared them to real articles on the same topics. The algorithm analyzes the number of bigrams and how they are linked to other words and concepts in the text to identify patterns that distinguish fake articles from real ones. Hamed plans to expand the range of topics to further develop the algorithm and raise awareness about the issue of fake research papers. [Extracted from the article]
- Published
- 2024
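The abstract says xFakeSci counts bigrams and how words link to other words; the published feature set is not given here, but the idea can be sketched as follows (a hypothetical illustration, not Hamed's actual code):

```python
from collections import Counter
import re

def bigram_features(text):
    """Count word bigrams and, for each word, the number of distinct
    words it is followed by (a crude 'linkage' measure)."""
    words = re.findall(r"[a-z]+", text.lower())
    bigrams = Counter(zip(words, words[1:]))
    successors = {}
    for (a, b), _ in bigrams.items():
        successors.setdefault(a, set()).add(b)
    return bigrams, {w: len(s) for w, s in successors.items()}

sample = "the drug reduced pain and the drug improved sleep in the trial"
bigrams, links = bigram_features(sample)
print(bigrams[("the", "drug")])  # → 2
print(links["the"])              # → 2  ("the" is followed by "drug" and "trial")
```

A classifier would compare such counts between texts of known provenance; the thresholds and topics used by xFakeSci are not described in this record.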
5. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS ,SYSTEMS design ,CYBER physical systems ,COMPUTER scheduling ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
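The FIFO-versus-criticality contrast the abstract describes can be made concrete with a toy scheduler (an illustration of the general concept only, not the paper's framework):

```python
import heapq

# Frames arriving at a perception pipeline: (critical?, frame_id).
# One frame is served per time step; the critical frame arrives sixth.
frames = [(False, i) for i in range(5)] + [(True, 5)] + [(False, i) for i in range(6, 10)]

def service_order_fifo(frames):
    """FIFO ignores criticality: frames are served in arrival order."""
    return [fid for _, fid in frames]

def service_order_priority(frames):
    """Serve critical frames first, breaking ties by arrival order."""
    heap = [(0 if crit else 1, i, fid) for i, (crit, fid) in enumerate(frames)]
    heapq.heapify(heap)
    return [fid for _, _, fid in (heapq.heappop(heap) for _ in range(len(frames)))]

# Latency of the critical frame (id 5) = its position in the service order.
print(service_order_fifo(frames).index(5))      # → 5 (served sixth: priority inversion)
print(service_order_priority(frames).index(5))  # → 0 (served first)
```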
6. Findings in Fibromyalgia Reported from Federal University of Rio Grande do Norte [Spectrochemical approach combined with symptoms data to diagnose fibromyalgia through paper spray ionization mass spectrometry (PSI-MS) and multivariate...].
- Subjects
FIBROMYALGIA ,MASS spectrometry ,FISHER discriminant analysis ,SYMPTOMS ,DIAGNOSIS ,NEUROMUSCULAR diseases - Abstract
Algorithms, Diagnostics and Screening, Emerging Technologies, Fibromyalgia, Health and Medicine, Linear Discriminant Analysis, Machine Learning, Muscular Diseases and Conditions, Musculoskeletal Diseases and Conditions, Neuromuscular Diseases and Conditions, Rheumatic Diseases and Conditions Keywords: Algorithms; Diagnostics and Screening; Emerging Technologies; Fibromyalgia; Health and Medicine; Linear Discriminant Analysis; Machine Learning; Muscular Diseases and Conditions; Musculoskeletal Diseases and Conditions; Neuromuscular Diseases and Conditions; Rheumatic Diseases and Conditions EN Algorithms Diagnostics and Screening Emerging Technologies Fibromyalgia Health and Medicine Linear Discriminant Analysis Machine Learning Muscular Diseases and Conditions Musculoskeletal Diseases and Conditions Neuromuscular Diseases and Conditions Rheumatic Diseases and Conditions 158 158 1 04/10/23 20230413 NES 230413 2023 APR 13 (NewsRx) -- By a News Reporter-Staff News Editor at Hematology Week -- Research findings on fibromyalgia are discussed in a new report. [Extracted from the article]
- Published
- 2023
7. Improving Refugees' Integration with Online Resource Allocation: Technical Perspective.
- Author
-
Freund, Daniel
- Subjects
REFUGEE resettlement ,RESOURCE allocation ,ALGORITHMS ,EMPLOYMENT - Abstract
The article discusses a research paper that applies online resource allocation algorithms to refugee resettlement, aiming to improve refugees' integration into local communities and employment prospects. By utilizing concepts from algorithm design, such as balancing resource utilization and maintaining capacity for future refugees, the authors were able to enhance the employability metric for resettlement agencies like the Hebrew Immigrant Aid Society (HIAS) by approximately 10%. This research not only addresses critical societal issues but also highlights the potential of algorithms to positively impact real-world outcomes for vulnerable populations, encouraging collaboration between algorithm designers and practitioners on important societal problems.
- Published
- 2024
- Full Text
- View/download PDF
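The abstract's idea of balancing immediate match quality against remaining capacity can be sketched as a greedy online rule (hypothetical scores and penalty weight; the paper's actual algorithm is not given in this record):

```python
# Hypothetical employment scores: scores[case][affiliate], plus per-affiliate capacity.
scores = {"case1": [0.8, 0.5, 0.3], "case2": [0.7, 0.6, 0.2], "case3": [0.9, 0.4, 0.1]}
capacity = [1, 2, 1]

def assign_online(cases, scores, capacity, penalty=0.3):
    """Greedy assignment that discounts affiliates as they fill up,
    preserving capacity for future arrivals."""
    used = [0] * len(capacity)
    plan = {}
    for c in cases:
        best = max(
            (a for a in range(len(capacity)) if used[a] < capacity[a]),
            key=lambda a: scores[c][a] - penalty * used[a] / capacity[a],
        )
        used[best] += 1
        plan[c] = best
    return plan

plan = assign_online(["case1", "case2", "case3"], scores, capacity)
print(plan)  # → {'case1': 0, 'case2': 1, 'case3': 1}
```

Note how case3 is steered to affiliate 1 even though affiliate 0 scores higher: affiliate 0's capacity was already consumed by an earlier arrival.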
8. Opening the Door to SSD Algorithmics.
- Author
-
Sitaraman, Ramesh K.
- Subjects
ALGORITHMS ,SOLID state drives ,EVOLUTIONARY computation - Abstract
The article addresses the topic of algorithm design in the solid-state drive (SSD) model of computation, noting its limitations as well as its benefits. The author references an accompanying paper which addresses one shortcoming in particular -- the phenomenon of "write amplification" -- by proposing a more accurate theoretical model of SSDs that incorporates read, write and erase operations.
- Published
- 2023
- Full Text
- View/download PDF
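Write amplification, the phenomenon the accompanying paper models, is the ratio of physical writes to host writes; a toy counter illustrates it (a deliberately crude flash-translation-layer assumption, not the paper's SSD model):

```python
class ToySSD:
    """Toy flash model: an overwrite invalidates a page, and we assume
    garbage collection must copy the block's other pages before erasing,
    so physical writes exceed host writes."""
    PAGES_PER_BLOCK = 4

    def __init__(self):
        self.host_writes = 0
        self.physical_writes = 0
        self.live = {}  # logical page -> version

    def write(self, lpage):
        self.host_writes += 1
        self.physical_writes += 1
        if lpage in self.live:
            # Overwrite: charge the copy-back of the rest of the block.
            self.physical_writes += self.PAGES_PER_BLOCK - 1
        self.live[lpage] = self.live.get(lpage, 0) + 1

ssd = ToySSD()
for lp in [0, 1, 2, 3, 0, 0]:   # two overwrites of logical page 0
    ssd.write(lp)
print(ssd.physical_writes / ssd.host_writes)  # → 2.0 (write amplification > 1)
```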
9. Resolution of the Burrows-Wheeler Transform Conjecture.
- Author
-
Kempa, Dominik and Kociumaka, Tomasz
- Subjects
COMPUTER programming ,COMPUTERS in lexicography ,ALGORITHMS ,DATA structures ,COMPUTER science - Abstract
The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index [7]), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) from z, the size of the Lempel-Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for ε > 0). In this paper, we show that r = O(z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77; for example, it is possible to obtain the functionality of the suffix tree in O(z polylog n) space; (2) it shows that many text processing tasks can be solved in optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of a text and of its reverse. In addition, we provide an O(z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets [11]) and show how to efficiently construct them. Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
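The run count r bounded above is concrete enough to compute directly; a naive sketch (rotation sorting with a sentinel, fine only for short strings):

```python
def bwt_runs(text):
    """Build the Burrows-Wheeler transform by sorting all rotations
    (O(n^2 log n): illustration only) and count equal-letter runs r."""
    s = text + "$"                       # unique end-of-text sentinel, smallest symbol
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    last = "".join(rot[-1] for rot in rotations)   # last column = BWT
    r = 1 + sum(last[i] != last[i - 1] for i in range(1, len(last)))
    return last, r

last, r = bwt_runs("banana")
print(last)  # → annb$aa
print(r)     # → 5  (runs: a | nn | b | $ | aa)
```

Repetitive texts give small r relative to their length, which is exactly the quantity the paper relates to the LZ77 size z.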
10. Experiments as Research Validation: Have We Gone Too Far?
- Author
-
Ullman, Jeffrey D.
- Subjects
COMPUTER science research ,EXPERIMENTS ,ALGORITHMS ,COMPUTER scientists ,SCIENCE - Abstract
The article offers the author's comments on the role of experimental evidence in computer science research. According to the author, sorting algorithms were a major issue for computer scientists in the 1960s. He adds that experiments conducted on specialized data should not be accepted under any circumstances.
- Published
- 2015
- Full Text
- View/download PDF
11. Multi-Itinerary Optimization as Cloud Service.
- Author
-
Cristian, Alexandru, Marshall, Luke, Negrea, Mihai, Stoichescu, Flavius, Cao, Peiwei, and Menache, Ishai
- Subjects
CLOUD computing ,TRAFFIC flow ,ALGORITHMS ,TRAVELING salesman problem ,TRAVEL time (Traffic engineering) - Abstract
In this paper, we describe multi-itinerary optimization (MIO)--a novel Bing Maps service that automates the process of building itineraries for multiple agents while optimizing their routes to minimize travel time or distance. MIO can be used by organizations with a fleet of vehicles and drivers, mobile salesforce, or a team of personnel in the field, to maximize workforce efficiency. It supports a variety of constraints, such as service time windows, duration, priority, pickup and delivery dependencies, and vehicle capacity. MIO also considers traffic conditions between locations, resulting in algorithmic challenges at multiple levels (e.g., calculating time-dependent travel-time distance matrices at scale and scheduling services for multiple agents). To support an end-to-end cloud service with turnaround times of a few seconds, our algorithm design targets a sweet spot between accuracy and performance. Toward that end, we build a scalable approach based on the ALNS metaheuristic. Our experiments show that accounting for traffic significantly improves solution quality: MIO finds efficient routes that avoid late arrivals, whereas traffic-agnostic approaches result in a 15% increase in the combined travel time and the lateness of an arrival. Furthermore, our approach generates itineraries with substantially higher quality than a cutting-edge heuristic (LKH), with faster running times for large instances. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
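The destroy-and-repair loop at the heart of the ALNS metaheuristic can be sketched in miniature (a hypothetical, non-adaptive illustration on a single-agent open route; nothing here reflects MIO's production design):

```python
import random

def route_len(route, dist):
    """Length of an open route (no return to start)."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def alns_sketch(dist, iters=500, seed=0):
    """Minimal destroy-and-repair loop: remove a random stop, then
    greedily re-insert it at the cheapest position."""
    rng = random.Random(seed)
    n = len(dist)
    best = list(range(n))
    for _ in range(iters):
        cand = best[:]
        stop = cand.pop(rng.randrange(n))          # destroy
        cand = min((cand[:i] + [stop] + cand[i:] for i in range(n)),
                   key=lambda r: route_len(r, dist))  # repair
        if route_len(cand, dist) <= route_len(best, dist):
            best = cand
    return best

# Stops on a line at coordinates 0,5,1,4,2,3; visiting order 0..5 costs 15.
pts = [0, 5, 1, 4, 2, 3]
dist = [[abs(a - b) for b in pts] for a in pts]
best = alns_sketch(dist)
print(route_len(best, dist))  # no worse than the initial 15; typically the optimal 5
```

Real ALNS adds multiple destroy/repair operators with adaptively learned weights, plus the time-dependent travel-time matrices the abstract mentions.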
12. Feedforward FFT Hardware Architectures Based on Rotator Allocation.
- Author
-
Garrido, Mario, Huang, Shen-Jui, and Chen, Sau-Gee
- Subjects
FAST Fourier transforms ,DIGITAL signal processing ,ALGORITHMS ,DISCRETE Fourier transforms ,HARDWARE - Abstract
In this paper, we present new feedforward FFT hardware architectures based on rotator allocation. The rotator allocation approach consists of distributing the rotations of the FFT in such a way that the number of edges in the FFT that need rotators, and the complexity of the rotators, are reduced. Radix-2 and radix-2^k feedforward architectures based on rotator allocation are presented in this paper. Experimental results show that the proposed architectures reduce the hardware cost significantly with respect to previous FFT architectures. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
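In software terms, the rotators being allocated correspond to the twiddle-factor multiplications of the FFT; a minimal radix-2 decimation-in-time sketch shows where they occur (behavioral only, nothing hardware-specific):

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 DIT FFT (length must be a power of two).
    The twiddle multiplications below are what a hardware rotator implements."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft_radix2(x[0::2]), fft_radix2(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

# Unit impulse -> flat spectrum of ones.
X = fft_radix2([1, 0, 0, 0, 0, 0, 0, 0])
print(all(abs(v - 1) < 1e-12 for v in X))  # → True
```

Many of the twiddles are trivial (±1, ±j); rotator allocation exploits exactly this to cut the number and complexity of physical rotators.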
13. CORDIC-Based Architecture for Computing Nth Root and Its Implementation.
- Author
-
Luo, Yuanyong, Wang, Yuxuan, Sun, Huaqing, Zha, Yi, Wang, Zhongfeng, and Pan, Hongbing
- Subjects
DIGITAL computer simulation ,ALGORITHMS ,HARDWARE ,COMPUTER simulation ,DIGITAL signal processing - Abstract
This paper presents a COordinate Rotation DIgital Computer (CORDIC)-based architecture for the computation of the Nth root and proves its feasibility by hardware implementation. The proposed architecture performs the Nth root computation using only shift-add operations and enables an easy tradeoff between speed (or precision) and area. Technically, we divide the Nth root computation into three subtasks and map them onto three different classes of CORDIC accordingly. To overcome the drawback of the narrow convergence range of the CORDIC algorithm, we adopt several innovative methods that yield a much improved convergence range. Subsequently, a flexible architecture is developed in terms of convergence range and precision. The architecture is validated using MATLAB with extensive vector matching. Finally, using a pipelined structure with fixed-point input data, we implement example circuits of the proposed architecture with radicands ranging from zero to one million, achieving a mean relative error of approximately 10⁻⁷. The design is modeled using Verilog HDL and synthesized under the TSMC 40-nm CMOS technology. The report shows a maximum frequency of 2.083 GHz with a 197,421.00 μm² area. The area decreases to 169,689.98 μm² when the frequency is lowered to 1.00 GHz. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Event-Triggered Optimized Control for Nonlinear Delayed Stochastic Systems.
- Author
-
Zhang, Guoping and Zhu, Quanxin
- Subjects
STOCHASTIC systems ,ADAPTIVE fuzzy control ,FUZZY logic ,ALGORITHMS ,DYNAMIC programming ,FUZZY systems - Abstract
This paper is concerned with the problem of event-triggered optimized control for uncertain nonlinear Itô-type stochastic systems with time delay and unknown dynamics. Fuzzy logic systems are used to approximate two unknown nonlinear functions of the delayed state and the current state, respectively. An adaptive identifier is constructed to model the stochastic system, and the optimized control is designed using the identifier and adaptive dynamic programming (ADP) with an actor-critic architecture. Most existing works concentrate on ADP-based optimal control, which inevitably incurs computational complexity and requires a persistence-of-excitation (PE) assumption. In this paper, the ADP algorithm is obtained from the negative gradient of a simple positive function (equivalent to the HJB equation), so the proposed optimal control is simple and can relax the PE assumption. Moreover, an event-triggered control approach is proposed to reduce the computing burden and communication resources. Furthermore, we prove that the system states and FLS parameter errors are semi-globally uniformly ultimately bounded (SGUUB) in mean square via the adaptive identifier, the Lyapunov direct method, and the identifier-actor-critic ADP algorithm. Finally, the effectiveness of the proposed method is illustrated through two numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. FPGA Implementation of Reconfigurable CORDIC Algorithm and a Memristive Chaotic System With Transcendental Nonlinearities.
- Author
-
Mohamed, Sara M., Sayed, Wafaa S., Radwan, Ahmed G., and Said, Lobna A.
- Subjects
TRANSCENDENTAL functions ,MATHEMATICAL functions ,FIELD programmable gate arrays ,ALGORITHMS - Abstract
Coordinate Rotation Digital Computer (CORDIC) is a robust iterative algorithm that computes many transcendental mathematical functions. This paper proposes a reconfigurable CORDIC hardware design and FPGA realization that includes all possible configurations of the CORDIC algorithm. The proposed architecture is introduced in two approaches, multiplier-less and single-multiplier, each with its own advantages. Compared to recent related works, the proposed implementation surpasses them in the number of supported configurations. Additionally, it demonstrates efficient hardware utilization and suitability for potential applications. Furthermore, the proposed design is applied to a memristive chaotic system whose transcendental functions are computed using the proposed reconfigurable block. The memristive system design is realized on the Artix-7 FPGA board, yielding throughputs of 0.4483 and 0.3972 Gbit/s for the two approaches of the reconfigurable CORDIC. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
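The CORDIC iteration itself is simple; a software sketch of the circular rotation mode, one of the configurations such a reconfigurable design covers, computes sine and cosine from shift-add-style steps:

```python
import math

def cordic_sincos(theta, iters=32):
    """Rotation-mode circular CORDIC: converges to (cos θ, sin θ)
    for |θ| within about ±1.74 rad. In hardware the 2^-i scalings
    are shifts and the angle table is precomputed."""
    angles = [math.atan(2.0 ** -i) for i in range(iters)]
    K = 1.0                              # aggregate gain correction
    for i in range(iters):
        K /= math.sqrt(1 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0      # rotate toward residual angle z
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

c, s = cordic_sincos(0.5)
print(round(c, 6), round(s, 6))  # → 0.877583 0.479426
```

The other CORDIC classes (vectoring mode, hyperbolic and linear coordinate systems) follow the same skeleton with different update rules, which is what makes a unified reconfigurable datapath attractive.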
16. On Sampled Metrics for Item Recommendation.
- Author
-
Krichene, Walid and Rendle, Steffen
- Subjects
RECOMMENDER systems ,INFORMATION filtering systems ,INTERNET ,ALGORITHMS ,SOFTWARE measurement - Abstract
Recommender systems personalize content by recommending items to users. Item recommendation algorithms are evaluated by metrics that compare the positions of truly relevant items among the recommended items. To speed up the computation of metrics, recent work often uses sampled metrics, in which only a smaller set of random items together with the relevant items is ranked. This paper investigates such sampled metrics in more detail and shows that they are inconsistent with their exact counterparts, in the sense that they do not preserve relative statements, for example, "recommender A is better than B," not even in expectation. Moreover, the smaller the sample size, the less difference there is between metrics, and for very small sample sizes, all metrics collapse to the AUC metric. We show that it is possible to improve the quality of the sampled metrics by applying a correction obtained by minimizing different criteria. We conclude with an empirical evaluation of the naive sampled metrics and their corrected variants. To summarize, our work suggests that sampling should be avoided for metric calculation; however, if an experimental study needs to sample, the proposed corrections can improve the quality of the estimate. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
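The bias is easy to reproduce with a small simulation (hypothetical scores, not the paper's experiments): the exact recall@10 of an item at true rank 50 is zero, yet ranking it against only 20 sampled negatives reports a value near one.

```python
import random

def exact_recall_at_k(scores, rel, k=10):
    """Rank the relevant item against all items."""
    rank = sum(s > scores[rel] for s in scores)   # items scored above it
    return 1.0 if rank < k else 0.0

def sampled_recall_at_k(scores, rel, m, k=10, trials=2000, seed=0):
    """Rank the relevant item against only m sampled negatives
    (the shortcut the paper shows to be inconsistent)."""
    rng = random.Random(seed)
    negatives = [i for i in range(len(scores)) if i != rel]
    hits = 0
    for _ in range(trials):
        sample = rng.sample(negatives, m)
        hits += sum(scores[i] > scores[rel] for i in sample) < k
    return hits / trials

scores = [1000 - i for i in range(1000)]   # 1000 items, strictly decreasing scores
rel = 50                                   # 50 items score higher than the relevant one
exact = exact_recall_at_k(scores, rel)
approx = sampled_recall_at_k(scores, rel, m=20)
print(exact, approx)   # exact is 0.0; the sampled estimate is close to 1.0
```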
17. Analyzing the Impact of Memristor Variability on Crossbar Implementation of Regression Algorithms With Smart Weight Update Pulsing Techniques.
- Author
-
Afshari, Sahra, Musisi-Nkambwe, Mirembe, and Sanchez Esqueda, Ivan
- Subjects
ALGORITHMS ,MEMRISTORS ,COMPUTER architecture ,MATHEMATICAL models ,INTEGRATING circuits - Abstract
This paper presents an extensive study of linear and logistic regression algorithms implemented with 1T1R memristor crossbar arrays. Using a simulation platform that combines circuit-level simulations of 1T1R crossbars with physics-based models of RRAM (memristors), we elucidate the impact of device variability on algorithm accuracy, convergence rate, and precision. Moreover, a smart pulsing strategy is proposed for the practical implementation of synaptic weight updates that can accelerate training in real crossbar architectures. Stochastic multi-variable linear regression shows robustness to memristor variability in terms of prediction accuracy but reveals an impact on convergence rate and precision. Similarly, the stochastic logistic regression crossbar implementation reveals immunity to memristor variability, as determined by negligible effects on image classification accuracy, but indicates an impact on training performance manifested as a reduced convergence rate and degraded precision. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Duality-Free Methods for Stochastic Composition Optimization.
- Author
-
Liu, Liu, Liu, Ji, and Tao, Dacheng
- Subjects
REINFORCEMENT learning ,STATISTICAL learning ,MACHINE learning ,CONJUGATE gradient methods ,EMBEDDINGS (Mathematics) ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
In this paper, we consider composition optimization with two expected-value functions in the form (1/n) ∑_{i=1}^{n} F_i((1/m) ∑_{j=1}^{m} G_j(x)) + R(x), which formulates many important problems in statistical learning and machine learning, such as solving Bellman equations in reinforcement learning and nonlinear embedding. Full-gradient or classical stochastic gradient descent-based optimization algorithms are unsuitable or computationally expensive for this problem due to the inner expectation (1/m) ∑_{j=1}^{m} G_j(x). We propose a duality-free stochastic composition method that combines variance reduction methods to address the stochastic composition problem. We apply stochastic variance reduced gradient- and stochastic average gradient algorithm-based methods to estimate the inner function, and the duality-free method to estimate the outer function. We prove a linear convergence rate not only for the convex composition problem but also for the case in which the individual outer functions are nonconvex while the objective function is strongly convex. We also provide experimental results that show the effectiveness of our proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. APPLIED STATISTICS ALGORITHMS SECTION.
- Subjects
MATHEMATICS ,ALGORITHMS ,PAPER ,COMPUTER programming ,TECHNICAL specifications - Abstract
The article presents information on the publication of the book "Applied Statistics, Algorithms," relevant to statistics, by the Royal Statistical Society in cooperation with the Science Research Council's Working Party on Statistical Computing. A policy statement describing the editorial policy appears in "Applied Statistics," Vol. 1, No. 1 (1968). A supporting paper describing the expected contents of the external specification and making recommendations for the layout of algorithms and for programming strategy will appear in the following issue.
- Published
- 1968
20. Good Algorithms Make Good Neighbors: Many computer scientists doubted ad hoc methods would ever give way to a more general approach to finding nearest neighbors. They were wrong.
- Author
-
Klarreich, Erica
- Subjects
NEAREST neighbor analysis (Statistics) ,ALGORITHMS ,NORMED rings ,DATA analysis ,MEASUREMENT of distances ,GRAPHIC methods - Abstract
The article discusses the development of nearest-neighbor search algorithms for general normed spaces, referencing papers in the "Annual Symposium on Foundations of Computer Science" and the "Proceedings of the ACM Symposium on Theory of Computing." An overview of researchers' work on normed spaces is provided. The uses of data analysis and expander graphs, including for measuring the distance between data points, are discussed.
- Published
- 2019
- Full Text
- View/download PDF
21. Compensation Network Optimal Design Based on Evolutionary Algorithm for Inductive Power Transfer System.
- Author
-
Chen, Weiming, Lu, Weiguo, Iu, Herbert Ho-Ching, and Fernando, Tyrone
- Subjects
EVOLUTIONARY algorithms ,CURRENT fluctuations ,EVOLUTIONARY computation ,ALGORITHMS ,MATHEMATICAL models ,EXPERIMENTAL design - Abstract
Conventional design and optimization of the passive compensation network (PCN) for an inductive power transfer (IPT) system are based on specific topologies. This design method has two drawbacks: (i) the topology is mostly chosen by experience, and (ii) the design parameters are not multi-objective optimal. To address these issues, this paper proposes an optimal PCN design scheme based on an evolutionary algorithm (EA) to synchronously optimize the topology and parameters of the PCN for an IPT system. First, a unified mathematical model of the PCN is presented and derived via the transmission matrix. Then, according to the mathematical model, the multi-objective functions (such as output fluctuation and efficiency) as well as the constraints (such as load and coupling coefficient) for the optimal PCN design are established. The EA-based multi-objective optimal PCN design algorithm is then constructed. Six optimal results are obtained using the algorithm, and one optimized PCN with minimum output-current fluctuation and high efficiency is chosen to validate the effectiveness of the proposed design scheme experimentally. For the given IPT system with the optimized PCN, the maximum fluctuation of the output current is no more than 11% over a 200% load variation and about 77% coupling variation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
22. THE ALGORITHM SERIES: LIVE-EVENT SCALING.
- Author
-
Siglin, Tim
- Subjects
ALGORITHMS ,STREAMING video & television ,LOCAL area networks ,MULTICASTING (Computer networks) ,DIGITAL rights management - Abstract
The article discusses how the cancellation of concerts, festivals, and in-person gatherings in 2020 created plenty of pent-up demand for large-scale events heading into 2021. Topics include the Algorithm Series' look at the math and workflow decisioning used to fine-tune live video event delivery at scale, and live-streaming solutions' focus on unicast delivery, in which a single connection is made between the video streaming server and each end user's streaming player.
- Published
- 2020
23. A New Full Chaos Coupled Mapping Lattice and Its Application in Privacy Image Encryption.
- Author
-
Wang, Xingyuan and Liu, Pengbo
- Subjects
IMAGE encryption ,PRIVACY ,DYNAMICAL systems ,HEURISTIC algorithms ,CRYPTOGRAPHY ,ALGORITHMS - Abstract
Since chaotic cryptography has a long-standing problem of dynamical degradation, this paper presents a theoretical proof that chaotic systems can resist dynamical degradation. Based on this proof, a novel one-dimensional, two-parameter, wide-range mixed coupled map lattice model (TWMCML) is given. The evaluation of TWMCML shows that the system has the characteristics of strong chaos, high sensitivity, broader parameter ranges, and a wider chaotic range, which helps to enhance the security of chaotic sequences. Based on the excellent performance of TWMCML, it is applied to the newly proposed encryption algorithm. The algorithm realizes double protection of private images under the premise of ensuring efficiency and safety. First, the important information of the image is extracted by edge detection. Then the important area is scrambled by a three-dimensional bit-level coupled XOR method. Finally, the global image is more fully confused by a dynamic index diffusion formula. Simulation experiments verify the effectiveness of the algorithm for grayscale and color images. Security tests show that the application of TWMCML gives the encryption algorithm a better ability to withstand conventional attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
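The TWMCML map itself is not given in the abstract; a generic diffusively coupled logistic-map lattice (an assumption standing in for the paper's model) illustrates the extreme sensitivity such encryption schemes rely on:

```python
def cml_step(lattice, mu=3.99, eps=0.1):
    """One step of a diffusively coupled logistic-map lattice:
    each site applies the chaotic map, then mixes with its neighbors."""
    f = [mu * x * (1 - x) for x in lattice]
    n = len(lattice)
    return [(1 - eps) * f[i] + (eps / 2) * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

# Sensitivity to initial conditions: perturb one site by 1e-10.
a = [0.1, 0.2, 0.3, 0.4]
b = a[:]
b[0] += 1e-10
for _ in range(60):
    a, b = cml_step(a), cml_step(b)
d = max(abs(x - y) for x, y in zip(a, b))
print(d)  # macroscopic: the 1e-10 perturbation has been amplified enormously
```

This divergence is what makes a keystream derived from such a lattice effectively change completely under a one-bit key change.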
24. Toward Practical Code-Based Signature: Implementing Fast and Compact QC-LDGM Signature Scheme on Embedded Hardware.
- Author
-
Hu, Jingwei and Cheung, Ray C. C.
- Subjects
PUBLIC key cryptography ,CODING theory ,ALGORITHMS ,FIELD programmable gate arrays ,DATA encryption - Abstract
In this paper, fast and compact implementations of a code-based signature are presented. Existing designs either use enormous memory storage or suffer from slow signature issuance. A vastly optimized new design solving these problems is proposed by exploiting quasi-cyclic low-density generator matrix codes at different levels. In particular, this paper provides a new algorithmic enhancement of signature generation and gives detailed, optimized solutions for the critical steps of this algorithm. The design presented in this paper is the fastest implementation of code-based signatures in the open literature. It is shown, for instance, that our signature generation engine can generate approximately 60,000 signatures per second on a Xilinx Virtex-6 FPGA, requiring only 5992 slices and 60 memory blocks. In addition, a very compact implementation is also provided, producing 5438 signatures per second with only 18 memory blocks. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
25. Finite-/Fixed-Time Synchronization of Memristor Chaotic Systems and Image Encryption Application.
- Author
-
Wang, Leimin, Jiang, Shan, Ge, Ming-Feng, Hu, Cheng, and Hu, Junhao
- Subjects
SLIDING mode control ,CHAOS synchronization ,IMAGE encryption ,IMAGING systems ,LYAPUNOV stability ,ALGORITHMS - Abstract
In this paper, a unified framework is proposed to address the synchronization problem of memristor chaotic systems (MCSs) via the sliding-mode control method. By employing the presented unified framework, the finite-time and fixed-time synchronization of MCSs can be realized simultaneously. On the one hand, based on Lyapunov stability and sliding-mode control theories, the finite-/fixed-time synchronization results are obtained. It is proved that the trajectories of the error states reach the designed sliding-mode surface, remain on it, and approach the origin in a finite/fixed time. On the other hand, we develop an image encryption algorithm, together with its implementation process, to show an application of the synchronization. Finally, the theoretical results and the corresponding image encryption application are verified by numerical simulations and statistical performance measures. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
26. Fault Modeling and Efficient Testing of Memristor-Based Memory.
- Author
-
Liu, Peng, You, Zhiqiang, Wu, Jigang, Liu, Bosheng, Han, Yinhe, and Chakrabarty, Krishnendu
- Subjects
BRIDGE defects ,MEMORY testing ,ALGORITHMS ,DISCRETE Fourier transforms ,OPTICAL disks ,MEMRISTORS - Abstract
Memristor-based memory is one of the emerging memory technologies and a potential candidate to replace traditional memories. Efficient test solutions are required to ensure the quality and reliability of such products. In previous works, fault models cover open, short, and bridge defects as well as parametric variations introduced during fabrication. However, these fault models cannot describe bridge defects that drive the state of the faulty cell to an undefined state. In this paper, we analyze the different effects of bridge defects and aggregate their faulty behavior into new fault models: the undefined coupling fault and the dynamic undefined coupling fault. In addition, an enhanced March algorithm is designed to detect all the modeled faults. In a resistor crossbar with N memristors, the enhanced March algorithm requires 8N write and 7N read operations with negligible hardware overhead. To reduce the test time, a March RC algorithm is proposed based on read operations with new reference currents, which requires 4N+2 write and 6N read operations. Analytical results show that the proposed test algorithms detect all the modeled faults, outperforming all previous methods. Subsequently, a Design-for-Testability scheme is proposed to implement the March RC algorithm with little area overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
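Not from the paper — a minimal simulator showing how a March test walks a memory array and why test length scales as a constant times N. The element sequence below is the classic March C- (10N operations), shown for scale only; the entry's enhanced algorithm uses 8N writes plus 7N reads to additionally cover the new undefined coupling faults.

```python
# Hypothetical sketch: apply March elements to a fault-free memory model and
# tally operations. (March C- shown; NOT the paper's enhanced algorithm.)

def run_march(memory, elements):
    """Apply a March test. Each element is (direction, [ops]) where an op is
    ('w', bit) to write or ('r', expected_bit) to read-and-verify.
    Returns (reads, writes); raises AssertionError on a read mismatch."""
    reads = writes = 0
    for direction, ops in elements:
        order = range(len(memory)) if direction == 'up' \
            else range(len(memory) - 1, -1, -1)
        for addr in order:
            for op, bit in ops:
                if op == 'w':
                    memory[addr] = bit
                    writes += 1
                else:
                    assert memory[addr] == bit, f"fault detected at cell {addr}"
                    reads += 1
    return reads, writes

# Classic March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}
march_c_minus = [
    ('up',   [('w', 0)]),
    ('up',   [('r', 0), ('w', 1)]),
    ('up',   [('r', 1), ('w', 0)]),
    ('down', [('r', 0), ('w', 1)]),
    ('down', [('r', 1), ('w', 0)]),
    ('down', [('r', 0)]),
]

N = 16
reads, writes = run_march([0] * N, march_c_minus)
print(reads, writes)  # 5N reads, 5N writes = 10N total for March C-
```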
27. The Impact of Device Uniformity on Functionality of Analog Passively-Integrated Memristive Circuits.
- Author
-
Fahimi, Z., Mahmoodi, M. R., Klachko, M., Nili, H., and Strukov, D. B.
- Subjects
UNIFORMITY ,MEMRISTORS ,COMPUTER systems ,ANALOG circuits ,ALGORITHMS ,NEUROMORPHICS - Abstract
Passively integrated memristors are among the most promising candidates for designing high-speed, energy-efficient, and compact neuromorphic circuits. Despite these promising properties, experimental demonstrations of passive memristive crossbars have so far been limited to circuits with a few thousand devices, which stems from the strict uniformity requirements on the I-V characteristics of memristors. This paper expands upon this vital challenge and investigates how uniformity impacts the computing accuracy of analog memristive circuits, focusing on neuromorphic applications. Specifically, the paper explores the tradeoffs between computing accuracy, crossbar size, switching threshold variations, and target precision. Extensive simulations of matrix multipliers and of deep neural networks on the CIFAR-10 and ImageNet datasets have been carried out to evaluate the role of uniformity in the accuracy of computing systems. Further, we study three post-fabrication methods that increase the accuracy of nonuniform 0T1R neuromorphic circuits: hardware-aware training, an improved tuning algorithm, and switching threshold modification. Applying these techniques allows us to implement advanced deep neural networks with almost no accuracy drop using current state-of-the-art analog 0T1R technology. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
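Not from the paper — a minimal model of the computation an analog crossbar performs (a vector-matrix multiply where conductances encode weights and column currents sum by Kirchhoff's law), plus a toy multiplicative-variation term to illustrate why device uniformity limits accuracy. All values below are invented for illustration.

```python
# Hypothetical sketch: ideal vs. variation-perturbed crossbar multiply.
import numpy as np

rng = np.random.default_rng(0)
G_target = rng.uniform(1e-6, 1e-4, size=(4, 3))  # target conductances (S)
v = np.array([0.1, 0.2, 0.05, 0.15])             # input voltages (V)

i_ideal = v @ G_target                            # ideal column currents
# toy model: each device deviates multiplicatively by ~5% from its target
G_actual = G_target * (1 + 0.05 * rng.standard_normal(G_target.shape))
i_actual = v @ G_actual

rel_err = float(np.max(np.abs(i_actual - i_ideal) / np.abs(i_ideal)))
print(rel_err)  # worst-case relative current error caused by nonuniformity
```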
28. A Smoothed LASSO-Based DNN Sparsification Technique.
- Author
-
Koneru, Basava Naga Girish, Chandrachoodan, Nitin, and Vasudevan, Vinita
- Subjects
ERROR functions ,ALGORITHMS ,APPROXIMATION algorithms ,SMOOTHNESS of functions ,COST functions - Abstract
Deep Neural Networks (DNNs) are increasingly being used in a variety of applications. However, DNNs have huge computational and memory requirements. One way to reduce these requirements is to sparsify DNNs using smoothed LASSO (Least Absolute Shrinkage and Selection Operator) functions. In this paper, we show that, irrespective of the error profile, the sparsity values obtained using various smoothed LASSO functions are similar, provided the maximum error of these functions with respect to the LASSO function is the same. We also propose a layer-wise DNN pruning algorithm in which the layers are pruned based on their individually allocated accuracy-loss budgets, determined from estimates of the reduction in the number of multiply-accumulate operations (in convolutional layers) and weights (in fully connected layers). Further, structured LASSO variants in both convolutional and fully connected layers are explored within the smoothed LASSO framework, and the tradeoffs involved are discussed. The efficacy of the proposed algorithm in enhancing sparsity within the allowed degradation in DNN accuracy, together with results on the structured LASSO variants, is shown on the MNIST, SVHN, CIFAR-10, and Imagenette datasets and on larger networks such as ResNet-50 and MobileNet. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
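Not from the paper — a small illustration of the condition the entry highlights: two differentiable smoothings of |w| calibrated to the same maximum deviation eps from the LASSO function. The two smoothers below (a sqrt-type and a Huber-type) are common choices picked here for illustration; the paper's claim is that, once their maximum errors match, the resulting sparsity levels are similar.

```python
# Hypothetical sketch: two smoothed-|w| functions with equal max error.
import numpy as np

def sqrt_smooth(w, eps):
    """sqrt(w^2 + eps^2): deviation from |w| is largest at w = 0, where it equals eps."""
    return np.sqrt(w**2 + eps**2)

def huber_smooth(w, delta):
    """Huber-style smoothing: deviation from |w| is largest at w = 0, equal to delta/2."""
    a = np.abs(w)
    return np.where(a < delta, w**2 / (2 * delta) + delta / 2, a)

eps = 0.1
w = np.linspace(-2, 2, 100001)
err1 = float(np.max(sqrt_smooth(w, eps) - np.abs(w)))
err2 = float(np.max(huber_smooth(w, 2 * eps) - np.abs(w)))  # delta = 2*eps matches eps
print(round(err1, 6), round(err2, 6))  # both equal eps = 0.1
```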
29. Extended Polynomial Growth Transforms for Design and Training of Generalized Support Vector Machines.
- Author
-
Gangopadhyay, Ahana, Chatterjee, Oindrila, and Chakrabartty, Shantanu
- Subjects
SUPPORT vector machines ,MACHINE learning ,POLYNOMIALS ,NONLINEAR programming ,ALGORITHMS - Abstract
Growth transformations constitute a class of fixed-point multiplicative update algorithms that were originally proposed for optimizing polynomial and rational functions over a domain of probability measures. In this paper, we extend this framework to the domain of bounded real variables which can be applied towards optimizing the dual cost function of a generic support vector machine (SVM). The approach can, therefore, not only be used to train traditional soft-margin binary SVMs, one-class SVMs, and probabilistic SVMs but can also be used to design novel variants of SVMs with different types of convex and quasi-convex loss functions. In this paper, we propose an efficient training algorithm based on polynomial growth transforms, and compare and contrast the properties of different SVM variants using several synthetic and benchmark data sets. The preliminary experiments show that the proposed multiplicative update algorithm is more scalable and yields better convergence compared to standard quadratic and nonlinear programming solvers. While the formulation and the underlying algorithms have been validated in this paper only for SVM-based learning, the proposed approach is general and can be applied to a wide variety of optimization problems and statistical learning models. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Joint Sparsity and Order Optimization Based on ADMM With Non-Uniform Group Hard Thresholding.
- Author
-
Matsuoka, Ryo, Kyochi, Seisuke, Ono, Shunsuke, and Okuda, Masahiro
- Subjects
FINITE impulse response filters ,DIGITAL signal processing ,LEAST squares ,PROGRAM transformation ,MULTIPLIERS (Mathematical analysis) ,ALGORITHMS - Abstract
This paper proposes a new optimization framework for the joint optimization of sparsity and filter order (JOSFO) in FIR filter design. Since the cost function for JOSFO involves ℓ0 and non-uniform overlapped group ℓ0 norms, which are not convex, a global optimal solution is difficult to obtain. To find an approximate solution of the non-convex problem, existing approaches repeat the following steps: 1) approximate the cost function; 2) find candidates for zero coefficients by minimizing the approximated cost function; and 3) set them to zero. In contrast, this paper directly solves the optimization problem, without any approximation to the cost function, by using the alternating direction method of multipliers with the pseudo-proximity operators of the ℓ0 and non-uniform non-overlapped group ℓ0 norms. Experimental results show that filters designed by the proposed method have sparser coefficients and lower orders while satisfying filter specifications such as the error from a desired frequency response. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
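Not from the paper — the standard elementwise hard-thresholding step, which is the proximity operator of λ‖x‖0, plus a simple group variant that zeros a whole group at once (the paper generalizes this direction to non-uniform, pseudo-proximity group operators inside ADMM). The values and group partition below are illustrative.

```python
# Hypothetical sketch: scalar and group hard thresholding (l0 prox operators).
import numpy as np

def hard_threshold(x, lam):
    """prox of lam*||x||_0: an entry survives iff x_i^2/2 > lam,
    i.e. |x_i| > sqrt(2*lam); survivors are kept unchanged."""
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

def group_hard_threshold(x, groups, lam):
    """Group-l0 prox: zero an entire group when ||x_g||^2 / 2 <= lam.
    Groups may have different (non-uniform) sizes."""
    out = x.copy()
    for g in groups:
        if np.dot(x[g], x[g]) / 2.0 <= lam:
            out[g] = 0.0
    return out

x = np.array([0.05, -0.3, 1.2, -0.01, 0.5])
groups = [np.array([0, 3]), np.array([1, 2, 4])]
print(hard_threshold(x, 0.02))               # threshold sqrt(0.04)=0.2: small entries zeroed
print(group_hard_threshold(x, groups, 0.02))  # first group zeroed, second kept
```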
31. Tidy Towers: A tale of two towers.
- Author
-
Shasha, Dennis
- Subjects
ALGORITHMS ,BLOCKS (Toys) ,STRATEGY games - Abstract
The author presents a set of puzzles involving tidy towers, a colored block-stacking puzzle. First, several scenarios related to the puzzle are presented and solved. Finally, the author presents two puzzles, and information is provided for readers to submit their solutions.
- Published
- 2023
- Full Text
- View/download PDF
32. Constructing Higher-Dimensional Digital Chaotic Systems via Loop-State Contraction Algorithm.
- Author
-
Wang, Qianxue, Yu, Simin, Guyeux, Christophe, and Wang, Wei
- Subjects
PROBLEM solving ,ALGORITHMS ,TIME series analysis ,COMPACT spaces (Topology) - Abstract
This paper aims to refine and expand the theoretical and application framework of higher-dimensional digital chaotic systems (HDDCS). Topological mixing for HDDCS is first proved strictly. Topological mixing implies Devaney's definition of chaos in a compact space, but not vice versa; the proof of topological mixing therefore advances the theoretical study of HDDCS. Then, a general design method for constructing HDDCS via a loop-state contraction algorithm is given. The construction of the iterative function uncontrolled by random sequences (hereafter called the iterative function) is the starting point of this research. On this basis, the paper puts forward a general design method that solves the construction problem of HDDCS, and several examples illustrate the effectiveness and feasibility of this method. The adjacency matrix corresponding to the designed HDDCS is used to construct a chaotic Echo State Network (ESN) for predicting the Mackey-Glass time series. Compared with other ESNs, the chaotic ESN has better prediction performance and can accurately predict over a much longer time horizon. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. Efficient Row-Layered Decoder for Sparse Code Multiple Access.
- Author
-
Pang, Xu, Song, Wenqing, Shen, Yifei, You, Xiaohu, and Zhang, Chuan
- Subjects
BIT error rate ,MESSAGE passing (Computer science) ,ALGORITHMS ,WIRELESS communications ,TECHNOLOGY convergence ,BARBELLS ,VERY large scale circuit integration ,JACOBIAN matrices - Abstract
Sparse code multiple access (SCMA) is a promising technology for the development of wireless communication, supporting a large number of overloaded users with high spectral efficiency. However, conventional SCMA decoders suffer from very high implementation complexity. Changing the updating schedule is an effective way to reduce complexity: updated information immediately joins the message propagation of the current iteration, which accelerates decoding convergence. In this paper, a row-layered message passing algorithm (MPA) is proposed, which offers a good trade-off between hardware complexity and bit error rate (BER) performance. Simulation results show that the proposed decoder saves 66.7% of the computational complexity of the original MPA with similar BER performance. Pipelining and folding techniques are adopted in the VLSI implementation. Synthesis results in a 45-nm CMOS technology show that the proposed decoder achieves higher hardware efficiency and throughput at high clock frequency than existing decoders, reaching 1777.78 Mb/s with 1.112 mm² of area. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Privacy-Preserving Consensus for Multi-Agent Systems via Node Decomposition Strategy.
- Author
-
Wang, Yaqi, Lu, Jianquan, Zheng, Wei Xing, and Shi, Kaibo
- Subjects
MULTIAGENT systems ,DISTRIBUTED algorithms ,ALGORITHMS ,UNDIRECTED graphs ,COMPUTATIONAL complexity ,INFORMATION sharing - Abstract
This paper proposes two kinds of algorithms to achieve privacy-preserving consensus of multi-agent systems over undirected graphs via a node decomposition mechanism and a homomorphic cryptography technique. Based on its number of neighboring nodes |N_i|, every agent is decomposed into |N_i| subagents, which are connected as a chain graph. Note that every subagent connects to one and only one non-homologous subagent (generated by a different agent). Information interaction between non-homologous subagents is encrypted by a homomorphic cryptography algorithm, while homologous subagents exchange information directly. In this regard, the proposed node decomposition mechanism enhances the privacy of the initial values without increasing the computational complexity of encryption. The first privacy-preserving algorithm achieves accurate average consensus, meaning the agreement value of every subagent is consistent with the original average consensus value. The second algorithm studies the privacy-preserving scaled consensus problem without a priori knowledge of the underlying graph. Although the final convergence values of the subagents are not exactly the same, homologous subagents can compute the original group decision value from the product of the limit value and the agent's degree. Importantly, this algorithm also guarantees the privacy of the group decision value of the whole system. Besides, it is proved that the privacy of an initial value can be preserved if the agent has at least one neutral neighbor. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
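Not from the paper — the plain (non-private) average-consensus iteration that the entry's node-decomposition scheme wraps: each agent repeatedly moves toward its neighbors' values, and on a connected undirected graph all states converge to the average of the initial values. The 4-cycle graph and step size below are illustrative.

```python
# Hypothetical sketch: discrete-time average consensus via Laplacian dynamics.
import numpy as np

def consensus_step(x, adjacency, eps):
    """x_i += eps * sum_j a_ij (x_j - x_i), i.e. x <- x - eps * L x."""
    deg = adjacency.sum(axis=1)
    laplacian = np.diag(deg) - adjacency
    return x - eps * laplacian @ x

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)  # 4-cycle graph
x = np.array([4.0, 0.0, 2.0, 6.0])         # initial values, average = 3
for _ in range(100):
    x = consensus_step(x, A, 0.25)          # eps below 1/max-degree: stable
print(np.round(x, 6))                       # all agents near 3.0
```

The privacy issue the paper addresses is visible here: every agent broadcasts its raw state each step, so neighbors see the initial values directly unless the exchange is masked or encrypted.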
35. Dynamic Deadband Event-Triggered Strategy for Distributed Adaptive Consensus Control With Applications to Circuit Systems.
- Author
-
Xu, Yong, Sun, Jian, Pan, Ya-Jun, and Wu, Zheng-Guang
- Subjects
MULTIAGENT systems ,SELF-tuning controllers ,DATA transmission systems ,DATA reduction ,ADAPTIVE control systems ,ALGORITHMS - Abstract
This paper focuses on distributed consensus seeking in multi-agent systems (MASs) with discrete-time control updating and intermittent communication among agents. In contrast to existing linearly coupled protocols, a nonlinear coupled Zeno-free event-triggered controller is first proposed, and static and dynamic triggering mechanisms are then derived from it using the deadband control method. Next, a node-based nonlinear coupled adaptive event-triggered controller with online self-tuning of the time-varying coupling weight, together with its corresponding static and dynamic deadband-based event-triggered mechanisms, is designed. The adaptive event-triggered controller does not rely on any global information about the interaction structure and is implemented in a fully distributed fashion. In addition, the two dynamic proposals not only cover existing static strategies as special cases, but also show that the minimal inter-execution time of the dynamic mechanism is no smaller than that of the static one. Theoretical analysis shows that the proposed static and dynamic deadband-based event-triggered mechanisms ensure average consensus with Zeno-freeness while reducing communication and control updates. Finally, the proposed algorithms are applied to a circuit implementation to corroborate their practical merits and validity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. IEEE Transactions on Circuits and Systems—I: Regular Papers information for authors.
- Subjects
PUBLISHING ,MANUSCRIPT preparation (Authorship) ,COMPUTER-aided design ,SIGNAL processing ,ELECTRIC circuits ,ALGORITHMS - Abstract
Provides instructions and guidelines to prospective authors who wish to submit manuscripts. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
37. A New Fast Algorithm for Discrete Fractional Hadamard Transform.
- Author
-
Cariow, Aleksandr, Majorkowska-Mech, Dorota, Paplinski, Janusz P., and Cariowa, Galina
- Subjects
HADAMARD matrices ,DISCRETE Fourier transforms ,ALGORITHMS ,SYMMETRIC matrices ,MATRIX decomposition ,VECTOR data - Abstract
This paper proposes a new fast algorithm for calculating the discrete fractional Hadamard transform for data vectors whose size N is a power of two. A direct method for calculating the discrete fractional Hadamard transform requires O(N²) multiplications and the previous fastest algorithm requires O(N log₂ N), while in the proposed algorithm the number of multiplications is reduced to O(N). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
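Background, not the paper's method — the entry concerns the *fractional* Hadamard transform, but the O(N log₂ N) baseline it improves on is the classic fast Walsh-Hadamard butterfly, sketched here for N a power of two.

```python
# Sketch of the classic fast Walsh-Hadamard transform (unnormalized),
# reducing the plain Hadamard transform from O(N^2) to O(N log N) additions.
def fwht(x):
    """Return the Walsh-Hadamard transform of x (len(x) a power of two)."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):      # each block of width 2h
            for j in range(i, i + h):          # butterfly: (a, b) -> (a+b, a-b)
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

print(fwht([1, 0, 1, 0]))  # [2, 2, 0, 0]
```

Applying `fwht` twice returns N times the input, reflecting H·H = N·I for the unnormalized Hadamard matrix.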
38. Efficient Shift-Add Implementation of FIR Filters Using Variable Partition Hybrid Form Structures.
- Author
-
Ray, Dwaipayan, George, Nithin V., and Meher, Pramod Kumar
- Subjects
FINITE impulse response filters ,ENERGY consumption ,HARDWARE ,DIGITAL signal processing ,ALGORITHMS - Abstract
Single constant multiplication (SCM) and multiple constant multiplications (MCM) are among the most popular schemes used for low-complexity shift-add implementation of finite impulse response (FIR) filters. While SCM is used in the direct form realization of FIR filters, MCM is used in the transposed direct form structures. Very often, the hybrid form FIR filters where the sub-sections are implemented by fixed-size MCM blocks provide better area, time, and power efficiency than those of traditional MCM and SCM based implementations. To have an efficient hybrid form filter, in this paper, we have performed a detailed complexity analysis in terms of the hardware and time consumed by the hybrid form structures. We find that the existing hybrid form structures lead to an undesirable increase of complexity in the structural-adder block. Therefore, to have a more efficient implementation, a variable size partitioning approach is proposed in this paper. It is shown that the proposed approach consumes less area and provides nearly 11% reduction of critical path delay, 40% reduction of power consumption, 15% reduction of area-delay product, 52% reduction of energy-delay product, and 42% reduction of power-area product, on an average, over the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
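Not from the paper — a toy illustration of the shift-add idea behind SCM/MCM realizations the entry builds on: a multiplication by a fixed coefficient is replaced by shifts and additions, avoiding a general multiplier. The coefficient 23 and its decompositions are chosen here for illustration.

```python
# Hypothetical sketch: constant multiplication via shifts and adds.
def times_23(x):
    """23*x = 16x + 4x + 2x + x: three shifts and three adds."""
    return (x << 4) + (x << 2) + (x << 1) + x

def times_23_csd(x):
    """Canonical-signed-digit form 23 = 32 - 8 - 1: fewer operations."""
    return (x << 5) - (x << 3) - x

print(times_23(5), times_23_csd(5))  # 115 115
```

Choosing decompositions with the fewest adders across all coefficients, and sharing intermediate terms between them, is exactly the optimization space SCM/MCM (and the paper's hybrid partitioning) explores.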
39. Multidimensional Divide-and-Conquer.
- Author
-
Bentley, Jon Louis
- Subjects
COMPUTER algorithms ,DATA structures ,ALGORITHMS ,ELECTRONIC file management ,COMPUTER programming ,ELECTRONIC data processing - Abstract
Most results in the field of algorithm design are single algorithms that solve single problems. In this paper we discuss multidimensional divide-and-conquer, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems. We use this paradigm to give best-known solutions to such problems as the ECDF, maxima, range searching, closest pair, and all nearest neighbor problems. The contributions of the paper are on two levels. On the first level are the particular algorithms and data structures given by applying the paradigm. On the second level is the more novel contribution of this paper: a detailed study of an algorithmic paradigm that is specific enough to be described precisely yet general enough to solve a wide variety of problems. [ABSTRACT FROM AUTHOR]
- Published
- 1980
- Full Text
- View/download PDF
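Not from the paper — a compact instance of the paradigm the entry studies: the classic O(n log n) planar closest-pair algorithm, which splits on one dimension, recurses, and then resolves the cross-boundary cases in a strip scan. This sketch assumes distinct points.

```python
# Sketch: divide-and-conquer closest pair in the plane (distinct points).
import math

def closest_pair(points):
    """Return the minimum Euclidean distance among 2-D points."""
    px = sorted(points)                      # sorted by x
    py = sorted(points, key=lambda p: p[1])  # sorted by y

    def solve(px, py):
        n = len(px)
        if n <= 3:                           # base case: brute force
            return min(math.dist(a, b)
                       for i, a in enumerate(px) for b in px[i + 1:])
        mid = n // 2
        mid_x = px[mid][0]
        left_set = set(px[:mid])
        pyl = [p for p in py if p in left_set]       # keep y-order in halves
        pyr = [p for p in py if p not in left_set]
        d = min(solve(px[:mid], pyl), solve(px[mid:], pyr))
        # Strip scan: only points within d of the dividing line can beat d,
        # and each needs comparing to at most the next 7 points in y-order.
        strip = [p for p in py if abs(p[0] - mid_x) < d]
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:
                d = min(d, math.dist(a, b))
        return d

    return solve(px, py)

pts = [(0, 0), (3, 4), (1, 1), (7, 2), (1.5, 1.2)]
print(closest_pair(pts))  # distance between (1,1) and (1.5,1.2)
```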
40. Cooperative Stabilization of a Class of LTI Plants With Distributed Observers.
- Author
-
Liu, Kexin, Zhu, Henghui, and Lu, Jinhu
- Subjects
LINEAR systems ,ALGORITHMS ,COMPUTER simulation - Abstract
Over the last decades, the cooperative design of complex networked systems has received increasing attention in real-world engineering practice. Traditionally, each node in the network is assumed to receive the same signal. However, each agent often possesses a different measurement due to the observability or configuration of the system. To solve the stabilization problem in this case, we aim to establish a unified framework for the cooperative control of complex networks with distributed observers. In detail, departing from the traditional centralized design, this paper initiates a cooperative approach that uses only local information of the networked systems. For three kinds of representative networks, this paper establishes sufficient or necessary conditions for the existence of network parameters that guarantee stabilization of the LTI plants. Moreover, corresponding algorithms are developed to find suitable parameters for the networked observers. Last but not least, numerical simulations are provided to verify the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
41. Index Ranges for Matrix Calculi.
- Author
-
Bayer, R., Witzgall, C., and Gries, D.
- Subjects
MATHEMATICAL analysis ,ALGORITHMS ,DATA structures ,COMPUTER programming ,ELECTRONIC file management ,ELECTRONIC data processing - Abstract
The paper describes a scheme for symbolic manipulation of index expressions which arise as a by-product of the symbolic manipulation of expressions in the matrix calculi described by the authors in a previous paper. This scheme attempts program optimization by transforming the original algorithm rather than the machine code. The goal is to automatically generate code for handling the tedious address calculations necessitated by complicated data structures. The paper is therefore preoccupied with "indexing by position." The relationship of "indexing by name" and "indexing by position" is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
42. A Computer System for Transformational Grammar.
- Author
-
Bobrow, D. G. and Friedman, Joyce
- Subjects
COMPUTER systems ,GENERATIVE grammar ,IBM computers ,PROGRAMMING languages ,ALGORITHMS ,COMPUTER industry ,COMPUTATIONAL linguistics - Abstract
A comprehensive system for transformational grammar has been designed and implemented on the IBM 360/67 computer. The system deals with the transformational model of syntax, along the lines of Chomsky's Aspects of the Theory of Syntax. The major innovations include a full, formal description of the syntax of a transformational grammar, a directed random phrase structure generator, a lexical insertion algorithm, an extended definition of analysis, and simple problem-oriented programming language in which the algorithm for application of transformations can be expressed. In this paper we present the system as a whole, first discussing the general attitudes underlying the development of the system, then outlining the system and discussing its more important special features. References are given to papers which consider some particular aspect of the system in detail. [ABSTRACT FROM AUTHOR]
- Published
- 1969
43. acm forum.
- Subjects
LETTERS to the editor ,GRAPH theory ,ALGORITHMS - Abstract
Several letters to the editor and a reply are presented in response to the article "A Graph Formulation of a School Scheduling Algorithm," by A. Salazar and R. V. Oakford in the December 1974 issue of "Communications."
- Published
- 1975
- Full Text
- View/download PDF
44. Letters to the Editor.
- Author
-
Parnas, David L., deVeber, Jeffrey L., Rice, John R., and Dijkstra, Edsger W.
- Subjects
LETTERS to the editor ,CONFERENCES & conventions ,COMPUTERS ,ALGORITHMS ,ALGEBRA ,COORDINATES - Abstract
Presents several letters to the editor about the computing machinery. Dissatisfaction with the quality and nature of the average technical paper presented at the Joint Computer Conferences; Comment on the article "One-Pass Compilation of Arithmetic Expressions for a Parallel Processor," by Harold S. Stone; Use of a coordinate system or characteristic indices for the progress of an algorithm.
- Published
- 1968
- Full Text
- View/download PDF
45. Constant Overhead Quantum Fault Tolerance with Quantum Expander Codes.
- Author
-
Fawzi, Omar, Grospellier, Antoine, and Leverrier, Anthony
- Subjects
QUANTUM computing ,COMPUTER programming ,ALGORITHMS ,FAULT-tolerant computing - Abstract
The threshold theorem is a seminal result in the field of quantum computing asserting that arbitrarily long quantum computations can be performed on a faulty quantum computer provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead for fault-tolerant quantum computation is a pressing challenge that is crucial to benefit from the computational potential of quantum devices. In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor more than the number of qubits of the ideal computation. This result was conjectured by Gottesman who suggested to replace the concatenated codes from the standard threshold theorem by quantum error-correcting codes with a constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm working even with a noisy syndrome. The efficiency constraint is crucial here: bear in mind that qubits are inherently noisy and that faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control during the whole computation. On a technical level, our main contribution is the analysis of the small-set-flip decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control. 
These tools can be seen as a quantum generalization of the bit-flip algorithm applied to the (classical) expander codes of Sipser and Spielman. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
46. Low-Complexity and Low-Latency SVC Decoding Architecture Using Modified MAP-SP Algorithm.
- Author
-
Hong, Seungwoo, Kam, Dongyun, Yun, Sangbu, Choe, Jeongwon, Lee, Namyoon, and Lee, Youngjoo
- Subjects
DECODING algorithms ,ALGORITHMS ,STATIC VAR compensators ,COMPUTER architecture ,PARALLEL processing - Abstract
The compressive sensing (CS) based sparse vector coding (SVC) method is one of the promising approaches for next-generation ultra-reliable and low-latency communications. In this paper, we present advanced algorithm-hardware co-optimization schemes for realizing a cost-effective SVC decoding architecture. The previous maximum a posteriori subspace pursuit (MAP-SP) algorithm is modified to reduce computational overhead by applying novel residual-forwarding and LLR-approximation schemes. A fully pipelined parallel hardware architecture is also developed to support the modified decoding algorithm, reducing the overall processing latency, especially at the support identification step. In addition, an advanced least-squares solver is presented that utilizes a parallel Cholesky decomposer, further reducing decoding latency with parallel updates of support values. Implementation results in a 22nm FinFET technology show that the fully optimized design is 9.6 times faster and improves area efficiency by 12 times compared with the baseline realization. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Distributed Model Predictive Consensus of Heterogeneous Time-Varying Multi-Agent Systems: With and Without Self-Triggered Mechanism.
- Author
-
Li, Huiyan and Li, Xiang
- Subjects
MULTIAGENT systems ,TIME-varying systems ,PREDICTION models ,CONSTRAINT algorithms ,INTEGRATORS ,ALGORITHMS - Abstract
This paper provides a framework for designing distributed model predictive controllers to reach consensus in a heterogeneous time-varying multi-agent system, where the dynamics of the heterogeneous agents are modeled by double integrators and Euler-Lagrange (EL) equations. First, a DMPC-based consensus algorithm is proposed, where the constraints in the algorithm depend on the heterogeneous dynamics. We prove that the resulting DMPC optimization problem is feasible with the designed controllers and that the system is stable when it reaches consensus. To further reduce communication cost and handle asynchronous discrete-time information exchange, a self-triggered mechanism is introduced into the framework. Trigger intervals are optimized alternately with the control inputs, and their influence on system performance is analyzed. Numerical examples are provided to verify the effectiveness and advantages of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
48. Implementation of Supersingular Isogeny-Based Diffie-Hellman and Key Encapsulation Using an Efficient Scheduling.
- Author
-
Farzam, Mohammad-Hossein, Bayat-Sarmadi, Siavash, and Mosanaei-Boorani, Hatameh
- Subjects
ALGORITHMS ,CONSTRAINT programming ,SCHEDULING ,QUANTUM cryptography ,CRYPTOGRAPHY - Abstract
Isogeny-based cryptography is one of the promising post-quantum candidates mainly because of its smaller public key length. Due to its high computational cost, efficient implementations are significantly important. In this paper, we have proposed a high-speed FPGA implementation of the supersingular isogeny Diffie-Hellman (SIDH) and key encapsulation (SIKE). To this end, we have adapted the algorithm of finding optimal large-degree isogeny computation strategy for hardware implementations. Using this algorithm, hardware-suited strategies (HSSs) can be devised. We have also developed a tool to schedule field arithmetic operations efficiently using constraint programming. This tool enables reducing the latency of SIDH and SIKE subroutines by up to 14% at NIST’s highest security level, i.e., using the SIKEp751 parameter set. We have also improved the latency of field inversion, the most costly field operation in SIDH, by 23% using the Montgomery ladder technique. We have provided constant-time implementations of SIDH and SIKE on Virtex-7 using SIKEp751 utilizing 6 and 8 prime field multipliers to resemble the previous work. Experimental results show that using 8 multipliers SIDH and SIKE encapsulation and decapsulation can be performed in 24.66 ms and 24.10 ms, which is 1.37 and 1.12 times faster than the latest corresponding FPGA implementations, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
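Not from the paper — the entry reports speeding up field inversion with the Montgomery ladder; the sketch below shows the ladder's fixed per-bit operation pattern on modular exponentiation, with inversion obtained via Fermat's little theorem (a⁻¹ = a^(p-2) mod p). The prime 1009 is a toy stand-in; SIKEp751 uses a 751-bit prime.

```python
# Hypothetical sketch: Montgomery-ladder exponentiation and Fermat inversion.
def mont_ladder_pow(a, e, p):
    """Compute a^e mod p. Each bit triggers one multiply and one square,
    regardless of its value -- the pattern behind constant-time behavior.
    Invariant: r1 == r0 * a (mod p) throughout."""
    r0, r1 = 1, a % p
    for bit in bin(e)[2:]:                 # bits, most significant first
        if bit == '0':
            r1 = (r0 * r1) % p
            r0 = (r0 * r0) % p
        else:
            r0 = (r0 * r1) % p
            r1 = (r1 * r1) % p
    return r0

p = 1009                                    # toy prime for illustration
a = 123
inv = mont_ladder_pow(a, p - 2, p)          # Fermat: a^(p-2) = a^-1 mod p
print((a * inv) % p)                        # 1
```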
49. Leakage-Aware Battery Lifetime Analysis Using the Calculus of Variations.
- Author
-
Jafari-Nodoushan, Mostafa, Safaei, Bardia, Ejlali, Alireza, and Chen, Jian-Jia
- Subjects
ELECTRIC batteries ,LEAKAGE ,CURVE fitting ,ALGORITHMS ,ONLINE algorithms ,CALCULUS of variations - Abstract
Due to non-linear factors such as the rate capacity and recovery effects, the shape of the battery discharge curve plays a significant role in the overall lifetime of batteries. Accordingly, this paper proposes a simple heuristic battery-aware speed scheduling policy for periodic and non-periodic real-time tasks in Dynamic Voltage Scaling (DVS) systems with non-negligible leakage/static power. A comprehensive set of analyses has been conducted to compare the battery efficiency of the proposed policies with an optimal solution derived via the Calculus of Variations (CoV). These evaluations take into account both periodic and non-periodic tasks in DVS-based systems. Our experiments show a maximum difference of 7% between the optimal solution and the simple heuristic speed scheduling for realistic settings of the battery model. By considering the calculated optimal speed scheduling for different tasks (with different utilizations), a two-phase algorithm has been proposed, in which a speed approximation function is calculated offline based on curve fitting, while the best execution speed is applied online. The results show a maximum of 17.7% and 11.3% battery charge saving for non-periodic and periodic tasks, respectively, in comparison with the baseline critical frequency method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
50. Real-Time Distance Evaluation System for Wireless Localization.
- Author
-
Piccinni, Giovanni, Avitabile, Gianfranco, Coviello, Giuseppe, and Talarico, Claudio
- Subjects
MATHEMATICAL sequences ,FIELD programmable gate arrays ,ALGORITHMS ,WIRELESS localization ,MULTIPATH channels - Abstract
The paper describes the FPGA implementation of a novel position evaluation algorithm based on the time difference of arrival (TDOA) principle, which combines the characteristics of an Orthogonal Frequency Division Multiplexing (OFDM) symbol with the properties of Zadoff-Chu mathematical sequences. The resulting system is highly scalable, and its characteristics are easily adaptable to different operating scenarios. The algorithm has been implemented on a Stratix IV-E EP4SGX70HF35C3 FPGA, requiring about 112 kbit of memory and fewer than 44k logic elements, of which about 16k are registers. The paper describes the algorithm and its FPGA implementation along with experimental results that validate the system performance even in the presence of multipath interference, showing precision in target position estimation better than 2 cm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
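Not from the paper — a sketch of the Zadoff-Chu sequences the entry's ranging waveform builds on, using the standard odd-length form. The root and length below (u = 5, N = 63, with gcd(u, N) = 1) are illustrative. The two properties checked, constant envelope and impulse-like periodic autocorrelation, are what make the TDOA correlation peak sharp even under multipath.

```python
# Hypothetical sketch: Zadoff-Chu sequence and its ideal autocorrelation.
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N (requires gcd(u, N) = 1)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

z = zadoff_chu(5, 63)
# Circular autocorrelation via FFT: ideally N at lag 0 and 0 elsewhere.
corr = np.fft.ifft(np.fft.fft(z) * np.conj(np.fft.fft(z)))
print(np.allclose(np.abs(z), 1.0),          # constant envelope
      np.max(np.abs(corr[1:])) < 1e-9)      # off-peak lags vanish
```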