22 results for "huffman"
Search Results
2. An Encoding and Decoding Technique to Compress Huffman Tree Size in an Efficient Manner
- Author
-
Sultana, Zinnia, Nahar, Lutfun, Tasnim, Farzana, Hossain, Mohammad Shahadat, Andersson, Karl, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Vasant, Pandian, editor, Weber, Gerhard-Wilhelm, editor, Marmolejo-Saucedo, José Antonio, editor, Munapo, Elias, editor, and Thomas, J. Joshua, editor
- Published
- 2023
- Full Text
- View/download PDF
3. Arithmetic N-gram: an efficient data compression technique
- Author
-
Hassan, Ali, Javed, Sadaf, Hussain, Sajjad, Ahmad, Rizwan, and Qazi, Shams
- Published
- 2024
- Full Text
- View/download PDF
4. Study of the Impact of Data Compression on the Energy Consumption Required for Data Transmission in a Microcontroller-Based System
- Author
-
Dominik Piątkowski, Tobiasz Puślecki, and Krzysztof Walkowiak
- Subjects
embedded systems, data compression, data transmission, Huffman, LZ77, LZ78, Chemical technology, TP1-1185
- Abstract
As the number of Internet of Things (IoT) devices continues to rise dramatically each day, the data generated and transmitted by them follow similar trends. Given that a significant portion of these embedded devices operate on battery power, energy conservation becomes a crucial factor in their design. This paper aims to investigate the impact of data compression on the energy consumption required for data transmission. To achieve this goal, we conduct a comprehensive study using various transmission modules in a severely resource-limited microcontroller-based system designed for battery power. Our study evaluates the performance of several compression algorithms, conducting a detailed analysis of computational and memory complexity, along with performance metrics. The primary finding of our study is that by carefully selecting an algorithm for compressing different types of data before transmission, a significant amount of energy can be saved. Moreover, our investigation demonstrates that for a battery-powered embedded device transmitting sensor data based on the STM32F411CE microcontroller, the recommended transmission module is the nRF24L01+ board, as it requires the least amount of energy to transmit one byte of data. This module is most effective when combined with the LZ78 algorithm for optimal energy and time efficiency. In the case of image data, our findings indicate that the use of the JPEG algorithm for compression yields the best results. Overall, our research underscores the importance of selecting appropriate compression algorithms tailored to specific data types, contributing to enhanced energy efficiency in IoT devices.
- Published
- 2023
- Full Text
- View/download PDF
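The record above recommends LZ78 for compressing sensor data before transmission. For orientation, here is a minimal LZ78 round-trip in Python; this is an illustrative sketch of the general algorithm, not the paper's microcontroller implementation:

```python
def lz78_compress(data: bytes):
    """Minimal LZ78: emit (dictionary index, next byte) pairs."""
    dictionary = {b"": 0}              # phrase -> index; 0 is the empty phrase
    phrase, out = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate         # keep extending the current match
        else:
            out.append((dictionary[phrase], byte))   # longest match + new byte
            dictionary[candidate] = len(dictionary)
            phrase = b""
    if phrase:                         # flush a trailing phrase with no new byte
        out.append((dictionary[phrase], None))
    return out

def lz78_decompress(pairs):
    phrases, out = [b""], bytearray()
    for index, byte in pairs:
        phrase = phrases[index] + (bytes([byte]) if byte is not None else b"")
        out += phrase
        phrases.append(phrase)
    return bytes(out)

sample = b"abababababab"               # repetitive, like many sensor traces
assert lz78_decompress(lz78_compress(sample)) == sample
```

Because each emitted pair references the longest previously seen phrase, repetitive sensor streams shrink quickly, which is consistent with the abstract's finding for the battery-powered STM32 setup.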
5. In search for the simplest example that proves Huffman coding overperforms Shannon-Fano coding.
- Author
-
Breazu, Macarie, Morariu, Daniel I., Crețulescu, Radu G., Pitic, Antoniu G., and Bărglăzan, Adrian A.
- Subjects
HUFFMAN codes, BIG data, DATA compression, PROBABILITY theory, BINARY codes
- Abstract
Shannon-Fano coding (SFC) and Huffman coding (HC) are classic and well-known algorithms, but still in use today. The search for the simplest example that proves HC outperforms SFC is still of interest. The problem is not as trivial as it looks at first glance because of several decisions that must be considered. We perform a full search of the stream data space for a maximum stream length of 100. Depending on the additional requirements we impose, the simplest solution we found is {1,1,1,1,3} when we accept selecting a specific cutting, {2,3,3,3,7} when we accept only deterministic (unique) cuttings, and {4,5,6,7,14} when we also require different frequencies for the symbols. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
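The abstract's {1,1,1,1,3} result can be checked by hand: Huffman coding on these frequencies yields depths {3,3,3,3,1} for a total of 15 bits, while the Shannon-Fano cutting {3,1} | {1,1,1} (one of the two splits whose halves differ by 1) yields lengths {2,2,2,3,3} and 16 bits. A small illustrative check in Python, not the authors' search code:

```python
import heapq

def huffman_cost(freqs):
    """Total encoded bits of a Huffman code: sum of weights of all merges."""
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        cost += a + b                  # each merge adds (a + b) bits overall
        heapq.heappush(heap, a + b)
    return cost

print(huffman_cost([1, 1, 1, 1, 3]))   # 15 bits
# Shannon-Fano with the cutting {3,1} | {1,1,1} assigns lengths {2,2,2,3,3}:
print(3*2 + 1*2 + 1*2 + 1*3 + 1*3)     # 16 bits -- one bit worse
```

Note that the alternative cutting {3} | {1,1,1,1} also costs 15 bits, which is why the abstract stresses that this smallest example holds only when a specific cutting is selected.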
6. Introduction to Adjacent Distance Array with Huffman Principle: A New Encoding and Decoding Technique for Transliteration Based Bengali Text Compression
- Author
-
Sarker, Pranta, Rahman, Mir Lutfur, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Panigrahi, Chhabi Rani, editor, Pati, Bibudhendu, editor, Pattanayak, Binod Kumar, editor, Amic, Seeven, editor, and Li, Kuan-Ching, editor
- Published
- 2021
- Full Text
- View/download PDF
7. Files cryptography based on one-time pad algorithm.
- Author
-
Al-Smadi, Ahmad Mohamad, Al-Smadi, Ahmad, Ali Aloglah, Roba Mahmoud, Abu-Darwish, Nisrein, and Abugabah, Ahed
- Subjects
ENCRYPTION protocols, DATA compression, CRYPTOGRAPHY, ALGORITHMS, HUFFMAN codes
- Abstract
The Vernam cipher is known as a one-time pad algorithm; it is unbreakable because it uses a truly random key equal in length to the data to be encoded, and each element of the text is encrypted with an element of the encryption key. In this paper, we propose a novel technique to overcome the obstacles that hinder the use of the Vernam algorithm. First, the Vernam and Advanced Encryption Standard (AES) algorithms are used to encrypt the data as well as to hide the encryption key; second, a password is placed on the file because of the use of the AES algorithm, so the level of protection becomes very high. The Huffman algorithm is then used for data compression to reduce the size of the output file. A set of files are encrypted and decrypted using our methodology. The experiments demonstrate the flexibility of our method, and it is successful without losing any information. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
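The Vernam step described above is a byte-wise XOR with a single-use random key of the same length as the message. A minimal sketch of just that step (illustrative Python; the paper's full pipeline also involves AES and Huffman coding):

```python
import secrets

def vernam(text: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a same-length random key byte."""
    if len(key) != len(text):
        raise ValueError("one-time pad key must match message length")
    return bytes(t ^ k for t, k in zip(text, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))    # truly random, used exactly once
cipher = vernam(message, key)
assert vernam(cipher, key) == message      # XOR is its own inverse
```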
8. Comparative study of lossy and lossless data compression in distributed optical fiber sensing systems.
- Author
-
Atubga, David, Huijuan Wu, Lidong Lu, and Xiaoyan Sun
- Subjects
-
LOSSLESS data compression, OPTICAL fiber detectors, DATA transmission systems
- Abstract
Typical fully distributed optical fiber sensors (DOFS) spanning dozens of kilometers are equivalent to tens of thousands of point sensors along the whole monitoring line, which means tens of thousands of data points are generated for each pulse launching period. Therefore, in all-day nonstop monitoring, large volumes of data are created, triggering demand for large storage space and high-speed data transmission. In addition, as the monitoring length and channel count increase, the data also increase extensively. Mitigating this accumulation of data, the storage it requires, and the transmission speed it demands is therefore the aim of this paper. To demonstrate our idea, we carried out a comparative study of two lossless methods, Huffman and Lempel-Ziv-Welch (LZW), against a lossy data compression algorithm, the fast wavelet transform (FWT), on three distinct kinds of DOFS sensing data: Φ-OTDR, P-OTDR, and B-OTDA. Our results demonstrate that FWT yielded the best compression ratio with good computation time, regardless of the errors in signal reconstruction for the three DOFS data types. These outcomes indicate the promising potential of FWT, making it more suitable, reliable, and convenient for real-time compression of DOFS data. Finally, we observed that differences in DOFS data structure have some influence on both the compression ratio and the computational cost. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
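The study's methodology, trading compression ratio against computation time per algorithm and data type, can be sketched with stock codecs standing in for the paper's Huffman, LZW, and FWT implementations (illustrative Python; zlib, bz2, and lzma are stand-ins, not the codecs the paper tested):

```python
import bz2, lzma, time, zlib

def benchmark(name, compress, data):
    """Report compression ratio and wall-clock time for one codec."""
    start = time.perf_counter()
    packed = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: ratio {len(data) / len(packed):.2f}, {elapsed * 1e3:.1f} ms")

trace = bytes(range(256)) * 4096       # placeholder for a DOFS sensing trace
for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    benchmark(name, fn, trace)
```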
9. COMPRESSION STRATEGIES OF 2D POINT CLOUDS.
- Author
-
Stan, Marian, Spataru, Ionut, and Bucur, Ion
- Subjects
CLOUD computing, DATA compression, PROBLEM solving, HUFFMAN codes, ALPHA shapes
- Abstract
The purpose of this article is to describe the concept of point clouds and to present the main ideas and important steps taken in this research domain. The article also presents methods that aim to solve the point cloud compression problem. [ABSTRACT FROM AUTHOR]
- Published
- 2016
10. Impact of Compression and Small Cell Deployment on NB-IoT Devices Coverage and Energy Consumption with a Realistic Simulation Model
- Author
-
M. Zeinali and John Thompson
- Subjects
Battery (electricity), LPWAN, Lempel-Ziv-Welch, Smart meter, Computer science, Real-time computing, Huffman, TP1-1185, Biochemistry, Analytical Chemistry, Physical Phenomena, Electric Power Supplies, Relay, NB-IoT, Wireless, energy consumption modeling, Computer Simulation, Electrical and Electronic Engineering, Instrumentation, latency, Network packet, Chemical technology, small cell, Energy consumption, Data Compression, Atomic and Molecular Physics, and Optics
- Abstract
In the last few years, Low-Power Wide-Area Network (LPWAN) technologies have been proposed for Machine-Type Communications (MTC). In this paper, we evaluate wireless relay technologies that can improve LPWAN coverage for smart meter communication applications. We provide a realistic coverage analysis using a realistic correlated shadow-fading map and path-loss calculation for the environment. Our analysis shows significant reductions in the number of MTC devices in outage by deploying either small cells or Device-to-Device (D2D) communications. In addition, we analyze the energy consumption of the MTC devices for different data packet sizes and Maximum Coupling Loss (MCL) values. Finally, we study how compression techniques can extend the battery lifetime of MTC devices.
- Published
- 2021
11. Files cryptography based on one-time pad algorithm
- Author
-
Roba Mahmoud Ali Aloglah, Ahed Abugabah, Nisrein Jamal sanad Abu-darwish, Ahmad Al-Smadi, and Ahmad Mohamad Al-Smadi
- Subjects
Password, General Computer Science, Steganography, Huffman, Computer science, Encryption, Cryptography, Huffman coding, One-time pad, Key (cryptography), Electrical and Electronic Engineering, AES algorithm, Vernam, Algorithm, Data compression
- Abstract
The Vernam cipher is known as a one-time pad algorithm; it is unbreakable because it uses a truly random key equal in length to the data to be encoded, and each element of the text is encrypted with an element of the encryption key. In this paper, we propose a novel technique to overcome the obstacles that hinder the use of the Vernam algorithm. First, the Vernam and Advanced Encryption Standard (AES) algorithms are used to encrypt the data as well as to hide the encryption key; second, a password is placed on the file because of the use of the AES algorithm, so the level of protection becomes very high. The Huffman algorithm is then used for data compression to reduce the size of the output file. A set of files are encrypted and decrypted using our methodology. The experiments demonstrate the flexibility of our method, and it is successful without losing any information.
- Published
- 2021
12. An Overview of Image Compression.
- Author
-
Anitha S.
- Subjects
IMAGE compression, DATA compression, DATA transmission systems, INFORMATION theory, COMPUTER science research
- Abstract
This paper presents an introduction to image compression techniques. Image compression is an application of data compression that encodes the original image with few bits. The objective of image compression is to reduce the redundancy of the image and to store or transmit data in an efficient form. In recent years, the development of and demand for multimedia products has grown increasingly fast, straining network bandwidth and memory storage. Therefore, the theory of data compression has become more and more significant for reducing data redundancy to save hardware space and transmission bandwidth. In computer science and information theory, data compression or source coding is the process of encoding information using fewer bits or other information-bearing units than an unencoded representation. Compression is useful because it helps reduce the consumption of expensive resources such as hard disk space or transmission bandwidth. [ABSTRACT FROM AUTHOR]
- Published
- 2014
13. An Ingenious Design of a High Performance-Low Complexity Image Compressor for Wireless Capsule Endoscopy
- Author
-
Hongying Meng, Ioannis Intzes, and John Cosmas
- Subjects
wireless capsule endoscopy, Computer science, Gastrointestinal Diseases, lossless image compression, RTL, Huffman coding, Chemical technology, Biochemistry, DPCM, Capsule Endoscopy, multiplier-less, Analytical Chemistry, Image Processing, Computer-Assisted, Wireless, Humans, TP1-1185, low-complexity, Electrical and Electronic Engineering, Instrumentation, FPGA, low-power consumption, Lossless compression, ASIC, Data Compression, Atomic and Molecular Physics, and Optics, FinFET, subtraction, cadence, Wireless Technology, Computer hardware, Algorithms
- Abstract
Wireless capsule endoscopy is a state-of-the-art technology for the medical diagnosis of gastrointestinal diseases. The amount of data produced by an endoscopic capsule camera is huge, and these vast amounts of data are impractical to store internally due to power consumption and available space, so the data must be transmitted wirelessly outside the human body for further processing. The data should therefore be compressed and transmitted in a power-efficient manner. In this paper, a new approach to the design and implementation of a low-complexity, multiplier-less compression algorithm is proposed. Statistical analysis of capsule endoscopy images improved the performance of traditional lossless techniques such as Huffman coding and DPCM coding. Furthermore, a Huffman implementation based on simple logic gates, without the use of memory tables, further increases the speed and reduces the power consumption of the proposed system. Analysis and comparison with existing state-of-the-art methods show that the proposed method has better performance.
- Published
- 2020
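The DPCM stage mentioned in this abstract predicts each sample from its predecessor and stores only the difference; the residuals cluster near zero, which is exactly what makes the following Huffman stage effective. A minimal sketch (illustrative Python, not the paper's multiplier-less hardware design):

```python
def dpcm_encode(samples):
    """DPCM with a previous-value predictor: keep only the residuals."""
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)     # small, peaked values -> cheap Huffman codes
        prev = s
    return residuals

def dpcm_decode(residuals):
    prev, samples = 0, []
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

pixels = [100, 101, 103, 103, 102, 104]   # neighboring pixels change slowly
assert dpcm_decode(dpcm_encode(pixels)) == pixels
```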
14. Lossless Image Compression Techniques: A State-of-the-Art Survey
- Author
-
Mohamed Hamada and Md. Atiqur Rahman
- Subjects
Physics and Astronomy (miscellaneous), Computer science, LZW, General Mathematics, compression ratio, Huffman coding, PSNR and efficiency, run-length, average code length, Computer Science (miscellaneous), Entropy (information theory), lossless and lossy compression, Shannon–Fano, Lossless compression, Huffman, Mathematics, QA1-939, Arithmetic coding, Computer engineering, Chemistry (miscellaneous), Decoding methods, Image compression, Data compression
- Abstract
Modern daily life activities result in a huge amount of data, which creates a big challenge for storing and communicating it. As an example, hospitals produce a huge amount of data on a daily basis, which makes it a big challenge to store in limited storage or to communicate through the restricted bandwidth over the Internet. Therefore, there is an increasing demand for more research in data compression and communication theory to deal with such challenges. Such research responds to the requirements of data transmission at high speed over networks. In this paper, we focus on a deep analysis of the most common techniques in image compression. We present a detailed analysis of run-length, entropy, and dictionary-based lossless image compression algorithms with a common numeric example for a clear comparison. Following that, the state-of-the-art techniques are discussed based on some benchmark images. Finally, we use standard metrics such as average code length (ACL), compression ratio (CR), peak signal-to-noise ratio (PSNR), efficiency, encoding time (ET) and decoding time (DT) in order to measure the performance of the state-of-the-art techniques.
- Published
- 2019
- Full Text
- View/download PDF
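Two of the survey's headline metrics, average code length (ACL) and compression ratio (CR), are easy to compute for any prefix code. A sketch using a hypothetical code table (illustrative Python, not the survey's benchmark code):

```python
from collections import Counter

def average_code_length(text, code):
    """ACL = sum over symbols of p(symbol) * codeword length, in bits/symbol."""
    counts = Counter(text)
    total = sum(counts.values())
    return sum(counts[s] / total * len(code[s]) for s in counts)

def compression_ratio(original_bits, compressed_bits):
    """CR = original size / compressed size (higher is better)."""
    return original_bits / compressed_bits

text = "aabbbbcc"
code = {"a": "10", "b": "0", "c": "11"}    # a made-up prefix code
acl = average_code_length(text, code)       # 1.5 bits/symbol here
cr = compression_ratio(8 * len(text), round(acl * len(text)))
print(acl, cr)                              # 1.5, ~5.33 versus 8-bit ASCII
```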
15. Selective document image data compression technique
- Author
-
Petrich, Loren [1674 Cordoba St., #4, Livermore, CA 94550]
- Published
- 1998
16. SIMULATION BASED VIDEO COMPRESSION THROUGH DIGITAL COMMUNICATION SYSTEM
- Author
-
Pundaraja and Manjunath
- Subjects
Huffman, IDCT, BPSK, DCS, DCT, Quantization, Computer science, Communications system, Simulation based, Computer hardware, Data compression
- Abstract
This paper concerns the simulation-based transmission, compression, and detection of video for various communication applications. Video and image compression reduces the amount of data required to represent the information to be transmitted, saving both the bandwidth required for data transmission and the memory required for storage. Hence video compression reduces the volume of video data with only a small change in video quality. Compressed video is transmitted over a channel using Huffman coding for the source at the transmitter side, followed by channel coding with the Hamming technique. The data sent through the channel is BPSK modulated, so the received data is demodulated and then channel decoded and source decoded, using the inverses of the transmitter-side techniques, to recover the original transmitted video. This procedure is applied to input video captured by a camera; the compressed video is transmitted and then detected at the receiver by a digital communication system (DCS) simulated in MATLAB.
- Published
- 2017
- Full Text
- View/download PDF
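The transmitter chain in this abstract is Huffman source coding, then Hamming channel coding, then BPSK modulation. The latter two stages are compact enough to sketch (illustrative Python; the paper's simulation is in MATLAB):

```python
def hamming74_encode(d):
    """Hamming(7,4): 4 data bits plus 3 parity bits, corrects one bit error."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                 # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                 # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def bpsk_modulate(bits):
    """BPSK maps bit 0 -> +1.0 and bit 1 -> -1.0."""
    return [1.0 - 2.0 * b for b in bits]

# In the paper's chain these data bits would be Huffman-coded video bits.
symbols = bpsk_modulate(hamming74_encode([1, 0, 1, 1]))
```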
17. Data compression for the CMS pixel detector at High-Luminosity LHC
- Author
-
Poulios, Stamatios
- Subjects
FIS/01 FISICA SPERIMENTALE, Arithmetic, CMS, Data Compression, HEP, Huffman, Pixel Detector
- Published
- 2017
18. Burrows–Wheeler Transform Based Lossless Text Compression Using Keys and Huffman Coding.
- Author
-
Rahman, Md. Atiqur and Hamada, Mohamed
- Subjects
-
HUFFMAN codes, DATA compression, ALGORITHMS, IMAGE compression, INTERNET, PATTERN matching
- Abstract
Text compression is one of the most significant research fields, and various algorithms for text compression have already been developed. This is a significant issue, as the use of internet bandwidth is considerably increasing. This article proposes a Burrows–Wheeler transform and pattern matching-based lossless text compression algorithm that uses Huffman coding in order to achieve an excellent compression ratio. In this article, we introduce an algorithm with two keys that are used to further reduce frequently repeated characters after the Burrows–Wheeler transform. We then find patterns of a certain length from the reduced text and apply Huffman encoding. We compare our proposed technique with state-of-the-art text compression algorithms. Finally, we conclude that the proposed technique demonstrates a gain in compression ratio when compared to other compression techniques. A small limitation of our proposed method is that it does not work very well for symmetric communications like Brotli. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
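The Burrows–Wheeler transform at the core of this method permutes the text so that equal characters cluster into runs, which the key-based reduction and Huffman stages then exploit. A minimal (quadratic, illustrative) Python sketch of the transform itself, not the paper's algorithm:

```python
def bwt(text, sentinel="$"):
    """BWT: last column of the sorted matrix of all rotations."""
    text += sentinel                   # unique, lexicographically smallest marker
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(row[-1] for row in rotations)

def inverse_bwt(last, sentinel="$"):
    """Invert by repeatedly prepending the last column and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith(sentinel))[:-1]

print(bwt("banana"))                   # 'annb$aa' -- note the clustered run
assert inverse_bwt(bwt("banana")) == "banana"
```

Production implementations derive the transform from a suffix array in O(n log n) or better; the sketch above favors clarity over speed.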
19. Tagged-Sub Optimal Code (TSC) Compression for Improving Performance of Web Services
- Author
-
Haifa Alyahya and Iehab Al Rassan
- Subjects
SOAP, Computer science, Performance, Network, Huffman coding, Web Services, Code (cryptography), TSC, General Environmental Science, Huffman, Service-oriented architecture, XML, Computer engineering, Operating system, General Earth and Planetary Sciences, The Internet, Web service, Transmission time, Data compression
- Abstract
Compression can be used to reduce the size of files and to speed up transmission over networks. However, not all compression techniques have the same features and capabilities for improving the performance of transmission over networks. This paper presents a comparison between different compression algorithms aimed at improving the performance of web services over the Internet. Nowadays, Service-Oriented Architecture (SOA) is used heavily for interaction between applications as loosely coupled services, which function independently. Therefore, fast and efficient services offered through web services are needed. Enhancing the performance of web services would improve overall system performance. As a result, compressing and reducing the size of SOAP messages traveling over the network improves web-service performance. This paper compares the performance of web services by compressing SOAP messages using the Tagged Sub-optimal Code (TSC) and Huffman encoding algorithms. Experimental results show that web services compressed using TSC outperform both normal web services and web services compressed using Huffman encoding.
- Published
- 2015
- Full Text
- View/download PDF
20. Lossless Image Compression Techniques: A State-of-the-Art Survey.
- Author
-
Rahman, Md. Atiqur and Hamada, Mohamed
- Subjects
-
IMAGE compression, DATA compression, DATA transmission systems, SIGNAL-to-noise ratio, BANDWIDTHS, EVERYDAY life
- Abstract
Modern daily life activities result in a huge amount of data, which creates a big challenge for storing and communicating it. As an example, hospitals produce a huge amount of data on a daily basis, which makes it a big challenge to store in limited storage or to communicate through the restricted bandwidth over the Internet. Therefore, there is an increasing demand for more research in data compression and communication theory to deal with such challenges. Such research responds to the requirements of data transmission at high speed over networks. In this paper, we focus on a deep analysis of the most common techniques in image compression. We present a detailed analysis of run-length, entropy, and dictionary-based lossless image compression algorithms with a common numeric example for a clear comparison. Following that, the state-of-the-art techniques are discussed based on some benchmark images. Finally, we use standard metrics such as average code length (ACL), compression ratio (CR), peak signal-to-noise ratio (PSNR), efficiency, encoding time (ET) and decoding time (DT) in order to measure the performance of the state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. Fast Construction of Nearly-Optimal Prefix Codes without Probability Sorting.
- Author
-
Osorio, Roberto R. and González, Patricia
- Abstract
In this abstract, an algorithm is proposed that achieves nearly-optimal coding without sorting the probabilities or building a tree of codes. The complexity is proportional to the maximum code length, making it especially attractive for large alphabets. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
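The abstract gives no construction details, so as background only: one standard way to obtain nearly-optimal prefix codes without sorting probabilities is to take Shannon code lengths, ceil(-log2 p) (within one bit of entropy per symbol), and assign canonical codewords DEFLATE-style, which needs just a count of symbols per length. This sketch illustrates that general technique, not the authors' algorithm:

```python
import math

def shannon_lengths(probs):
    """Nearly-optimal lengths; Kraft's inequality is always satisfied."""
    return [math.ceil(-math.log2(p)) for p in probs]

def canonical_codes(lengths):
    """Canonical codeword assignment from per-length counts -- no sorting."""
    max_len = max(lengths)
    bl_count = [0] * (max_len + 1)
    for L in lengths:
        bl_count[L] += 1
    next_code, code = [0] * (max_len + 1), 0
    for bits in range(1, max_len + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code         # first codeword value of each length
    codes = []
    for L in lengths:                  # symbols may arrive in any order
        codes.append(format(next_code[L], f"0{L}b"))
        next_code[L] += 1
    return codes

print(canonical_codes(shannon_lengths([0.4, 0.3, 0.2, 0.1])))
# lengths [2, 2, 3, 4] -> codes ['00', '01', '100', '1010']
```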
22. Compression of Endpoint Identifiers in Delay Tolerant Networking
- Author
-
Young, David A.
- Subjects
- Computer Science, DTN, delay-tolerant networking, EID, endpoint identifier, data compression, text compression, zlib, huffman, addressing, naming
- Abstract
Delay and Disruption Tolerant Networking (DTN) was developed to deliver network communications to so-called "challenged environments." These include space, military, and other networks that can be described as having extremely long link delay and frequent disconnections. The DTN paradigm implements a store-and-forward network of nodes to overcome these limited environments, delivering "bundles" of data instead of packets. The bundles nominally contain enough data to constitute an entire atomic unit of communication. DTN introduces the Endpoint Identifier (EID) to identify bundle agents or groups. The EID can imply naming, addressing, routing and network topology, but these features and flexibility come at the cost of verbosity and a per-packet overhead introduced by large and descriptive EIDs. In this document, we apply lossless text compression to EIDs using Zlib's DEFLATE algorithm. We develop a novel method for generating a large sample of verbose EIDs based upon Apache access logs, allowing testing over a larger, more varied, and more realistic data set than would be possible with the current DTN testing networks. Analysis of the processing overhead and compression ratio leads us to the conclusion that Zlib reduces the overhead of EIDs substantially. By compressing the dictionary, more featureful EIDs can be used without increasing overhead in the form of larger bundle dictionaries due to syntactical verbosity.
- Published
- 2013
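Applying zlib's DEFLATE to an EID is only a few lines, and for short strings a preset dictionary shared by sender and receiver matters, since DEFLATE's header overhead can otherwise cancel the gain. An illustrative Python sketch; the EID and dictionary contents below are made up, not taken from the thesis:

```python
import zlib

eid = b"dtn://mission-ops.example.org/relay/node-42/telemetry/bundle-agent"
packed = zlib.compress(eid, 9)
print(len(eid), "->", len(packed), "bytes")    # short inputs may barely shrink

# A preset dictionary of common EID substrings, known to both endpoints,
# recovers much of the ratio on short identifiers:
preset = b"dtn://telemetry/bundle-agent/node-"
comp = zlib.compressobj(level=9, zdict=preset)
packed2 = comp.compress(eid) + comp.flush()
decomp = zlib.decompressobj(zdict=preset)
assert decomp.decompress(packed2) == eid
```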