2,593 results for "Sub-band coding"
Search Results
2. Improvised method for analysis and synthesis of NUFB for Speech and ECG signal applications.
- Author
-
Keerthana, B. and Raju, N.
- Subjects
FILTER banks ,IMPULSE response ,BANDPASS filters ,SPEECH enhancement ,SIGNAL reconstruction - Abstract
This article presents a rapidly converging optimization technique using a single parameter for designing non-uniform cosine modulated filter banks (CMFBs). The non-uniform cosine modulated filter banks are derived from closed-form uniform cosine modulated filter banks by merging the relevant bandpass filters based on given decimation factors. In this proposed method, the cut-off frequency of the prototype filter is varied through an analytically calculated step size using control parameters, so that the filter response at the quadrature frequency is approximately equal to 0.707 and the formulated objective function is satisfied within the prescribed tolerance. Simulation results demonstrate that the proposed algorithm achieves superior performance, with amplitude distortion levels significantly outperforming existing methods in the literature, reaching as low as 2.4483 × 10⁻⁴. For the prototype filter design, a constrained equiripple finite impulse response (FIR) digital filter is employed, with the roll-off factor and error ratio chosen based on the stopband attenuation, passband attenuation, and filter order. The results highlight the proposed algorithm's effectiveness for high-quality reconstruction of speech signals, particularly in speech coding and enhancement, as well as ECG signals. This makes the method highly versatile and suitable for various practical applications, including sub-band coding of real-time and near real-time signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
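The single-parameter optimization summarized in entry 2 above can be pictured with a small numerical sketch: an FIR prototype is redesigned while its cut-off frequency is nudged until the magnitude response at the quadrature frequency reaches 1/√2 ≈ 0.707. This is only an illustration of the general idea, not the authors' NUFB procedure; the channel count, filter length, step-size rule, and tolerance below are assumptions.

    # Minimal sketch: tune an FIR prototype cut-off so |P(e^{j*pi/(2M)})| ~ 0.707.
    # Illustrative only -- not the authors' NUFB design procedure.
    import numpy as np
    from scipy.signal import firwin, freqz

    M = 8                     # number of channels (assumed)
    num_taps = 96             # prototype filter length (assumed)
    target = 1 / np.sqrt(2)
    wc = 1.0 / (2 * M)        # cut-off as a fraction of Nyquist
    step = 0.1 / M            # initial step size (assumed)

    for _ in range(60):
        p = firwin(num_taps, wc)                   # linear-phase lowpass prototype
        _, h = freqz(p, worN=[np.pi / (2 * M)])    # response at the quadrature frequency
        err = np.abs(h[0]) - target
        if abs(err) < 1e-6:                        # prescribed tolerance (assumed)
            break
        wc -= np.sign(err) * step                  # move the cut-off toward the target
        step *= 0.8                                # shrink the step for convergence

    print(f"cut-off = {wc:.5f} x Nyquist, |P| at pi/(2M) = {np.abs(h[0]):.5f}")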
3. Designing optimal prototype filters for maximally decimated Cosine Modulated filter banks with rapid convergence
- Author
-
B. Keerthana, N. Raju, Ravikumar CV, Rajesh Anbazhagan, Tai-hoon Kim, and Faruq Mohammad
- Subjects
Cosine-modulated filter bank (CMFB) ,Least-square technique ,Near perfect reconstruction ,Prototype filter ,Sub-band coding ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
An analytic design of a prototype filter for M-channel maximally decimated cosine-modulated Near Perfect Reconstruction (NPR) filter banks is proposed in this work. The prototype filter is created using a constrained least-squares (CLS) method with weighted constraints, which is one-dimensional and requires single-parameter optimization. Compared to existing approaches, the suggested method achieves rapid convergence by analytically determining the optimal step size, ensuring the 3 dB cutoff frequency at π/(2M). The simulation results for the design examples outperform the techniques in the available literature in terms of amplitude and aliasing distortion, reaching distortions around 2.4489 × 10⁻⁴ and 3.4907 × 10⁻⁹, respectively. This optimization algorithm's usefulness is further demonstrated with the sub-band coding of ECG signals. Implementing optimal prototype filters has tangible real-world effects, especially in critical sectors like healthcare and communications, improving diagnostic accuracy, data transmission efficiency, and overall performance.
- Published
- 2024
- Full Text
- View/download PDF
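For context on entry 3, the analysis filters of an M-channel cosine-modulated filter bank are obtained by modulating a single lowpass prototype. A common textbook form of that modulation is sketched below; it is not necessarily the exact variant used in the paper, and the channel count and prototype length are assumptions.

    # Textbook cosine modulation of a lowpass prototype p[n] into M analysis filters:
    # h_k[n] = 2*p[n]*cos((pi/M)*(k+0.5)*(n-(N-1)/2) + (-1)^k * pi/4)
    import numpy as np
    from scipy.signal import firwin

    M = 8                              # number of channels (assumed)
    N = 16 * M                         # prototype length (assumed)
    p = firwin(N, 1.0 / (2 * M))       # prototype with cut-off near pi/(2M)

    n = np.arange(N)
    H = np.zeros((M, N))
    for k in range(M):
        theta = (-1) ** k * np.pi / 4
        H[k] = 2 * p * np.cos((np.pi / M) * (k + 0.5) * (n - (N - 1) / 2) + theta)

    # H[k] now holds the k-th bandpass analysis filter; synthesis filters use -theta.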
4. Image Interpolation Based on 2D-DWT with Novel Regularity-Preserving Algorithm Using RLS Adaptive Filters.
- Author
-
Sadaghiani, Abdol Vahab Khalili, Sheikhaei, Samad, and Forouzandeh, Behjat
- Subjects
*ADAPTIVE filters , *INTERPOLATION , *DISCRETE wavelet transforms , *ALGORITHMS , *SMOOTHING (Numerical analysis) - Abstract
This paper proposes a novel method for the image interpolation problem based on the two-dimensional discrete wavelet transform (DWT) with an edge-preserving approach. The purpose of this method is to address two contrasting issues, over-smoothing and the creation of spurious edges, at the same time, and to offer a novel solution based on the statistical dependencies of image sub-bands and on noise behavior. The method takes a multi-faceted approach to the problem: through sub-band coding, it handles each 2D-DWT image sub-band with a different solution. For the LH and HL sub-bands, two algorithms work together to preserve regularity. The Area_Check algorithm is a four-phase edge-preserving algorithm that aims to recognize and interpolate separating lines between regions and edgy areas as accurately as possible. The RLS_AVG algorithm, in turn, interpolates smooth surfaces of the image while keeping the regularity of the image without over-smoothing. As a result, the proposed algorithm is effective at countering jaggies and other annoying artifacts. Finally, to demonstrate the capability and performance of the proposed method, the final results are compared across various metrics with the results of the best-known and newest image interpolation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
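The sub-band-wise treatment described in entry 4 (different handling of the LL, LH/HL and HH bands) can be pictured with a brief PyWavelets sketch. The per-band operations here are placeholders, not the Area_Check or RLS_AVG algorithms from the paper.

    # Sketch: split an image into 2D-DWT sub-bands and treat each band differently.
    # The per-band processing below is a placeholder for the paper's algorithms.
    import numpy as np
    import pywt

    img = np.random.rand(256, 256)              # stand-in for a real image
    LL, (LH, HL, HH) = pywt.dwt2(img, 'db2')    # one-level 2D DWT

    LL_p = LL                                   # approximation: left untouched here
    LH_p = np.sign(LH) * np.maximum(np.abs(LH) - 0.01, 0)   # placeholder edge handling
    HL_p = np.sign(HL) * np.maximum(np.abs(HL) - 0.01, 0)
    HH_p = HH * 0.5                             # placeholder detail attenuation

    rec = pywt.idwt2((LL_p, (LH_p, HL_p, HH_p)), 'db2')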
5. Neural Network Analysis for Image Classification
- Author
-
Nikolay, Vershkov, Mikhail, Babenko, Viktor, Kuchukov, Natalia, Kuchukova, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Tchernykh, Andrei, editor, Alikhanov, Anatoly, editor, Babenko, Mikhail, editor, and Samoylenko, Irina, editor
- Published
- 2022
- Full Text
- View/download PDF
6. A wavelet filter comparison on multiple datasets for signal compression and denoising.
- Author
-
Gnutti, Alessandro, Guerrini, Fabrizio, Adami, Nicola, Migliorati, Pierangelo, and Leonardi, Riccardo
- Abstract
In this paper, we explicitly analyze the performance effects of several orthogonal and bi-orthogonal wavelet families. For each family, we explore the impact of the filter order (length) and the decomposition depth in the multiresolution representation. In particular, two contexts of use are examined: compression and denoising. In both cases, the experiments are carried out on a large dataset of different signal kinds, including various image sets and 1D signals (audio, electrocardiogram and seismic). Results for all the considered wavelets are shown on each dataset. Collectively, the study suggests that a meticulous choice of wavelet parameters significantly alters the performance of the above-mentioned tasks. To the best of the authors' knowledge, this work represents the most complete analysis and comparison of wavelet filters. Therefore, it represents a valuable benchmark for future works. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
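A stripped-down version of the kind of comparison carried out in entry 6: loop over a few wavelet families and decomposition depths, threshold the detail coefficients, and score the reconstruction. The wavelets, depths, and threshold rule below are illustrative assumptions, not the paper's protocol.

    # Sketch: compare wavelet families/depths for simple threshold-based compression.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(4096))      # stand-in 1D signal

    for wavelet in ['db4', 'sym8', 'bior4.4']:    # families to compare (assumed)
        for level in [3, 5]:                      # decomposition depths (assumed)
            coeffs = pywt.wavedec(x, wavelet, level=level)
            thr = 0.5 * np.std(coeffs[-1])        # crude threshold rule (assumed)
            kept = [coeffs[0]] + [pywt.threshold(c, thr, mode='hard') for c in coeffs[1:]]
            y = pywt.waverec(kept, wavelet)[:len(x)]
            nonzero = sum(int(np.count_nonzero(c)) for c in kept)
            snr = 10 * np.log10(np.sum(x**2) / np.sum((x - y)**2))
            print(f"{wavelet:8s} level {level}: kept {nonzero} coeffs, SNR {snr:.1f} dB")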
7. A Combined Crypto-steganographic Approach for Information Hiding in Audio Signals Using Sub-band Coding, Compressive Sensing and Singular Value Decomposition
- Author
-
Lal, G. Jyothish, Veena, V. K., Soman, K. P., Thampi, Sabu M., editor, Atrey, Pradeep K., editor, Fan, Chun-I, editor, and Perez, Gregorio Martinez, editor
- Published
- 2013
- Full Text
- View/download PDF
8. Design of Two-Channel Quadrature Mirror Filter Banks Using Differential Evolution with Global and Local Neighborhoods
- Author
-
Ghosh, Pradipta, Zafar, Hamim, Banerjee, Joydeep, Das, Swagatam, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Panigrahi, Bijaya Ketan, editor, Suganthan, Ponnuthurai Nagaratnam, editor, Das, Swagatam, editor, and Satapathy, Suresh Chandra, editor
- Published
- 2011
- Full Text
- View/download PDF
9. Design of the Multi-Channel Cosine-Modulated Filter Bank Using the Bacterial Foraging Optimization Algorithm.
- Author
-
Verma, Agya Ram and Singh, Yashvir
- Abstract
The design of a multi-channel cosine-modulated filter bank (CMFB) using bacterial foraging optimization (BFO) is proposed. In this work, the canonic signed digit (CSD) technique is used to optimize the filter coefficients. At the pass-band frequency, the magnitude response of the proposed filter bank is close to that of an ideal filter. The performance of the proposed BFO scheme is evaluated and compared with that of the reported modified cuckoo search (MCS) and artificial bee colony modified rate (ABC-MR) algorithms for the design of the CMFB using different windows. Our simulation results reveal that reductions of up to 88% and 90% can be achieved in average amplitude distortion and average aliasing distortion, respectively, with a 22% reduction in computation time using the proposed BFO algorithm when compared with the MCS technique. The proposed BFO algorithm is a more efficient technique for reconstruction of the original signal with minimum computation time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. Audio Signal Processing Using Time-Frequency Approaches: Coding, Classification, Fingerprinting, and Watermarking
- Author
-
Sridhar Krishnan, Behnaz Ghoraani, and K. Umapathy
- Subjects
Audio mining ,Audio signal ,Audio electronics ,Computer science ,business.industry ,Speech recognition ,lcsh:Electronics ,Speech coding ,lcsh:TK7800-8360 ,computer.software_genre ,Anti-aliasing ,lcsh:Telecommunication ,Sub-band coding ,Audio watermark ,lcsh:TK5101-6720 ,business ,Audio signal processing ,Digital watermarking ,computer ,Digital signal processing ,Digital audio - Abstract
Audio signals are information-rich nonstationary signals that play an important role in our day-to-day communication, perception of the environment, and entertainment. Due to their non-stationary nature, time-only or frequency-only approaches are inadequate for analyzing these signals. A joint time-frequency (TF) approach is a better choice for efficiently processing these signals. In this digital era, compression, intelligent indexing for content-based retrieval, classification, and protection of digital audio content are a few of the areas that encapsulate the majority of audio signal processing applications. In this paper, we present a comprehensive array of TF methodologies that successfully address applications in all of the above-mentioned areas. A TF-based audio coding scheme with a novel psychoacoustics model, music classification, audio classification of environmental sounds, audio fingerprinting, and audio watermarking are presented to demonstrate the advantages of using time-frequency approaches in analyzing and extracting information from audio signals.
- Published
- 2022
- Full Text
- View/download PDF
11. Backward Adaptive and Quasi-Logarithmic Quantizer for Sub-Band Coding of Audio.
- Author
-
Tomić, Stefan, Perić, Zoran, Tančić, Milan, and Nikolić, Jelena
- Subjects
ADAPTIVE codes ,DIGITAL audio ,SIGNAL processing ,COMPANDING ,LOGARITHMS - Abstract
This research presents an audio coding scheme, based on sub-band coding (SBC), with the implementation of quasi-logarithmic compandors. The presented coding scheme is based on signal decomposition and individual processing of the different sub-bands. Two SBC schemes for audio coding are presented, a non-adaptive and an adaptive coding scheme. The application of a backward adaptation technique further improves the performance of this coding scheme, especially when using smaller compression factor values. This paper also describes the determination of an efficient bit allocation, used for coding the individual sub-bands. The results indicate that the proposed coding schemes can successfully be implemented in audio signal coding, providing a high-quality output signal. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
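The quasi-logarithmic companding used in entry 11 follows the familiar μ-law idea: compress, quantize uniformly, expand. The sketch below applies that operation to one sub-band together with a simple backward gain adaptation; the bit depth, μ value, and adaptation rule are assumptions, not the paper's design.

    # Sketch: mu-law compandor quantizer for one sub-band with backward gain adaptation.
    import numpy as np

    def mu_law_quantize(x, bits=4, mu=255.0):
        """Compress with mu-law, quantize uniformly in [-1, 1], then expand."""
        levels = 2 ** bits
        y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)     # compressor
        q = np.round((y + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
        return np.sign(q) * ((1 + mu) ** np.abs(q) - 1) / mu         # expander

    rng = np.random.default_rng(1)
    subband = rng.laplace(scale=0.1, size=2048)        # stand-in sub-band samples

    gain, out = 1.0, np.empty_like(subband)
    for i, s in enumerate(subband):
        out[i] = gain * mu_law_quantize(np.array([s / gain]))[0]
        # Backward adaptation: the decoder can track the same gain from past outputs.
        gain = 0.99 * gain + 0.01 * max(abs(out[i]) * 4, 1e-3)       # assumed rule

    print("SNR [dB]:", 10 * np.log10(np.sum(subband**2) / np.sum((subband - out)**2)))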
12. A wavelet filter comparison on multiple datasets for signal compression and denoising
- Author
-
Pierangelo Migliorati, Alessandro Gnutti, Nicola Adami, Riccardo Leonardi, and Fabrizio Guerrini
- Subjects
Computer science ,Noise reduction ,02 engineering and technology ,Sub-band coding ,Signal ,Wavelet ,Artificial Intelligence ,Compression (functional analysis) ,0202 electrical engineering, electronic engineering, information engineering ,Representation (mathematics) ,Wavelet filter comparison ,Denoising ,business.industry ,Applied Mathematics ,Compression ,Signal compression ,020206 networking & telecommunications ,Pattern recognition ,Computer Science Applications ,Filter design ,Hardware and Architecture ,Signal Processing ,Discrete wavelet transform ,Benchmark (computing) ,Sub-band coding, Discrete wavelet transform, Wavelet filter comparison, Multiresolution analysis, Compression, Denoising ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Multiresolution analysis ,Software ,Information Systems - Abstract
In this paper, we explicitly analyze the performance effects of several orthogonal and bi-orthogonal wavelet families. For each family, we explore the impact of the filter order (length) and the decomposition depth in the multiresolution representation. In particular, two contexts of use are examined: compression and denoising. In both cases, the experiments are carried out on a large dataset of different signal kinds, including various image sets and 1D signals (audio, electrocardiogram and seismic). Results for all the considered wavelets are shown on each dataset. Collectively, the study suggests that a meticulous choice of wavelet parameters significantly alters the performance of the above-mentioned tasks. To the best of the authors' knowledge, this work represents the most complete analysis and comparison of wavelet filters. Therefore, it represents a valuable benchmark for future works.
- Published
- 2021
- Full Text
- View/download PDF
13. Sub-band coding of hexagonal images
- Author
-
Md. Mamunur Rashid and Usman R. Alim
- Subjects
Square tiling ,Computer science ,Multiresolution analysis ,Image and Video Processing (eess.IV) ,Wavelet transform ,02 engineering and technology ,Electrical Engineering and Systems Science - Image and Video Processing ,GeneralLiterature_MISCELLANEOUS ,Sub-band coding ,Tree (data structure) ,020401 chemical engineering ,Signal Processing ,Color depth ,FOS: Electrical engineering, electronic engineering, information engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Hexagonal lattice ,Computer Vision and Pattern Recognition ,0204 chemical engineering ,Electrical and Electronic Engineering ,Algorithm ,Software ,Image compression ,MathematicsofComputing_DISCRETEMATHEMATICS ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
According to the circle-packing theorem, the packing efficiency of a hexagonal lattice is higher than that of an equivalent square tessellation. Consequently, in several contexts, hexagonally sampled images preserve information content better than their Cartesian counterparts. In this paper, novel mapping techniques alongside a wavelet compression scheme are presented for hexagonal images. Specifically, we introduce two tree-based coding schemes, referred to as SBHex (spirally-mapped branch-coding for hexagonal images) and BBHex (breadth-first block-coding for hexagonal images). Both of these coding schemes respect the geometry of the hexagonal lattice and yield better compression results. Our empirical results show that the proposed algorithms for hexagonal images produce better reconstruction quality at low bits-per-pixel representations compared to the tree-based coding counterparts for the Cartesian grid.
- Published
- 2021
14. EEG Compression Using Motion Compensated Temporal Filtering and Wavelet Based Subband Coding
- Author
-
Syed Muhammad Anwar, Majdi R. Alnowami, Muhammad Majid, Beenish Khalid, and Imran Fareed Nizami
- Subjects
Discrete wavelet transform ,set partitioning in hierarchical trees (SPIHT) ,discrete wavelet transform ,General Computer Science ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Signal compression ,motion compensated temporal filtering (MCTF) ,Pattern recognition ,Data_CODINGANDINFORMATIONTHEORY ,compression ,Sub-band coding ,Set partitioning in hierarchical trees ,Wavelet ,Distortion ,Redundancy (engineering) ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,Electroencephalography (EEG) ,business ,lcsh:TK1-9971 ,Communication channel - Abstract
Electroencephalography (EEG) signals are commonly used in medical applications for the prevention, diagnosis, and detection of neurological diseases. These EEG signals have also been used in designing brain-computer interfaces for assistive technologies. For densely placed electrodes and long EEG recordings, a large amount of data needs to be stored, preferably in compressed form. This EEG signal compression is particularly required in an out-of-the-lab environment so that these signals are efficiently transmitted over wired/wireless communication channels. To this end, we propose a novel compression scheme for EEG signals, which exploits the intra-channel redundancy using motion compensated temporal filtering (MCTF) and discrete wavelet transform (DWT) based sub-band coding. In the pre-processing stage, multi-frame data is constructed such that each group of pictures (GOP) contains information from a single channel of EEG data. This helps in removing the intra-channel redundancy. We apply MCTF on each GOP to exploit temporal redundancy, following which the DWT is applied on temporally decomposed frames to exploit spatial redundancy. Each spatio-temporal decomposed frame is assigned a bit budget for minimum distortion. For this purpose, we assign more bit budget to temporally decomposed low-pass frames as compared to high-pass frames. Spatio-temporal frames are then encoded at the assigned bit rate by using the set partitioning in hierarchical trees (SPIHT) algorithm to create the bit stream. Our experimental results showed 4.5% and 2.4% reductions in distortion at the same data rate for the BCI-3 and BCI-4 datasets, respectively. These results improve upon the reduction in data size achieved using state-of-the-art compression methods such as SPIHT and SPIHT with independent component analysis.
- Published
- 2020
- Full Text
- View/download PDF
15. A Novel Strong Decorrelation Approach for Image Subband Coding Using Polynomial EVD Algorithms
- Author
-
G. Ramachandra Reddy and Karuna Yepuganti
- Subjects
Discrete wavelet transform ,Redundancy (information theory) ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Physics and Astronomy ,Data_CODINGANDINFORMATIONTHEORY ,Quantization (image processing) ,Decorrelation ,Algorithm ,Coding gain ,Eigendecomposition of a matrix ,Data compression ,Sub-band coding - Abstract
Subband coding is a popular technique for achieving multichannel data compression and efficient data transmission in image and video communications. In this paper, we focus on designing a data-dependent image subband coder which divides the image into strongly (totally) decorrelated and spectrally majorized subbands, seeking to reduce redundancy in order to achieve better compression and efficient transmission. To achieve strong decorrelation and spectral majorization between the subbands, we adopt a set of new iterative polynomial eigenvalue decomposition (PEVD) algorithms: sequential matrix diagonalization (SMD) and maximum element sequential matrix diagonalization (ME-SMD). Using this SMD-based PEVD approach, we design the data-dependent subband coder (DDSC) for image subband coding. We compare the performance of the proposed SMD/ME-SMD algorithms with existing DDSC methods such as SBR2, SBR2C and the K-L transform (KLT) coder, and with the data-independent subband coder (DISC) based on the discrete wavelet transform (DWT). To measure performance, we use parameters such as coding gain, correlation coefficient, MSE (mean square error) and peak signal-to-noise ratio (PSNR). The presented simulation results for standard images, in the absence of quantization, show that the proposed SMD-based PEVD technique performs far better than the existing techniques.
- Published
- 2019
- Full Text
- View/download PDF
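One figure of merit cited in entry 15, the sub-band coding gain, is simply the ratio of the arithmetic to the geometric mean of the sub-band variances. The sketch below computes it before and after an instantaneous KLT-style decorrelation; it does not implement the SMD/ME-SMD polynomial EVD algorithms themselves, and the signal and band split are stand-ins.

    # Sketch: sub-band coding gain and the effect of decorrelating the bands.
    import numpy as np

    def coding_gain(bands):
        """Arithmetic/geometric mean ratio of band variances (equal-band case)."""
        var = np.var(bands, axis=1)
        return np.mean(var) / np.exp(np.mean(np.log(var)))

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(4096))            # correlated stand-in signal
    bands = x[: 4 * (len(x) // 4)].reshape(-1, 4).T     # naive 4-band polyphase split

    cov = np.cov(bands)                                 # 4x4 inter-band covariance
    _, V = np.linalg.eigh(cov)
    decorrelated = V.T @ bands                          # instantaneous KLT-like transform

    print("gain before:", coding_gain(bands), "after:", coding_gain(decorrelated))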
16. An improved robust image-adaptive watermarking with two watermarks using statistical decoder
- Author
-
Preeti Bhinder, Neeru Jindal, and Kulbir Singh
- Subjects
Computer Networks and Communications ,Computer science ,Gaussian ,020207 software engineering ,02 engineering and technology ,Sub-band coding ,Image (mathematics) ,Moment (mathematics) ,symbols.namesake ,Hardware and Architecture ,Robustness (computer science) ,Computer Science::Multimedia ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Kurtosis ,symbols ,Digital watermarking ,Algorithm ,Software ,Computer Science::Cryptography and Security ,Block (data storage) - Abstract
This paper presents an improved image-adaptive watermarking technique. Two image watermarks are embedded in the high-entropy 8 × 8 blocks of the host image. The DWT is applied to these blocks using the principle of sub-band coding. This decomposes the high-entropy blocks into four sub-band coefficient sets, wherein the approximation and vertical frequency coefficients are modeled using a Gaussian (normal) distribution. The two watermarks are inserted into the host image using an Adjustable Strength Factor (ASF). It is calculated adaptively using the fourth statistical moment, known as kurtosis. Limited side information is also transmitted along with the watermarked image. This side information consists of the high-entropy block positions and the Gaussian distribution parameters. To extract both watermarks from the received watermarked image, the high-entropy block positions sent in the side information help in applying the DWT to calculate the approximation and vertical frequency coefficients. A Gaussian (normal) distribution is similarly used for modeling and calculating the distribution parameters. This helps the Maximum Likelihood (ML) decoder to recover the watermarks successfully using a statistical approach. Two important contributions are presented in this paper. Firstly, adjustable kurtosis values are used, which improves the capacity and robustness of the proposed technique. Secondly, the proposed work is applied to medical applications and gives better performance compared with existing methods. Further, the efficiency of the proposed work is evaluated through better simulation results using PSNR, NCC, SSIM and GMSD under different attacks. The technique is highly robust, as the watermarks survive under different attacks. This increases security and ensures copyright protection.
- Published
- 2019
- Full Text
- View/download PDF
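The adjustable strength factor in entry 16 is driven by the fourth standardized moment (kurtosis) of the block being marked. A rough sketch of that idea follows; the kurtosis-to-strength mapping and the additive embedding rule are assumptions, not the paper's exact scheme.

    # Sketch: kurtosis-driven strength factor for additive embedding in DWT sub-bands.
    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    block = rng.normal(128, 30, size=(8, 8))           # stand-in high-entropy 8x8 block
    wm_bit = 1                                         # one watermark bit for this block

    LL, (LH, HL, HH) = pywt.dwt2(block, 'haar')        # sub-band decomposition

    k = kurtosis(block.ravel(), fisher=False)          # fourth standardized moment
    asf = 0.5 + 2.0 / max(k, 1e-3)                     # assumed kurtosis -> strength map

    LL_marked = LL + asf * (1 if wm_bit else -1)       # assumed additive embedding rule
    marked = pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')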
17. Kalman Filtering Based Motion Estimation for Video Coding
- Author
-
Chung-Ming Kuo, Chaur-Heh Hsieh, and Nai-Chung Yang
- Subjects
Motion compensation ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,Coding tree unit ,Video compression picture types ,Quarter-pixel motion ,Sub-band coding ,Computer vision ,Artificial intelligence ,Multiview Video Coding ,business ,Context-adaptive binary arithmetic coding ,Block-matching algorithm ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Video compression is a very efficient method for the storage and transmission of digital video signals. The applications include multimedia transmission, teleconferencing, videophone, high-definition television (HDTV), CD-ROM storage, etc. The hybrid coding techniques based on predictive and transform coding are the most popular and are adopted by many video coding standards such as MPEG-1/2/4 [1] and H.261/H.263/H.264 [2, 3], owing to their high compression efficiency. In the hybrid coding system, motion compensation, first proposed by Netravali and Robbins in 1979, plays a key role from the viewpoint of coding efficiency and implementation cost [4-11]. A generic hybrid video coder is depicted in Figure 1. [Fig. 1. A generic hybrid motion-compensated DCT video coder: motion estimation and compensation, DCT, quantization/dequantization, IDCT, a frame buffer, and a variable-length coder.]
- Published
- 2021
18. A Multispectral Image Compression Algorithm for Small Satellites Based on Wavelet Subband Coding
- Author
-
Joel Telles and Guillermo Kemper
- Subjects
Set partitioning in hierarchical trees ,Standard test image ,business.industry ,Computer science ,Multispectral image ,Pattern recognition ,Artificial intelligence ,Lossy compression ,Quantization (image processing) ,business ,Data compression ,Sub-band coding ,Image compression - Abstract
This article proposes a lossy compression algorithm and scalable multispectral image coding, covering the blue, green, red, and near-infrared wavelengths, aimed at increasing image quality according to the amount of data received. The algorithm is based on wavelet subband coding and quantization, predictive multispectral image coding across the different wavelengths, and Huffman coding. The methodology was selected due to small satellites' low data rate and brief line of sight to earth stations. The test image database was made from the PeruSat-1 and LANDSAT 8 satellites in order to cover different spatial resolutions. The proposed method was compared with the SPIHT, EZW, and STW techniques and evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM); it showed better efficiency and reached compression ratios of 20, with a PSNR of 30 dB and an SSIM of approximately 0.8, depending on the multispectral image wavelength.
- Published
- 2020
- Full Text
- View/download PDF
19. An Improved Approach for the Design of Two-Channel Quadrature Mirror Filter Bank Using Unconstrained Optimization.
- Author
-
Agrawal, S. K. and Sahu, O. P.
- Subjects
*QUADRATURE amplitude modulation , *FILTERS & filtration , *MULTIPHASE flow , *EIGENVALUES , *EIGENVECTORS , *COEFFICIENTS (Statistics) - Abstract
In this paper, an improved method for the design of a two-channel Quadrature Mirror Filter (QMF) bank with linear-phase characteristics is presented. A prototype low-pass filter of the QMF bank is implemented using polyphase components. The design technique optimizes the values of the prototype filter coefficients to match the ideal transfer function of the filter bank using an eigenvalue-eigenvector approach without any matrix inversion. The optimization is carried out to minimize an objective function which is a linear combination of the pass-band error and stop-band residual energy of the low-pass analysis filter of the filter bank, and the square error of the distortion transfer function of the QMF bank at the quadrature frequency. The simulation results show that the proposed method requires less computational time and fewer iterations, with similar Peak Reconstruction Error (PRE) performance, in comparison to existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2014
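The objective minimized in entry 19 combines the pass-band error, the stop-band residual energy, and the squared error of the distortion transfer function at the quadrature frequency. A direct numerical evaluation of such a composite objective for a candidate low-pass prototype is sketched below; the weights, band edges, and starting filter are assumptions.

    # Sketch: composite QMF design objective for a two-channel bank, H1(z) = H0(-z).
    import numpy as np
    from scipy.signal import freqz

    def qmf_objective(h0, wp=0.4 * np.pi, ws=0.6 * np.pi, a=1.0, b=1.0, c=1.0):
        w, H0 = freqz(h0, worN=1024)
        # Pass-band error: deviation of |H0| from 1 below wp.
        ep = np.mean((np.abs(H0[w <= wp]) - 1.0) ** 2)
        # Stop-band residual energy above ws.
        es = np.mean(np.abs(H0[w >= ws]) ** 2)
        # Distortion |H0(w)|^2 + |H0(pi-w)|^2 should equal 1; check it at w = pi/2.
        _, Hq = freqz(h0, worN=[np.pi / 2])
        et = (2 * np.abs(Hq[0]) ** 2 - 1.0) ** 2
        return a * ep + b * es + c * et

    h0 = np.ones(16) / 16                # naive starting prototype (assumed)
    print("objective:", qmf_objective(h0))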
20. Audio coding via EMD
- Author
-
Thierry Chonavel, Kais Khaldi, Mounia Turki Hadj-Alouane, Ali Komaty, Abdel-Ouahab Boudraa, Institut de Recherche de l'Ecole Navale (IRENAV), Université de Bordeaux (UB)-Institut Polytechnique de Bordeaux-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)-Arts et Métiers Sciences et Technologies, HESAM Université (HESAM)-HESAM Université (HESAM), Département Signal et Communications (IMT Atlantique - SC), IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT), Lab-STICC_IMTA_CID_TOMS, Laboratoire des sciences et techniques de l'information, de la communication et de la connaissance (Lab-STICC), École Nationale d'Ingénieurs de Brest (ENIB)-Université de Bretagne Sud (UBS)-Université de Brest (UBO)-École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne)-Institut Mines-Télécom [Paris] (IMT)-Centre National de la Recherche Scientifique (CNRS)-Université Bretagne Loire (UBL)-IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-École Nationale d'Ingénieurs de Brest (ENIB)-Université de Bretagne Sud (UBS)-Université de Brest (UBO)-École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne)-Institut Mines-Télécom [Paris] (IMT)-Centre National de la Recherche Scientifique (CNRS)-Université Bretagne Loire (UBL)-IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), and Institut Mines-Télécom [Paris] (IMT)
- Subjects
Computer science ,Audio coding ,02 engineering and technology ,Sub-band coding ,Hilbert–Huang transform ,Wavelet ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Codec ,Psychoacoustics ,Electrical and Electronic Engineering ,Empirical mode decomposition ,Audio signal ,Applied Mathematics ,020206 networking & telecommunications ,Stationarity index ,Maxima and minima ,Computational Theory and Mathematics ,Signal Processing ,020201 artificial intelligence & image processing ,Empirical mode compression ,Computer Vision and Pattern Recognition ,Statistics, Probability and Uncertainty ,Algorithm ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,Psychoacoustic model ,Coding (social sciences) - Abstract
In this paper, an audio coding scheme based on the empirical mode decomposition in association with a psychoacoustic model is presented. The principle of the method consists of adaptively breaking down the audio signal into intrinsic oscillatory components, called Intrinsic Mode Functions (IMFs), that are fully described by their local extrema. These extrema are encoded. The coding is carried out frame by frame and no assumption is made about the signal to be coded. The number of allocated bits varies from mode to mode and obeys the coding-error inaudibility constraint. Due to the symmetry of an IMF, only the extrema (maxima or minima) of one of its interpolating envelopes are perceptually coded. In addition, to deal with rapidly changing audio signals, a stationarity index is used and, when a transient is detected, the frame is split into two overlapping sub-frames. At the decoder side, the IMFs are recovered using the associated coded maxima, and the original signal is reconstructed by IMF summation. The performance of the proposed coding is analyzed and compared to that of the MP3 and AAC codecs, and of a wavelet-based coding approach. Based on the analyzed mono audio signals, the obtained results show that the proposed coding scheme outperforms the MP3 and the wavelet-based coding methods and performs slightly better than the AAC codec, thus showing the potential of the EMD for data-driven audio coding.
- Published
- 2020
- Full Text
- View/download PDF
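The key observation exploited in entry 20 is that an IMF is fully described by its local extrema, so coding only the maxima of one interpolating envelope is sufficient. The toy sketch below extracts the maxima of a synthetic oscillation and rebuilds an envelope by spline interpolation; it does not perform EMD itself nor the paper's psychoacoustic bit allocation.

    # Sketch: an IMF-like oscillation, its local maxima, and a spline envelope rebuilt
    # from those maxima (the quantity that would actually be coded).
    import numpy as np
    from scipy.signal import argrelmax
    from scipy.interpolate import CubicSpline

    t = np.linspace(0, 1, 2000)
    imf = np.sin(2 * np.pi * 40 * t) * (0.5 + 0.4 * np.sin(2 * np.pi * 3 * t))

    idx = argrelmax(imf)[0]                       # indices of local maxima
    envelope = CubicSpline(t[idx], imf[idx])(t)   # upper envelope from maxima only

    print(f"{len(idx)} maxima describe {len(imf)} samples of this component")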
21. Perceptual Vibration Hashing by Sub-Band Coding: An Edge Computing Method for Condition Monitoring
- Author
-
Xiaohong Wang, Chengliang Liu, Li Fajia, Haining Liu, Yixiang Wang, and Michael Pecht
- Subjects
General Computer Science ,Computer science ,condition monitoring ,Hash function ,02 engineering and technology ,Perceptual hashing ,Wavelet packet decomposition ,edge computing ,0202 electrical engineering, electronic engineering, information engineering ,Discrete cosine transform ,bearing fault diagnosis ,General Materials Science ,Computer Science::Databases ,Edge computing ,perceptual hashing ,020203 distributed computing ,degradation assessment ,business.industry ,General Engineering ,Condition monitoring ,Prognostics and health management (PHM) ,Sub-band coding ,Computer data storage ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,business ,lcsh:TK1-9971 ,Algorithm - Abstract
High data throughput during real-time vibration monitoring can easily lead to network congestion, insufficient data storage space, a heavy computing burden, and high communication costs. As a new computing paradigm, edge computing is deemed to be a good solution to these problems. In this paper, perceptual hashing is proposed as a form of edge computing, aiming not only to reduce the data dimensionality but also to extract and represent the machine condition information. A sub-band coding method based on the wavelet packet transform, two-dimensional discrete cosine transform, and symbolic aggregate approximation is developed for perceptual vibration hashing. When the sub-band coding method is implemented on a monitoring terminal, the acquired kilobyte-long vibration signal can be transformed into a machine condition hash occupying only a few bytes. Therefore, the efficiency of condition monitoring can benefit from the compactness of the machine condition hash, while comparable diagnostic and prognostic results can still be achieved. The effectiveness of the developed method is verified with two benchmark bearing datasets. Considerations on practical condition monitoring applications are also presented.
- Published
- 2019
- Full Text
- View/download PDF
22. Variable Block-Sized Signal-Dependent Transform for Video Coding
- Author
-
Xu Jizheng, Wenjun Zeng, Guangming Shi, Feng Wu, and Cuiling Lan
- Subjects
Theoretical computer science ,Macroblock ,020206 networking & telecommunications ,02 engineering and technology ,Sub-band coding ,Algorithmic efficiency ,Sum of absolute transformed differences ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Discrete cosine transform ,Lapped transform ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,Decoding methods ,Transform coding ,Mathematics - Abstract
The transform, as one of the most important modules of mainstream video coding systems, has seemed very stable over the past several decades. However, recent developments indicate that offering more options for the transform can lead to coding efficiency benefits. In this paper, we go further and investigate how coding efficiency can be improved over the state-of-the-art method by adapting a transform for each block. We present a variable block-sized signal-dependent transform (SDT) design based on the High Efficiency Video Coding (HEVC) framework. For a coding block ranging from 4×4 to 32×32, we collect a quantity of similar blocks from the reconstructed area and use them to derive the Karhunen-Loève transform. We avoid sending overhead bits to signal the transform by performing the same procedure at the decoder. In this way, the transform for every block is tailored to its statistics, i.e., it is signal-dependent. To make the large block-sized SDTs feasible, we present a fast algorithm for transform derivation. Experimental results show the effectiveness of the SDTs for different block sizes, which leads to bit savings of up to 23.3%. On average, we achieve BD-rate savings of 2.2%, 2.4%, 3.3%, and 7.1% under the AI-Main10, RA-Main10, RA-Main10, and LP-Main10 configurations, respectively, compared with the test model HM-12 of HEVC. The proposed scheme has also been adopted into the joint exploration test model for the exploration of a potential future video coding standard.
- Published
- 2018
- Full Text
- View/download PDF
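The block-adaptive transform in entry 22 is, at its core, a Karhunen-Loève transform learned from previously reconstructed blocks that resemble the current one. A bare-bones version of that derivation (gather similar blocks, form a covariance matrix, take its eigenvectors) is sketched below; the block search and the HEVC integration are omitted, and the training data here is synthetic.

    # Sketch: derive a signal-dependent KLT for a 4x4 block from similar training blocks.
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for blocks gathered from the reconstructed area that resemble the target.
    similar_blocks = rng.normal(size=(200, 16)) @ np.diag(np.linspace(2.0, 0.2, 16))

    cov = np.cov(similar_blocks, rowvar=False)          # 16x16 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    klt = eigvecs[:, ::-1].T                            # rows sorted by decreasing variance

    target_block = rng.normal(size=16)                  # current residual block (stand-in)
    coeffs = klt @ target_block                         # forward transform
    reconstructed = klt.T @ coeffs                      # inverse (the KLT is orthonormal)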
23. Diversity-Based Reference Picture Management for Low Delay Screen Content Coding
- Author
-
Bin Li, Jiahao Li, Xu Jizheng, and Ruiqin Xiong
- Subjects
Motion compensation ,Computer science ,business.industry ,Low delay ,020206 networking & telecommunications ,02 engineering and technology ,Coding tree unit ,Sub-band coding ,Computer engineering ,Motion estimation ,Bit rate ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Multiview Video Coding ,business ,Context-adaptive binary arithmetic coding ,Coding (social sciences) - Abstract
Screen content coding plays an important role in many applications. Conventional reference picture management (RPM) strategies developed for natural content may not work well for screen content. This is because many regions in screen content remain static for a long time, causing a large amount of repetitive content to stay in the decoded picture buffer. This repetitive content is not conducive to inter prediction, yet still occupies valuable memory. This paper proposes a diversity-based RPM scheme for screen content coding. The concept of diversity is introduced for the reference picture set (RPS) to help formulate the RPM problem. By maximizing the diversity of the RPS, more potentially better predictions are provided. Better compression performance can then be achieved. Meanwhile, the proposed scheme is non-normative and compatible with existing video coding standards, such as High Efficiency Video Coding. The experimental results show that, for low-delay screen content coding, the bit saving of the proposed scheme is 4.9% on average and up to 13.7%, without increasing encoding time.
- Published
- 2018
- Full Text
- View/download PDF
24. DWT Based on OFDM Multicarrier Modulation Using Multiple Input Output Antennas System.
- Author
-
Manasra, Ghandi, Najajri, Osama, Rabah, Samer, and Arram, Hashem Abu
- Subjects
WAVELETS (Mathematics) ,MULTI-carrier modulation ,ORTHOGONAL frequency division multiplexing ,RANDOM noise theory ,STOCHASTIC information theory ,BIT error rate ,MIMO systems - Abstract
Transmission through a wireless channel suffers from many challenges due to the multipath effect, which causes the Inter-Symbol Interference (ISI) problem. Multicarrier modulation (MCM) is proposed as a solution to overcome ISI. This research presents Discrete Wavelet Transform (DWT) based multicarrier modulation as an alternative platform to conventional OFDM, in which there is no need for a cyclic prefix overhead due to the overlapping nature of the DWT. Simulation-based analysis is used to evaluate the two multicarrier systems, the DWT-based multicarrier system with the Haar mother wavelet and conventional OFDM, under the scenario of a multiple-antenna system, with BPSK and QPSK as two modulation schemes in an additive white Gaussian noise (AWGN) channel. Based on the bit error rate performance and the transmission capacity, the DWT-based multicarrier system was found to be superior to the conventional OFDM system. [ABSTRACT FROM AUTHOR]
- Published
- 2012
25. Pixel Interlacing Based Video Transmission for Low-Complexity Intra-Frame Error Concealment.
- Author
-
Yan, Bo, Gharavi, Hamid, and Hu, Bin
- Subjects
*VIDEO processing , *COMPUTATIONAL complexity , *ELECTRIC interference , *VIDEO compression , *RELIABILITY in engineering , *MOBILE communication systems , *SIGNAL-to-noise ratio , *IMAGE quality analysis - Abstract
When multi-path fading and interference frequently disrupt a mobile radio communication system, they can seriously undermine its reliability for the transmission of compressed video signals. In this paper, we present a pixel interlacing based video transmission system for low-complexity Intra-frame error concealment over error-prone mobile networks, especially under severe channel conditions. The proposed method, despite its low complexity, which is based on a simple pixel interlacing technique at the encoder, can significantly enhance the quality of the corrupted video signal. Experimental results show that the proposed method can significantly improve image quality (with an average PSNR gain of up to 15.80 dB) in comparison with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
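The pixel interlacing in entry 25 amounts to splitting each frame into interleaved sub-images before encoding, so that a lost sub-image can be concealed from its spatial neighbours. The sketch below shows the split and a trivial neighbour-averaging concealment; it is not the paper's concealment algorithm, and the frame and loss pattern are synthetic.

    # Sketch: 2x2 pixel interlacing of a frame and naive concealment of one lost sub-image.
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(240, 320)).astype(float)

    subs = [frame[i::2, j::2].copy() for i in range(2) for j in range(2)]  # 4 sub-images

    subs[3][:] = np.nan                         # pretend the 4th sub-image was lost
    rec = np.empty_like(frame)
    for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        rec[i::2, j::2] = subs[k]
    # Conceal lost pixels from the horizontal/vertical neighbours that did arrive.
    lost = np.isnan(rec)
    left = np.roll(rec, 1, axis=1)
    up = np.roll(rec, 1, axis=0)
    rec[lost] = np.nanmean(np.stack([left[lost], up[lost]]), axis=0)
    print("PSNR after concealment:",
          10 * np.log10(255**2 / np.mean((frame - rec) ** 2)))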
26. Design of the Multi-Channel Cosine-Modulated Filter Bank Using the Bacterial Foraging Optimization Algorithm
- Author
-
Yashvir Singh and Agya Ram Verma
- Subjects
Optimization algorithm ,Computer science ,Foraging ,020206 networking & telecommunications ,02 engineering and technology ,Filter bank ,Sub-band coding ,030507 speech-language pathology & audiology ,03 medical and health sciences ,ComputingMethodologies_PATTERNRECOGNITION ,0202 electrical engineering, electronic engineering, information engineering ,Trigonometric functions ,0305 other medical science ,Algorithm ,Multi channel - Abstract
The design of a multi-channel cosine-modulated filter bank (CMFB) using bacterial foraging optimization (BFO) is proposed. In this work, the canonic signed digit (CSD) technique is used to ...
- Published
- 2018
- Full Text
- View/download PDF
27. Discrete Wavelet Transform Based on Coextensive Distributive Computation on FPGA
- Author
-
K. B. Sowmya and Jose Mathew
- Subjects
Discrete wavelet transform ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,computer.file_format ,Filter bank ,Sub-band coding ,03 medical and health sciences ,0302 clinical medicine ,Wavelet ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Bit Rate Reduction ,Field-programmable gate array ,computer ,Algorithm ,030217 neurology & neurosurgery ,Image compression ,Data compression - Abstract
Subband coding has recently emerged as a leading candidate for standardization in audio/video/image compression, echo cancellation, radar, image analysis, communications, medical imaging, etc. As a result, the Discrete Wavelet Transform has gained much attention as a memory-efficient technique for handling large amounts of data. The implicit overlapping and variable-length basis functions of wavelets produce smoother and more pleasant reconstructions. Data compression and bit-rate reduction are achieved by implementing a memory-efficient Discrete Wavelet Transform based on coextensive distributive computation on an FPGA. In this work, a low-complexity DWT architecture that utilizes look-up tables and shift-register FPGA structures is developed to build a multiplier-free filter bank, the filter bank being the key component of a DWT structure. This results in better space usage and a reduction in area. Filter bank construction is done using DB-4 Daubechies 9/7 wavelets. This paper presents a DWT processor with consistent performance, good operating speed, and area efficiency, along with effective utilization of the resources available on the target FPGA. The proposed Discrete Wavelet Transform system is implemented on a Xilinx xc2vp30-7-ff896 Field Programmable Gate Array with a maximum operating frequency of 141.055 MHz.
- Published
- 2018
- Full Text
- View/download PDF
28. Fast Randomization for Distributed Low-Bitrate Coding of Speech and Audio
- Author
-
Johannes Fischer, Tom Bäckström, Publica, Dept Signal Process and Acoust, Friedrich-Alexander University Erlangen-Nürnberg, Aalto-yliopisto, and Aalto University
- Subjects
Acoustics and Ultrasonics ,Computer science ,speech coding ,Speech recognition ,Speech coding ,superfast algorithm ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,randomization ,distributed coding ,orthonormal matrix ,Audio codec ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Speech ,Electrical and Electronic Engineering ,Quantization (signal) ,Voice activity detection ,ta213 ,Complexity theory ,Quantization (signal processing) ,020206 networking & telecommunications ,Linear predictive coding ,Speech processing ,Sub-band coding ,Codecs ,Computational Mathematics ,audio coding ,Adaptive Multi-Rate audio codec ,020201 artificial intelligence & image processing - Abstract
Efficient coding of speech and audio in a distributed system requires that quantization errors across nodes are uncorrelated. Yet, with conventional methods at low bitrates, quantization levels become increasingly sparse, which does not correspond to the distribution of the input signal and, importantly, also reduces coding efficiency in a distributed system. We have recently proposed a distributed speech and audio codec design, which applies quantization in a randomized domain such that quantization errors are randomly rotated in the output domain. Similar to dithering, this ensures that quantization errors across nodes are uncorrelated and coding efficiency is retained. In this paper, we improve this approach by proposing faster randomization methods, with a computational complexity of O(N log N). The presented experiments demonstrate that the proposed randomizations yield uncorrelated signals, that perceptual quality is competitive, and that the complexity of the proposed methods is feasible for practical applications.
- Published
- 2018
- Full Text
- View/download PDF
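One simple way to obtain an O(N log N) randomized orthonormal transform of the kind discussed in entry 28 is to flip the signs of the samples with a shared pseudo-random sequence and then apply an orthonormal fast transform. The DCT-based sketch below illustrates the idea only; it is not the exact construction proposed in the paper.

    # Sketch: randomized orthonormal transform = random sign flips + orthonormal DCT.
    # Both steps are invertible and cost O(N log N).
    import numpy as np
    from scipy.fft import dct, idct

    def randomize(x, seed):
        signs = np.random.default_rng(seed).choice([-1.0, 1.0], size=len(x))
        return dct(signs * x, norm='ortho'), signs

    def derandomize(y, signs):
        return signs * idct(y, norm='ortho')

    x = np.sin(np.linspace(0, 20, 1024))
    y, signs = randomize(x, seed=42)          # domain in which quantization would happen
    x_back = derandomize(y, signs)
    print("max reconstruction error:", np.max(np.abs(x - x_back)))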
29. Distributed Video Coding for Illumination Compensation of Multi-view Video.
- Author
-
Seanae Park, Donggyu Sim, and Byeungwoo Jeon
- Subjects
ENTROPY ,CAMCORDERS ,DISCRETE cosine transforms ,VIEW cameras ,DIGITAL electronics ,ALGORITHMS - Abstract
In this paper, we propose an improved distributed multi-view video coding method that is robust to illumination changes among different views. The use of view dependency is not effective for multi-view video because each view has different intrinsic and extrinsic camera parameters. In this paper, a modified distributed multi-view coding method is presented that applies illumination compensation when generating side information. The proposed encoder codes the DC values of the discrete cosine transform (DCT) coefficients separately with entropy coding. The proposed decoder can generate more accurate side information by using the transmitted DC coefficients to compensate for illumination changes. Furthermore, the AC coefficients are coded with conventional entropy or channel coders depending on the frequency band. We found that the proposed algorithm is about 0.1-0.5 dB better than conventional algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
30. Wavelet Packets Feasibility Study for the Design of an ECG Compressor.
- Author
-
Blanco-Velasco, Manuel, Cruz-Roldán, Fernando, Godino-Llorente, Juan Ignacio, and Barner, Kenneth E.
- Subjects
*ELECTROCARDIOGRAPHY , *ELECTRODIAGNOSIS , *HEART disease diagnosis , *ELECTROKYMOGRAPHY , *PATIENTS , *MEDICAL care - Abstract
Most of the recent electrocardiogram (ECG) compression approaches developed with the wavelet transform are implemented using the discrete wavelet transform. Conversely, wavelet packets (WP) are not extensively used, although they provide an adaptive decomposition for representing signals. In this paper, we present a thresholding-based method to encode ECG signals using WP. The design of the compressor has been carried out according to two main goals: 1) the scheme should be simple enough to allow real-time implementation; 2) quality, i.e., the reconstructed signal should be as similar as possible to the original signal. The proposed scheme is versatile in that neither QRS detection nor a priori signal information is required. As such, it can be applied to any ECG. Results show that WP perform efficiently and can now be considered as an alternative in ECG compression applications. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
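The thresholding-based wavelet-packet encoder outlined in entry 30 boils down to: decompose, zero the small packet coefficients, keep the rest. A compact PyWavelets sketch of that step follows; the wavelet, depth, and threshold are assumptions rather than the paper's tuned values, and the ECG segment is synthetic.

    # Sketch: wavelet-packet decomposition of an ECG segment with hard thresholding.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    ecg = np.sin(np.linspace(0, 30 * np.pi, 2048)) + 0.05 * rng.standard_normal(2048)

    wp = pywt.WaveletPacket(data=ecg, wavelet='db4', maxlevel=4)
    kept, total = 0, 0
    for node in wp.get_level(4, order='freq'):
        c = node.data
        total += c.size
        mask = np.abs(c) >= 0.1 * np.max(np.abs(c))   # assumed per-node threshold
        node.data = c * mask
        kept += int(mask.sum())

    reconstructed = wp.reconstruct(update=False)
    print(f"kept {kept}/{total} packet coefficients")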
31. High-Efficiency Image Coding via Near-Optimal Filtering
- Author
-
Wen Gao, Xinfeng Zhang, Shiqi Wang, Yabin Zhang, Siwei Ma, and Weisi Lin
- Subjects
Mean squared error ,Pixel ,Computer science ,Applied Mathematics ,Wiener filter ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Filter (signal processing) ,Sub-band coding ,symbols.namesake ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,Image restoration ,Image compression - Abstract
Wiener filtering, which has been widely used in the field of image restoration, is statistically optimal in the sense of mean square error. The adaptive loop filter in video coding inherits the design of Wiener filters and has been proved to achieve significant improvement in compression performance by reducing coding artifacts and providing high-quality references for subsequent frames. To further improve compression performance via filtering techniques, we explore the factors that may hinder the potential performance of Wiener-based filters, and propose a near-optimal filter learning scheme for high-efficiency image coding. Based on our analyses, we observe that the foremost factor affecting the performance of Wiener-based filters is the divergence of the statistical characteristics of the training samples, rather than the filter taps or shapes. In view of this, we propose an iterative training method to derive near-optimal Wiener filter parameters by simultaneously labeling sample categories at the pixel level. These parameters are compressed and transmitted to the decoder side to improve the quality of the decoded images by reducing coding artifacts. Experimental results show that the proposed scheme achieves significant bitrate savings compared with High Efficiency Video Coding in the high-bitrate intra coding scenario.
- Published
- 2017
- Full Text
- View/download PDF
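Wiener-based filtering of the kind entry 31 builds on amounts to a least-squares problem: find the filter taps that map decoded samples back toward the originals with minimum mean squared error. A 1-D version of that derivation via the normal equations is sketched below; it omits the pixel-level sample classification proposed in the paper, and the signals are synthetic.

    # Sketch: least-squares (Wiener-style) FIR restoration filter from training pairs.
    import numpy as np

    rng = np.random.default_rng(0)
    orig = np.cumsum(rng.standard_normal(5000))            # stand-in original signal
    decoded = orig + 0.5 * rng.standard_normal(5000)       # stand-in coded/noisy signal

    taps, half = 9, 4
    # Build the design matrix of decoded neighborhoods around each sample.
    X = np.column_stack([np.roll(decoded, s) for s in range(-half, half + 1)])
    X, y = X[half:-half], orig[half:-half]

    w = np.linalg.solve(X.T @ X, X.T @ y)                  # normal equations
    restored = X @ w
    print("MSE before:", np.mean((y - decoded[half:-half]) ** 2),
          "after:", np.mean((y - restored) ** 2))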
32. Temporally Dependent Rate-Distortion Optimization for Low-Delay Hierarchical Video Coding
- Author
-
Ce Zhu, Shuai Li, Tianwu Yang, and Yanbo Gao
- Subjects
Real-time computing ,020206 networking & telecommunications ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,Coding tree unit ,Intra-frame ,Sub-band coding ,Video compression picture types ,Rate–distortion optimization ,0202 electrical engineering, electronic engineering, information engineering ,Codec ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Context-adaptive binary arithmetic coding ,Mathematics ,Coding (social sciences) - Abstract
The low-delay hierarchical coding structure (LD-HCS), as one of the most important components in the latest High Efficiency Video Coding (HEVC) standard, greatly improves coding performance. It groups consecutive P/B frames into different layers and encodes them with different quantization parameters (QPs) and reference mechanisms, in such a way that temporal dependency among frames can be exploited. However, due to the varying characteristics of video content, the temporal dependency among coding units differs significantly within and across layers, while a fixed LD-HCS scheme cannot take full advantage of this dependency, leading to a substantial loss in coding performance. This paper addresses the temporally dependent rate-distortion optimization (RDO) problem by attempting to exploit the varying temporal dependency of different units. First, the temporal relationship of different frames under the LD-HCS is examined, and hierarchical temporal propagation chains are constructed to represent the temporal dependency among coding units in different frames. Then, a hierarchical temporally dependent RDO scheme is developed specifically for the LD-HCS based on a source distortion propagation model. Experimental results show that our proposed scheme can achieve 2.5% and 2.3% BD-rate gains on average compared with the HEVC codec under the same configuration of P and B frames, respectively, with a negligible increase in encoding time. Furthermore, coupled with QP adaptation, our proposed method can achieve higher coding gains, e.g., with multi-QP optimization, about 5.4% and 5.0% BD-rate savings on average over the HEVC codec under the same setting of P and B frames, respectively.
- Published
- 2017
- Full Text
- View/download PDF
33. High quality audio object coding framework based on non-negative matrix factorization
- Author
-
Tingzhao Wu, Xiaochen Wang, Jinshan Wang, Shanfa Ke, and Ruimin Hu
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Speech coding ,020207 software engineering ,02 engineering and technology ,Coding tree unit ,Sub-band coding ,Matrix decomposition ,Non-negative matrix factorization ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Sound quality ,business ,Decoding methods ,Coding (social sciences) - Abstract
Object-based audio coding is the main technique of audio scene coding. It can effectively reconstruct each object trajectory and provides sufficient flexibility for personalized audio scene reconstruction, so more and more attention has been paid to object-based audio coding. However, existing object-based techniques have poor sound quality because of the low frequency-domain resolution of their parameters. In order to achieve high-quality audio object coding, we propose a new coding framework that introduces the non-negative matrix factorization (NMF) method. We extract object parameters with high resolution to improve sound quality, and apply the NMF method to parameter coding to reduce the high bitrate caused by the high resolution. The experimental results show that the proposed framework can improve coding quality by 25%, so it provides a better solution for encoding audio scenes in a more flexible and higher-quality way.
- Published
- 2017
- Full Text
- View/download PDF
34. Feedback-Free Binning Design for Mobile Wyner-Ziv Video Coding: An Operational Duality between Source Distortion and Channel Capacity
- Author
-
Yiqiang Chen, Xiangyang Ji, and Wen Ji
- Subjects
Channel code ,Computer Networks and Communications ,Computer science ,Real-time computing ,Variable-length code ,020206 networking & telecommunications ,02 engineering and technology ,Code rate ,Coding tree unit ,Sub-band coding ,Channel capacity ,Shannon–Fano coding ,Distortion ,Motion estimation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Multiview Video Coding ,Algorithm ,Encoder ,Software ,Context-adaptive binary arithmetic coding ,Decoding methods ,Communication channel - Abstract
Most mobile video applications require the encoder to have low complexity. Wyner-Ziv (WZ) video coding removes complex motion estimation from the encoder and provides error resilience through the embedded channel coding module. WZ video coding is therefore regarded as a promising encoder for wireless video systems. Most WZ video coding based on channel codes is a practical implementation of binning schemes. In this work, we present a novel two-tier binning scheme, consisting of an inner and an outer structure, for improving rate-distortion performance. First, we develop Raptor coding with side information to construct the inner binning structure, which provides a lower rate. Second, for the outer binning, we model the WZ video coding architecture as a multi-access channel, so that we can exploit the properties of channel capacity. Third, we exploit the duality property of WZ video coding. Based on this property, both the primal and dual solutions are subsequently provided in this study. For the primal problem of distortion minimization, we develop dynamic programming to find the optimal binning policy, whereas for the dual problem of capacity maximization, we devise a near sum-capacity binning algorithm. The objective is to lower the coding rate with lower complexity. Experimental results showed that, compared with state-of-the-art coding, the proposed method enhances both decoding performance and quality. Besides, we observed that the decoding distortion was reduced through the proposed outer binning, while the proposed inner binning, based on Raptor coding that jointly considers side information (SI), led to a low bitrate when a target decoding quality was specified. These findings substantiate the effectiveness of our method.
- Published
- 2017
- Full Text
- View/download PDF
35. Improving ECG signal denoising using wavelet transform for the prediction of malignant arrhythmias
- Author
-
Domenico Andrea Giliberti, Cataldo Guaragnella, and Agostino Giorgio
- Subjects
Computer science ,Noise reduction ,Biomedical Engineering ,Medicine (miscellaneous) ,Health Informatics ,02 engineering and technology ,Signal ,Biomaterials ,Biomedical Electronics ,0202 electrical engineering, electronic engineering, information engineering ,Detection theory ,Field-programmable gate array ,FPGA ,Wavelet Transforms ,Denoising ,Signal Detection ,Noise (signal processing) ,business.industry ,020208 electrical & electronic engineering ,Wavelet transform ,Pattern recognition ,Filter (signal processing) ,Sub-band coding ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
This paper deals with the accuracy of algorithms for the detection of ventricular late potentials (VLPs) in the electrocardiographic (ECG) signal, which are associated with malignant arrhythmias and possible cardiac death. VLP detection is strongly influenced by signal noise, so the objective of this paper is to define a denoising algorithm that improves VLP detection. The method uses wavelet denoising with sub-band coding, which unfortunately introduces heavy linear distortions; an equalisation filter has therefore been designed to cancel these distortions. The algorithm has been implemented and successfully verified using MATLAB, then implemented on an Altera FPGA and verified on the DE1-SoC evaluation board. On-board processing results are consistent with the theoretical results, validating the algorithm. The results show that the algorithm can be implemented as programmable hardware and could be used to upgrade the reliability of ECG devices in the field of heart disease prevention.
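A minimal sketch of the wavelet sub-band denoising stage, assuming PyWavelets and a universal soft threshold; the equalisation filter that the paper designs to cancel the resulting linear distortion is not reproduced:

```python
# Minimal sketch of wavelet (sub-band) denoising of an ECG-like signal using PyWavelets.
import numpy as np
import pywt

fs = 1000                                      # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)            # stand-in for an ECG trace
noisy = clean + 0.1 * np.random.randn(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)             # sub-band decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest band
thr = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]      # reconstructed, denoised signal
```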
- Published
- 2020
36. An improved optimal bit allocation method for sub-band coding
- Author
-
Wang, Ze, Lee, Yin, Leung, Chi-Sing, Wong, Tien-Tsin, and Zhu, Yi-Sheng
- Subjects
- *
IDENTIFICATION , *RESOURCE allocation , *SIMULATION methods & models - Abstract
This paper presents an improved optimal bit allocation method for sub-band coding. To speed up the allocation process, a coarse bit range for each sub-band is first obtained using the log-variance method, and an optimal search routine is then applied to produce the final solution. Simulations evaluating the proposed method demonstrate its good properties. [Copyright Elsevier]
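A minimal sketch of the classic log-variance rule that supplies the coarse bit range, with a crude budget adjustment standing in for the paper's optimal search refinement; variances and the total budget are illustrative:

```python
# Minimal sketch of log-variance bit allocation across sub-bands.
import numpy as np

def log_variance_allocation(variances, total_bits):
    """Coarse per-sub-band bit allocation from (strictly positive) sub-band variances."""
    variances = np.asarray(variances, dtype=float)
    avg = total_bits / variances.size
    geo_mean = np.exp(np.mean(np.log(variances)))
    bits = avg + 0.5 * np.log2(variances / geo_mean)      # classic log-variance rule
    bits = np.clip(np.round(bits), 0, None).astype(int)
    # crude adjustment so the budget is met exactly (a real coder refines this by search)
    while bits.sum() > total_bits:
        bits[int(np.argmax(bits))] -= 1
    while bits.sum() < total_bits:
        bits[int(np.argmax(variances))] += 1
    return bits

print(log_variance_allocation([4.0, 1.0, 0.25, 0.0625], total_bits=16))
```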
- Published
- 2003
- Full Text
- View/download PDF
37. Scalable Lossless Coding of Dynamic Medical CT Data Using Motion Compensated Wavelet Lifting with Denoised Prediction and Update
- Author
-
Daniela Lanz, Andre Kaup, and Franz Schilling
- Subjects
Lossless compression ,Discrete wavelet transform ,0303 health sciences ,Motion compensation ,Computer science ,030310 physiology ,Image and Video Processing (eess.IV) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,030229 sport sciences ,Filter (signal processing) ,Electrical Engineering and Systems Science - Image and Video Processing ,Sub-band coding ,03 medical and health sciences ,0302 clinical medicine ,Wavelet ,Compression ratio ,FOS: Electrical engineering, electronic engineering, information engineering ,High-pass filter ,Algorithm - Abstract
Professional applications like telemedicine often require scalable lossless coding of sensitive data. 3-D subband coding has turned out to offer good compression results for dynamic CT data and additionally provides a scalable representation in terms of low- and highpass subbands. To improve the visual quality of the lowpass subband, motion compensation can be incorporated into the lifting structure, but leads to inferior compression results at the same time. Prior work has shown that a denoising filter in the update step can improve the compression ratio. In this paper, we present a new processing order of motion compensation and denoising in the update step and additionally introduce a second denoising filter in the prediction step. This allows for reducing the overall file size by up to 4.4%, while the visual quality of the lowpass subband is kept nearly constant.
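A minimal sketch of a single temporal lifting step with a denoising filter inserted in the update path, assuming a Haar-like prediction and a median filter as the denoiser; motion compensation and the paper's prediction-step denoising are omitted:

```python
# Minimal sketch of one temporal lifting step (Haar-like) with a denoised update.
import numpy as np
from scipy.ndimage import median_filter

def lifting_step(frame_even, frame_odd):
    highpass = frame_odd - frame_even                 # prediction step
    update = median_filter(highpass, size=3)          # denoise the highpass before updating
    lowpass = frame_even + 0.5 * update               # update step (still invertible,
    return lowpass, highpass                          # since the decoder repeats the filter)

f0 = np.random.rand(64, 64)                           # stand-in CT frames
f1 = f0 + 0.05 * np.random.rand(64, 64)
lp, hp = lifting_step(f0, f1)
```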
- Published
- 2019
- Full Text
- View/download PDF
38. Sub-band Vector Quantized Variational AutoEncoder for Spectral Envelope Quantization
- Author
-
Tanasan Srikotr and Kazunori Mano
- Subjects
Computer science ,business.industry ,Quantization (signal processing) ,Deep learning ,Vector quantization ,Data_CODINGANDINFORMATIONTHEORY ,010501 environmental sciences ,Speech processing ,01 natural sciences ,Autoencoder ,Sub-band coding ,Convolution ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Quantization (physics) ,Spectral envelope ,Artificial intelligence ,0305 other medical science ,business ,Algorithm ,Decoding methods ,0105 earth and related environmental sciences - Abstract
Recently, many deep learning models have succeeded in taking over conventional methods in speech processing. Vector quantization is a popular technique for reducing the amount of speech data before transmission, and conventional vector quantization is based on mathematical models. In the last few years, the Vector Quantized Variational AutoEncoder has been proposed for end-to-end vector quantization based on deep learning techniques. In this paper, we investigate sub-band quantization in the Vector Quantized Variational AutoEncoder. The model can concentrate on specific frequency bands, assigning more bits to them while leaving unnecessary bands with few bits. Experimental results show the efficiency of the proposed quantization method for the spectral envelope parameters of the WORLD vocoder, a high-quality vocoder operating at a 48 kHz sampling frequency. At the same four target bit rates, the sub-band Vector Quantized Variational AutoEncoder reduces the Log Spectral Distortion by around 0.93 dB on average.
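A minimal sketch of the per-sub-band nearest-codeword quantization at the heart of a VQ-VAE bottleneck, with the encoder/decoder networks and training omitted; the band split and codebook sizes are illustrative assumptions:

```python
# Minimal sketch of sub-band vector quantization by nearest-codeword lookup.
import numpy as np

def quantize_subbands(envelope, codebooks):
    """envelope: 1-D spectral envelope; codebooks: list of (K_i, dim_i) arrays per sub-band."""
    indices, start = [], 0
    for cb in codebooks:
        dim = cb.shape[1]
        x = envelope[start:start + dim]
        idx = int(np.argmin(np.sum((cb - x) ** 2, axis=1)))   # nearest codeword
        indices.append(idx)
        start += dim
    return indices                                            # per-band indices = transmitted bits

rng = np.random.default_rng(1)
envelope = rng.standard_normal(60)
# more codewords (bits) for the low band, fewer for the high band (assumed split)
codebooks = [rng.standard_normal((256, 30)), rng.standard_normal((16, 30))]
print(quantize_subbands(envelope, codebooks))
```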
- Published
- 2019
- Full Text
- View/download PDF
39. A Design Approach of Quadrature Mirror Filter Banks Using Improved Version of Artificial Bee Colony Algorithm
- Author
-
Kirti Pathak and O. P. Sahu
- Subjects
Fine-tuning ,Finite impulse response ,Mean squared error ,Fir filter design ,Computer science ,05 social sciences ,050301 education ,02 engineering and technology ,Quadrature mirror filter ,Sub-band coding ,Artificial bee colony algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Control parameters ,0503 education ,Algorithm - Abstract
This paper presents a design approach for two-channel quadrature mirror filter (QMF) banks using an improved artificial bee colony (ABC) algorithm, obtained by fine-tuning some of the existing control parameters. The QMF bank design is formulated as a lowpass prototype FIR filter design problem whose objective function is the mean square error of the magnitude responses between the desired and designed lowpass prototype FIR filters. This objective function is minimized using the modified ABC algorithm, yielding improved results. Two design examples demonstrate the efficiency of the proposed method over existing methods, and the results of the proposed design approach are compared with those reported in the literature.
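A minimal sketch of the fitness function such a bee-colony optimizer would repeatedly evaluate: the mean square error between an ideal lowpass magnitude response and that of a candidate prototype filter, using scipy.signal.freqz; the ABC update rules themselves are not shown and the candidate filter is illustrative:

```python
# Minimal sketch of the QMF prototype-filter objective (magnitude-response MSE).
import numpy as np
from scipy.signal import freqz

def qmf_prototype_mse(h, cutoff=0.5, n_points=512):
    """h: FIR prototype coefficients; cutoff: normalized passband edge (1.0 = Nyquist)."""
    w, H = freqz(h, worN=n_points)
    desired = (w <= cutoff * np.pi).astype(float)        # ideal lowpass magnitude
    return float(np.mean((np.abs(H) - desired) ** 2))

# crude half-band starting candidate (what one "bee" might hold)
candidate = 0.5 * np.hanning(32) * np.sinc(0.5 * (np.arange(32) - 15.5))
print(qmf_prototype_mse(candidate))
```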
- Published
- 2019
- Full Text
- View/download PDF
40. Lossy Image Compression with Filter Bank Based Convolutional Networks
- Author
-
Ziyang Zheng, Wenrui Dai, Hongkai Xiong, and Shaohui Li
- Subjects
Set partitioning in hierarchical trees ,Artificial neural network ,Computer science ,Convolutional code ,Filter (video) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,Mutual information ,Filter bank ,Algorithm ,Sub-band coding ,Arithmetic coding - Abstract
Filter bank based convolutional networks (FBCNs) enable efficient separable multiscale and multidirectional decomposition with a convolutional cascade of 1-D radial and directional filter banks. In this paper, we propose a two-stage subband coding framework for FBCN analysis coefficients using a SPIHT-like algorithm followed by primitive-based adaptive arithmetic coding (AAC). The SPIHT-like algorithm extends the spatial orientation tree to exploit inter-subband dependency between subbands of different scales and directions, and mutual information is estimated as an information-theoretic measure to formulate these dependencies. Various primitives are designed to adaptively encode the generated bitstream by fitting its varying lists and passes. Neural networks are leveraged to improve probability estimation for AAC, where nonlinear prediction is made from contexts regarding the scales, directions, locations, and significance of the analysis coefficients. Experimental results show that the proposed framework improves the lossy coding performance for FBCN analysis coefficients in comparison to the state-of-the-art subband coding scheme SPIHT.
- Published
- 2019
- Full Text
- View/download PDF
41. Sub-band coding of hexagonal images.
- Author
-
Rashid, Md Mamunur and Alim, Usman R.
- Subjects
- *
BLOCK codes , *IMAGE processing , *GEOMETRY , *PIXELS - Abstract
According to the circle-packing theorem, the packing efficiency of a hexagonal lattice is higher than that of an equivalent square tessellation. Consequently, in several contexts, hexagonally sampled images are better at preserving information content than their Cartesian counterparts. In this paper, novel mapping techniques alongside a wavelet compression scheme are presented for hexagonal images. Specifically, we introduce two tree-based coding schemes, referred to as SBHex (spirally-mapped branch-coding for hexagonal images) and BBHex (breadth-first block-coding for hexagonal images). Both coding schemes respect the geometry of the hexagonal lattice and yield better compression results. Our empirical results show that the proposed algorithms for hexagonal images produce better reconstruction quality at low bits-per-pixel representations compared to the tree-based coding counterparts for the Cartesian grid. • Tree-based hexagonal wavelet sub-band coding scheme for hexagonally sampled images. • Spiral wavelet tree rearrangement that preserves spatial coherence of coefficients. • Traversal scheme and parent-to-children relationships that respect hexagonal geometry. • Compressed files that are up to 2.5 times smaller compared to similar Cartesian schemes. • Improved quality at low bit rates compared to similar Cartesian schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Linear Sub-band Decomposition based Pre-processing Algorithm for Perceptual Video Coding
- Author
-
Kwang Yeon Choi and Byung Cheol Song
- Subjects
Computer science ,business.industry ,Speech recognition ,Perception ,media_common.quotation_subject ,Pattern recognition ,Artificial intelligence ,business ,Sub-band coding ,Coding (social sciences) ,media_common - Published
- 2017
- Full Text
- View/download PDF
43. Hybrid Wyner-Ziv Video Coding with No Feedback Channel
- Author
-
Byeungwoo Jeon, Tammam Tillo, and Ho-Young Lee
- Subjects
Motion compensation ,Theoretical computer science ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Variable-length code ,Distributed source coding ,020207 software engineering ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Coding tree unit ,Sub-band coding ,Shannon–Fano coding ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Multiview Video Coding ,Algorithm ,Context-adaptive binary arithmetic coding - Abstract
In this paper, we propose a hybrid Wyner-Ziv video coding structure that combines conventional motion predictive video coding and Wyner-Ziv video coding to eliminate the feedback channel, which is a major practical problem in applications using the Wyner-Ziv video coding approach. The proposed method divides a hybrid frame into two regions. One is coded by a motion predictive video coder, and the other by the Wyner-Ziv coding method. The proposed encoder estimates side information with low computational complexity, using the coding information of the motion predictive coded region, and estimates the number of syndrome bits required to decode the region. The decoder generates side information using the same method as the encoder, which also reduces the computational complexity in the decoder. Experimental results show that the proposed method can eliminate the feedback channel without incurring a significant rate-distortion performance loss.
- Published
- 2016
- Full Text
- View/download PDF
44. Adaptive Color-Space Transform in HEVC Screen Content Coding
- Author
-
Joel Sole, Jianle Chen, Yan Ye, Woo-Shik Kim, Marta Karczewicz, Yunwen He, Xu Jizheng, Li Zhang, and Xiaoyu Xiu
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Color space ,Residual ,Coding tree unit ,Sub-band coding ,Correlation ,Color depth ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Algorithm design ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm - Abstract
The screen content coding (SCC) extensions of High Efficiency Video Coding (HEVC) employ an in-loop adaptive color-space transform (ACT) technique to exploit inter-color-component redundancy, i.e., statistical redundancy among different color components. In ACT, the prediction residual signal is adaptively converted into a different color space, namely YCgCo. A rate-distortion criterion is employed to decide whether to code the residual signal in the original color space or in the YCgCo color space; typically, the inter-color-component correlation is reduced when ACT is enabled. The residual signal after the possible color-space conversion is then coded following the existing HEVC framework, i.e., transformed if necessary, quantized, and entropy coded. This paper describes the design of ACT from several points of view, from theoretical analysis to implementation details. Experimental results are also provided to demonstrate the significant coding gains of ACT in the HEVC SCC extensions.
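A minimal sketch of one common realization of the RGB-to-YCgCo conversion, the lifting-based reversible YCgCo-R forward/inverse pair; the adaptive rate-distortion decision and any lossy-mode normalization details of ACT are omitted:

```python
# Minimal sketch of the lifting-based (reversible) YCgCo-R transform pair.
import numpy as np

def rgb_to_ycgco_r(r, g, b):
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, cg, co

def ycgco_r_to_rgb(y, cg, co):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

r = np.array([120, 64], dtype=np.int32)
g = np.array([200, 10], dtype=np.int32)
b = np.array([30, 255], dtype=np.int32)
r2, g2, b2 = ycgco_r_to_rgb(*rgb_to_ycgco_r(r, g, b))
assert np.array_equal(r, r2) and np.array_equal(g, g2) and np.array_equal(b, b2)  # lossless round trip
```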
- Published
- 2016
- Full Text
- View/download PDF
45. On the Rate-Distortion Function for Binary Source Coding With Side Information
- Author
-
Samuel Cheng, Andrei Sechelea, Adrian Munteanu, Nikos Deligiannis, Faculty of Engineering, and Electronics and Informatics
- Subjects
Tunstall coding ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Variable-length code ,Distributed source coding ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,source coding ,Coding tree unit ,Sub-band coding ,Shannon–Fano coding ,Rate distortion theory ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,Context-adaptive binary arithmetic coding ,information theory ,Computer Science::Information Theory ,Mathematics ,Context-adaptive variable-length coding - Abstract
We present an in-depth analysis of the problem of lossy compression of binary sources in the presence of correlated side information, where the correlation is given by a generic binary asymmetric channel and the Hamming distance is the distortion metric. Our analysis is motivated by systematic rate-distortion gains observed when applying asymmetric correlation models in Wyner-Ziv video coding. Firstly, we derive for the first time the rate-distortion function for conventional predictive coding in the binary-asymmetric-correlation-channel scenario. Secondly, we propose a new bound for the case where the side information is only available at the decoder - Wyner-Ziv coding. We conjecture this bound to be tight. We show that the maximum rate needed to encode as well as the maximum rate-loss of Wyner-Ziv coding relative to predictive coding correspond to uniform sources and symmetric correlations. Importantly, we show that the upper bound on the rate-loss established by Zamir is not tight and that the maximum value is actually significantly lower. Moreover, we prove that the only binary correlation channel that incurs no rate-loss for Wyner-Ziv coding compared to predictive coding is the Z-channel. Finally, we complement our analysis with new compression performance results obtained with our state-of-the-art Wyner-Ziv video coding system.
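A minimal numeric sketch of the classical doubly symmetric special case that this analysis generalizes: the predictive-coding rate h(p) − h(D) versus the Wyner-Ziv expression h(p*D) − h(D), where p*D = p(1−D) + (1−p)D, before taking the lower convex envelope; the crossover probability and distortion grid are illustrative:

```python
# Minimal sketch: predictive vs. Wyner-Ziv rates for the doubly symmetric binary case.
import numpy as np

def h(x):
    """Binary entropy in bits, clipped away from 0 and 1 for numerical safety."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

p = 0.2                                        # BSC correlation crossover probability
D = np.linspace(0.0, p, 50)                    # Hamming distortion range of interest
R_pred = h(p) - h(D)                           # side information at encoder and decoder
R_wz = h(p * (1 - D) + (1 - p) * D) - h(D)     # side information at decoder only
print("max rate loss (bits):", float(np.max(R_wz - R_pred)))
```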
- Published
- 2016
- Full Text
- View/download PDF
46. Scalable Audio Coding Using Trellis-Based Optimized Joint Entropy Coding and Quantization
- Author
-
Peter Kabal and Mahmood Movassagh
- Subjects
Theoretical computer science ,Acoustics and Ultrasonics ,Computer science ,Tunstall coding ,Speech coding ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Huffman coding ,030507 speech-language pathology & audiology ,03 medical and health sciences ,symbols.namesake ,Shannon–Fano coding ,Computer Science::Multimedia ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Electrical and Electronic Engineering ,Variable-length code ,020206 networking & telecommunications ,Coding tree unit ,Sub-band coding ,Computational Mathematics ,symbols ,0305 other medical science ,Algorithm ,Context-adaptive binary arithmetic coding - Abstract
There is a considerable performance gap between current scalable audio coding schemes and a nonscalable coder operating at the same bitrate. This suboptimality results from the independent coding of the layers in these systems, and entropy coding is one of the aspects that contributes to it. In practical audio coding systems, including MPEG Advanced Audio Coding (AAC), the transform-domain coefficients are quantized using an entropy-constrained quantizer; in MPEG-4 scalable AAC (S-AAC), quantization and coding are performed separately at each layer. In the case of Huffman coding, the redundancy introduced by the entropy coding at each layer is larger at lower quantization resolutions, and the redundancy of the overall coder grows as the number of layers increases. In fact, there is a tradeoff between the overall redundancy and fine-grain scalability, in which the bitrate per layer is smaller and more layers are required. In this paper, a fine-grain scalable coder for audio signals is proposed in which the entropy coding of a quantizer is made scalable via the joint design of entropy coding and quantization. By constructing a Huffman-like coding tree whose internal nodes can be mapped to reconstruction points, the tree can be pruned at any internal node to control the rate-distortion (RD) performance of the encoder in a fine-grain manner. A set of metrics and a trellis-based approach are proposed to create a coding tree such that an appropriate path is generated on the RD plane. The results show that the proposed method outperforms scalable audio coding based on reconstruction-error quantization as used in practical systems, e.g., S-AAC.
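A minimal sketch of the per-layer redundancy being discussed: Huffman codeword lengths versus the source entropy for a small symbol distribution; the joint trellis/tree design of the paper is not reproduced and the distribution is illustrative:

```python
# Minimal sketch: Huffman average codeword length vs. entropy (the redundancy).
import heapq
import numpy as np

def huffman_lengths(probs):
    """Return Huffman codeword lengths for a probability vector."""
    lengths = [0] * len(probs)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                      # every merged symbol gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = np.array([0.5, 0.25, 0.15, 0.1])
L = huffman_lengths(probs)
avg_len = float(np.dot(probs, L))
entropy = float(-np.sum(probs * np.log2(probs)))
print("average length:", avg_len, "entropy:", entropy, "redundancy:", avg_len - entropy)
```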
- Published
- 2016
- Full Text
- View/download PDF
47. Linear Sub-band Decomposition-based Pre-processing for Perceptual Video Coding
- Author
-
Byung Cheol Song and Kwang Yeon Choi
- Subjects
Computer science ,Algorithmic efficiency ,Speech recognition ,Signal Processing ,Codec ,Electrical and Electronic Engineering ,Multiview Video Coding ,Encoder ,Coding tree unit ,Algorithm ,Context-adaptive binary arithmetic coding ,Sub-band coding ,Coding (social sciences) - Abstract
This paper proposes a pre-processing algorithm to improve the coding efficiency of perceptual video coding. First, an input image is decomposed into multiple sub-bands through linear sub-band decomposition. Then, the sub-bands that have low visual sensitivity are suppressed by assigning small gains to them. Experimental results show that if the proposed algorithm is adopted for pre-processing in a High Efficiency Video Coding (HEVC) encoder, it can provide significant bit-saving effects of approximately 12% in low delay mode and 9.4% in random access mode.
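A minimal sketch of the pre-processing idea, with a one-level 2-D DWT standing in for the paper's linear sub-band decomposition and illustrative gains below one applied to the less visually sensitive high-frequency bands:

```python
# Minimal sketch: suppress high-frequency sub-bands before encoding.
import numpy as np
import pywt

image = np.random.rand(256, 256)                  # stand-in for an input frame
ll, (lh, hl, hh) = pywt.dwt2(image, "haar")       # one-level sub-band decomposition
gains = {"lh": 0.8, "hl": 0.8, "hh": 0.5}         # smaller gain = stronger suppression (assumed values)
pre = pywt.idwt2((ll, (gains["lh"] * lh, gains["hl"] * hl, gains["hh"] * hh)), "haar")
# `pre` is the pre-processed frame handed to the video encoder.
```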
- Published
- 2016
- Full Text
- View/download PDF
48. A frame-level encoder rate control scheme for transform domain Wyner-Ziv video coding
- Author
-
Jian Chen, Yonghong Kuo, Qing Hu, and Shuai Zheng
- Subjects
Average bitrate ,Computer Networks and Communications ,Computer science ,Real-time computing ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Codec ,Parity bit ,Constant bitrate ,020206 networking & telecommunications ,Code rate ,Variable bitrate ,Coding tree unit ,Sub-band coding ,Adaptive coding ,Hardware and Architecture ,020201 artificial intelligence & image processing ,Multiview Video Coding ,Encoder ,Algorithm ,Software ,Decoding methods ,Harmonic Vector Excitation Coding ,Context-adaptive binary arithmetic coding ,Communication channel - Abstract
Available distributed video coding codecs are mostly based on decoder rate control schemes, in which the parity bits needed for decoding are obtained over a feedback channel; the frequent requests over the feedback channel, however, increase transmission delay. Feedback-free distributed video coding, relying on encoder rate control, overcomes this shortcoming, but when performing parity bitrate estimation and related operations, bit-plane-based feedback-free systems usually require highly accurate bitrate estimation and high-quality side information at the encoder. In this paper, we propose a frame-level distributed video coding system based on encoder rate control. The innovations comprise three parts: 1) an adaptive coding mode selection algorithm that exploits both temporal and spatial correlation and reduces encoder complexity; 2) a bit-plane rearrangement method that makes the coding rate on each bit-plane homogeneous, effectively relaxing the accuracy requirement of the parity bitrate prediction and improving the efficiency of rate estimation; and 3) a frame-level parity bitrate estimation scheme based on a look-up table that further enhances the efficiency of rate estimation. Numerical results verify that the proposed scheme remarkably improves the rate-distortion performance of distributed video coding at low bitrates.
- Published
- 2016
- Full Text
- View/download PDF
49. Efficient Residual DPCM Using an $l_1$ Robust Linear Prediction in Screen Content Video Coding
- Author
-
Nayoung Kim, Su-Kyung Ryu, Je-Won Kang, and Min-Joo Kang
- Subjects
Code-excited linear prediction ,Computer science ,Speech recognition ,Tunstall coding ,Variable-length code ,020206 networking & telecommunications ,Linear prediction ,02 engineering and technology ,Coding tree unit ,Coding gain ,Computer Science Applications ,Sub-band coding ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Codec ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,Vector sum excited linear prediction ,Context-adaptive binary arithmetic coding ,Decoding methods ,Context-adaptive variable-length coding - Abstract
In this paper, a residual differential pulse code modulation (RDPCM) coding technique using a weighted linear combination of neighboring residual samples is proposed to improve coding efficiency in screen content video coding. RDPCM performs sample-based prediction of the residue to reduce spatial redundancies. The proposed method uses $l_1$ optimization to derive the weights in intra coding, taking into account the statistical characteristics of graphical components in videos. Specifically, we use the least absolute shrinkage and selection operator (LASSO) to derive the weights because its solution is accurate for high-variance residue. Furthermore, we enhance parallelism in line processing by restricting the support to row-wise prediction from the above samples or column-wise prediction from the left samples. The proposed method uses an explicit RDPCM scheme, so the coding mode determined by rate-distortion optimization is transmitted to the decoder. To code this overhead, we develop a CABAC context design based on the correlation between the intra-prediction direction and the RDPCM prediction mode. Experimental results demonstrate that the proposed method provides a significant coding gain over the state-of-the-art reference codec for screen content video coding.
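A minimal sketch of deriving prediction weights with an $l_1$ (LASSO) fit over neighboring residual samples, using scikit-learn; the three-sample support, regularization strength, and training-set construction are assumptions rather than the paper's exact configuration:

```python
# Minimal sketch: l1-regularized weights for sample-based residual prediction.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
res = rng.standard_normal((64, 64))                      # stand-in residual block

# build (neighbors -> current sample) pairs: left, above, above-left (assumed support)
X, y = [], []
for i in range(1, res.shape[0]):
    for j in range(1, res.shape[1]):
        X.append([res[i, j - 1], res[i - 1, j], res[i - 1, j - 1]])
        y.append(res[i, j])
X, y = np.array(X), np.array(y)

model = Lasso(alpha=0.01).fit(X, y)
weights = model.coef_                                    # sparse prediction weights
predicted = X @ weights + model.intercept_
rdpcm_residual = y - predicted                           # what would actually be coded
```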
- Published
- 2016
- Full Text
- View/download PDF
50. Rate-Constrained Region of Interest Coding Using Adaptive Quantization in Transform Domain Wyner–Ziv Video Coding
- Author
-
Jongbin Park and Byeungwoo Jeon
- Subjects
Computer science ,Quantization (signal processing) ,Speech recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Coding tree unit ,Sub-band coding ,ComputingMethodologies_PATTERNRECOGNITION ,Region of interest ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Discrete cosine transform ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,Transform coding ,Decoding methods ,Coding (social sciences) - Abstract
In this paper, we propose a rate-constrained, distributed video coding based region-of-interest (ROI) coding scheme. The proposed scheme determines the ROI according to an available bit budget that depends on the transmission channel and the decoding environment, so subjective quality is faithfully maintained through an ROI definition that adapts to the available resources. Prior knowledge about the ROI is represented by a Gaussian mixture weighting function, which in turn determines the ROI according to the number of available bits, and an adaptive quantization method is used in the ROI bit allocation. Compared to existing non-ROI-based schemes, the proposed scheme improves not only ROI coding performance but also the subjective quality of the entire picture.
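A minimal sketch of a Gaussian-mixture ROI weighting map driving a per-sample quantization step, with finer steps inside the ROI; the mixture parameters, step range, and budget handling are illustrative assumptions, not the paper's exact scheme:

```python
# Minimal sketch: Gaussian-mixture ROI weights mapped to adaptive quantization steps.
import numpy as np

h, w = 72, 88
ys, xs = np.mgrid[0:h, 0:w]

def gaussian(cx, cy, sx, sy, a):
    return a * np.exp(-((xs - cx) ** 2 / (2 * sx ** 2) + (ys - cy) ** 2 / (2 * sy ** 2)))

roi_weight = gaussian(30, 36, 12, 10, 1.0) + gaussian(65, 40, 8, 8, 0.6)   # two-component mixture
roi_weight /= roi_weight.max()

base_step, min_step = 32, 8
q_step = np.round(base_step - (base_step - min_step) * roi_weight)          # finer step in the ROI
print("quantization step range:", q_step.min(), "to", q_step.max())
```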
- Published
- 2016
- Full Text
- View/download PDF