293 results for "Marcellin, Michael W."
Search Results
252. Rate Allocation for Spotlight SAR Phase History Data Compression.
- Author
Owens, James W. and Marcellin, Michael W.
- Subjects
DATA compression; SYNTHETIC apertures
- Abstract
Presents a study concerned with the compression of complex phase history data obtained by spotlight synthetic aperture radar systems. Rate allocation for spotlight SAR phase history data; performance and operational evaluation; results and discussion.
- Published
- 1999
- Full Text
- View/download PDF
253. Compression of Synthetic Aperture Radar Video Phase History Data Using Trellis-Coded Quantization...
- Author
Owens, James W. and Marcellin, Michael W.
- Subjects
SOURCE code; SYNTHETIC aperture radar
- Abstract
Presents information on a study which focused on the application of trellis-coded quantization (TCQ), a source coding technique, to the problem of downlink data rate reduction in synthetic aperture radar systems. Information on TCQ; implementation of a universal TCQ coding system; implementation of a trellis-coded vector quantization coding system; performance evaluation. (A minimal TCQ sketch follows this entry.)
- Published
- 1999
- Full Text
- View/download PDF
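Trellis-coded quantization, the core technique in entry 253, pairs a partitioned scalar codebook with a Viterbi search over a trellis. Below is a minimal sketch assuming a generic 4-state trellis and an 8-level uniform codebook; the subset labeling, step size, and codebook size are illustrative choices, not the exact tables from the paper.

```python
import math

# Union codebook: 8 uniform levels, partitioned cyclically into subsets D0..D3.
STEP = 1.0
LEVELS = [(i - 3.5) * STEP for i in range(8)]
SUBSETS = [[LEVELS[i] for i in range(8) if i % 4 == d] for d in range(4)]

# 4-state trellis; this particular labeling is an assumed, illustrative choice.
NEXT = [[((s << 1) & 3) | b for b in (0, 1)] for s in range(4)]
BRANCH = [(0, 2), (2, 0), (1, 3), (3, 1)]   # subset index per (state, input bit)

def tcq_encode(x):
    """Viterbi search for the minimum-squared-error path through the trellis."""
    cost = [0.0, math.inf, math.inf, math.inf]   # start in state 0
    path = [[], [], [], []]
    for sample in x:
        new_cost = [math.inf] * 4
        new_path = [None] * 4
        for s in range(4):
            if math.isinf(cost[s]):
                continue
            for b in (0, 1):
                # best codeword of the subset labeling this branch
                c = min(SUBSETS[BRANCH[s][b]], key=lambda v: (sample - v) ** 2)
                m = cost[s] + (sample - c) ** 2
                t = NEXT[s][b]
                if m < new_cost[t]:
                    new_cost[t], new_path[t] = m, path[s] + [c]
        cost, path = new_cost, new_path
    return path[min(range(4), key=lambda s: cost[s])]

print(tcq_encode([0.3, -1.2, 2.6]))   # per-sample reproduction levels
```

In a real coder the chosen path bits and within-subset codeword indices, not the levels themselves, would be entropy coded.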
254. Another Stopping Rule for Linear Iterative Signal Restoration.
- Author
Walsh, David O. and Marcellin, Michael W.
- Subjects
DIGITAL signal processing; ITERATIVE methods (Mathematics); DESIGN
- Abstract
Proposes a stopping rule for linear, iterative signal restoration using the gradient descent and conjugate gradient algorithms. Objective of the proposed rule; approximation for the iterative algorithm; application to the restoration of a complex magnetic resonance image from pseudorandomly sampled Fourier data. (A generic restoration-loop sketch follows this entry.)
- Published
- 1999
- Full Text
- View/download PDF
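Entry 254's subject is when to stop iterations of this kind. The paper's proposed rule is not reproduced here; this sketch only shows the type of gradient-descent restoration loop it applies to, with a conventional relative-change-of-residual stop as a stand-in.

```python
import numpy as np

def restore_gd(H, y, tol=1e-6, max_iter=500):
    """Least-squares restoration min ||y - Hx||^2 via gradient descent.
    The stopping test below is a generic placeholder, NOT the paper's rule."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2     # safe step from the spectral norm
    x = np.zeros(H.shape[1])
    prev = np.inf
    for k in range(max_iter):
        r = y - H @ x
        cur = float(r @ r)
        if abs(prev - cur) <= tol * max(cur, 1e-30):
            break                              # residual has stopped improving
        prev = cur
        x = x + step * (H.T @ r)               # descend along the gradient
    return x, k
```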
255. Near-lossless image compression: Minimum-entropy, constrained-error DPCM.
- Author
Ke, Ligang and Marcellin, Michael W.
- Subjects
IMAGE compression
- Abstract
Presents a near-lossless image compression scheme. Details on the scheme being a differential pulse code modulation system; function of the scheme; classification of image compression techniques. (A minimal constrained-error DPCM sketch follows this entry.)
- Published
- 1998
- Full Text
- View/download PDF
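A minimal sketch of the constrained-error DPCM idea from entry 255, assuming integer samples and a trivial previous-sample predictor (the paper's entropy-minimizing design is more elaborate): with bin width 2δ+1, every reconstructed sample is guaranteed to lie within δ of the original.

```python
def dpcm_near_lossless(samples, delta):
    """Near-lossless DPCM: |sample - reconstruction| <= delta, guaranteed."""
    q = 2 * delta + 1                  # quantizer bin width
    indices, recon = [], []
    prev = 0                           # previous-sample predictor
    for s in samples:
        e = s - prev                   # prediction residual
        idx = (e + delta) // q if e >= 0 else -((-e + delta) // q)
        indices.append(idx)            # these indices get entropy coded
        prev += idx * q                # decoder tracks the same state
        recon.append(prev)
    return indices, recon

idx, rec = dpcm_near_lossless([100, 102, 108, 107], delta=1)
assert all(abs(a - b) <= 1 for a, b in zip([100, 102, 108, 107], rec))
```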
256. Hyperspectral image compression using entropy-constrained predictive trellis coded quantization.
- Author
Abousleman, Glen P. and Marcellin, Michael W.
- Subjects
IMAGE compression; GEOMETRIC quantization
- Abstract
Details a hyperspectral image coding scheme that utilizes the two-dimensional discrete cosine transform (DCT) and entropy-constrained predictive trellis coded quantization (ECPTCQ) to encode hyperspectral imagery. Interband correlation of hyperspectral imagery; experimentation using hyperspectral data from AVIRIS; side information for the algorithm.
- Published
- 1997
- Full Text
- View/download PDF
257. POCS-based error concealment for packet video using multiframe overlap information.
- Author
Yu, Gong-San, Liu, Max M.-K., and Marcellin, Michael W.
- Subjects
VIDEO compression; ERROR analysis in mathematics
- Abstract
Proposes an error concealment algorithm for packet video that effectively eliminates error propagation effects in video compression. Adoption of this type of process in standard codecs; details on the use of leaky prediction to assist in solving the problems in packet video; methods used to evaluate the error propagation effects.
- Published
- 1998
- Full Text
- View/download PDF
258. A vector quantizer for image restoration.
- Author
Sheppard, David G., Bilgin, Ali, Nadar, Mariappan S., Hunt, Bobby R., and Marcellin, Michael W.
- Subjects
NONLINEAR mechanics; MATHEMATICAL models
- Abstract
Presents a novel technique for image restoration based on nonlinear interpolative vector quantization (NLVQ). Information on image restoration; detailed information on nonlinear VQ image restoration; conclusions reached.
- Published
- 1998
- Full Text
- View/download PDF
259. Wavelet Amendment of Polynomial Models in Hammerstein Systems Identification.
- Author
Śliwiński, Przemysław, Rozenblit, Jerzy, Marcellin, Michael W., and Klempous, Ryszard
- Subjects
WAVELETS (Mathematics); NONLINEAR theories; MATHEMATICAL models; ALGORITHMS; SYSTEM analysis; MATHEMATICAL analysis
- Abstract
A new wavelet algorithm for on-line improvement of an existing polynomial model of nonlinearity in a Hammerstein system is proposed and its properties are examined. The algorithm employs wavelet bases on the interval. Convergence of the resulting assembly, comprising the parametric polynomial model and a nonparametric wavelet add-on, to the system nonlinearity is shown. Rates of convergence are established for both uniformly smooth and piecewise smooth nonlinearities with discontinuities. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
260. Collaborative multihop transmission of distributed sensor imagery
- Author
Dagher, Joseph C., Marcellin, Michael W., and Neifeld, Mark A.
- Abstract
We consider a network of imaging sensors. We address the problem of energy-efficient communication of the measurements of the sensors. A novel algorithm is presented for the purpose of exploiting intersensor and intrasensor correlation, which is inherent in a network of imaging sensors. The collaborative algorithm is used in conjunction with a cooperative multihop routing strategy to maximize the lifetime of the network. The algorithm is demonstrated to achieve an average gain in the lifetime as high as 3.2 over previous methods.
- Published
- 2006
261. Efficient storage and transmission of ladar imagery
- Author
Dagher, Joseph C., Marcellin, Michael W., and Neifeld, Mark A.
- Abstract
We develop novel methods for compressing volumetric imagery that has been generated by single-platform (mobile) range sensors. We exploit the correlation structure inherent in multiple views in order to improve compression efficiency. We show that, for lossless compression, three-dimensional volumes compress more efficiently than two-dimensional (2D) images by a factor of 60%. Furthermore, our error metric for lossy compression suggests that accumulating more than nine range images in one volume before compression yields as much as a 99% improvement in compression performance over 2D compression.
- Published
- 2003
262. Compressive detection of direct sequence spread spectrum signals.
- Author
Liu, Feng, Marcellin, Michael W., Goodman, Nathan A., and Bilgin, Ali
- Subjects
SPREAD spectrum communications; COMPRESSED sensing; INTERFERENCE (Telecommunication); PSEUDONOISE sequences (Digital communications); BLOCK diagrams
- Abstract
In spread spectrum (SS) communications, the input signal is spread over a wider bandwidth to avoid interference or interception. One of the most common SS techniques is direct sequence SS (DSSS). In this Letter, the authors propose non-cooperative compressive detection techniques for DSSS signals. The proposed compressive detection framework allows the use of random as well as designed measurement kernels. A technique for designing compressive measurement kernels which exploit DSSS signal structure and non-uniform usage of spreading codes is proposed. Theoretical and simulation results are provided to compare the performance of the proposed methods with their conventional counterparts. [ABSTRACT FROM AUTHOR] (A toy compressive-detection sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
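To make entry 262's measurement model concrete, here is a toy compressive detector. Note the caveat: this is the cooperative case (spreading code known at the detector), whereas the paper's focus is non-cooperative detection with designed kernels; both share the model y = Φx. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 32                                  # chips, compressive measurements
code = rng.choice([-1.0, 1.0], n)               # PN spreading sequence
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement kernel

def detect(y, threshold=0.5):
    """Correlate compressive measurements with the compressed template."""
    t = Phi @ code
    return abs(float(y @ t)) / float(t @ t) > threshold

present = code + 0.5 * rng.standard_normal(n)   # one DSSS symbol in noise
absent = 0.5 * rng.standard_normal(n)           # noise only
print(detect(Phi @ present), detect(Phi @ absent))   # True, False (typically)
```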
263. Three-dimensional image compression with integer wavelet transforms
- Author
Bilgin, Ali, Zweig, George, and Marcellin, Michael W.
- Abstract
A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by exploiting the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms. (An integer-lifting roundtrip sketch follows this entry.)
- Published
- 2000
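Entry 263 builds on reversible integer wavelet transforms. A minimal sketch of one member of that family, the S-transform (integer Haar) in lifting form; the exact filters used in the paper may differ, but all share this property of perfect integer-to-integer reconstruction.

```python
def s_transform(x):
    """One level of the reversible S-transform on an even-length int list."""
    lows, highs = [], []
    for a, b in zip(x[0::2], x[1::2]):
        h = a - b                 # detail coefficient
        l = b + (h >> 1)          # == floor((a + b) / 2) for integers
        lows.append(l)
        highs.append(h)
    return lows, highs

def inverse_s_transform(lows, highs):
    x = []
    for l, h in zip(lows, highs):
        b = l - (h >> 1)          # undo the lifting steps in reverse order
        a = b + h
        x += [a, b]
    return x

x = [5, 2, -3, 7, 0, 1]
l, h = s_transform(x)
assert inverse_s_transform(l, h) == x     # lossless roundtrip
```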
264. Communication theoretic image restoration for binary-valued imagery
- Author
Neifeld, Mark A., Xuan, Ruozhong, and Marcellin, Michael W.
- Abstract
We present a new image-restoration algorithm for binary-valued imagery. A trellis-based search method is described that exploits the finite alphabet of the target imagery. This algorithm seeks the maximum-likelihood solution to the image-restoration problem and is motivated by the Viterbi algorithm for traditional binary data detection in the presence of intersymbol interference and noise. We describe a blockwise method to restore two-dimensional imagery on a row-by-row basis and in which a priori knowledge of image pixel correlation structure can be included through a modification to the trellis transition probabilities. The performance of the new Viterbi-based algorithm is shown to be superior to Wiener filtering in terms of both bit error rate and visual quality. Algorithmic choices related to trellis state configuration, complexity reduction, and transition probability selection are investigated, and various trade-offs are discussed.
- Published
- 2000
265. Compression Based on a Joint Task-Specific Information Metric.
- Author
Pu, Lingling, Marcellin, Michael W., Bilgin, Ali, and Ashok, Amit
- Published
- 2015
- Full Text
- View/download PDF
266. Compressive Detection of Multiple Frequency-Hopping Spread Spectrum Signals.
- Author
Liu, Feng, Marcellin, Michael W., Goodman, Nathan A., and Bilgin, Ali
- Published
- 2014
- Full Text
- View/download PDF
267. JPEG2000: Image Compression Fundamentals, Standards and Practice.
- Author
Taubman, David S., Marcellin, Michael W., and Rabbani, Majid
- Published
- 2002
- Full Text
- View/download PDF
268. Visually Lossless Compression of Windowed Images.
- Author
Leung, Tony, Marcellin, Michael W., and Bilgin, Ali
- Published
- 2013
- Full Text
- View/download PDF
269. Visually Lossless Compression of Stereo Images.
- Author
Feng, Hsin-Chang, Marcellin, Michael W., and Bilgin, Ali
- Published
- 2013
- Full Text
- View/download PDF
270. Visibility of quantization errors in reversible JPEG2000.
- Author
Liu, Feng, Ahanonu, Eze, Marcellin, Michael W., Lin, Yuzhang, Ashok, Amit, and Bilgin, Ali
- Subjects
IMAGE compression; VISIBILITY; IMAGE reconstruction; EYE; WAVELET transforms
- Abstract
Image compression systems that exploit the properties of the human visual system have been studied extensively over the past few decades. For the JPEG2000 image compression standard, all previous methods that aim to optimize perceptual quality have considered the irreversible pipeline of the standard. In this work, we propose an approach for the reversible pipeline of the JPEG2000 standard. We introduce a new methodology to measure visibility of quantization errors when reversible color and wavelet transforms are employed. Incorporation of the visibility thresholds using this methodology into a JPEG2000 encoder enables creation of scalable codestreams that can provide both near-threshold and numerically lossless representations, which is desirable in applications where restoration of original image samples is required. Most importantly, this is the first work that quantifies the bitrate penalty incurred by the reversible transforms in near-threshold image compression compared to the irreversible transforms. Highlights:
• A method to measure visibility of quantization error in reversible JPEG2000 is proposed.
• Near-threshold and lossless compression are enabled in a single scalable codestream.
• The impact of the nonlinearities in the reversible pipeline of JPEG2000 on near-threshold compression performance is quantified.
[ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
271. Regression Wavelet Analysis for Near-Lossless Remote Sensing Data Compression.
- Author
Alvarez-Cortes, Sara, Serra-Sagrista, Joan, Bartrina-Rapesta, Joan, and Marcellin, Michael W.
- Subjects
WAVELETS (Mathematics); REMOTE sensing; REGRESSION analysis; DATA compression; DISCRETE wavelet transforms
- Abstract
Regression wavelet analysis (RWA) is one of the current state-of-the-art lossless compression techniques for remote sensing data. This article presents the first regression-based near-lossless compression method. It is built upon RWA, a quantizer, and a feedback loop to compensate for the quantization error. Our near-lossless RWA (NLRWA) proposal can be followed by any entropy coding technique. Here, the NLRWA is coupled with a bitplane-based coder that supports progressive decoding. This successfully enables gradual quality refinement and lossless and near-lossless recovery. A smart strategy for selecting the NLRWA quantization steps is also included. Experimental results show that the proposed scheme outperforms state-of-the-art lossless and near-lossless compression methods in terms of compression ratios and quality retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
272. Mosaic-Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide Images.
- Author
Hernandez-Cabronero, Miguel, Sanchez, Victor, Blanes, Ian, Auli-Llinas, Francesc, Marcellin, Michael W., and Serra-Sagrista, Joan
- Subjects
ARTIFICIAL intelligence; DIGITAL image processing; MATHEMATICAL optimization; WAVELET transforms; INFORMATION technology
- Abstract
The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though the state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen–Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
273. The Current Role of Image Compression Standards in Medical Imaging.
- Author
Liu, Feng, Hernandez-Cabronero, Miguel, Sanchez, Victor, Marcellin, Michael W., and Bilgin, Ali
- Subjects
DIAGNOSTIC imaging; BIG data
- Abstract
With the increasing utilization of medical imaging in clinical practice and the growing dimensions of data volumes generated by various medical imaging modalities, the distribution, storage, and management of digital medical image data sets requires data compression. Over the past few decades, several image compression standards have been proposed by international standardization organizations. This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory issues surrounding the use of compression in medical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
274. A Lightweight Contextual Arithmetic Coder for On-Board Remote Sensing Data Compression.
- Author
Bartrina-Rapesta, Joan, Blanes, Ian, Auli-Llinas, Francesc, Serra-Sagrista, Joan, Sanchez, Victor, and Marcellin, Michael W.
- Subjects
REMOTE sensing; DATA compression; ARITHMETIC coding; NATURAL satellites; EARTH stations
- Abstract
The Consultative Committee for Space Data Systems (CCSDS) has issued several data compression standards devised to reduce the amount of data transmitted from satellites to ground stations. This paper introduces a contextual arithmetic encoder for on-board data compression. The proposed arithmetic encoder examines at most the causal adjacent neighbors to form the context and uses only bitwise operations to estimate the related probabilities. As a result, the encoder consumes few computational resources, making it suitable for on-board operation. Our coding approach is based on the prediction and mapping stages of the CCSDS-123 lossless compression standard, an optional quantizer stage to yield lossless or near-lossless compression, and our proposed arithmetic encoder. For both lossless and near-lossless compression, the achieved coding performance is superior to that of CCSDS-123, M-CALIC, and JPEG-LS. Taking into account only the entropy encoders, fixed-length codeword is slightly better than MQ and interleaved entropy coding. [ABSTRACT FROM PUBLISHER] (A shift-only probability-update sketch follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
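For entry 274, a sketch of what a "bitwise operations only" probability estimator for a binary contextual arithmetic coder can look like. The context mapping and update rate below are illustrative assumptions, not the actual CCSDS-123-based design from the paper.

```python
PRECISION = 12                       # probabilities scaled to [0, 4096)

def update(p, bit, rate=5):
    """Shift-only adaptation of P(bit = 1) after coding one bit."""
    if bit:
        return p + (((1 << PRECISION) - p) >> rate)
    return p - (p >> rate)           # never reaches 0, avoiding zero probability

def context_of(left, upper, upper_left):
    """3 causal neighbor bits -> one of 8 contexts (an assumed mapping)."""
    return (left << 2) | (upper << 1) | upper_left

probs = [1 << (PRECISION - 1)] * 8   # every context starts at p = 0.5
ctx = context_of(1, 0, 1)
probs[ctx] = update(probs[ctx], 1)   # the arithmetic coder would code with probs[ctx]
```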
275. Regression Wavelet Analysis for Lossless Coding of Remote-Sensing Data.
- Author
Amrani, Naoufal, Serra-Sagrista, Joan, Laparra, Valero, Marcellin, Michael W., and Malo, Jesus
- Subjects
HYPERSPECTRAL imaging systems; WAVELETS (Mathematics); PARTITION coefficient (Chemistry); REMOTE-sensing images; REGRESSION analysis
- Abstract
A novel wavelet-based scheme to increase coefficient independence in hyperspectral images is introduced for lossless coding. The proposed regression wavelet analysis (RWA) uses multivariate regression to exploit the relationships among wavelet-transformed components. It builds on our previous nonlinear schemes that estimate each coefficient from neighbor coefficients. Specifically, RWA performs a pyramidal estimation in the wavelet domain, thus reducing the statistical relations in the residuals and the energy of the representation compared to existing wavelet-based schemes. We propose three regression models to address the issues concerning estimation accuracy, component scalability, and computational complexity. Other suitable regression models could be devised for other goals. RWA is invertible, it allows a reversible integer implementation, and it does not expand the dynamic range. Experimental results over a wide range of sensors, such as AVIRIS, Hyperion, and Infrared Atmospheric Sounding Interferometer, suggest that RWA outperforms not only principal component analysis and wavelets but also the best and most recent coding standard in remote sensing, CCSDS-123. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
276. Efficient storage of microCT data preserving bone morphometry assessment.
- Author
Bartrina-Rapesta, Joan, Aulí-Llinàs, Francesc, Blanes, Ian, Marcellin, Michael W., Sanchez, Victor, and Serra-Sagristà, Joan
- Subjects
COMPUTED tomography; BONE densitometry; MORPHOMETRICS; BONE physiology; THREE-dimensional imaging; MEDICAL imaging systems
- Abstract
Preclinical micro-computed tomography (microCT) images are of utility for 3D morphological bone evaluation, which is of great interest in cancer detection and treatment development. This work introduces a compression strategy for microCTs that allocates specific substances to different Volumes of Interest (VoIs). The allocation procedure is guided by the Hounsfield scale. The VoIs are coded independently and then grouped in a single DICOM-compliant file. The proposed method permits the use of different codecs, identifies and transmits data corresponding to a particular substance in the compressed domain without decoding the volume(s), and allows the computation of the 3D morphometry without needing to store or transmit the whole image. The proposed approach reduces the transmitted data by more than 90% when the 3D morphometry evaluation is performed on high density and low density bone. This work can be easily extended to other imaging modalities and applications that work with the Hounsfield scale. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
277. Analysis-Driven Lossy Compression of DNA Microarray Images.
- Author
Hernandez-Cabronero, Miguel, Blanes, Ian, Pinho, Armando J., Marcellin, Michael W., and Serra-Sagrista, Joan
- Subjects
DNA microarrays; LOSSY data compression; GENETIC research; DATA transmission systems; INFORMATION sharing
- Abstract
DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on the DNA microarray analysis. This quantizer constrains the maximum relative error introduced into quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1. [ABSTRACT FROM PUBLISHER] (A geometric-bin sketch follows this entry.)
- Published
- 2016
- Full Text
- View/download PDF
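The bounded-relative-error idea in entry 277 can be realized with geometric bins. This is a simplified stand-in for the paper's Relative Quantizer, which additionally shapes its intervals around the microarray analysis: choosing bin ratio (1+ε)² and reconstructing at the lower boundary times (1+ε) keeps the relative error at or below ε.

```python
import math

def rq(x, eps, x_min=1.0):
    """Quantize positive pixel value x with relative error at most eps.
    Geometric bins: boundaries x_min * r^k with r = (1 + eps)^2."""
    r = (1.0 + eps) ** 2                      # bin growth ratio
    k = math.floor(math.log(x / x_min, r))    # bin index (the symbol to code)
    xhat = x_min * r ** k * (1.0 + eps)       # reconstruction inside the bin
    return k, xhat

k, xhat = rq(1234.0, 0.05)
assert abs(xhat - 1234.0) / 1234.0 <= 0.05    # relative error bound holds
```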
278. Isorange Pairwise Orthogonal Transform.
- Author
Blanes, Ian, Hernández-Cabronero, Miguel, Aulí-Llinàs, Francesc, Serra-Sagristà, Joan, and Marcellin, Michael W.
- Subjects
HYPERSPECTRAL imaging systems; DATA compression; TRANSFORM coding; SPECTRUM analysis; PERFORMANCE evaluation
- Abstract
Spectral transforms are tools commonly employed in multi- and hyperspectral data compression to decorrelate images in the spectral domain. The pairwise orthogonal transform (POT) is one such transform that has been specifically devised for resource-constrained contexts similar to those found on board satellites or airborne sensors. Combining the POT with a 2-D coder yields an efficient compressor for multi- and hyperspectral data. However, a drawback of the original POT is that its dynamic range expansion, i.e., the increase in bit depth of transformed images, is not constant, which may cause problems with hardware implementations. Additionally, the dynamic range expansion is often too large to be compatible with the current 2-D standard CCSDS 122.0-B-1. This paper introduces the isorange POT, a derived transform that has a small and limited dynamic range expansion, compatible with CCSDS 122.0-B-1 in almost all scenarios. Experimental results suggest that the proposed transform achieves lossy coding performance close to that of the original transform. For lossless coding, the original POT and the proposed isorange POT achieve virtually the same performance. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
279. Low Delay Robust Audio Coding by Noise Shaping, Fractional Sampling, and Source Prediction
- Author
Ostergaard, Jan, Bilgin, Ali, Marcellin, Michael W., Serra-Sagrista, Joan, and Storer, James A.
- Subjects
Audio coding; Low delay; Fractional sampling; Noise shaping; Source prediction; Multiple descriptions; Packet loss; Oversampling; PEAQ; Data compression
- Abstract
It was recently shown that the combination of source prediction, two-times oversampling, and noise shaping can be used to obtain a robust (multiple-description) audio coding framework for networks with packet loss probabilities less than 10%. Specifically, it was shown that audio signals could be encoded into two descriptions (packets), which were separately sent over a communication channel. Each description yields a desired performance by itself, and when they are combined, the performance is improved. This paper extends the previous work to an arbitrary number of descriptions (packets) by using fractional oversampling and a new decoding principle. We demonstrate that, due to source aliasing, existing MSE-optimized reconstruction rules from noisy sampled data perform poorly from a perceptual point of view. A simple reconstruction rule is proposed that improves the PEAQ objective difference grades (ODG) by more than 2 points. The proposed audio coder enables low-delay high-quality audio streaming on networks with late packet arrivals or packet losses. With a coding delay of 2.5 ms and a total bitrate of 300 kbps, it is demonstrated that mean PEAQ ODGs around -0.65 can be obtained for 48 kHz (mono) music (pop & rock) and packet loss probabilities of 20%.
- Published
- 2021
280. The Exponential Distribution in Rate Distortion Theory: The Case of Compression with Independent Encodings
- Author
Ostergaard, Jan, Zamir, Ram, Erez, Uri, Bilgin, Ali, Marcellin, Michael W., Serra-Sagrista, Joan, and Storer, James A.
- Subjects
Rate-distortion theory; Exponential distribution; Estimator; Code rate; Distortion; Mathematics
- Abstract
In this paper, we consider the rate-distortion problem where a source X is encoded into k parallel descriptions Y1, . . . , Yk, such that the error signals X − Yi, i = 1, . . . , k, are mutually independent given X. We show that if X is one-sided exponentially distributed, the optimal decoder (estimator) under the one-sided absolute error criterion is simply given by the maximum of the outputs Y1, . . . , Yk. We provide a closed-form expression for the rate and distortion for any number k of parallel descriptions and for any coding rate. We furthermore show that as the coding rate per description becomes asymptotically small, encoding into k parallel descriptions and using the maximum output as the source estimate is rate-distortion optimal.
- Published
- 2020
281. Error Correction Capability of Column-Weight-Three LDPC Codes Under the Gallager A Algorithm--Part II.
- Author
Chilappagari, Shashi Kiran, Nguyen, Dung Viet, Vasic, Bane, and Marcellin, Michael W.
- Subjects
ERROR-correcting codes; ARTIFICIAL intelligence; INFORMATION theory; CODING theory; ALGORITHMS
- Abstract
The relation between the girth and the error correction capability of column-weight-three LDPC codes under the Gallager A algorithm is investigated. It is shown that a column-weight-three LDPC code with Tanner graph of girth g ≥ 10 can correct all error patterns with up to (g/2 - 1) errors in at most g/2 iterations of the Gallager A algorithm. For codes with Tanner graphs of girth g ≤ 8, it is shown that girth alone cannot guarantee correction of all error patterns with up to (g/2 - 1) errors under the Gallager A algorithm. Sufficient conditions to correct (g/2 - 1) errors are then established by studying trapping sets. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
282. On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes.
- Author
Chilappagari, Shashi Kiran, Nguyen, Dung Viet, Vasic, Bane, and Marcellin, Michael W.
- Subjects
ALGORITHMS; ERROR-correcting codes; AUTOMATIC control systems; CODING theory; INFORMATION theory
- Abstract
The relation between the girth and the guaranteed error correction capability of γ-left-regular low-density parity-check (LDPC) codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least 3γ/4 is found based on the Moore bound. This bound, combined with the well known expander based arguments, leads to a lower bound on the guaranteed error correction capability. The decoding failures of the bit flipping algorithms are characterized using the notions of trapping sets and fixed sets. The relation between fixed sets and a class of graphs known as cage graphs is studied. Upper bounds on the guaranteed error correction capability are then established based on the order of cage graphs. The results are extended to left-regular and right-uniform generalized LDPC codes. It is shown that this class of generalized LDPC codes can correct a linear number of worst case errors (in the code length) under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. A lower bound on the size of variable node sets which have the required expansion is established. [ABSTRACT FROM AUTHOR] (A parallel bit-flipping sketch follows this entry.)
- Published
- 2010
- Full Text
- View/download PDF
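Entry 282 analyzes, among others, the parallel bit-flipping decoder. Here is a compact sketch for a binary LDPC code given its parity-check matrix; the flip rule shown is a plain strict-majority variant (Gallager A itself, studied in entry 281, passes messages along Tanner-graph edges instead).

```python
import numpy as np

def parallel_bit_flip(H, y, max_iter=50):
    """Parallel bit flipping for a binary LDPC code.
    H: (m, n) 0/1 parity-check matrix; y: length-n received hard decisions."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2                  # 1 marks an unsatisfied check
        if not syndrome.any():
            break                             # valid codeword reached
        unsat = H.T @ syndrome                # per-bit unsatisfied-check count
        degs = H.sum(axis=0)                  # per-bit check degree
        flip = unsat * 2 > degs               # flip on a strict majority
        if not flip.any():
            break                             # stuck: no bit has a majority
        x = (x + flip) % 2
    return x
```

When the syndrome is nonzero but no bit reaches a strict majority of unsatisfied checks, the decoder is stuck; the trapping and fixed sets studied in this entry characterize exactly such failures.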
283. Improved compressibility of multislice CT datasets using 3D JPEG2000 compression
- Author
Siddiqui, Khan M., Siegel, Eliot L., Reiner, Bruce I., Crave, Olivier, Johnson, Jeffrey P., Wu, Zhenyu, Dagher, Joseph C., Bilgin, Ali, Marcellin, Michael W., and Nadar, Mariappan
- Subjects
IMAGE compression; THREE-dimensional imaging; DATA compression; ALGORITHMS
- Abstract
This study evaluated the compressibility of multislice CT (MSCT) datasets and its dependence on (1) slice thickness and (2) the use of three-dimensional (3D) vs. 2D JPEG2000 compression methods. Five thoracic CT datasets were obtained using a 16-detector MSCT scanner with a collimation of 0.75 mm (120 kVp, 90 mAs) and reconstructed at five slice thicknesses from 0.75 to 10.0 mm. These datasets were irreversibly compressed using a standard 2D JPEG2000 encoder and a developmental 3D JPEG2000 algorithm based on Part 2 of the JPEG2000 standard. Compression ratios ranged from 4:1 to 64:1. Image distortion was computed utilizing peak signal-to-noise ratio (PSNR) and the Sarnoff JNDmetrix visual discrimination model. For 2D compression, the thinnest sections were substantially less compressible than thicker sections for the same level of image quality, particularly at higher compression ratios. Applying 3D compression yielded consistently higher image quality in most cases compared to 2D compression at the same ratios. The advantage of 3D compression increased for thinner slices and higher compression ratios. These results indicate that 3D JPEG2000 (Part 2) compression offers substantial advantages over the current 2D JPEG2000 standard, yielding better quantitative image quality at similar compression ratios or comparable image quality at higher compression ratios. [Copyright © Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
284. Wave Atoms for Lossy Compression of Digital Holograms
- Author
Blinder, David, Ahar, Ayyoub, Schelkens, Peter, Schretter, Colas, Kozacki, Tomasz, Birnbaum, Tobias, Serra-Sagristà, Joan, Bilgin, Ali, Storer, James A., and Marcellin, Michael W.
- Subjects
Holography; Wave atoms; Wavelet transform; Lossy compression; Transform coding; JPEG 2000; Image coding; Frequency domain; Codec; Hologram
- Abstract
Compression of digital holograms is a major challenge that needs to be resolved to enable the efficient storage, transmission and rendering of macroscopic holographic signals. In this work, we propose to deploy the wave atom transform, which has been utilized before for interferometric modalities such as acoustic and seismic signals. This non-adaptive multiresolution transform has good space-frequency localization, and its orthonormal basis is suitable for sparsifying holographic signals. By replacing the CDF 9/7 wavelet transform stage in a JPEG 2000 codec with the proposed wave atom transform, we assessed its suitability for coding complex amplitude wavefronts. Experimental results demonstrate improved rate-distortion performance with respect to JPEG 2000 and H.265/HEVC for a set of computer-generated, diffuse, macroscopic holograms.
- Published
- 2019
285. Constructing Antidictionaries in Output-Sensitive Space
- Author
Ayad, Lorraine A.K., Badkobeh, Golnaz, Fici, Gabriele, Héliou, Alice, Pissis, Solon P., Storer, James A., Bilgin, Ali, Serra-Sagrista, Joan, and Marcellin, Michael W.
- Subjects
String algorithms; Antidictionaries; Absent words; Output-sensitive algorithms; Data compression; Combinatorics; Data Structures and Algorithms (cs.DS)
- Abstract
A word $x$ that is absent from a word $y$ is called minimal if all its proper factors occur in $y$. Given a collection of $k$ words $y_1,y_2,\ldots,y_k$ over an alphabet $\Sigma$, we are asked to compute the set $\mathrm{M}^{\ell}_{y_{1}\#\ldots\#y_{k}}$ of minimal absent words of length at most $\ell$ of word $y=y_1\#y_2\#\ldots\#y_k$, $\#\notin\Sigma$. In data compression, this corresponds to computing the antidictionary of $k$ documents. In bioinformatics, it corresponds to computing words that are absent from a genome of $k$ chromosomes. This computation generally requires $\Omega(n)$ space for $n=|y|$ using any of the plenty available $\mathcal{O}(n)$-time algorithms. This is because an $\Omega(n)$-sized text index is constructed over $y$ which can be impractical for large $n$. We do the identical computation incrementally using output-sensitive space. This goal is reasonable when $||\mathrm{M}^{\ell}_{y_{1}\#\ldots\#y_{N}}||=o(n)$, for all $N\in[1,k]$. For instance, in the human genome, $n \approx 3\times 10^9$ but $||\mathrm{M}^{12}_{y_{1}\#\ldots\#y_{k}}|| \approx 10^6$. We consider a constant-sized alphabet for stating our results. We show that all $\mathrm{M}^{\ell}_{y_{1}},\ldots,\mathrm{M}^{\ell}_{y_{1}\#\ldots\#y_{k}}$ can be computed in $\mathcal{O}(kn+\sum^{k}_{N=1}||\mathrm{M}^{\ell}_{y_{1}\#\ldots\#y_{N}}||)$ total time using $\mathcal{O}(\mathrm{MaxIn}+\mathrm{MaxOut})$ space, where $\mathrm{MaxIn}$ is the length of the longest word in $\{y_1,\ldots,y_{k}\}$ and $\mathrm{MaxOut}=\max\{||\mathrm{M}^{\ell}_{y_{1}\#\ldots\#y_{N}}||:N\in[1,k]\}$. Proof-of-concept experimental results are also provided confirming our theoretical findings and justifying our contribution. (A brute-force reference sketch follows this entry.)
- Published
- 2019
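As a reference point for entry 285, a brute-force minimal-absent-words computation for a single word. It uses the fact that a word is minimal absent iff it is absent while its two maximal proper factors (its longest prefix and suffix) occur. This enumerates all σ^ℓ candidates and stores all factors of y, exactly the time/space budget the paper's output-sensitive algorithm avoids.

```python
from itertools import product

def minimal_absent_words(y, ell, alphabet):
    """All minimal absent words of y with length at most ell (brute force)."""
    present = {y[i:j] for i in range(len(y))
               for j in range(i + 1, min(i + ell, len(y)) + 1)}
    maws = []
    for length in range(1, ell + 1):
        for w in map(''.join, product(alphabet, repeat=length)):
            if w in present:
                continue
            # minimal <=> both maximal proper factors occur in y
            if length == 1 or (w[1:] in present and w[:-1] in present):
                maws.append(w)
    return maws

print(minimal_absent_words('abaab', 3, 'ab'))   # -> ['bb', 'aaa', 'bab']
```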
286. Fixed-Rate Zero-Delay Source Coding for Stationary Vector-Valued Gauss-Markov Sources
- Author
Stavrou, Photios A., Ostergaard, Jan, Bilgin, Ali, Storer, James A., Serra-Sagrista, Joan, and Marcellin, Michael W.
- Subjects
Zero-delay source coding; Rate-distortion function; Gauss-Markov sources; Fixed-rate coding; Dither; Mean squared error; Information rates; Data compression
- Abstract
We consider a fixed-rate zero-delay source coding problem where a stationary vector-valued Gauss-Markov source is compressed subject to an average mean-squared error (MSE) distortion constraint. We address the problem by considering the Gaussian nonanticipative rate distortion function (NRDF), which is a lower bound to the zero-delay Gaussian RDF. Then, we use its corresponding optimal "test-channel" to characterize the stationary Gaussian NRDF and evaluate the corresponding information rates. We show that the Gaussian NRDF can be achieved by p parallel fixed-rate scalar uniform quantizers of finite support with a dithering signal, up to a multiplicative distortion factor and a constant rate penalty. We demonstrate our framework with a numerical example. (A subtractive-dither quantizer sketch follows this entry.)
- Published
- 2018
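Entry 286's achievability argument runs through dithered uniform scalar quantization. A sketch of the subtractive-dither building block, with the caveat that plain infinite-support rounding is used here whereas the paper's quantizers have finite support: the reconstruction error is uniform with variance Δ²/12 regardless of the input.

```python
import numpy as np

rng = np.random.default_rng(1)

def dithered_quantize(x, delta):
    """Uniform quantizer with subtractive dither shared by encoder and decoder."""
    d = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((x + d) / delta) - d   # decoder subtracts the dither

x = rng.standard_normal(10_000)
err = dithered_quantize(x, 0.5) - x
print(float(err.var()), 0.5 ** 2 / 12)             # both close to delta^2 / 12
```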
287. Run Compressed Rank/Select for Large Alphabets
- Author
Kosolobov, Dmitry, Kärkkäinen, Juha, Fuentes-Sepúlveda, José, Puglisi, Simon J., Bilgin, Ali, Marcellin, Michael W., Serra-Sagrista, Joan, and Storer, James A.
- Subjects
Data structures; Rank/select; Run-length compression; Succinct data structures; Large alphabets; Predecessor problems; String (computer science); Data compression
- Abstract
Given a string of length $n$ that is composed of $r$ runs of letters from the alphabet $\{0,1,\ldots,\sigma{-}1\}$ such that $2 \le \sigma \le r$, we describe a data structure that, provided $r \le n / \log^{\omega(1)} n$, stores the string in $r\log\frac{n\sigma}{r} + o(r\log\frac{n\sigma}{r})$ bits and supports select and access queries in $O(\log\frac{\log(n/r)}{\log\log n})$ time and rank queries in $O(\log\frac{\log(n\sigma/r)}{\log\log n})$ time. We show that $r\log\frac{n(\sigma-1)}{r} - O(\log\frac{n}{r})$ bits are necessary for any such data structure and, thus, our solution is succinct. We also describe a data structure that uses $(1 + \epsilon)r\log\frac{n\sigma}{r} + O(r)$ bits, where $\epsilon > 0$ is an arbitrary constant, with the same query times but without the restriction $r \le n / \log^{\omega(1)} n$. By simple reductions to the colored predecessor problem, we show that the query times are optimal in the important case $r \ge 2^{\log^\delta n}$, for an arbitrary constant $\delta > 0$. We implement our solution and compare it with the state of the art, showing that the closest competitors consume 31-46% more space. (A run-length rank baseline follows this entry.)
- Full Text
- View/download PDF
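A baseline for entry 287: simply storing the r runs plus per-letter cumulative counts already gives O(r)-word space and O(log r)-time rank; the paper's structure sharpens both the space (to succinct) and the query time. A minimal sketch:

```python
from bisect import bisect_right

class RunRank:
    """rank(c, i) = occurrences of c in s[:i], answered from the runs of s."""

    def __init__(self, s):
        self.by_letter = {}        # c -> (run start positions, cumulative counts)
        i = 0
        while i < len(s):
            j = i
            while j < len(s) and s[j] == s[i]:
                j += 1             # [i, j) is one run of the letter s[i]
            starts, cum = self.by_letter.setdefault(s[i], ([], []))
            starts.append(i)
            cum.append((cum[-1] if cum else 0) + (j - i))
            i = j

    def rank(self, c, i):
        if c not in self.by_letter:
            return 0
        starts, cum = self.by_letter[c]
        k = bisect_right(starts, i - 1) - 1    # last run of c starting before i
        if k < 0:
            return 0
        before = cum[k - 1] if k else 0
        run_len = cum[k] - before
        return before + min(run_len, i - starts[k])

r = RunRank("aaabba")
assert r.rank("a", 4) == 3 and r.rank("b", 6) == 2
```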
288. The Current Role of Image Compression Standards in Medical Imaging.
- Author
Liu F, Hernandez-Cabronero M, Sanchez V, Marcellin MW, and Bilgin A
- Abstract
With increasing utilization of medical imaging in clinical practice and the growing dimensions of data volumes generated by various medical imaging modalities, the distribution, storage, and management of digital medical image data sets requires data compression. Over the past few decades, several image compression standards have been proposed by international standardization organizations. This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory issues surrounding the use of compression in medical settings.
- Published
- 2017
- Full Text
- View/download PDF
289. Correlation modeling for compression of computed tomography images.
- Author
Munoz-Gomez J, Bartrina-Rapesta J, Marcellin MW, and Serra-Sagristà J
- Subjects
Algorithms; Humans; Models, Theoretical; Image Processing, Computer-Assisted / methods; Tomography, X-Ray Computed / methods
- Abstract
Computed tomography (CT) is a noninvasive medical test obtained via a series of X-ray exposures resulting in 3-D images that aid medical diagnosis. Previous approaches for coding such 3-D images propose to employ multicomponent transforms to exploit correlation among CT slices, but these approaches do not always improve coding performance with respect to a simpler slice-by-slice coding approach. In this paper, we propose a novel analysis which accurately predicts when the use of a multicomponent transform is profitable. This analysis models the correlation coefficient r based on image acquisition parameters readily available at acquisition time. Extensive experimental results from multiple image sensors suggest that multicomponent transforms are appropriate for images with correlation coefficient r in excess of 0.87. (A correlation-check sketch follows this entry.)
- Published
- 2013
- Full Text
- View/download PDF
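Entry 289's takeaway reduces to a threshold test. The paper predicts the slice-to-slice correlation coefficient r from acquisition parameters before reconstruction; the sketch below instead measures r directly on a reconstructed volume, keeping only the reported decision threshold of 0.87.

```python
import numpy as np

def use_multicomponent_transform(volume, r_threshold=0.87):
    """volume: array of shape (slices, H, W).
    Returns True if a cross-slice (multicomponent) transform is worthwhile."""
    rs = [np.corrcoef(a.ravel(), b.ravel())[0, 1]       # r of adjacent slices
          for a, b in zip(volume[:-1], volume[1:])]
    return float(np.mean(rs)) > r_threshold
```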
290. View compensated compression of volume rendered images for remote visualization.
- Author
Lalgudi HG, Marcellin MW, Bilgin A, Oh H, and Nadar MS
- Subjects
Algorithms; Models, Theoretical; Telecommunications; Data Compression / methods; Image Processing, Computer-Assisted / methods
- Abstract
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state of the art video compression standard.
- Published
- 2009
- Full Text
- View/download PDF
291. Improved resolution scalability for bilevel image data in JPEG2000.
- Author
Raguram R, Marcellin MW, and Bilgin A
- Abstract
In this paper, we address issues concerning bilevel image compression using JPEG2000. While JPEG2000 is designed to compress both bilevel and continuous tone image data using a single unified framework, there exist significant limitations with respect to its use in the lossless compression of bilevel imagery. In particular, substantial degradation in image quality at low resolutions severely limits the resolution scalable features of the JPEG2000 code-stream. We examine these effects and present two efficient methods to improve resolution scalability for bilevel imagery in JPEG2000. By analyzing the sequence of rounding operations performed in the JPEG2000 lossless compression pathway, we introduce a simple pixel assignment scheme that improves image quality for commonly occurring types of bilevel imagery. Additionally, we develop a more general strategy based on the JPIP protocol, which enables efficient interactive access of compressed bilevel imagery. It may be noted that both proposed methods are fully compliant with Part 1 of the JPEG2000 standard.
- Published
- 2009
- Full Text
- View/download PDF
292. Joint source-channel rate allocation in parallel channels.
- Author
Pu L, Marcellin MW, Djordjevic I, Vasic B, and Bilgin A
- Subjects
Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Algorithms; Computer Graphics; Data Compression / methods; Image Enhancement / methods; Image Interpretation, Computer-Assisted / methods; Signal Processing, Computer-Assisted
- Abstract
A fast rate-optimal rate allocation algorithm is proposed for parallel transmission of scalable images in multichannel systems. Scalable images are transmitted via fixed-length packets. The proposed algorithm selects a subchannel, as well as a channel code rate for each packet, based on the signal-to-noise ratios (SNRs) of the subchannels. The resulting scheme provides unequal error protection of source bits and significant gains are obtained over equal error protection schemes. An application of the proposed algorithm to JPEG2000 transmission shows the advantages of exploiting differences in SNRs between subchannels. Multiplexing of multiple sources is also considered, and additional gains are achieved by exploiting information diversity among the sources.
- Published
- 2007
- Full Text
- View/download PDF
293. Joint source/channel coding for image transmission with JPEG2000 over memoryless channels.
- Author
Wu Z, Bilgin A, and Marcellin MW
- Subjects
Computer Systems; Image Enhancement / methods; Numerical Analysis, Computer-Assisted; Signal Processing, Computer-Assisted; Algorithms; Artificial Intelligence; Computer Graphics; Data Compression / methods; Image Interpretation, Computer-Assisted / methods; Multimedia; Video Recording / methods
- Abstract
The high compression efficiency and various features provided by JPEG2000 make it attractive for image transmission purposes. A novel joint source/channel coding scheme tailored for JPEG2000 is proposed in this paper to minimize the end-to-end image distortion within a given total transmission rate through memoryless channels. It provides unequal error protection by combining the forward error correction capability from channel codes and the error detection/localization functionality from JPEG2000 in an effective way. The proposed scheme generates quality scalable and error-resilient codestreams. It gives competitive performance with other existing schemes for JPEG2000 in the matched channel condition case and provides more graceful quality degradation for mismatched cases. Furthermore, both fixed-length source packets and fixed-length channel packets can be efficiently formed with the same algorithm.
- Published
- 2005
- Full Text
- View/download PDF