74 results for "Wenjun Zhang"
Search Results
2. RIS-Aided LDM System: A New Prototype in Broadcasting System
- Author
- Yizhe Zhang, Wen He, Dazhi He, Yin Xu, Yunfeng Guan, and Wenjun Zhang
- Subjects
- Media Technology, Electrical and Electronic Engineering
- Published
- 2023
3. Hybrid-Mux Signal Structure and Resource Allocation for In-Band Distribution Link and ITND Transmission in SFN Environment
- Author
- Lidie Liu, Yin Xu, Yiyan Wu, Yihang Huang, Dazhi He, and Wenjun Zhang
- Subjects
- Media Technology, Electrical and Electronic Engineering
- Published
- 2023
4. Enhanced Nonuniform Constellations for High-Capacity Communications With Low-Complexity Demappers
- Author
- Hanjiang Hong, Yin Xu, Yiyan Wu, Yihang Huang, Na Gao, Dazhi He, Haoyang Li, Yu Zhang, and Wenjun Zhang
- Subjects
- Media Technology, Electrical and Electronic Engineering
- Published
- 2022
5. Ultra-Low Latency, Stable, and Scalable Video Transmission for Free-Viewpoint Video Services
- Author
- Yu Dong, Li Song, Rong Xie, and Wenjun Zhang
- Subjects
- Media Technology, Electrical and Electronic Engineering
- Published
- 2022
6. Performance Analysis and Optimization of LDM-Based Layered Multicast
- Author
- Yiwei Zhang, Dazhi He, Yihang Huang, Yin Xu, and Wenjun Zhang
- Subjects
- Media Technology, Electrical and Electronic Engineering
- Published
- 2022
7. StereoARS: Quality Evaluation for Stereoscopic Image Retargeting With Binocular Inconsistency Detection
- Author
- Wenjun Zhang, Yabin Zhang, Weisi Lin, Qiuping Jiang, Ke Gu, Feng Shao, and Zhenyu Peng
- Subjects
- Binocular rivalry, Stereoscopy, Seam carving, Retargeting, Computer vision, Artificial intelligence, Media Technology, Electrical and Electronic Engineering
- Abstract
Many stereoscopic image retargeting (SIR) methods have been developed to automatically and intelligently resize stereoscopic images, and time-consuming subjective user studies cannot always be relied upon to validate the performance of different SIR methods. It is therefore necessary to design reliable objective metrics for SIR quality evaluation. This paper extends our previous 2D aspect ratio similarity (ARS) metric to a stereo 3D version termed StereoARS, whose key idea is to investigate retargeting inconsistency between the original stereo correspondences. The proposed StereoARS operates in two stages: monocular quality estimation and binocular inconsistency detection. In the first stage, monocular quality estimation is performed by applying a modified ARS measure to the left and right views separately to quantify the quality degradation within each monocular view. In the second stage, binocular inconsistency detection is performed at both the pixel level and the grid level to characterize the influence of binocular rivalry and stereo visual discomfort on SIR quality. In addition, we measure to what extent the original pixel visibility relation is preserved after SIR as another binocular quality factor. Finally, these monocular and binocular quality estimates are fused to produce an overall SIR quality score. Extensive experiments have demonstrated that StereoARS aligns with human subjective ratings better than existing metrics by a large margin.
- Published
- 2022
8. A Polygonal Line Min-Sum Decoding Scheme for Low Density Parity Check Codes
- Author
- Yihang Huang, Wenjun Zhang, Na Gao, Dazhi He, Hao Ju, Yiyan Wu, and Yin Xu
- Subjects
- Belief propagation, Additive white Gaussian noise, Digital Video Broadcasting, Digital television, Low-density parity-check codes, Error detection and correction, Decoding methods, Communication channels, Media Technology, Electrical and Electronic Engineering
- Abstract
Low-density parity-check (LDPC) codes are widely used as error correction codes in new-generation digital TV standards, such as the second-generation terrestrial digital video broadcasting standard (DVB-T2) and Advanced Television Systems Committee (ATSC) 3.0. The nonlinear belief propagation (BP) algorithm has excellent decoding performance for LDPC codes, but because of its high complexity it is often simplified in hardware implementations to the linear min-sum (MS) algorithm. This simplification leads to over-estimation problems, which can be corrected by adding factors as in conventional algorithms (e.g., the normalized min-sum (NMS), offset min-sum (OMS), and variable scaling normalized min-sum (VMS) algorithms). However, the correction factors of these modified MS algorithms cannot adapt to different channels and modulations, and their performance needs further improvement. In this paper, the concepts of over-estimation value (OEV) and over-estimation rate (OER) are introduced to describe the over-estimation problem of the MS algorithm. Then, guided by OEV and OER, a polygonal line min-sum (PMS) algorithm with correction factors adapted to different channels and modulations is proposed according to the LLR distribution. Following the properties of OEV and OER, the PMS algorithm is further simplified into the Simplified PMS (SPMS) algorithm. LDPC codes from ATSC 3.0 are adopted in this paper to evaluate the SPMS algorithm against the conventional algorithms. Extensive simulation results show that the SPMS algorithm for the ATSC 3.0 LDPC decoder has gains of up to 1.61 dB, 0.24 dB, and 0.36 dB over the NMS, OMS, and VMS algorithms, respectively, at a frame error rate (FER) of 10⁻⁴ over the additive white Gaussian noise (AWGN) channel with QPSK modulation. More importantly, the simulation results show that the SPMS algorithm achieves much better performance than these modified MS algorithms over AWGN and Rayleigh channels with higher-order modulations or under a limited maximum iteration number.
- Published
- 2022
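The over-estimation that the PMS/SPMS work corrects arises in the plain min-sum check-node update. As a minimal sketch (not the paper's SPMS algorithm), the following shows a generic normalized min-sum check-node update, where the illustrative scaling factor `alpha` plays the role of the correction factor the paper adapts to channel and modulation:

```python
import numpy as np

def check_node_update(llrs, alpha=0.8):
    """One check-node update for LDPC decoding.

    For each edge i, the outgoing message is the product of the signs of
    the other incoming LLRs times the minimum of their magnitudes.
    Plain min-sum over-estimates magnitudes relative to sum-product;
    normalized min-sum (NMS) damps them by a scaling factor alpha < 1.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]  # two smallest magnitudes
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        # the edge carrying the minimum must use the second minimum instead
        m = min2 if i == order[0] else min1
        out[i] = alpha * total_sign * signs[i] * m
    return out
```

With `alpha=1.0` this reduces to plain min-sum; `check_node_update([2.0, -1.0, 3.0], alpha=1.0)` yields `[-1.0, 2.0, -1.0]`.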
9. An Elastic System Architecture for Edge Based Low Latency Interactive Video Applications
- Author
- Wenjun Zhang, Yu Dong, Rong Xie, and Li Song
- Subjects
- Computer architecture, Interactive video, Pipeline (computing), Cloud gaming, Scalability, System architecture, Latency, Edge computing, Media Technology, Electrical and Electronic Engineering
- Abstract
5G and edge computing have brought great changes to the video industry. Interactive video is becoming an emerging form of multimedia service; it offers attractions beyond typical scenarios such as cloud gaming and remote virtual reality (VR), and places great demands on the resource capacity, response latency, and functional flexibility of its service system. In this paper, we propose an elastic system architecture with low-latency features to accommodate generic interactive video applications on near-user edges. To increase system flexibility, we first design a dynamic Directed Acyclic Graph (dDAG) model for efficient task representation. Second, based on this model, we present the elastic architecture together with its scalable workflow pipeline. Third, we propose a set of novel latency measurement metrics to analyze and optimize the performance of an interactive video system. Based on the proposed approaches, we disassemble a real-world free-viewpoint synthesis application and benchmark its performance with these metrics. Extensive experimental results show the flexibility of our system in handling stochastic human interactions during a video service session, with less than 5 ms of additional scheduling latency introduced. End-to-end latency is kept within 43 ms for complex functions and 28 ms for simpler scenarios, which satisfies the constraints of most interactive video applications served by an edge. The client of the architecture acts as a pure video player, which is also friendly to power-limited terminals such as 5G phones. Efficiency and stability analyses of the system show advantages over existing work and reveal potential optimization directions for future research.
- Published
- 2021
10. MBSFN or SC-PTM: How to Efficiently Multicast/Broadcast
- Author
- Yizhe Zhang, Wenjun Zhang, Dazhi He, Yunfeng Guan, and Yin Xu
- Subjects
- Multicast, Single-frequency networks, Throughput, Multimedia Broadcast Multicast Service, Synchronization, Nonlinear programming, Power control, Optimization, Media Technology, Electrical and Electronic Engineering
- Abstract
In the Multimedia Broadcast/Multicast Service (MBMS), Multicast/Broadcast Single Frequency Network (MBSFN) and Single-cell Point-to-multipoint (SC-PTM) transmission are two essential ways to organize networks to provide multicast services. MBSFN has the unique advantage of enhancing signals at cell boundaries, but it requires strict symbol synchronization; SC-PTM has the advantage of flexibility in network deployment. This paper explores the comparison and complementarity of these two modes. We analyze the reception performance of MBSFN and SC-PTM from multiple perspectives. We first compare the successful transmission probability (STP) of MBSFN with that of SC-PTM. Then, we consider optimal power control problems in MBSFN and SC-PTM. The corresponding power control optimization problems take fractional and max-min forms and are solved by the Dinkelbach algorithm. Furthermore, we propose a joint mode-selection and power-control method for the hybrid mode of MBSFN and SC-PTM, in which each cell can select its multicast mode from MBSFN and SC-PTM. This is a mixed-integer nonlinear programming (MINLP) problem and is solved by the concave-convex procedure (CCCP) and the Dinkelbach algorithm. To further improve throughput, an appropriate successful reception proportion is selected under opportunistic multicast. Finally, numerical simulations show the performance of our algorithms and compare the throughput of the two modes, verifying our analysis.
- Published
- 2021
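The Dinkelbach algorithm used above for the fractional power-control problems can be sketched generically. The version below maximizes a ratio f(x)/g(x) over a finite candidate grid; the paper's actual formulation (power vectors, STP constraints) is more involved, so the scalar objective here is purely an illustrative example:

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set (g > 0 assumed).

    Dinkelbach's method repeatedly maximizes the parametric objective
    f(x) - lam * g(x); at the optimum the parametric value reaches zero
    and lam equals the optimal ratio.
    """
    lam = 0.0
    for _ in range(max_iter):
        vals = np.array([f(x) - lam * g(x) for x in candidates])
        i = int(np.argmax(vals))
        x = candidates[i]
        if vals[i] < tol:
            # parametric optimum is ~0, so lam is the optimal ratio
            return x, lam
        lam = f(x) / g(x)
    return x, lam
```

For example, maximizing (x + 1)/(x² + 1) over a grid on [0, 3] converges in a handful of iterations to x ≈ √2 − 1 with ratio ≈ 1.207.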
11. Subjective and Objective Quality Assessment of Compressed Screen Content Videos
- Author
- Guangtao Zhai, Li Teng, Yiling Xu, Wenjun Zhang, Heng Zhao, and Xiongkuo Min
- Subjects
- Image quality, Video quality, Distortion, Cloud computing, Similarity, Media Technology, Electrical and Electronic Engineering
- Abstract
With the spread of application scenarios such as remote office and cloud collaboration, Screen Content Video (SCV) and its processing, which show different characteristics from Natural Scene Video (NSV) and its processing, are increasingly attracting researchers' attention. Among these processing techniques, quality evaluation plays an important role in various media processing systems. Despite extensive research on general Image Quality Assessment (IQA) and Video Quality Assessment (VQA), quality assessment of SCVs remains underdeveloped. In particular, SCVs suffer from compression degradations in all kinds of application scenarios. In this article, we first study subjective SCV quality assessment. Specifically, we construct a Compressed Screen Content Video Quality (CSCVQ) database with 165 distorted SCVs compressed from the 11 most common screen application scenarios using the H.264, HEVC, and HEVC-SCC formats. Twenty subjects were recruited to participate in the subjective test on the CSCVQ database. Then we study objective SCV quality assessment and propose an SCV quality measure. We observe that localized protruding information such as curves and dots can be well captured by the local relative standard deviation, which can then be used to measure intra-frame quality. Based on this observation, we develop a MultiScale Relative Standard Deviation Similarity (MS-RSDS) model for SCV quality evaluation. In our model, the relative standard deviation similarity between the reference and distorted SCVs is measured from frame differences between adjacent frames, which captures spatiotemporal distortions accurately. A multiscale strategy is also applied to strengthen the original single-scale model. Extensive experiments are performed to compare the proposed model with the most popular and state-of-the-art quality assessment models on the CSCVQ database. Experimental results show that our proposed MS-RSDS model, which has relatively low computational complexity, outperforms other IQA/VQA models.
- Published
- 2021
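A single-scale version of the relative-standard-deviation similarity idea can be sketched as follows; the block size, stability constant, and mean pooling are our illustrative choices, not the paper's exact MS-RSDS parameters:

```python
import numpy as np

C = 1e-3  # stability constant (our choice, not from the paper)

def rsd_map(frame_diff, win=4):
    """Local relative standard deviation (std / mean of magnitudes)
    over non-overlapping win x win blocks of a frame difference."""
    h, w = frame_diff.shape
    h, w = h - h % win, w - w % win
    blocks = np.abs(frame_diff[:h, :w]).reshape(h // win, win, w // win, win)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(h // win, w // win, -1)
    return blocks.std(axis=2) / (blocks.mean(axis=2) + C)

def rsds(ref_prev, ref_cur, dis_prev, dis_cur, win=4):
    """Single-scale RSD similarity between the frame differences of a
    reference and a distorted video (SSIM-style ratio, mean pooled)."""
    r = rsd_map(ref_cur - ref_prev, win)
    d = rsd_map(dis_cur - dis_prev, win)
    sim = (2 * r * d + C) / (r * r + d * d + C)
    return float(sim.mean())
```

An undistorted pair scores exactly 1, and any distortion pushes the score below 1, matching the intuition of a full-reference similarity index.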
12. Backward Compatible Low-Complexity Demapping Algorithms for Two-Dimensional Non-Uniform Constellations in ATSC 3.0
- Author
- Dazhi He, Wenjun Zhang, Hanjiang Hong, Yin Xu, Na Gao, and Yiyan Wu
- Subjects
- Approximation algorithms, Code rate, Broadcasting, Backward compatibility, Digital terrestrial television, Quadrature amplitude modulation, Media Technology, Electrical and Electronic Engineering
- Abstract
Non-uniform constellation (NUC) is an advanced technology in digital terrestrial television broadcasting (DTTB) systems that reduces the shaping gap of the BICM capacity to the Shannon theoretical limit and provides performance gain. Two-dimensional NUC (2D-NUC) provides more gain but brings higher demapping complexity at the receiver, which hinders its application, especially in power-limited systems. This paper proposes three novel reduced-complexity demapping algorithms for low- to medium-code-rate 2D-NUCs in the Advanced Television Systems Committee 3rd Generation (ATSC 3.0) standard. The proposed algorithms are based on the introduction of virtual points, a strategy of condensed symbol reduction, and some reasonable approximations. There is a trade-off between demapping complexity and performance: the three algorithms offer different degrees of complexity reduction and performance degradation, so they accommodate different practical requirements. Theoretical analysis and simulation results are also given to demonstrate the efficiency of the proposed reduced-complexity demapping algorithms.
- Published
- 2021
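The baseline that such reduced-complexity algorithms improve upon is exhaustive max-log LLR demapping over all constellation points, which for 2D-NUCs cannot be decomposed per dimension. A minimal sketch of that exhaustive demapper (the bit labeling and noise model below are illustrative, not taken from the standard):

```python
import numpy as np

def maxlog_llr(y, constellation, labels, noise_var):
    """Max-log LLR demapping for an arbitrary (possibly non-uniform)
    2D constellation.

    For each bit position b:
      LLR(b) = (min_{s: b=1} |y - s|^2 - min_{s: b=0} |y - s|^2) / N0,
    so a positive LLR favors bit value 0. The exhaustive search over
    all points is the costly part that reduced-complexity demappers
    try to shrink.
    """
    d2 = np.abs(y - constellation) ** 2   # squared distance to each point
    nbits = labels.shape[1]
    llrs = np.empty(nbits)
    for b in range(nbits):
        d0 = d2[labels[:, b] == 0].min()
        d1 = d2[labels[:, b] == 1].min()
        llrs[b] = (d1 - d0) / noise_var
    return llrs
```

For a received symbol sitting exactly on a QPSK point, both bit LLRs come out at the squared distance to the nearest competing points, as expected.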
13. Mode Selection Algorithm for Multicast Service Delivery
- Author
- Yunfeng Guan, Wenjun Zhang, Yiwei Zhang, Yin Xu, and Dazhi He
- Subjects
- Multicast, Point-to-multipoint communication, Single-frequency networks, Throughput, Nonlinear programming, Transmission, Cellular networks, Mode selection, Media Technology, Electrical and Electronic Engineering
- Abstract
The ever-growing demand for data-hungry services makes multicast play an increasingly important role in service delivery. 3GPP has enabled multicast for the cellular network, including the adoption of two distinct multicast modes, Multicast/Broadcast Single Frequency Network (MBSFN) and Single Cell Point to Multipoint (SC-PTM). However, the lack of flexible selection between the two multicast modes limits transmission capacity. To further increase system throughput, this article focuses on exploiting the complementarity between the MBSFN and SC-PTM modes, namely selecting the appropriate multicast mode for each cell in the network. This approach benefits from the trade-off between the utilization of user diversity via SC-PTM and the extra SFN gain from MBSFN, and increases system throughput by enhancing configuration flexibility. By constructing an analytical model of a network comprising cells with different multicast modes, we formulate a multicast mode selection problem that aims to maximize system throughput. The formulated mixed-integer nonlinear programming problem is then converted to a Difference of Convex (DC) program, which is solved by the proposed algorithm based on the concave-convex procedure. Considering potential massive-scale networks and high selection frequency, alternative mode selection algorithms with lower complexity are also designed.
- Published
- 2021
14. NUC Optimization-Aided Hierarchical Modulation to Achieve Comparable Capacity as Layered Division Multiplexing
- Author
- Dazhi He, Wenjun Zhang, Yin Xu, Lidie Liu, and Yiyan Wu
- Subjects
- Particle swarm optimization, Hierarchical modulation, Multiplexing, Signal-to-noise ratio, Successive interference cancellation, Modulation, Media Technology, Electrical and Electronic Engineering
- Abstract
This article investigates non-uniform constellation (NUC) optimization adapted for hierarchical modulation (HM) without using Successive Interference Cancellation (SIC). This approach promises to reduce system demodulation/decoding delay in comparison to Layered Division Multiplexing (LDM). By maximizing the constellation-constrained capacity of the Enhanced Layer (EL) service in HM while keeping the capacity of the Core Layer (CL) service approximately the same in HM and LDM, the capacity achieved by HM using NUCs becomes comparable to that of LDM. A Particle Swarm Optimization (PSO) algorithm is used to solve this problem. To accelerate the optimization, the initial constellation is selected from regular NUCs or from combinations of the CL and EL constellations of LDM in ATSC 3.0. The results imply that under certain capacity demands, especially when there is a large difference between the SNR thresholds of CL and EL or the power ratio of CL to EL is high (for example, 10 dB or higher), HM, with lower delay than LDM, can achieve capacity comparable to or even better than LDM with the help of NUCs. Even when the power ratio of CL to EL is relatively low (for example, 3 dB), the capacity loss can be reduced with properly designed NUCs, and the SNR threshold loss of EL can be kept to approximately 0.4 dB with respect to LDM. However, LDM remains superior to HM when the difference between the SNR thresholds of CL and EL is relatively small.
- Published
- 2021
15. Min-Sum Algorithm Using Multi-Edge-Type Normalized Scheme for ATSC 3.0 LDPC Decoders
- Author
- Wenjun Zhang, Hanjiang Hong, Chang Wen Chen, Dazhi He, Yin Xu, Sung-Ik Park, Hao Ju, and Na Gao
- Subjects
- Approximation algorithms, Additive white Gaussian noise, Frame error rate, Low-density parity-check codes, Decoding methods, Density evolution, Media Technology, Electrical and Electronic Engineering
- Abstract
To provide commercial LDPC decoders with better performance, this paper proposes a multi-edge-type (MET) normalized scheme that improves the traditional min-sum algorithm (MSA) and its modified versions. With respect to the sum-product algorithm (SPA), we first show theoretically that the convergence degradation of the MSA differs across edge types. To compensate for these distinct degradations, the proposed MET-normalized scheme is applied to the normalized min-sum algorithm, yielding MET-NMSA. In addition, MET-based density evolution (MET-DE) is presented to search for optimized MET scaling factors for MET-NMSA. To further verify the validity of the MET-normalized scheme, Advanced Television Systems Committee (ATSC) 3.0 LDPC codes are used to evaluate MET-NMSA against the conventional algorithms, i.e., the normalized min-sum algorithm (NMSA), offset min-sum algorithm (OMSA), and variable-scaling normalized min-sum algorithm (VS-NMSA). Extensive simulation results show that MET-NMSA for ATSC 3.0 LDPC decoders has gains of up to 1.53 dB, 0.21 dB, and 0.35 dB over NMSA, OMSA, and VS-NMSA, respectively, at a frame error rate (FER) of 10⁻⁴ over additive white Gaussian noise (AWGN) channels with QPSK modulation.
- Published
- 2020
16. Enhancements on Coding and Modulation Schemes for LTE-Based 5G Terrestrial Broadcast System
- Author
- Dazhi He, Yang Cai, Wenjun Zhang, Hanjiang Hong, Yin Xu, Na Gao, Yiyan Wu, and Xiaohan Duan
- Subjects
- Broadband networks, 3GPP, Spectral efficiency, Broadcasting, Turbo codes, Low-density parity-check codes, Quadrature amplitude modulation, Media Technology, Electrical and Electronic Engineering
- Abstract
Broadcast and broadband networks are moving towards integration, and LTE-based 5G terrestrial broadcast is now being studied in Release 16 of the Third Generation Partnership Project (3GPP) standardization meetings. However, the work scope of LTE-based 5G terrestrial broadcast focuses on specifying new numerologies and some minor improvements to the cell acquisition subframe, which is insufficient. In this paper, the limitations of the coding and modulation schemes of the LTE-based 5G terrestrial broadcast system, i.e., Turbo codes and Quadrature Amplitude Modulation (QAM), are analyzed in detail. To further enhance the spectrum efficiency of the LTE-based 5G terrestrial broadcast system, LDPC (Low Density Parity Check) codes from the 5G New Radio (NR) standard and newly designed non-uniform constellations (NUCs) are adopted in this paper to replace Turbo codes and QAM, respectively. Extensive simulations and complexity analysis show that the proposed LDPC coding and NUC modulation schemes, either standalone or combined, provide significant performance gains over Additive White Gaussian Noise (AWGN) and Tapped Delay Line (TDL) channels without additional complexity. To summarize, this paper investigates the weaknesses of the coding and modulation schemes in current systems and provides potential alternatives for enhanced future broadcast in the 3GPP standard.
- Published
- 2020
17. Modeling the Screen Content Image Quality via Multiscale Edge Attention Similarity
- Author
- Yiling Xu, Wenjun Zhang, Zhan Ma, Le Yang, Jun Sun, and Qi Yang
- Subjects
- Image quality, Mean opinion score, Human visual system, Luminance, Pattern recognition, Correlation coefficient, Media Technology, Electrical and Electronic Engineering
- Abstract
Screen content images (SCIs) prevail because of the explosive growth of screen-oriented applications, which has led to extensive studies on SCI quality assessment and modeling for application optimization. In this paper, we propose a full-reference multiscale edge attention (MSEA) similarity index to efficiently measure the perceptual quality of a screen image. The model jointly considers the perceptual impacts of fixation attention, edge structure, and edge contrast to accurately capture the masking phenomena (e.g., frequency selectivity, luminance, contrast, etc.) of the human visual system (HVS) when viewing a screen image. Specifically, we decompose the images using Gaussian and Laplacian pyramids, which are then used to derive the edge structure and edge contrast feature maps. Together with the fixation attention map generated by the weighted luminance difference between the reference and distorted SCIs, these yield an MSEA similarity map for the final index score. We have evaluated this model using a publicly accessible screen image database. Simulation results show that the MSEA similarity index correlates very well with the collected subjective mean opinion scores (MOS). Among existing quality metrics, it ranks first in both Pearson linear correlation coefficient (PLCC) and root mean squared error (RMSE), and second in Spearman rank-order correlation coefficient (SROCC).
- Published
- 2020
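The Gaussian/Laplacian decomposition underlying MSEA-style metrics can be sketched as follows; the 3-tap kernel and level count below are illustrative stand-ins, since the abstract does not specify the paper's kernel:

```python
import numpy as np

def blur(img):
    """Separable 3-tap [1, 2, 1]/4 blur with edge replication
    (a stand-in for an unspecified Gaussian kernel)."""
    p = np.pad(img, 1, mode='edge')
    img = (p[:-2, 1:-1] + 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4   # vertical pass
    p = np.pad(img, 1, mode='edge')
    return (p[1:-1, :-2] + 2 * p[1:-1, 1:-1] + p[1:-1, 2:]) / 4  # horizontal pass

def pyramids(img, levels=3):
    """Gaussian pyramid by blur + 2x downsample; Laplacian levels are
    the per-scale residuals (Gaussian level minus its blurred version),
    which carry the edge structure used by multiscale edge metrics."""
    gauss, lap = [np.asarray(img, float)], []
    for _ in range(levels - 1):
        g = gauss[-1]
        b = blur(g)
        lap.append(g - b)          # band-pass residual at this scale
        gauss.append(b[::2, ::2])  # decimate for the next scale
    lap.append(gauss[-1])          # coarsest level closes the pyramid
    return gauss, lap
```

Edge-structure and edge-contrast maps would then be computed per Laplacian level and compared between reference and distorted images.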
18. Overview of Physical Layer Enhancement for 5G Broadcast in Release 16
- Author
- Yiwei Zhang, Yihang Huang, Dazhi He, Hao Cheng, Yin Xu, Wenjun Zhang, Hanjiang Hong, Xiaohan Duan, Wang Wanting, and Huang Xiuxuan
- Subjects
- Standardization, Point-to-multipoint communication, Physical layer, Multimedia Broadcast Multicast Service, Unicast, 5G, Media Technology, Electrical and Electronic Engineering
- Abstract
3GPP is now carrying forward the development of the point-to-multipoint transmission mode of 5G on the basis of the cellular infrastructure and standard. Since the dedicated proposal for the Multimedia Broadcast Multicast Service (MBMS) was approved, MBMS technologies have been evolving with the updated requirements from 3G to 5G. This paper reviews the latest progress in evolved MBMS standardization, focusing on the LTE-based 5G terrestrial broadcast mode. Agreements on two widely discussed topics at 3GPP meetings, cell acquisition subframe (CAS) enhancement and numerology refinement, are presented. The enhancement of the CAS and the update of the numerology aim to address the service outage issue when covering large areas and serving high-mobility users. To verify the improvements brought by the CAS enhancement and the new numerology, system-level and link-level simulations have been carried out by several organizations, and representative results are presented and analyzed in this paper.
- Published
- 2020
19. A Wavelet-Predominant Algorithm Can Evaluate Quality of THz Security Image and Identify Its Usability
- Author
- Xiongkuo Min, Rong Xie, Qingli Li, Wenjun Zhang, Xiaokang Yang, Menghan Hu, and Guangtao Zhai
- Subjects
- Image quality, Mean opinion score, Noise estimation, Image noise, Image resolution, Media Technology, Electrical and Electronic Engineering
- Abstract
This paper presents an aggregate wavelet-predominant algorithm to measure the distortions in THz security images. The algorithm integrates a spectral-based sharpness estimator, a noise estimator derived from an alpha-stable model, and an overall viewing-experience estimator based on the free-energy principle. Among them, the greatest weight is assigned to the spectral-based sharpness estimator, since the main quality factor in THz security images is sharpness. To verify the feasibility of the proposed metric, we construct a THz security image dataset of 181 THz security images, each with a mean opinion score (MOS) collected via a subjective quality evaluation experiment. Quantitative experimental results on the constructed dataset show that the aggregate wavelet-predominant estimator produces promising overall performance for the estimation of MOS values, with PLCC, SROCC, and RMSE of 0.900, 0.873, and 0.386, respectively. This performance is superior to other opinion-unaware approaches, viz., FISBLIM, SISBLIM, NIQE, CPBD, SINE, S3, FISH, and the noise estimator. The determination coefficient (R²) of the linear regression between reference and predicted MOSs is 0.81. Bland–Altman analysis further validates that the aggregate wavelet-predominant estimator can substitute for subjective IQA of THz security images, with approximately 94.5% of data points lying within the limits of agreement. For usability identification, the wavelet-predominant estimator gives satisfactory results, with accuracy, precision, recall rate, and false positive rate of 84.0%, 79.8%, 95.0%, and 29.6%, respectively. Furthermore, potential applications of the proposed metric include commercial uses (guaranteeing THz security images of good quality) and scientific research (assisting software development for THz security image analysis). The dataset is available at https://doi.org/10.6084/m9.figshare.7700123.v3. Possible research on this dataset includes the development of THz quality standards, the selection of the best display mode, image enhancement, image noise modeling, and the detection of prohibited goods.
- Published
- 2020
20. A Low Complexity Decoding Scheme for Raptor-Like LDPC Codes
- Author
- Wenjun Zhang, Jun Sun, Dazhi He, Yin Xu, Yiyan Wu, Hao Ju, and Genning Zhang
- Subjects
- Approximation algorithms, Code rate, Belief propagation, Likelihood ratio test, Digital television, Low-density parity-check codes, Decoding methods, Media Technology, Electrical and Electronic Engineering
- Abstract
Recently, a new structure of low density parity check (LDPC) code named the raptor-like LDPC code has attracted much attention; it has better performance at low code rates. In this paper, a novel decoding scheme for raptor-like LDPC codes is proposed. First, the Gaussian approximation density evolution (GADE) algorithm is used to track and analyze the message transmission during the decoding process. It is found that certain log likelihood ratio (LLR) messages can be approximated by setting them to zero in the early iterations of raptor-like LDPC decoding. In other words, some column and row operations can be eliminated without compromising performance. Next, we propose a new decoding scheme that skips these unnecessary column and row operations. In comparison with traditional belief propagation (BP)-based LDPC decoding schemes, the proposed scheme reduces decoding complexity. Additionally, a new algorithm is developed to facilitate the selection of the early iteration number. With this design, the proposed decoding scheme performs almost the same as the traditional BP-based scheme. To prove the concept, the raptor-like LDPC codes in the ATSC 3.0 digital TV system are used to evaluate the proposed scheme against the traditional BP-based schemes, i.e., the sum-product algorithm (SPA), offset min-sum algorithm (OMSA), and normalized min-sum algorithm (NMSA). The simulation results confirm that the proposed scheme reduces decoding complexity without sacrificing performance for all of the SPA, OMSA, and NMSA methods; about 10% complexity reduction is achieved. This will reduce the battery consumption of Internet of Things (IoT) and handheld devices.
- Published
- 2019
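The idea of skipping row operations when messages are (near-)zero can be illustrated with a simple min-sum fact: a check node with two or more zero-magnitude inputs emits only zero messages, since each outgoing minimum excludes just one input and so always retains a zero. The sketch below flags such rows; the paper's actual selection is guided by GADE, not this simple rule:

```python
import numpy as np

def skippable_rows(H, llr_mags):
    """Rows of parity-check matrix H whose min-sum outputs are all zero.

    If a check node sees two or more zero-magnitude incoming messages,
    every outgoing min-sum message is zero and its row update can be
    skipped for this iteration without changing the result.
    """
    skip = []
    for r, row in enumerate(H):
        mags = llr_mags[row != 0]           # magnitudes on this row's edges
        if np.count_nonzero(mags == 0) >= 2:
            skip.append(r)
    return skip
```

For raptor-like codes, punctured variable nodes start with zero LLRs, which is why many early-iteration row operations contribute nothing and can be elided.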
21. Adaptive Bootstrap Design for Hybrid Terrestrial Broadcast and Mobile Communication Networks
- Author
-
Dazhi He, Wenjun Zhang, Yihang Huang, Yin Xu, and Yunfeng Guan
- Subjects
Computer science ,Bandwidth (signal processing) ,Fast Fourier transform ,020206 networking & telecommunications ,02 engineering and technology ,Frequency domain ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Frequency offset ,Time domain ,Electrical and Electronic Engineering ,Algorithm ,Multipath propagation ,Decoding methods ,Communication channel - Abstract
The bootstrap in ATSC 3.0 is expected to act as a universal wake-up signal for various wireless systems in addition to broadcast networks. However, the signaling decoding performance of the standardized bootstrap degrades significantly in channels with fast time variation and strong multipath. Furthermore, part of the available bandwidth is reserved to ensure compatibility with mobile communication networks (MCNs), which limits the achievable performance improvement. In this paper, to make the best of the available bandwidth, we introduce bandwidth-concerned version information to enable an adaptive bandwidth configuration for the proposed bootstrap. Moreover, a 2-D signaling scheme is used to increase signaling capacity by selecting different Gold sequences in the frequency domain (FD) and simultaneously applying cyclic shifts in the time domain (TD). At the receiver side, we first provide improved estimators of symbol timing offset (STO) and fine frequency offset (FFO) for the bootstrap with its special TD structure. Meanwhile, a learning-based binary classifier taking the output of the STO estimator as training data is provided to generate an SNR-independent threshold for spectrum sensing, requiring neither knowledge of channel conditions nor a noise estimator. Afterwards, an inverse fast Fourier transform (IFFT)-based algorithm is used to decode the FD signaling in the presence of unknown TD signaling, which allows 3 bits of additional signaling capacity. Numerical analysis and simulation results demonstrate that the proposed adaptive bootstrap design and the corresponding receiver algorithms significantly outperform the standardized one in terms of synchronization, detection, and signaling decoding.
- Published
- 2019
22. Performance Analysis of LDPC-BICM System Based on Gaussian Approximation
- Author
-
Yin Xu, Jun Sun, Wenjun Zhang, Dazhi He, and Genning Zhang
- Subjects
Discretization ,Computer science ,Gaussian ,Approximation algorithm ,020206 networking & telecommunications ,Probability density function ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Gaussian approximation ,symbols.namesake ,Additive white Gaussian noise ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,symbols ,Demodulation ,Electrical and Electronic Engineering ,Low-density parity-check code ,Algorithm ,Computer Science::Information Theory - Abstract
In this paper, we propose a performance analysis algorithm for low-density parity-check bit-interleaved coded modulation (LDPC-BICM) systems. First, we introduce a Gaussian mixture approximation of the LLR messages output by the demodulator over the AWGN channel. Then, we analyze the density evolution based on these Gaussian mixture approximated LLR messages, applying the Gaussian approximation to simplify the calculation. Several LDPC-BICM systems adopted by the ATSC 3.0 standard are used to evaluate the proposed algorithm, with the actual performances and the thresholds obtained by the multi-edge-type (MET) discretized density evolution algorithm serving as references. The simulation results show that the proposed algorithm works well in most cases. Furthermore, it is simpler than the MET algorithm owing to the Gaussian and Gaussian mixture approximations.
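For intuition, the single-Gaussian special case of such an analysis can be sketched for a regular (dv, dc) LDPC code over BPSK/AWGN, using the common curve-fit approximation of the phi function due to Chung et al. The constants and the divergence test below are illustrative; the paper's Gaussian-mixture BICM analysis is considerably more involved than this sketch.

```python
import math

# Curve-fit constants for the phi function of Gaussian-approximation density
# evolution (Chung et al.); the fit is accurate for moderate arguments.
ALPHA, BETA, GAMMA = -0.4527, 0.0218, 0.86

def phi(x):
    return math.exp(ALPHA * x**GAMMA + BETA)

def phi_inv(y):
    return ((math.log(y) - BETA) / ALPHA) ** (1.0 / GAMMA)

def ga_converges(sigma, dv=3, dc=6, iters=200, target=1e3):
    """Track the check-to-variable LLR mean for a regular (dv, dc) LDPC code
    over BPSK/AWGN with noise std sigma; True means the mean diverges,
    i.e. decoding succeeds in the Gaussian-approximation sense."""
    m0 = 2.0 / sigma**2              # mean of the channel LLRs
    mu = 0.0
    for _ in range(iters):
        mv = m0 + (dv - 1) * mu      # variable-node update
        p = (1.0 - phi(mv)) ** (dc - 1)
        if p >= 1.0:                 # phi underflowed: messages have diverged
            return True
        mu = phi_inv(1.0 - p)        # check-node update
        if mu > target:
            return True
    return False
```

For the (3, 6) regular ensemble the Gaussian-approximation threshold is near sigma = 0.87, so a channel at sigma = 0.8 converges while one at sigma = 1.0 does not.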
- Published
- 2019
23. Transmitter identification with watermark signal in DVB-H single frequency network
- Author
-
Feng Yang, Ling Na Hu, Lin Gui, Zhe Wang, and Wenjun Zhang
- Subjects
Radio transmitters -- Analysis ,Radio transmitters -- Electric properties ,Telecommunication systems -- Analysis ,Telecommunication systems -- Electric properties ,Business ,Electronics ,Mass communications - Published
- 2009
24. Improved Bootstrap Design for Frequency-Domain Signaling Transmission
- Author
-
Yanfeng Wang, Mingmin Wang, Wenjun Zhang, Dazhi He, Yihang Huang, and Yin Xu
- Subjects
Sequence ,Computer science ,Orthogonal frequency-division multiplexing ,020302 automobile design & engineering ,020206 networking & telecommunications ,02 engineering and technology ,Synchronization ,0203 mechanical engineering ,Transmission (telecommunications) ,Frequency domain ,Distortion ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Electronic engineering ,Electrical and Electronic Engineering ,Algorithm ,Multipath propagation ,Decoding methods - Abstract
In ATSC 3.0, a bootstrap is used to enable initial synchronization and carry transmission parameter signaling. The bootstrap consists of multiple orthogonal frequency-division multiplexing symbols, which enables variable signaling capacity, and signaling information is conveyed by applying cyclic shifts in the time domain. However, the bootstrap suffers from performance loss in channels with strong multipath and fast time variation. In this paper, an improved bootstrap enabling the separate design of the synchronization part and the signaling transmission part is proposed. A zero-correlation-zone sequence is employed in the frequency domain (FD) for signaling transmission by cyclically shifting the sequence in the FD. At the receiver, a universal signaling decoding method independent of the synchronization part design is presented. Besides, to eliminate the performance degradation introduced by non-linear distortion of the channel transfer function, signaling validation and correction operations on the previous symbol are required. Numerical simulation indicates that the proposed design achieves better signaling transmission performance than the current one in some corner cases.
- Published
- 2017
25. Using object multiplex technique in data broadcast on Digital CATV channel
- Author
-
Zhiqi Gu, Songyu Yu, and Wenjun Zhang
- Subjects
Cable television ,Cable television/data services ,Business ,Electronics ,Mass communications - Abstract
Data broadcast is a new kind of value-added service in DTV broadcasting, and several data broadcast protocols have already been established. However, these protocols only describe the method for locating files in data streams; a method for distributing a large collection of files across one or more data streams is not provided. Research on this problem has mainly focused on decreasing the wait time, and several methods for allocating files to multiple streams based on access probability have been proposed, but how to assign each file a reasonable bandwidth is ignored. In this paper, we introduce an object multiplex algorithm to optimize the allocation of objects on a DTV channel. This method statistically assigns different bandwidth to each object according to its size and access probability, considering both download time and wait time, and adopts a modified Virtual Clock (VC) scheduling algorithm to multiplex files accurately and smoothly.
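As a rough sketch of bandwidth assignment by size and access probability, the classic square-root rule for cyclic data broadcast can serve: the bandwidth share of an object is made proportional to the square root of (probability x size), which minimizes the expected wait time in an idealized cyclic schedule. This is a plausible reading of the statistical allocation described above, not the paper's modified Virtual Clock scheme.

```python
import math

def bandwidth_shares(objects):
    """Allocate broadcast bandwidth shares by the classic square-root rule.

    objects: list of (size, access_probability) pairs.
    Returns a list of fractional bandwidth shares (summing to 1), with
    share_i proportional to sqrt(p_i * size_i).
    """
    weights = [math.sqrt(p * s) for s, p in objects]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, with two objects of equal popularity but a 1:4 size ratio, the larger object receives twice (not four times) the bandwidth of the smaller one.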
- Published
- 2004
26. Media Transmission by Cooperation of Cellular Network and Broadcasting Network
- Author
-
Wang Yao, Yanfeng Wang, Lianghui Ding, Dazhi He, Wei Li, Yiyan Wu, Wenjun Zhang, and Ning Liu
- Subjects
Network architecture ,Radio access network ,Multicast ,Computer science ,business.industry ,0211 other engineering and technologies ,Physical layer ,020206 networking & telecommunications ,02 engineering and technology ,Digital terrestrial television ,Broadcast communication network ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Cellular network ,Electrical and Electronic Engineering ,Unicast ,Telecommunications ,business ,021101 geological & geomatics engineering ,Computer network - Abstract
Future media transmission is a common target pursued by both the next-generation mobile communication system (5G) and advanced digital terrestrial television systems, with high data rates and flexibility as major considerations. Broadcasting networks facilitate high-efficiency transmission of popular or live video over large areas, while cellular networks tend to provide personalized and localized services with a unicast/multicast model. A broadcast-like scheme has emerged in 5G to resolve the high demand for bandwidth; however, it requires high deployment cost and imposes much interference on unicast/multicast services. In this paper, a cooperative structure of cellular network and broadcasting network using a cloud radio access network (C-RAN) is proposed. The expense of constructing a hybrid network can be significantly cut down by applying the centralization and virtualization of C-RAN. Besides, technical approaches for 3GPP and ATSC cooperation in the physical layer are detailed. A dedicated return channel (DRC) of the broadcasting network is proposed to enable seamless interaction between broadcasters and the few users in remote areas where cellular tower deployment is expensive. To loosen the real-time physical layer pipes period restriction of the DRC system, three alternative periods are investigated to provide more flexibility to broadcasters.
- Published
- 2017
27. Bidirectional Broadcasting in Next Generation Broadcasting-Wireless (NGB-W) Network
- Author
-
Wenjun Zhang, Yihe Dai, Dazhi He, Yunfeng Guan, and Jun Liu
- Subjects
Service (systems architecture) ,Computer science ,Wireless network ,business.industry ,Triple play (telecommunications) ,05 social sciences ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,0508 media and communications ,Broadcasting (networking) ,Terrestrial television ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Digital broadcasting ,Wireless ,Digital television ,Electrical and Electronic Engineering ,Telecommunications ,business ,Computer network - Abstract
With the increasing need for video-on-demand services in broadcast television and the development of the triple play strategy, the next-generation wireless network television system has been presented. With its interactive capability, the next generation broadcasting-wireless (NGB-W) experimental network can cooperate with the digital terrestrial multimedia broadcasting and China mobile multimedia broadcasting downlink systems. In this paper, the key technologies of NGB-W are described in detail, and an application test and the network deployment of NGB-W in Shanghai are presented to validate its extensive use.
- Published
- 2017
28. Using LDM to Achieve Seamless Local Service Insertion and Local Program Coverage in SFN Environment
- Author
-
Wei Li, Jae-Young Lee, Manuel Velez, Khalil Salehian, Wang Yao, Heung-Mook Kim, Sung-Ik Park, Wenjun Zhang, Liang Zhang, Pablo Angueira, Yiyan Wu, Sebastien Lafleche, Dazhi He, Jon Montalban, and Yunfeng Guan
- Subjects
Engineering ,Service quality ,Boosting (machine learning) ,business.industry ,Orthogonal frequency-division multiplexing ,Transmitter ,MIMO ,Physical layer ,Single-frequency network ,020206 networking & telecommunications ,02 engineering and technology ,Multiplexing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Electronic engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,business - Abstract
Layered division multiplexing (LDM) is a spectrum-efficient non-orthogonal multiplexing technology that has been adopted in the Advanced Television Systems Committee (ATSC) 3.0 Physical Layer Standard as a baseline technology. This paper studies a two-layer LDM with one layer used for providing a global service through a single frequency network (SFN), and the other for providing local coverage/services, such as location-targeted advertising or local content insertion. The pilot boosting effect on SNR and co-channel interference is also analyzed. The LDM upper layer can be used to deliver time-division multiplexed mobile-HD and 4k-UHD services. The LDM lower layer, with a negative SNR threshold, can reliably provide seamless local coverage/service from each SFN transmitter without coverage gaps among adjacent SFN transmitter service areas. No directional receiving antenna is required for local service reception; receivers simply tune into the stronger signal. In such LDM systems, while the upper layer operates in a traditional SFN mode, the lower layer operates in a special form of distributed MIMO or gap-filler mode to provide targeted local coverage. For implementing the two-layer system introduced in this paper, only ATSC 3.0 baseline technologies are used, i.e., there is no need to modify the ATSC 3.0 standard. Given the upper and lower layers’ data rate requirements and the SNR thresholds, the lower layer power with respect to the upper layer (injection level) can be optimized to maximize upper and lower layer performance and coverage. Since the advertisement time of the local service is typically less than 20% of the program time, non-real-time transmission could be used to play back the local content at five times the transmission bit rate for better (audio/video) service quality.
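The injection-level trade-off described above can be illustrated with the textbook two-layer LDM power model, assuming perfect upper-layer cancellation at the lower-layer receiver. The function and its parameters are a simplified sketch, not the ATSC 3.0 link-budget calculation.

```python
import math

def ldm_layer_snrs(total_snr_db, injection_level_db):
    """Effective per-layer SNRs for a two-layer LDM signal.

    total_snr_db: received (upper + lower) signal power over noise, in dB.
    injection_level_db: lower-layer power relative to the upper layer
    (negative, e.g. -5 dB).
    Assumes the lower-layer receiver perfectly cancels the upper layer.
    Returns (upper_snr_db, lower_snr_db).
    """
    g = 10 ** (injection_level_db / 10)   # linear lower/upper power ratio
    snr = 10 ** (total_snr_db / 10)       # total linear SNR
    p_upper = snr / (1 + g)               # split total power between layers
    p_lower = snr * g / (1 + g)
    upper_snr = p_upper / (p_lower + 1)   # lower layer acts as interference
    lower_snr = p_lower                   # noise-limited after cancellation
    return 10 * math.log10(upper_snr), 10 * math.log10(lower_snr)
```

For a 20 dB channel with a -5 dB injection level this gives roughly 4.8 dB for the upper layer and 13.8 dB for the lower layer, showing how lowering the injection level trades lower-layer margin for upper-layer robustness.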
- Published
- 2017
29. Perceptual Reduced-Reference Visual Quality Assessment for Contrast Alteration
- Author
-
Wenjun Zhang, Guangtao Zhai, Ke Gu, Min Liu, Patrick Le Callet, Shandong Agricultural University (SDAU), Laboratoire des Sciences du Numérique de Nantes (LS2N), Université de Nantes - UFR des Sciences et des Techniques (UN UFR ST), Université de Nantes (UN)-Université de Nantes (UN)-École Centrale de Nantes (ECN)-Centre National de la Recherche Scientifique (CNRS)-IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT), Image Perception Interaction (IPI), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Université de Nantes - UFR des Sciences et des Techniques (UN UFR ST), IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), and Université de Nantes (UN)-Université de Nantes (UN)-École Centrale de Nantes (ECN)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
business.industry ,Image quality ,Computer science ,media_common.quotation_subject ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Visualization ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,Histogram ,Metric (mathematics) ,Human visual system model ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Contrast (vision) ,020201 artificial intelligence & image processing ,Quality (business) ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Divergence (statistics) ,ComputingMilieux_MISCELLANEOUS ,media_common - Abstract
In image/video systems, contrast adjustment, which aims to enhance visual quality, is nowadays an important research topic. Yet very limited effort has been devoted to the exploration of visual quality assessment for contrast adjustment. To tackle this issue, this paper proposes a novel reduced-reference (RR) quality metric integrating bottom-up and top-down strategies. The former stems from the recently revealed free-energy principle, which holds that the human visual system seeks to comprehend an input image via uncertainty removal, while the latter uses the symmetric Kullback–Leibler divergence to compare the histogram of the contrast-altered image with that of the pristine image. The bottom-up and top-down strategies are finally incorporated to derive the RR contrast-altered image quality measure. A comparison with numerous existing IQA models is carried out on five contrast-related databases/subsets in CID2013, CCID2014, CSIQ, TID2008, and TID2013, and experimental results validate the superiority of the proposed technique.
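The top-down branch compares histograms with the symmetric Kullback–Leibler divergence, which can be sketched as follows; the normalization and epsilon smoothing are implementation choices here, not taken from the paper.

```python
import numpy as np

def symmetric_kl(hist_ref, hist_test, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two image histograms.

    Both histograms are eps-smoothed and normalized to probability
    distributions; the result is KL(p||q) + KL(q||p), zero iff the
    histograms are identical.
    """
    p = np.asarray(hist_ref, dtype=float) + eps
    q = np.asarray(hist_test, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```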
- Published
- 2017
30. Analysis of Distortion Distribution for Pooling in Image Quality Prediction
- Author
-
Shiqi Wang, Ke Gu, Wenjun Zhang, Guangtao Zhai, Xiaokang Yang, and Weisi Lin
- Subjects
Image quality ,Pooling ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Visualization ,Distribution (mathematics) ,Position (vector) ,Histogram ,Distortion ,Quality Score ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Data mining ,Electrical and Electronic Engineering ,computer ,Mathematics - Abstract
Image quality assessment (IQA) has been an active research area during the last decades. Many existing objective IQA models share a similar two-step structure that measures local distortion before pooling. Compared with the rapid development of local distortion measurement, little effort has been dedicated to effective pooling schemes. In this paper, we design a new pooling model via the analysis of the distortion distribution as affected by image content and distortion type; that is, the distributions of distortion position, distortion intensity, frequency changes, and histogram changes are comprehensively considered to infer an overall quality score. Experimental results on four large-scale image quality databases (LIVE, TID2008, CSIQ, and CCID2014) lead to three valuable findings. First, the proposed technique yields consistent improvement in IQA performance for the studied local distortion measures. Second, relative to traditional pooling, the performance gain of our algorithm is beyond 15% on average. Third, the best overall performance achieved by the proposed strategy outperforms state-of-the-art competitors.
- Published
- 2016
31. Dedicated Return Channel for ATSC 3.0
- Author
-
Lianghui Ding, Feng Yang, Dazhi He, Wenjun Zhang, Yiyan Wu, and Yunfeng Guan
- Subjects
Network architecture ,Computer science ,business.industry ,Physical layer ,020302 automobile design & engineering ,020206 networking & telecommunications ,Hardware_PERFORMANCEANDRELIABILITY ,02 engineering and technology ,Return channel ,0203 mechanical engineering ,Link budget ,Next-generation network ,Digital Video Broadcasting ,Telecommunications link ,Hardware_INTEGRATEDCIRCUITS ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Bit error rate ,Electrical and Electronic Engineering ,business ,Hardware_LOGICDESIGN ,Computer network - Abstract
To support emerging interactive services, a consensus has been reached to include an optional, in-band dedicated return channel (DRC) in the next-generation terrestrial broadcast system (ATSC 3.0). This paper introduces the design of the DRC as well as several implementation concerns. First, the network architecture, link budget, and radiation features of the DRC are described in detail. Then the requirements and detailed design of both the physical layer and the MAC layer of the DRC are presented. Meanwhile, the performance of the DRC is evaluated and compared with typical techniques in terms of bit error rate, access probability, and resource utilization. Furthermore, the cooperation between downlink broadcast and the DRC is analyzed.
- Published
- 2016
32. Quality Assessment Considering Viewing Distance and Image Resolution
- Author
-
Min Liu, Wenjun Zhang, Guangtao Zhai, Ke Gu, and Xiaokang Yang
- Subjects
Discrete wavelet transform ,Computer science ,Image quality ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Visualization ,Clipping (photography) ,Media Technology ,Preprocessor ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Image resolution ,Transform coding ,Sub-pixel resolution - Abstract
Viewing distance and image resolution have substantial influences on image quality assessment (IQA), but this issue has been largely overlooked in the literature so far. In this paper, we examine the problem of optimal resolution adjustment as a preprocessing step for IQA. In general, the sampling of visual information by the optics of the human eye is approximately a low-pass process, and for a given visual scene, the amount of extractable information greatly depends on the viewing distance and image resolution. We first introduce a dedicated viewing-distance-changed image database (VDID2014) with two groups of typical viewing distances and image resolutions to promote IQA study of this issue. We then design a new optimal scale selection (OSS) model in dual transform domains, in which a cascade of adaptive high-frequency clipping in the discrete wavelet transform domain and adaptive resolution scaling in the spatial domain is used. Validation of our technique is conducted on five image databases (LIVE, IVC, Toyama, VDID2014, and TID2008). Experimental results show that the performance of peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) can be substantially improved by applying these metrics to OSS-preprocessed images, superior to the classical multi-scale PSNR/SSIM and comparable to state-of-the-art competitors.
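The core idea of applying a metric to resolution-adjusted images can be sketched with plain PSNR and 2x2 box-filter downsampling; the paper's OSS model with adaptive wavelet-domain clipping is more elaborate than this stand-in.

```python
import numpy as np

def psnr(a, b):
    """Peak signal-to-noise ratio for 8-bit-range grayscale images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def downsample2(img):
    """2x2 box-filter downsampling (a simple stand-in for adaptive
    resolution scaling toward the viewing-distance-appropriate scale)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w].astype(float)
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def psnr_at_scale(ref, dist, n_halvings):
    """Compute PSNR after reducing both images to a coarser scale, mimicking
    a larger viewing distance before the metric is applied."""
    for _ in range(n_halvings):
        ref, dist = downsample2(ref), downsample2(dist)
    return psnr(ref, dist)
```

For additive noise, halving the resolution averages the noise down, so the scale-adjusted PSNR is higher than the full-resolution one, consistent with noise being less visible from farther away.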
- Published
- 2015
33. Hybrid No-Reference Quality Metric for Singly and Multiply Distorted Images
- Author
-
Xiaokang Yang, Wenjun Zhang, Ke Gu, and Guangtao Zhai
- Subjects
Standard test image ,Computer science ,Image quality ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Communications system ,Digital image processing ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Image warping ,Quantization (image processing) ,business ,Image restoration - Abstract
In a typical image communication system, the visual signal presented to the end user may undergo acquisition, compression, and transmission, which cause blurring, quantization, and noise artifacts. However, research on image quality assessment (IQA) with multiple distortion types is very limited. In this paper, we first introduce a new multiply distorted image database (MDID2013), composed of 324 images that are simultaneously corrupted by blurring, JPEG compression, and noise injection. We then propose a new six-step blind metric (SISBLIM) for quality assessment of both singly and multiply distorted images. Inspired by the early human visual model and the recently revealed free-energy-based brain theory, our method systematically combines the single quality prediction of each emerging distortion type with the joint effects of different distortion sources. Comparative studies of the proposed SISBLIM with popular full-reference IQA approaches and state-of-the-art no-reference IQA metrics are conducted on five singly distorted image databases (LIVE, TID2008, CSIQ, IVC, Toyama) and two newly released multiply distorted image databases (LIVEMD, MDID2013). Experimental results confirm the effectiveness of our blind technique. MATLAB code for the proposed SISBLIM algorithm and the MDID2013 database will be available online at http://gvsp.sjtu.edu.cn/.
- Published
- 2014
34. An Urban-Rural Dual Structure for the Digital Terrestrial Television Broadcasting System of FOBTV
- Author
-
Wang Yao, Yanfeng Wang, Dazhi He, Wenjun Zhang, Hui Hui, Yunfeng Guan, and Ya Zhang
- Subjects
Multimedia ,Commercial broadcasting ,GeneralLiterature_INTRODUCTORYANDSURVEY ,business.industry ,Broadcast law ,Internet television ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Broadcasting ,computer.software_genre ,Digital terrestrial television ,law.invention ,Terrestrial television ,law ,ComputerApplications_MISCELLANEOUS ,Media Technology ,Digital broadcasting ,Digital television ,Electrical and Electronic Engineering ,business ,Telecommunications ,computer - Abstract
In this paper, the requirements of digital terrestrial television broadcasting are analyzed. One structure, termed the urban-rural dual structure, is emphasized for the future broadcasting system, especially in developing countries such as China, Russia, and Brazil. A comparison between Shanghai and Fengyang in China is given to aid understanding of this unique structure, and some technical considerations on the structure are presented for the future of broadcast television.
- Published
- 2014
35. You Are What You Watch and When You Watch: Inferring Household Structures From IPTV Viewing Data
- Author
-
Rong Xie, Hongyuan Zha, Jun Du, Hongteng Xu, Wenjun Zhang, Xiaokang Yang, and Dixin Luo
- Subjects
education.field_of_study ,Ground truth ,Information retrieval ,Multimedia ,Computer science ,Population ,Inference ,IPTV ,computer.software_genre ,law.invention ,law ,Internet Protocol ,Classifier (linguistics) ,Media Technology ,Graph (abstract data type) ,Electrical and Electronic Engineering ,education ,computer ,TRACE (psycholinguistics) - Abstract
What you watch and when you watch say a lot about you, and such information aggregated across a user population provides significant insights for social and commercial applications. In this paper, we propose a model for inferring household structures by analyzing users' viewing behaviors in Internet Protocol Television (IPTV) systems. We focus on extracting features of viewing behaviors based on the dynamics of watching time and TV programs, and on training a classifier that infers household structures from these features. In the training phase, instead of merely using the limited labeled samples, we apply a semi-supervised learning strategy to obtain a graph-based model for classifying household structures from users' features. We test the proposed model on China Telecom IPTV data and demonstrate its utility in census research and system simulation. The demographic characteristics inferred by our approach match well with the population census data of Shanghai, and the inference of household structures of IPTV users gives encouraging results compared with the ground truth obtained by surveys, which opens the door to leveraging IPTV viewing data as a complement to time- and resource-consuming census tracking. The proposed model can also synthesize trace data for the simulation of IPTV systems, providing a new strategy for system simulation.
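The graph-based semi-supervised step can be sketched with simple label propagation over a user-affinity graph (a Zhu-Ghahramani-style sketch; the paper's actual graph construction and viewing features are not reproduced here).

```python
import numpy as np

def label_propagation(W, labels, n_iter=100):
    """Graph-based label propagation for semi-supervised classification.

    W: (n, n) symmetric non-negative affinity matrix between users.
    labels: length-n int array, -1 for unlabeled users and class ids
    0..C-1 for labeled ones.
    Returns the predicted class id for every node.
    """
    labels = np.asarray(labels)
    classes = sorted(set(labels[labels >= 0].tolist()))
    n = len(labels)
    F = np.zeros((n, len(classes)))
    for i, c in enumerate(classes):
        F[labels == c, i] = 1.0          # one-hot rows for labeled nodes
    P = W / W.sum(axis=1, keepdims=True)  # row-normalized transition matrix
    clamp = labels >= 0
    F0 = F.copy()
    for _ in range(n_iter):
        F = P @ F                         # diffuse labels over the graph
        F[clamp] = F0[clamp]              # re-clamp the labeled nodes
    return np.array(classes)[F.argmax(axis=1)]
```

With two tightly connected pairs of users and one labeled user per pair, the unlabeled users inherit their neighbors' household-structure labels.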
- Published
- 2014
36. DTMB Application in Shanghai SFN Transmission Network
- Author
-
Wenjun Zhang, Yunfeng Guan, Dingxiang Lin, Dazhi He, and Yihe Dai
- Subjects
business.industry ,Computer science ,Single-frequency network ,Digital terrestrial multimedia broadcasting ,Digital multimedia broadcasting ,Multipath channels ,Transmission network ,Digital Video Broadcasting ,Media Technology ,Mobile telephony ,Digital television ,Electrical and Electronic Engineering ,business ,Computer network - Abstract
In this paper, a single frequency network (SFN) application of the digital terrestrial multimedia broadcasting (DTMB) system in Shanghai is presented. A three-stage implementation practice is described in detail, and computer simulations of a DTMB receiver in severe dynamic multipath channels are presented to verify the method of SFN adjustment.
- Published
- 2013
37. Improve the Performance of LDPC Coded QAM by Selective Bit Mapping in Terrestrial Broadcasting System
- Author
-
Bo Liu, Bo Rong, Yin Xu, Yiyan Wu, Wenjun Zhang, Liang Gong, and Lin Gui
- Subjects
QAM ,Computer science ,Orthogonal frequency-division multiplexing ,Media Technology ,Electronic engineering ,Bit error rate ,Data_CODINGANDINFORMATIONTHEORY ,Code rate ,Electrical and Electronic Engineering ,Low-density parity-check code ,Error detection and correction ,Quadrature amplitude modulation ,Decoding methods - Abstract
In this paper, we employ selective bit mapping to improve the performance of the LDPC-coded QAM scheme for terrestrial DTV broadcasting systems. The threshold of message-passing decoding can be considerably lowered by selectively mapping the binary components of the LDPC codeword to positions in the m-tuples that are mapped into 2^m-QAM symbols. In our approach, the mapping pattern is described by bit-mapping polynomials, on which density evolution can be applied. The optimization algorithm is developed with two implementation concerns, using the Chinese DTMB standard as an example. Numerical results illustrate that our proposed approach improves the decoding threshold by 0.05 dB to 0.499 dB depending on the code rate and the order of the QAM modulation. Simulation results show that the actual BER improvement varies from 0.09 dB to 0.6 dB across code-modulation combinations in both single-carrier and OFDM modes.
- Published
- 2011
38. Transmitter Identification With Watermark Signal in DVB-H Single Frequency Network
- Author
-
Ling Na Hu, Feng Yang, Zhe Wang, Lin Gui, and Wenjun Zhang
- Subjects
Signal processing ,Computer science ,Transmitter ,Media Technology ,Electronic engineering ,Single-frequency network ,Watermark ,Detection theory ,Electrical and Electronic Engineering ,Digital watermarking ,Signal ,Data transmission - Abstract
The transmitter identification (TxID) technique is used to diagnose the operating status of radio transmitters in a DTV distributed transmission network. In this paper, a new TxID method for DVB-H SFN (single frequency network) systems is proposed. We embed a signal (e.g., a watermark) in the DVB-H signal to form a composite signal; the embedded signal does not alter the system's spectral efficiency. Using watermarking theory, we derive the embedding level required for the watermark signal to achieve a given bit error probability in different circumstances. Simulation results show that the receiver can detect the watermark signal at low embedding strength even in wireless conditions, and at that embedding strength the BER performance degradation for the receiver is negligible.
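A generic spread-spectrum flavor of watermark-based transmitter identification, low-power sequence embedding followed by correlation detection, can be sketched as follows. This is not the exact DVB-H scheme, and the strength value is illustrative.

```python
import numpy as np

def embed_watermark(host, wm, strength=0.05):
    """Add a low-power watermark sequence to the host signal; a small
    strength keeps the distortion of the broadcast signal negligible."""
    return host + strength * wm

def detect_transmitter(received, candidates):
    """Correlate the received signal against each transmitter's candidate
    watermark sequence and return the index of the best match."""
    scores = [float(received @ w) for w in candidates]
    return int(np.argmax(scores))
```

With pseudo-random bipolar sequences, the correlation of the received signal with the true sequence grows linearly in the sequence length while the host contributes only square-root-of-length noise, so even a weak watermark is reliably identified.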
- Published
- 2009
39. On Channel Estimation Method Using Time Domain Sequences in OFDM Systems
- Author
-
Lin Gui, Wenjun Zhang, Bowei Song, and Bo Liu
- Subjects
Multipath interference ,Computer science ,Estimation theory ,Orthogonal frequency-division multiplexing ,Synchronization ,Delay spread ,Least mean squares filter ,Media Technology ,Electronic engineering ,Time domain ,Electrical and Electronic Engineering ,Algorithm ,Computer Science::Information Theory ,Communication channel - Abstract
In this paper, we propose a channel estimation method using time-domain training sequences. Most recent works use a correlation scheme for channel estimation in OFDM systems with time-domain sequences; however, the performance of this method depends dramatically on the correlation characteristics and suffers from multipath interference. Rather than correlation analysis, we provide a new method based on least mean square (LMS) theory. The selection of parameters and the corresponding schemes for time-varying channels, as well as for multipath channels with long delay spreads, are also discussed. Simulations of the new scheme and comparisons show its superiority over the conventional method.
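The LMS-based alternative to correlation can be sketched as follows, estimating a short FIR multipath channel from a known time-domain training sequence. The step size, tap count, and pass count are illustrative choices, not values from the paper.

```python
import numpy as np

def lms_channel_estimate(tx, rx, num_taps, mu=0.01, passes=20):
    """Estimate a FIR multipath channel with the LMS algorithm.

    tx: known time-domain training symbols.
    rx: received samples (same length, rx[n] = sum_k h[k] * tx[n-k]).
    Returns the estimated channel taps h[0..num_taps-1].
    """
    h = np.zeros(num_taps)
    for _ in range(passes):                       # reuse the training sequence
        for n in range(num_taps - 1, len(tx)):
            x = tx[n - num_taps + 1:n + 1][::-1]  # most recent sample first
            err = rx[n] - h @ x                   # prediction error
            h += mu * err * x                     # LMS weight update
    return h
```

In a noiseless test with a three-tap channel, the estimate converges to the true taps after a few passes over a 256-symbol training sequence.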
- Published
- 2008
40. An Implementation of the Decision Feedback Equalizer Based on Delay-Matched Structure
- Author
-
Yunfeng Guan, Jun Sun, Feng Ju, Wenjun Zhang, and Dazhi He
- Subjects
Computer science ,business.industry ,Stability (learning theory) ,Adaptive equalizer ,Least mean squares filter ,Filter (video) ,Control theory ,Media Technology ,Electronic engineering ,Fading ,Time domain ,Digital television ,Electrical and Electronic Engineering ,business ,Data transmission - Abstract
This paper presents a new equalizer structure named the Delay-Matched Decision-Feedback Equalizer (DMDFE), an effective solution for the hardware implementation of a time-domain adaptive Decision-Feedback Equalizer (DFE) in single-carrier modulated DTV systems. The transposed structure combined with the conventional transversal filter is employed in the DMDFE to reduce the ghost estimation delay of the feedback filter in the DFE. The DMDFE provides a one-symbol-period feedback ghost estimation, which enables the new structure to match the performance of an ideal Least Mean Square (LMS) DFE. The DMDFE can cope with faster fading channels and longer post-echoes than the transposed DFE, while saving more hardware resources.
- Published
- 2008
41. Combined NR Decoding With Decision Feedback Equalizer for Chinese DTTB Receiver
- Author
-
Dazhi He, Weiqiang Liang, Wenjun Zhang, Feng Ju, and Yunfeng Guan
- Subjects
Mean squared error ,Computer science ,Equalization (audio) ,Data_CODINGANDINFORMATIONTHEORY ,Sequential decoding ,Intersymbol interference ,Transmission (telecommunications) ,Single antenna interference cancellation ,Media Technology ,Electronic engineering ,Electrical and Electronic Engineering ,Error detection and correction ,Algorithm ,Decoding methods - Abstract
In this paper, the effect of the Minimum Mean Square Error Decision Feedback Equalizer (MMSE-DFE) combined with decoding, as applied to coded transmission, is studied. When the MMSE-DFE is long, it is more practical to use the decoded value as the decision value. Here, a DFE using the Nordstrom-Robinson (NR) decoding value is proposed. Simulation and test results show that the DFE combined with NR decoding outperforms the traditional DFE. Moreover, this paper presents an approach to cancel the partial residual Inter-Symbol Interference (ISI) caused by decision delay when the DFE is combined with NR decoding.
- Published
- 2008
42. Three Dimensional Scalable Video Adaptation via User-End Perceptual Quality Assessment
- Author
-
Wenjun Zhang, Guangtao Zhai, Weisi Lin, Xiaokang Yang, and Jianfei Cai
- Subjects
Computer science ,Wireless network ,Image quality ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video quality ,Scalable Video Coding ,Video tracking ,Human visual system model ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Adaptation (computer science) ,business ,Wireless sensor network - Abstract
For wireless video streaming, the three-dimensional scalabilities (spatial, temporal, and SNR) provided by the advanced scalable video coding (SVC) technique can be directly utilized to adapt video streams to dynamic wireless network conditions and heterogeneous wireless devices. The question, however, is how to optimally trade off among the three scalability dimensions so as to maximize perceived video quality given the available resources. In this paper, we propose a low-complexity algorithm that executes at the resource-limited user end to quantitatively and perceptually assess video quality under different spatial, temporal, and SNR combinations. Based on these video quality measures, we further propose an efficient adaptation algorithm that dynamically adapts the scalable video to a suitable three-dimensional combination. Experimental results demonstrate the effectiveness of the proposed perceptual video adaptation framework.
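Once per-combination quality estimates are available, the adaptation step reduces to choosing, among the operating points that fit the current bandwidth, the one with the highest estimated perceptual quality. A minimal sketch with hypothetical layer names, bitrates, and quality scores (the paper's actual numbers come from its user-end perceptual assessment, not these constants):

```python
def adapt_scalable_stream(operating_points, rate_budget_kbps):
    """Pick the (spatial, temporal, SNR) combination with the highest
    estimated perceptual quality that fits the bandwidth budget."""
    feasible = [p for p in operating_points
                if p["bitrate_kbps"] <= rate_budget_kbps]
    return max(feasible, key=lambda p: p["quality"]) if feasible else None

if __name__ == "__main__":
    # hypothetical operating points: (spatial, temporal, SNR) layer combos
    points = [
        {"layers": "QCIF/15fps/base", "bitrate_kbps": 128, "quality": 2.9},
        {"layers": "CIF/15fps/base",  "bitrate_kbps": 384, "quality": 3.6},
        {"layers": "CIF/30fps/base",  "bitrate_kbps": 512, "quality": 3.4},
        {"layers": "CIF/30fps/enh",   "bitrate_kbps": 768, "quality": 4.2},
    ]
    print(adapt_scalable_stream(points, rate_budget_kbps=600)["layers"])
```

Note that at a 600 kbps budget the lower-frame-rate point wins over the feasible higher-frame-rate one: the trade-off is decided by the quality model, not by bitrate alone.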
- Published
- 2008
43. Configurable Multi-Rate Decoder Architecture for QC-LDPC Codes Based Broadband Broadcasting System
- Author
-
Lin Gui, Youyun Xu, Wenjun Zhang, and Luoming Zhang
- Subjects
Computer science ,business.industry ,Broadband networks ,Data_CODINGANDINFORMATIONTHEORY ,Code rate ,Soft-decision decoder ,Media Technology ,Electronic engineering ,Algorithm design ,Electrical and Electronic Engineering ,Low-density parity-check code ,business ,Error detection and correction ,Throughput (business) ,Computer hardware ,Decoding methods - Abstract
In this paper we present a base-matrix-based decoder architecture for the multi-rate QC-LDPC codes proposed for a broadband broadcasting system. We use the Modified Min-Sum Algorithm (MMSA) as the decoding algorithm in this architecture, which lowers the complexity of the LDPC decoder while maintaining the same or even better performance. Based on this algorithm, we design a novel check-node processing unit that reduces the complexity of the decoder and facilitates multiplexing of the processing units. The decoder, designed under hardware constraints, is not only scalable in throughput but also easily configurable to support QC-LDPC codes of different code rates and code lengths.
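The check-node side of min-sum decoding is what removes the costly nonlinear (tanh-domain) computations of sum-product decoding, replacing them with sign and minimum operations. A sketch of one normalized check-node update; the normalization factor 0.8 is a typical illustrative choice, not necessarily the paper's MMSA parameter:

```python
def minsum_check_update(llrs, alpha=0.8):
    """Normalized min-sum update for one check node.

    llrs: incoming variable-to-check LLRs on the check's edges.
    Returns the outgoing check-to-variable LLRs: for each edge, the
    product of the signs and the minimum magnitude over all *other*
    edges, scaled by the normalization factor alpha.
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(v) for v in others))
    return out

if __name__ == "__main__":
    print(minsum_check_update([2.0, -1.0, 3.0]))
```

In hardware, only two running minima and a sign accumulator are needed per check node, which is what makes a compact, multiplexable check-node processing unit feasible.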
- Published
- 2008
44. A Robust and Adaptive Carrier Recovery Method for Chinese DTTB Receiver
- Author
-
Yunfeng Guan, Jing Chai, Wenjun Zhang, Dazhi He, and Weiqiang Liang
- Subjects
Pilot signal ,Engineering ,business.industry ,Real-time computing ,Pseudorandom noise ,Robustness (computer science) ,Digital Video Broadcasting ,Media Technology ,Electronic engineering ,Digital television ,Electrical and Electronic Engineering ,Carrier recovery ,business ,Data transmission ,Jitter - Abstract
This paper presents a robust and adaptive carrier recovery method for the Chinese digital terrestrial television broadcasting (DTTB) system, in which a pilot signal and a pseudonoise (PN) sequence are adopted to aid carrier recovery. Conventional methods use either the pilot or the PN sequence alone. In this paper, we combine the advantages of both and propose a well-designed state machine to control the system state automatically. Moreover, for the PN sequence, a fine PN tracking state is introduced to ensure the robustness of the proposed method. Software simulations show that the proposed method provides a large acquisition range, short acquisition time, and small tracking jitter in severely distorted static and dynamic channels. Lab tests and field trials also confirm its good performance in real propagation environments.
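The automatic state control can be pictured as a small state machine that switches between wide-range acquisition and fine tracking based on a lock metric. The state names and thresholds below are hypothetical illustrations of that acquisition/tracking split, not the paper's exact design:

```python
def carrier_state_step(state, lock_metric, acq_th=0.9, lose_th=0.5):
    """One transition of a simplified carrier-recovery state machine.

    ACQUISITION: wide-range frequency search until the lock metric
    exceeds the acquisition threshold.
    FINE_PN_TRACKING: narrow-range tracking; falls back to acquisition
    if lock is lost (metric drops below the lose threshold).
    """
    if state == "ACQUISITION":
        return "FINE_PN_TRACKING" if lock_metric >= acq_th else "ACQUISITION"
    if state == "FINE_PN_TRACKING":
        return "ACQUISITION" if lock_metric < lose_th else "FINE_PN_TRACKING"
    raise ValueError("unknown state: " + state)

if __name__ == "__main__":
    state = "ACQUISITION"
    for m in [0.3, 0.95, 0.8, 0.2]:      # sample lock-metric trajectory
        state = carrier_state_step(state, m)
    print(state)
```

The hysteresis between the two thresholds (0.9 to enter tracking, 0.5 to leave it) is what keeps the receiver from oscillating between states on a noisy lock metric.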
- Published
- 2008
45. An Introduction of the Chinese DTTB Standard and Analysis of the PN595 Working Modes
- Author
-
Jun Sun, Feng Ju, Wenjun Zhang, Yunfeng Guan, Dazhi He, and Weiqiang Liang
- Subjects
High-definition television ,business.industry ,Computer science ,Standard-definition television ,Single-frequency network ,Broadcasting ,Digital terrestrial television ,Frequency allocation ,Digital Video Broadcasting ,Media Technology ,Electronic engineering ,Digital television ,Electrical and Electronic Engineering ,business - Abstract
A digital terrestrial television broadcasting (DTTB) standard named "Frame structure, channel coding and modulation for digital television terrestrial broadcasting system" was published in China in August 2006. This is the first paper in a series providing a complete and in-depth description of the standard, including laboratory and field measurement results; detailed analysis of the technologies for achieving stable fixed reception and fast mobile reception; methodologies for spectrum allocation; and principles and technologies for single frequency network operation. Among the hundreds of operation modes supporting multi-program SDTV/HDTV terrestrial services in the standard, the key features and major applications of a series of modes named "PN595 + C1" are described in detail. Measurement results for PN595 + C1 are also presented to demonstrate its satisfactory performance in fixed reception and high-speed mobile reception.
- Published
- 2007
46. Comb Type Pilot Aided Channel Estimation in OFDM Systems With Transmit Diversity
- Author
-
Wenjun Zhang, Lin Gui, and Bowei Song
- Subjects
Engineering ,business.industry ,Orthogonal frequency-division multiplexing ,Spectral efficiency ,Transmit diversity ,Frequency domain ,Media Technology ,Electronic engineering ,Time domain ,Electrical and Electronic Engineering ,business ,Space–time code ,Communication channel ,Block (data storage) - Abstract
Transmit diversity can be applied to OFDM systems by adopting space-time codes. Since the received signal is the superposition of signals transmitted from different transmit antennas, channel estimation is a challenging task for space-time coded OFDM (ST-OFDM) systems. A suitable pilot structure can help the receiver effectively separate the overlapped signals and perform accurate channel estimation. In this paper, we propose three different channel estimation algorithms based on specially designed comb-type pilots inserted in the frequency domain. One of the proposed algorithms operates in the frequency domain and the other two operate in the time domain. Such comb-type-pilot-based algorithms provide higher bandwidth efficiency than the common significant-tap-catching algorithm using training-block pilots. Numerical analyses and simulations show that the proposed estimation schemes achieve the same good performance, while the time-domain methods have a relatively simple structure.
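The frequency-domain flavour of comb-pilot estimation can be sketched as a least-squares estimate at the pilot subcarriers followed by interpolation across the data subcarriers. This is a single-antenna simplification; the ST-OFDM case in the paper must additionally separate the overlapped antenna signals, which the specially designed pilot pattern enables:

```python
def comb_pilot_estimate(rx, tx_pilots, pilot_idx, n_sc):
    """Frequency-domain channel estimate from comb-type pilots.

    rx: received frequency-domain symbols (length n_sc)
    tx_pilots / pilot_idx: known pilot values and their subcarrier indices
    LS estimate H = Y/X at each pilot, then linear interpolation between
    adjacent pilots to fill in the data subcarriers.
    """
    h_p = [rx[i] / p for i, p in zip(pilot_idx, tx_pilots)]   # LS at pilots
    h = [0j] * n_sc
    for (i0, h0), (i1, h1) in zip(zip(pilot_idx, h_p),
                                  zip(pilot_idx[1:], h_p[1:])):
        for k in range(i0, i1 + 1):                           # fill the gap
            t = (k - i0) / (i1 - i0)
            h[k] = (1 - t) * h0 + t * h1
    return h

if __name__ == "__main__":
    # toy flat channel H = 2 everywhere, pilots on every 4th subcarrier
    rx = [2.0 + 0j] * 9
    h = comb_pilot_estimate(rx, tx_pilots=[1, 1, 1], pilot_idx=[0, 4, 8], n_sc=9)
    print(all(abs(v - 2.0) < 1e-9 for v in h))
```

Because pilots occupy only a subset of subcarriers in every OFDM symbol, this structure spends less bandwidth on training than dedicated training blocks, which is the efficiency gain the abstract refers to.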
- Published
- 2006
47. Rate-Distortion Optimized Unequal Loss Protection for FGS Compressed Video
- Author
-
Wenjun Zhang, Lianji Cheng, and Li Chen
- Subjects
Network packet ,Computer science ,Quality of service ,Real-time computing ,Signal compression ,Data_CODINGANDINFORMATIONTHEORY ,computer.software_genre ,Videoconferencing ,Packet switching ,Rate–distortion optimization ,Packet loss ,Media Technology ,Electrical and Electronic Engineering ,computer ,Data compression - Abstract
Video communication with quality of service (QoS) is an important and challenging task. The transmitted video stream must tolerate bandwidth variation and unavoidable packet loss in the Internet. In particular, fine-granular-scalability (FGS) video coding has been adopted by the MPEG-4 standard as the core video-compression method for streaming applications. From its inception, the FGS scalability structure was designed to be resilient to packet loss, especially under unequal loss protection (ULP). In this paper, we use ULP to protect FGS compressed video and, under the constraint of the network bandwidth, perform joint source-channel rate-distortion-optimized bit allocation to minimize the end-to-end distortion. Simulation results demonstrate the effectiveness of our approach.
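Rate-distortion-optimized ULP allocation can be illustrated by a greedy marginal-gain loop: each unit of parity budget goes to the layer where it currently buys the largest distortion reduction, so the base layer naturally ends up more strongly protected than the enhancement layers. The gain numbers and the diminishing-returns model below are hypothetical; the paper derives the actual gains from joint source-channel R-D curves:

```python
def allocate_ulp(layer_gains, budget_units):
    """Greedy ULP allocation of parity units across scalability layers.

    layer_gains: {layer_name: initial distortion reduction per parity unit}
    Diminishing returns are modelled by halving a layer's marginal gain
    for each unit it has already received (a hypothetical model).
    """
    alloc = {name: 0 for name in layer_gains}
    for _ in range(budget_units):
        best = max(layer_gains,
                   key=lambda n: layer_gains[n] / (2 ** alloc[n]))
        alloc[best] += 1
    return alloc

if __name__ == "__main__":
    # base layer loss hurts most, so it starts with the largest gain
    print(allocate_ulp({"base": 8.0, "enh1": 3.0}, budget_units=3))
```

With these toy numbers the base layer receives two of the three parity units before the enhancement layer gets one, which is the "unequal" in unequal loss protection.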
- Published
- 2004
48. Obtaining Diversity Gain for DTV by Using MIMO Structure in SFN
- Author
-
Lin Gui, Yantao Qiao, Lijun Zhang, and Wenjun Zhang
- Subjects
Orthogonal frequency-division multiplexing ,Computer science ,business.industry ,Transmitter ,MIMO ,Single-frequency network ,Broadcasting ,Antenna diversity ,Diversity gain ,Media Technology ,Electronic engineering ,Electrical and Electronic Engineering ,business ,Multipath propagation ,Computer Science::Information Theory - Abstract
In digital TV broadcasting, the single frequency network (SFN) scheme has nontrivial advantages. By forming an SFN, a broadcasting system can serve an arbitrarily large area with the same program within the same frequency block. At the same time, the SFN structure offers the receiver the potential of space diversity gain without increasing the power of any single transmitter. However, the area covered by SFN broadcasting suffers from heavy artificial multipath propagation. Traditionally, a transversal equalizer is used at the receiver to remove the SFN interference, but it often fails to converge properly because of the excessively long delays and large magnitudes of the different paths from each transmitter in the SFN. To solve this problem, a new model based on the MIMO structure of the SFN is proposed in this paper, in which the signal's spatial information is exploited. Based on this model, a new receiving scheme is derived. Using a beamformer, signals with different incident angles are separated, so the problem caused by the long delays and large magnitudes is avoided. A bank of parallel sub-filters removes the residual multipath spread, and the space diversity gain is obtained at the output of a combiner.
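The angular-separation step can be pictured with a textbook delay-and-sum beamformer on a uniform linear array: phase-align the elements toward one incident angle and average, so the path from that direction passes while others are attenuated. This is only a sketch of the separation idea, not the paper's full receiver (which adds per-angle sub-filter banks and a diversity combiner):

```python
import cmath
import math

def steering_vector(angle_deg, n_elems, spacing=0.5):
    """Uniform-linear-array steering vector (element spacing in wavelengths)."""
    a = math.radians(angle_deg)
    return [cmath.exp(-2j * math.pi * spacing * k * math.sin(a))
            for k in range(n_elems)]

def beamform(snapshots, angle_deg):
    """Delay-and-sum beamformer: phase-align the array elements toward
    angle_deg and average, passing the path from that incident angle."""
    n = len(snapshots[0])
    w = steering_vector(angle_deg, n)
    return [sum(x * wk.conjugate() for x, wk in zip(snap, w)) / n
            for snap in snapshots]

if __name__ == "__main__":
    # one snapshot of a 4-element array receiving symbol s from 30 degrees
    s = -1.0 + 0j
    snap = [s * v for v in steering_vector(30.0, 4)]
    out = beamform([snap], 30.0)
    print(abs(out[0] - s) < 1e-9)
```

After this spatial split, each output stream contains only the multipath spread of one transmitter's path cluster, which is short enough for an ordinary equalizer to handle.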
- Published
- 2004
49. Temporal Compensated Motion Estimation With Simple Block-Based Prediction
- Author
-
Wenjun Zhang, Jisheng Wang, and Dong Wang
- Subjects
Motion compensation ,Computational complexity theory ,Computer science ,business.industry ,Video processing ,Quarter-pixel motion ,Motion field ,Motion estimation ,Convergence (routing) ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Block (data storage) - Abstract
Motion compensated video format conversion (MC-VFC) needs true motion vectors, which exhibit such features as spatial consistency and temporal continuity. Based on these features, a temporally compensated motion estimation algorithm with simple block-based prediction (TC-SBP) is presented. By exploiting spatio-temporal correlation, the simple block-based prediction needs only three neighboring candidate vectors for each block, so its computational complexity is greatly reduced. At the same time, three temporally compensated updating candidates are used to efficiently accelerate the convergence of the algorithm toward the global motion field. Measured by criteria relevant to MC-VFC and video processing applications, the new TC-SBP algorithm is shown to outperform alternative algorithms.
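The core of any candidate-based estimator is evaluating a handful of candidate vectors per block and keeping the lowest-cost one, rather than searching a full window. A minimal SAD-based sketch (TC-SBP's actual candidate set is three spatial neighbours plus three temporally compensated updates; here the set is left to the caller):

```python
def block_sad(prev, cur, bx, by, mv, bsize):
    """Sum of absolute differences between a block of `cur` and the
    motion-shifted block of `prev` (frames as 2-D lists of pixels)."""
    dx, dy = mv
    total = 0
    for y in range(by, by + bsize):
        for x in range(bx, bx + bsize):
            py, px = y + dy, x + dx
            if 0 <= py < len(prev) and 0 <= px < len(prev[0]):
                total += abs(cur[y][x] - prev[py][px])
            else:
                total += 255            # penalize out-of-frame references
    return total

def best_candidate(prev, cur, bx, by, candidates, bsize=4):
    """Evaluate only the supplied candidate vectors for one block and
    keep the lowest-SAD one: cost grows with the candidate count, not
    with the size of a search window."""
    return min(candidates, key=lambda mv: block_sad(prev, cur, bx, by, mv, bsize))

if __name__ == "__main__":
    prev = [[x + 8 * y for x in range(8)] for y in range(8)]
    # current frame = previous frame shifted by motion (dx, dy) = (1, 1)
    cur = [[prev[y + 1][x + 1] if y < 7 and x < 7 else 0 for x in range(8)]
           for y in range(8)]
    print(best_candidate(prev, cur, 2, 2, [(0, 0), (1, 1), (2, 0)]))
```

Because the candidates are inherited from neighbouring and temporally compensated blocks, the winning vectors tend to form the smooth, consistent motion field that format conversion needs, rather than the minimum-residual field a full search would produce.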
- Published
- 2003
50. Implementation of HDTV PES Combiner Based on Horizontal Six-Block Segmentation
- Author
-
Wenjun Zhang, Feng Wang, and Songyu Yu
- Subjects
High-definition television ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Constant bitrate ,computer.file_format ,Complex programmable logic device ,Programmable logic array ,Packetized elementary stream ,Programmable logic device ,MPEG-2 ,Media Technology ,Electronic engineering ,Electrical and Electronic Engineering ,business ,computer ,Encoder ,Computer hardware - Abstract
We present the hardware design of the packetized elementary stream (PES) combiner in the third-generation HDTV encoder in China, a key part of the HDTV encoding system. In our design, the input HDTV video signal is divided into six sub-images using a horizontal six-block segmentation method. Each sub-image is coded by one MP@ML encoding ASIC, each working at a different bitrate. The PES combiner merges all the output bit streams into one HDTV PES, with the coding parameters and timing stamps modified according to the requirements of MPEG-2 MP@HL. All PES-combiner functions are implemented in a single complex programmable logic device (CPLD), which makes the whole encoding system compact and stable. Detailed discussion of the hardware design is also presented. Experimental results show that the decoded image quality is improved over the previous two generations of encoders while a constant bitrate is maintained.
- Published
- 2003