9 results for "Sixin Lin"
Search Results
2. A blockchain-based collaborative training method for multi-party data sharing
- Author
-
Zhiqiang Cao, Zhe Sun, Sixin Lin, Lihua Yin, and Jiyuan Feng
- Subjects
Service (systems architecture), Blockchain, Computer Networks and Communications, Computer science, Aggregate (data warehouse), Networking and telecommunications, Encryption, News aggregator, Data sharing, Upload, Function (engineering), Computer network - Abstract
In recent years, construction of the Space–Ground Integrated Network has accelerated, connecting different types of networks in remote regions. As the various devices are connected, data that were previously difficult to exchange can now be used to jointly train models, giving birth to new service models. Privacy, however, remains a substantial concern affecting data sharing among multiple parties. Cooperative training methods such as federated learning usually require a centralized aggregator to aggregate the dispersed sub-models. In general, existing privacy-preserving methods assume the aggregator to be honest-but-curious (HBC) and cannot guarantee that the protocol is executed correctly. In this paper, we propose a blockchain-based collaborative training method that uses the decentralized accounting technology of the blockchain to solve the trust problem between participants. The non-repudiation property of the blockchain ensures that model aggregation is executed correctly. We design a function encryption-based privacy-preserving method in which the aggregator can obtain only the result of the aggregated model and cannot access the models uploaded to the blockchain by other participants. We then develop a blockchain-based prototype system to analyze and evaluate the time consumption of the proposed cooperative training method and the function encryption module. The experimental results show the feasibility of our cooperative training model.
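The aggregation guarantee described in the abstract (the aggregator learns only the combined model, never any individual upload) can be illustrated with a toy pairwise-masking simulation. This is a hedged sketch of the *property*, not the paper's function-encryption scheme, and all function names here are hypothetical:

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    # Masks satisfy masks[i, j] == -masks[j, i], so they cancel in the sum.
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i, j] = m
            masks[j, i] = -m
    return masks

def upload(update, client_id, masks):
    # Each client hides its model update before publishing it.
    return update + masks[client_id].sum(axis=0)

def aggregate(ciphertexts):
    # The aggregator sums masked uploads; the pairwise masks cancel,
    # so it recovers only the aggregate, not any single update.
    return np.sum(ciphertexts, axis=0)

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masks = pairwise_masks(3, 2)
masked = [upload(u, i, masks) for i, u in enumerate(updates)]
agg = aggregate(masked)  # equals the plain sum [9.0, 12.0]
```

In a real deployment the masks would be derived from key agreement (or replaced by an inner-product functional-encryption scheme, as the paper proposes), not shared in the clear.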
- Published
- 2021
3. NTIRE 2021 Challenge on Quality Enhancement of Compressed Video: Methods and Results
- Author
-
Ming Lu, Qi Zhang, Minyi Zhao, Jiang Li, Jing Liu, Pavel Ostyakov, Zhan Ma, Fu Li, Xiaopeng Sun, Pablo Navarrete Michelini, Fanglong Liu, Kelvin C.K. Chan, Yuanzhi Zhang, Wenming Yang, Junlin Li, Wei Gao, Dongliang He, Fenglong Song, Yan Liu, Yucong Wang, Chen Change Loy, Lyn Jiang, Gen Zhan, Mengxi Guo, Yiyun Chen, Shangchen Zhou, Minjie Cai, Liangyan Li, Di You, Qing Wang, Chong Mou, Dewang Hou, Qingqing Dang, Ren Yang, Jiayu Yang, Wang Liu, Xiangyu Xu, Jingyu Guo, Hai Wang, Sixin Lin, Zhongqian Fu, Xiaoyu Zhang, Cheng Li, Ru Wang, Liliang Zhang, Yiting Liao, Zhenyu Zhang, Jia Hao, Shuai Xiao, Li Xin, Thomas Tanay, Yibin Huang, Xiaochao Qu, He Zheng, Iaroslav Koshelev, Yi Xu, Jun Chen, Wei Hao, Andrey Somov, Qiang Guo, Xinjian Zhang, Radu Timofte, Kangdi Shi, Sijung Kim, Syehoon Oh, Linjie Zhou, Matteo Maggioni, Shijie Zhao, Wentao Chao, Shuigeng Zhou, Xueyi Zou, and Lielin Jiang
- Subjects
FOS: Computer and information sciences, Quality assessment, Computer science, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Fidelity, Video quality, Quality enhancement, Pattern recognition, FOS: Electrical engineering, electronic engineering, information engineering, Computer vision, Artificial intelligence - Abstract
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results. The challenge employs the new Large-scale Diverse Video (LDV) dataset and has three tracks. Tracks 1 and 2 aim at enhancing videos compressed by HEVC at a fixed QP, while Track 3 targets videos compressed by x265 at a fixed bit-rate. In addition, Tracks 1 and 3 target fidelity (PSNR) improvement, while Track 2 targets perceptual quality. The three tracks attracted a total of 482 registrations. In the test phase, 12, 8, and 11 teams submitted final results for Tracks 1, 2, and 3, respectively. The proposed methods and solutions gauge the state of the art of video quality enhancement. The homepage of the challenge: https://github.com/RenYang-home/NTIRE21_VEnh
- Published
- 2021
- Full Text
- View/download PDF
4. Learning a No Reference Quality Assessment Metric for Encoded 4K-UHD Video
- Author
-
Rong Xie, Li Song, Sixin Lin, Jingwen Xu, Yaqing Li, and Yu Dong
- Subjects
Computer science, Frame (networking), Optical flow, Video quality, Transmission (telecommunications), Distortion, Metric (mathematics), Computer vision, Quality (business), Artificial intelligence, Coding - Abstract
4K-UHD videos have become popular because they significantly improve the user's visual experience. As video coding, transmission, and enhancement technologies develop rapidly, existing quality assessment metrics are unsuitable for the 4K-UHD scenario because of the expanded resolution and the lack of training data. In this paper, we present a no-reference video quality assessment model that achieves high performance in the 4K-UHD scenario by simulating the full-reference metric VMAF. Our approach extracts deep spatial features and optical-flow-based temporal features from cropped frame patches. The overall score for a video clip is obtained as a weighted average of the patch results to fully reflect the content of high-resolution video frames. The model is trained on an automatically generated, HEVC-encoded 4K-UHD dataset labeled by VMAF. The dataset-construction strategy can easily be extended to other scenarios, such as HD resolution and other distortion types, by modifying the dataset and adjusting the network. In the absence of a reference video, our proposed model achieves considerable accuracy against VMAF labels and high correlation with human ratings, as well as relatively fast processing speed.
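The patch-to-clip pooling step described in the abstract can be sketched as a simple weighted average. The weighting scheme and the function name below are illustrative assumptions; the abstract does not specify the exact weights:

```python
import numpy as np

def clip_score(patch_scores, patch_weights):
    """Pool per-patch quality scores into one clip-level score.

    patch_scores : per-patch outputs of the (hypothetical) quality network
    patch_weights: content/saliency weights per patch; uniform weights
                   reduce this to a plain mean.
    """
    s = np.asarray(patch_scores, dtype=float)
    w = np.asarray(patch_weights, dtype=float)
    return float(np.sum(w * s) / np.sum(w))

# e.g. three patches from one frame, the last patch weighted double
score = clip_score([78.0, 82.0, 90.0], [1.0, 1.0, 2.0])  # -> 85.0
```

Since the model is trained against VMAF labels, `score` here would live on the same 0-100 scale as VMAF.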
- Published
- 2021
5. Efficient degradation of H2S over transition metal modified TiO2 under VUV irradiation: Performance and mechanism
- Author
-
Peng Hu, Sixin Lin, Gaoyuan Liu, Haibao Huang, and Jian Ji
- Subjects
Ozone, Materials science, Photodissociation, General Physics and Astronomy, Surfaces and Interfaces, General Chemistry, Condensed Matter Physics, Photochemistry, Surfaces, Coatings and Films, Catalysis, Catalytic oxidation, Transition metal, Photocatalysis, Degradation (geology), Irradiation - Abstract
Odor pollution causes great harm to the atmospheric environment and human health. H2S, an odorous gas, is highly toxic and corrosive and thus requires efficient removal. In this study, TiO2 catalysts modified by transition metals, including Mn, Cu, Ni, and Co, were prepared using a modified sol–gel method and tested in UV-PCO and VUV-PCO processes. H2S degradation was greatly enhanced in VUV-PCO compared with conventional UV-PCO. Among the catalysts, 1 wt% Mn-TiO2 showed the highest removal efficiency, 89.9%, which is 30 times higher than that under 254 nm UV irradiation. Residual ozone at the outlet can be completely eliminated by Mn-TiO2. Photocatalytic oxidation, photolysis, and ozone-assisted catalytic oxidation are all involved in the VUV-PCO process, and their contributions were determined from the H2S removal efficiency.
- Published
- 2018
6. Microstructure and Properties of HVOF Spraying WC-10Co-4Cr Coatings
- Author
-
Zhigang Li, Xibin Wu, Luoxing Li, Qun Wang, and Sixin Lin
- Subjects
Kerosene, Materials science, Metallurgy, Abrasive, General Medicine, Microstructure, Coating, Fracture toughness, Hardness, Hardening (metallurgy), HVOF, Tempering, Composite material, Thermal spraying, Abrasive wear, Engineering (all) - Abstract
Three WC-10Co-4Cr coatings were deposited with various spraying parameters (6 and 6.5 GPH kerosene flux, 1850 and 1950 SCFH oxygen flux, and 330 and 380 mm spray distance) using a Praxair JP-8000 HVOF thermal spray system. Coating properties such as hardness, crystalline phase, fracture toughness, thickness per pass, and abrasive wear resistance were investigated. The results indicate that kerosene flux, oxygen flux, and spray distance have remarkable effects on coating microstructure and performance, but only a slight effect on the phase composition of the coatings. The hardness of the WC-10Co-4Cr coatings increases with kerosene flux and oxygen flux. Both the wear loss and the fracture toughness decrease with increasing coating hardness. The WC-10Co-4Cr coatings also exhibit excellent wear resistance compared to hardened and tempered 45 steel.
- Published
- 2012
- Full Text
- View/download PDF
7. Frame layer rate control for H.264/AVC with hierarchical B-frames
- Author
-
Fuzheng Yang, Shuai Wan, Ming Li, Lianhuan Xiong, Yilin Chang, and Sixin Lin
- Subjects
Computer science, Quantization (signal processing), Real-time computing, Code rate, Coding gain, Video compression picture types, Signal Processing, Computer Vision and Pattern Recognition, Electrical and Electronic Engineering, Algorithm, Software, Harmonic Vector Excitation Coding, Context-adaptive binary arithmetic coding, Group of pictures, Context-adaptive variable-length coding - Abstract
Hierarchical B-frames contribute to improvement of coding performance when introduced into H.264/AVC. However, the existing rate control schemes for H.264/AVC, which are mainly applied to IPPP and IBBP coding structures, cannot work efficiently for the coding structures with hierarchical B-frames. In this paper, a frame layer rate control scheme for hierarchical B-frames is proposed. Firstly, an adaptive starting quantization parameter (QP) determination method is implemented to derive the QP for the first coding frame based on the available channel bit rate and the content of the current video sequence. Then, the target bit budget for a group of pictures (GOP) is calculated based on the target bit rate and the buffer status. Afterwards, a temporal level (TL) layer rate control phase is introduced, and the GOP layer target bit budget is allocated to each TL. In the frame layer rate control phase, a method based on a rate-distortion model and the coding properties of the previous coded key frames is derived to determine the QP for the current key frame. For hierarchical B-frames, we introduce a typical weighting factor in the determination of their target bit budgets to address the features of the hierarchical coding structures. This weighting factor is calculated according to the target bit budget of the GOP layer and the knowledge obtained from the previous coded B-frames in each TL. Subsequently, the QP for coding the current B-frame is computed by a quadratic model with different model parameters for different TLs, and the computed QP is further adaptively adjusted according to the usage of the target bit budgets. After coding the current frame, an update stage, in which a threshold-based method is integrated to avoid model degradation, is invoked to update the parameters for rate control. 
Experimental results demonstrate that when the proposed rate control scheme is applied to the coding structure with hierarchical B-frames in H.264/AVC, the actual coding bit rates can match the target bit rates very well, and the encoding performance is also improved.
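The quadratic model used above to compute the B-frame QP is, in the classic frame-layer rate-control formulation, R = c1·MAD/Qstep + c2·MAD/Qstep². A minimal sketch of solving that model for the quantization step, with illustrative model constants (`c1`, `c2` are fitted per temporal level in the paper's scheme; the values and names here are assumptions):

```python
import math

def qstep_from_quadratic_model(target_bits, mad, c1, c2):
    # Classic quadratic R-Q model: R = c1*MAD/Qs + c2*MAD/Qs^2.
    # Rearranged: -R*Qs^2 + c1*MAD*Qs + c2*MAD = 0; take the positive root.
    a, b, c = -target_bits, c1 * mad, c2 * mad
    disc = b * b - 4.0 * a * c
    return (-b - math.sqrt(disc)) / (2.0 * a)

def qp_from_qstep(qstep):
    # H.264/AVC mapping: Qstep doubles every 6 QP steps, with Qstep(QP=4) = 1.0.
    return round(4 + 6 * math.log2(qstep))

qs = qstep_from_quadratic_model(target_bits=1000.0, mad=10.0, c1=50.0, c2=100.0)
qp = qp_from_qstep(qs)  # QP for coding the current B-frame, before clipping
```

The scheme described in the abstract would then adaptively adjust this QP according to the usage of the target bit budget and update `c1`, `c2` after coding the frame.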
- Published
- 2009
8. Affine SKIP and MERGE modes for video coding
- Author
-
Sixin Lin, Fan Liang, and Huanbang Chen
- Subjects
Motion compensation, Computer science, Coding tree unit, Quarter-pixel motion, Motion estimation, Computer vision, Artificial intelligence, Affine transformation, Multiview Video Coding, Algorithm, Context-adaptive binary arithmetic coding, Context-adaptive variable-length coding - Abstract
The latest video coding standard, HEVC, improves coding efficiency by about 40% over H.264/AVC, but it still follows the hybrid coding structure with a translational motion compensation scheme. In this paper, an affine motion representation is described first. Algorithms to improve motion vector coding for the proposed Affine SKIP/MERGE modes are then presented. The experimental results show that these algorithms achieve on average 3.5%, 2.2%, and 1.4% BD-rate gains for the low delay P, low delay B, and random access configurations, respectively, compared to HM 12.0. The BD-rate saving is up to 17.1% for sequences with complex motion.
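The affine motion representation can be sketched with the common simplified four-parameter model, where the motion vectors of two control points (top-left and top-right corners of a block) define a per-pixel motion field capturing rotation and zoom as well as translation. This is a generic illustration of the model family, not necessarily the exact derivation used in the paper:

```python
def affine_mv(x, y, w, mv0, mv1):
    """Per-pixel motion vector under the simplified 4-parameter affine model.

    mv0, mv1: motion vectors (mvx, mvy) of the top-left and top-right
    control points of a block of width w; (x, y) is the pixel position
    relative to the top-left corner.
    """
    ax = (mv1[0] - mv0[0]) / w
    ay = (mv1[1] - mv0[1]) / w
    # The same two parameters (ax, ay) encode rotation and zoom.
    mvx = mv0[0] + ax * x - ay * y
    mvy = mv0[1] + ay * x + ax * y
    return (mvx, mvy)

# At the control points the field reproduces the control-point MVs:
# affine_mv(0, 0, ...) -> mv0, affine_mv(w, 0, ...) -> mv1.
```

An affine SKIP/MERGE mode in this spirit would inherit the control-point motion vectors from neighboring blocks instead of signaling them, which is where the motion vector coding savings come from.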
- Published
- 2015
9. A Flexible Reference Picture Selection Method for Spatial DIRECT Mode in Multiview Video Coding
- Author
-
Yilin Chang, Yanzhuo Ma, Sixin Lin, and Junyan Huo
- Subjects
Motion compensation, Computer science, Algorithmic efficiency, Encoding (memory), Macroblock, Direct mode, Computer vision, Visual communication, Artificial intelligence, Multiview Video Coding, Quantization (image processing) - Abstract
The motion information of a macroblock signaled as spatial DIRECT mode can be derived from its neighboring blocks: the closest reference picture in the reference picture list used by the neighboring blocks is selected as the reference picture of the macroblock. In MVC, both temporal and inter-view reference pictures are put into the reference picture list to improve coding efficiency. This paper proposes a flexible reference picture selection method that takes the diversity of the reference picture list into account. The experimental results show that the proposed method improves coding efficiency, especially when the inter-view reference pictures are placed ahead of the temporal reference pictures.
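The baseline derivation and the flexible idea can be sketched as follows. The baseline follows H.264/AVC spatial DIRECT reference-index derivation (minimum index among neighbors); the "flexible" rule shown is an illustrative reconstruction of taking list diversity into account, not the paper's exact criterion:

```python
def baseline_direct_ref(neighbor_refs):
    # H.264/AVC spatial DIRECT derivation: use the minimum reference index
    # among the available neighboring blocks (None = neighbor unavailable).
    valid = [r for r in neighbor_refs if r is not None]
    return min(valid) if valid else 0

def flexible_direct_ref(neighbor_refs, is_inter_view, poc_dist):
    # Illustrative flexible rule: among the neighbors' reference indices,
    # prefer the temporally closest same-view picture, falling back to an
    # inter-view picture only when no temporal candidate exists. This keeps
    # an inter-view picture placed at index 0 from always being chosen.
    valid = sorted({r for r in neighbor_refs if r is not None})
    if not valid:
        return 0
    temporal = [r for r in valid if not is_inter_view[r]]
    if temporal:
        return min(temporal, key=lambda r: poc_dist[r])
    return valid[0]
```

With an MVC-style list where index 0 is an inter-view picture, the baseline always picks index 0, while the flexible rule can still select a nearby temporal reference.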
- Published
- 2008
Discovery Service for Jio Institute Digital Library