9 results for "Zhong, Bineng"
Search Results
2. Robust feature learning for online discriminative tracking without large-scale pre-training.
- Author
- Zhang, Jun, Zhong, Bineng, Wang, Pengfei, Wang, Cheng, and Du, Jixiang
- Published
- 2018
3. Teacher-student knowledge distillation for real-time correlation tracking.
- Author
- Chen, Qihuang, Zhong, Bineng, Liang, Qihua, Deng, Qingyong, and Li, Xianxian
- Subjects
- *CONVOLUTIONAL neural networks, *MACHINE learning, *ARTIFICIAL satellite tracking, *TRANSFER students, *FEATURE extraction
- Abstract
The performance of correlation filter (CF) based visual trackers has been greatly improved with pretrained deep convolutional neural networks. However, these networks limit the application scope of CF based trackers because of their high feature dimensionality, the high time cost of feature extraction, and their huge memory footprint. To alleviate this problem, we introduce a teacher-student knowledge distillation framework to obtain a lightweight network that speeds up CF based trackers. Specifically, we take a deep convolutional neural network pretrained on the image classification task as a teacher network, and distill this teacher network into a lightweight student network. During the offline distillation training process, we propose an attention transfer loss to ensure that the lightweight student network maintains the feature representation of the large-capacity teacher network. Meanwhile, we propose a correlation tracking loss to transfer the student network from the image classification task to the correlation tracking task, which improves the discriminative ability of the student network. Experiments on OTB, VOT2017 and Temple Color show that, using the learned lightweight network as the feature extractor, a state-of-the-art CF based tracker achieves real-time speed on a single CPU while maintaining almost the same tracking performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
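The attention transfer loss mentioned in the abstract above is not spelled out in this record. Below is a minimal sketch of the general attention-transfer idea, assuming channel-summed, normalized activation maps and plain NumPy arrays; the function names, shapes, and exact loss form are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of an attention-transfer-style loss between a teacher
# and a student feature map. Shapes and names are assumptions, not the
# authors' code.
import numpy as np

def attention_map(features):
    """Collapse a (C, H, W) feature map into a spatial attention map by
    summing squared activations over channels, then L2-normalizing."""
    amap = np.sum(features ** 2, axis=0)          # (H, W)
    return amap / (np.linalg.norm(amap) + 1e-12)

def attention_transfer_loss(teacher_feats, student_feats):
    """Squared L2 distance between normalized teacher and student attention maps."""
    t_map = attention_map(teacher_feats)
    s_map = attention_map(student_feats)
    return float(np.sum((t_map - s_map) ** 2))

# Toy usage: random feature maps standing in for network activations.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((256, 28, 28))
student = rng.standard_normal((64, 28, 28))
print(attention_transfer_loss(teacher, student))
```

In a full distillation setup, a term of this kind would be combined with the correlation tracking loss described in the abstract and minimized over the student network's parameters.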
4. Kernel correlation filters for visual tracking with adaptive fusion of heterogeneous cues.
- Author
- Bai, Bing, Zhong, Bineng, Ouyang, Gu, Wang, Pengfei, Liu, Xin, Chen, Ziyi, and Wang, Cheng
- Subjects
- *DATABASES, *DESIGN templates, *ACCURACY, *KERNEL (Mathematics), *STATISTICAL correlation
- Abstract
Although correlation filter-based trackers have achieved competitive results in both accuracy and robustness, their performance can still be improved, because most existing trackers use either a fixed scale or a single filtering template to represent the target object. In this paper, to effectively handle scale variation and the drifting problem, we propose a correlation filter-based tracker that adaptively fuses heterogeneous cues. Firstly, to tackle the problem of a fixed template size, the scale of the target object is estimated from a set of possible scales. Secondly, an adaptive set of filtering templates is learned to alleviate the drifting problem by carefully selecting object candidates in different situations to jointly capture the target appearance variations. Finally, a variety of simple yet effective features (e.g., HOG and color name features) are integrated into the learning process of the filters to further improve their discriminative power. Consequently, the proposed correlation filter-based tracker can simultaneously utilize different types of cues to effectively estimate the target's location and scale while alleviating the drifting problem. We have conducted extensive experiments on the CVPR2013 tracking benchmark dataset with 50 challenging sequences. The proposed tracker successfully tracked the targets in about 90% of the videos and outperformed the state-of-the-art trackers. [ABSTRACT FROM AUTHOR]
- Published
- 2018
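The abstract above estimates the target scale from a set of candidate scales and fuses several feature cues. A minimal sketch of the scale-search idea is given below; `correlation_response` and `extract_patch` are hypothetical stand-ins for any correlation-filter response and patch extractor, not the paper's formulation.

```python
# Generic sketch: pick the best scale by evaluating a correlation response
# over a small pyramid of candidate scales. Not the authors' method.
import numpy as np

def correlation_response(patch, template):
    """Toy response: normalized cross-correlation score of same-sized arrays."""
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.mean(p * t))

def estimate_scale(extract_patch, template, scales=(0.95, 1.0, 1.05)):
    """Return the candidate scale whose extracted patch best matches the
    template. `extract_patch(scale)` is assumed to crop and resize the frame
    to the template size at the given scale."""
    scores = [correlation_response(extract_patch(s), template) for s in scales]
    return scales[int(np.argmax(scores))]

# Toy usage: a fake extractor that distorts the crop as the scale moves away
# from 1.0, so the undistorted scale wins.
rng = np.random.default_rng(0)
template = rng.random((32, 32))
best = estimate_scale(lambda s: template + abs(s - 1.0) * rng.random((32, 32)), template)
print(best)   # prints 1.0
```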
5. Online learning 3D context for robust visual tracking.
- Author
- Zhong, Bineng, Shen, Yingju, Chen, Yan, Xie, Weibo, Cui, Zhen, Zhang, Hongbo, Chen, Duansheng, Wang, Tian, Liu, Xin, Peng, Shujuan, Gou, Jin, Du, Jixiang, Wang, Jing, and Zheng, Wenming
- Subjects
- *THREE-dimensional imaging, *PATTERN recognition systems, *MACHINE learning, *BINOCULAR vision, *PROBABILITY theory, *SPATIOTEMPORAL processes
- Abstract
In this paper, we study the challenging problem of tracking a single object in a complex dynamic scene. In contrast to most existing trackers, which only exploit 2D color or gray images to learn the appearance model of the tracked object online, we take a different approach, inspired by the increasing popularity of depth sensors, and put more emphasis on the 3D context to prevent model drift and handle occlusion. Specifically, we propose a 3D context-based object tracking method that learns a set of 3D context key-points, which have spatial–temporal co-occurrence correlations with the tracked object, for collaborative tracking in binocular video data. We first learn the 3D context key-points via a spatial–temporal constraint on their spatial and depth coordinates. Then, the position of the object of interest is determined by probability voting from the learned 3D context key-points. Moreover, with depth information, a simple yet effective occlusion handling scheme is proposed to detect occlusion and recover from it. Qualitative and quantitative experimental results on challenging video sequences demonstrate the robustness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
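Probability voting from learned context key-points, as summarized above, can be pictured with a simplified illustration: each key-point casts a vote for the target center through its learned offset, weighted by its reliability. The offsets, weights, and the 2D simplification below are assumptions made for clarity; the paper also uses depth coordinates.

```python
# Simplified illustration of context-key-point voting for a target position.
# Offsets, weights, and the 2D setting are assumptions, not the paper's model.
import numpy as np

def vote_position(keypoints, offsets, weights):
    """Each detected key-point votes for the target center as its own position
    plus a learned offset; the votes are combined as a weighted average."""
    votes = np.asarray(keypoints, dtype=float) + np.asarray(offsets, dtype=float)  # (N, 2)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return (votes * w[:, None]).sum(axis=0)

kp = [(120, 80), (140, 95), (110, 70)]   # detected key-point positions
off = [(15, 20), (-5, 5), (25, 30)]      # learned offsets toward the target
conf = [0.9, 0.6, 0.8]                   # co-occurrence reliabilities
print(vote_position(kp, off, conf))      # estimated target center
```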
6. Visual tracking via weakly supervised learning from multiple imperfect oracles.
- Author
- Zhong, Bineng, Yao, Hongxun, Chen, Sheng, Ji, Rongrong, Chin, Tat-Jun, and Wang, Hanzi
- Subjects
- *SUPERVISED learning, *COMPUTER vision, *ESTIMATION theory, *VIDEOS, *PROBABILITY theory, *HEURISTIC algorithms
- Abstract
Notwithstanding many years of progress, visual tracking is still a difficult but important problem. Since most top-performing tracking methods have their own strengths and weaknesses and are suited to handling only a certain type of variation, one of the next challenges is to integrate all these methods and address the problem of long-term persistent tracking in ever-changing environments. Towards this goal, we consider visual tracking in a novel weakly supervised learning scenario where (possibly noisy) labels, but no ground truth, are provided by multiple imperfect oracles (i.e., different trackers). These trackers naturally have intrinsic diversity due to their different design strategies, and we propose a probabilistic method to simultaneously infer the most likely object position by considering the outputs of all trackers and to estimate the accuracy of each tracker. An online evaluation strategy for the trackers and a heuristic training data selection scheme are adopted to make the inference more effective and efficient. Consequently, the proposed method can avoid the pitfalls of purely single-tracker methods and obtain reliably labeled samples to incrementally update each tracker (if it is an appearance-adaptive tracker) to capture appearance changes. Extensive experiments on challenging video sequences demonstrate the robustness and effectiveness of the proposed method. [Copyright © Elsevier]
- Published
- 2014
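One way to picture inferring the object position from several imperfect oracles, in the spirit of the abstract above, is a reliability-weighted fusion in which each tracker's weight reflects its estimated accuracy. The sketch below is only conceptual; the paper proposes a full probabilistic model with online accuracy estimation rather than this ad hoc weighting.

```python
# Conceptual sketch: fuse positions reported by several trackers, weighting
# each by an estimated accuracy, and nudge accuracies online. The weighting
# and update rules are assumptions, not the paper's inference procedure.
import numpy as np

def fuse_trackers(positions, accuracies):
    """Weighted average of (x, y) positions; weights are estimated accuracies."""
    pos = np.asarray(positions, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    acc = acc / acc.sum()
    return (pos * acc[:, None]).sum(axis=0)

def update_accuracy(old_acc, agreement, lr=0.1):
    """Toy online update: move a tracker's accuracy toward how well its output
    agreed with the fused estimate in the current frame (agreement in [0, 1])."""
    return (1 - lr) * old_acc + lr * agreement

# Toy usage: two trackers agree, a third has drifted and is down-weighted.
print(fuse_trackers([(100, 50), (104, 52), (180, 90)], [0.8, 0.7, 0.2]))
```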
7. Robust tracking via patch-based appearance model and local background estimation.
- Author
- Zhong, Bineng, Chen, Yan, Shen, Yingju, Chen, Yewang, Cui, Zhen, Ji, Rongrong, Yuan, Xiaotong, Chen, Duansheng, and Chen, Weibin
- Subjects
- *ROBUST statistics, *BIOLOGICALLY inspired computing, *AUTOMATIC tracking, *CAMERAS, *ROBUST control, *ESTIMATION theory
- Abstract
In this paper, to simultaneously address tracker drift and the occlusion problem, we propose a robust visual tracking algorithm based on a patch-based adaptive appearance model driven by local background estimation. Inspired by human visual mechanisms (i.e., context-awareness and attentional selection), an object is represented with a patch-based appearance model in which each patch outputs a confidence map during tracking. These confidence maps are then combined via a robust estimator to obtain more robust and accurate tracking results. Moreover, we present a local spatial co-occurrence based background modeling approach to automatically estimate the local context background model of an object of interest captured from a single camera, which may be stationary or moving. Finally, we use the local background estimate to supervise the analysis of possible occlusions and the adaptation of the patch-based appearance model of the object. Qualitative and quantitative experimental results on challenging videos demonstrate the robustness of the proposed method. [Copyright © Elsevier]
- Published
- 2014
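Combining per-patch confidence maps through a robust estimator, as described above, keeps occluded or corrupted patches from dominating the result. A minimal illustration follows; choosing the pixel-wise median as the robust estimator is an assumption, not necessarily the estimator used in the paper.

```python
# Minimal illustration: combine per-patch confidence maps robustly and locate
# the target at the peak of the combined map. The median is an assumed choice.
import numpy as np

def combine_confidence_maps(maps):
    """Pixel-wise median across N patch confidence maps, (N, H, W) -> (H, W)."""
    return np.median(np.asarray(maps), axis=0)

def locate_target(confidence):
    """Return the (row, col) of the highest combined confidence."""
    return np.unravel_index(np.argmax(confidence), confidence.shape)

rng = np.random.default_rng(1)
maps = rng.random((4, 32, 32))   # four patch confidence maps
maps[2] = 0.0                    # one occluded / unreliable patch
print(locate_target(combine_confidence_maps(maps)))
```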
8. Coarse-to-fine visual tracking with PSR and scale driven expert-switching.
- Author
- Chen, Yan, Wang, Pengfei, Zhong, Bineng, Ouyang, Gu, Bai, Bing, and Du, Jixiang
- Subjects
- *FAST Fourier transforms, *MATRICES (Mathematics), *COMPUTATIONAL complexity, *STATISTICAL sampling, *KERNEL functions
- Abstract
Correlation filters play an important role in state-of-the-art visual tracking. To achieve superior speed and strong discriminative power, correlation filter based trackers exploit the circulant matrix structure and the fast Fourier transform (FFT), which implicitly use a large number of samples to efficiently train the correlation filters and contribute significantly to model complexity. However, the number of training samples is significantly reduced when the target's size is small. Consequently, the performance of the resulting correlation filter based trackers may be inhibited. Moreover, how to cope with scale variations and tracker drift is still an open problem. In this paper, to address these problems, we propose a coarse-to-fine tracker that integrates a kernelized correlation filter (KCF) based tracker with detection proposals and a multi-expert based tracker via a simple yet effective Peak-to-Sidelobe Ratio (PSR) and scale driven scheme. Specifically, at the coarse level, the KCF based tracker with detection proposals estimates the target's state in each frame. Then, the PSR and the scale variation are analyzed according to the tracking results of this simple KCF based tracker. At the fine level, the more sophisticated multi-expert based tracker is invoked when the PSR or the scale falls below a threshold. Based on this simple yet effective PSR and scale driven expert-switching scheme, the proposed coarse-to-fine tracker can select the most reliable tracker during tracking. Extensive experimental results on the OTB-50 benchmark demonstrate the efficiency and effectiveness of the proposed tracking method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
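The Peak-to-Sidelobe Ratio (PSR) driving the expert switch has a standard definition in correlation-filter tracking: the response peak minus the mean of the sidelobe region (the response map with a small window around the peak excluded), divided by the sidelobe's standard deviation. A minimal sketch follows; the exclusion window size and the switching threshold are chosen only for illustration, not taken from the paper.

```python
# Standard PSR computation for a 2D correlation response map, as commonly used
# in CF tracking. The 11x11 exclusion window and the threshold are assumptions.
import numpy as np

def psr(response, exclude=5):
    """Peak-to-Sidelobe Ratio: (peak - sidelobe mean) / sidelobe std."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0, r1 = max(peak_idx[0] - exclude, 0), peak_idx[0] + exclude + 1
    c0, c1 = max(peak_idx[1] - exclude, 0), peak_idx[1] + exclude + 1
    mask[r0:r1, c0:c1] = False            # exclude the window around the peak
    sidelobe = response[mask]
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-12))

def needs_fine_tracker(response, threshold=7.0):
    """Expert-switching idea: fall back to the heavier tracker when PSR is low."""
    return psr(response) < threshold
```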
9. Network in network based weakly supervised learning for visual tracking.
- Author
- Chen, Yan, Yang, Xiangnan, Zhong, Bineng, Zhang, Huizhen, and Lin, Changlong
- Subjects
- *SUPERVISED learning, *TRACKING algorithms, *SEMANTICS (Philosophy), *HEURISTIC, *IMAGE processing, *ROBUST statistics
- Abstract
One key limitation of many existing visual tracking methods is that they are built upon low-level visual features and have limited power to capture data semantics. To effectively bridge the semantic gap of visual data in visual tracking with little supervision, we propose a tracking method that constructs a robust object appearance model by learning and transferring mid-level image representations using a deep network, namely Network in Network (NIN). First, we design a simple yet effective method to transfer the mid-level features learned by NIN on source tasks with large-scale training data to the tracking task, which has limited training data. Then, to address the drifting problem, we simultaneously utilize samples collected in the initial frame and in the most recent frames. Finally, a heuristic scheme is used to decide whether or not to update the object appearance model. Extensive experiments show the robustness of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2016
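Keeping training samples from both the initial frame and the most recent frames, as described above, is a common anti-drift device, and the heuristic update in the abstract can be pictured as a confidence-gated sample buffer. The buffer size and confidence threshold below are assumptions, not the paper's settings.

```python
# Toy sketch of an anti-drift training buffer: always keep first-frame samples,
# keep a sliding window of recent ones, and accept new samples only when the
# tracking confidence is high. All sizes and thresholds are assumptions.
from collections import deque

class SampleBuffer:
    def __init__(self, recent_size=20):
        self.initial = []                        # samples from the first frame
        self.recent = deque(maxlen=recent_size)  # sliding window of recent samples

    def add_initial(self, sample):
        self.initial.append(sample)

    def add_recent(self, sample, confidence, threshold=0.6):
        # Heuristic update rule: only trust high-confidence detections.
        if confidence >= threshold:
            self.recent.append(sample)

    def training_set(self):
        return self.initial + list(self.recent)
```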