Motion estimation and multi-stage association for tracking-by-detection.
- Source :
- Complex & Intelligent Systems; Apr2024, Vol. 10 Issue 2, p2445-2458, 14p
- Publication Year :
- 2024
Abstract
- Multi-object tracking (MOT) aims to locate and identify objects in videos. As deep learning brings excellent performance to object detection, tracking-by-detection (TBD) has gradually become the mainstream tracking framework. However, some drawbacks still exist in the current TBD framework: (1) inaccurate prediction of the bounding boxes occurs in the detection part, caused by overlooking the actual pedestrian aspect ratio in the surveillance scene. (2) The width of the bounding boxes in the next frame is predicted only indirectly through the aspect ratio, which increases the width-prediction error in the motion prediction part. (3) Association is performed only for high-confidence detection boxes, and the low-confidence boxes caused by occlusion are discarded in the data association part, resulting in fragmented trajectories. To address the above issues, we propose a multi-object tracking model incorporating motion estimation and multi-stage association (MEMA). First, the aspect ratio of the ground-truth bounding box is introduced to improve the fit between the detected and ground-truth bounding boxes, and we design an elliptical Gaussian kernel to improve the positioning accuracy of the object center point. Then, the prediction state vector of the Kalman filter is modified to predict the width and its corresponding velocity directly. This reduces the width error of the prediction box and eliminates the velocity error of the motion estimation, leading to a prediction bounding box that better fits pedestrians. Finally, we propose a multi-stage association strategy to correlate boxes of different confidence levels. Without using appearance features, the strategy reduces the impact of occlusion and improves tracking performance. On the MOT17 test set, the proposed method achieves a MOTA of 74.3% and an IDF1 of 72.4%, outperforming the current SOTA. [ABSTRACT FROM AUTHOR]
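The two motion/association ideas in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the state layout, the greedy IoU matcher (standing in for Hungarian matching), and the 0.6 confidence split are all assumptions.

```python
# Hedged sketch of two ideas from the abstract (names and thresholds are
# illustrative, not taken from the paper):
# 1) a constant-velocity state that predicts box width directly,
#    state = [cx, cy, w, h, vcx, vcy, vw, vh], instead of deriving width
#    from an aspect-ratio term as in classic SORT;
# 2) a two-stage association that matches high-confidence detections first,
#    then gives low-confidence (often occluded) detections a second chance
#    against the still-unmatched tracks.

def predict(state):
    """One constant-velocity prediction step: each position term advances
    by its paired velocity; width and height are predicted directly."""
    cx, cy, w, h, vcx, vcy, vw, vh = state
    return [cx + vcx, cy + vcy, w + vw, h + vh, vcx, vcy, vw, vh]

def iou(a, b):
    """IoU of two boxes given as (cx, cy, w, h)."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2]*a[3] + b[2]*b[3] - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, dets, thresh=0.3):
    """Greedy IoU matching (a simple stand-in for Hungarian matching).
    `dets` are (box, confidence) pairs; `tracks` are predicted boxes."""
    matches, unmatched_tracks, used = [], list(range(len(tracks))), set()
    for t in list(unmatched_tracks):
        best, best_iou = -1, thresh
        for d, det in enumerate(dets):
            if d in used:
                continue
            v = iou(tracks[t], det[0])
            if v > best_iou:
                best, best_iou = d, v
        if best >= 0:
            matches.append((t, best))
            used.add(best)
            unmatched_tracks.remove(t)
    unmatched_dets = [d for d in range(len(dets)) if d not in used]
    return matches, unmatched_tracks, unmatched_dets

def multi_stage(tracks, detections, conf_split=0.6):
    """Stage 1: high-confidence detections vs. all tracks.
    Stage 2: low-confidence detections vs. the leftover tracks."""
    high = [d for d in detections if d[1] >= conf_split]
    low = [d for d in detections if d[1] < conf_split]
    m1, leftover, _ = associate(tracks, high)
    m2, still_unmatched, _ = associate([tracks[t] for t in leftover], low)
    # map second-stage indices back to the original track list
    m2 = [(leftover[t], d) for t, d in m2]
    return m1, m2, [leftover[t] for t in still_unmatched]
```

A quick usage sketch: a low-confidence, partially occluded detection that a single-stage associator would discard is recovered in the second stage.

```python
tracks = [[50, 50, 20, 40], [200, 60, 22, 44]]
dets = [([51, 52, 20, 40], 0.9),    # high confidence, overlaps track 0
        ([201, 61, 22, 44], 0.4)]   # low confidence (occluded), overlaps track 1
m1, m2, unmatched = multi_stage(tracks, dets)
# m1 matches track 0 in stage 1; m2 recovers track 1 in stage 2
```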
- Subjects :
- OBJECT recognition (Computer vision)
KALMAN filtering
DEEP learning
MOTION
Details
- Language :
- English
- ISSN :
- 21994536
- Volume :
- 10
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Complex & Intelligent Systems
- Publication Type :
- Academic Journal
- Accession number :
- 176339009
- Full Text :
- https://doi.org/10.1007/s40747-023-01273-3