ProTracker: Probabilistic Integration for Robust and Accurate Point Tracking
Abstract
In this paper, we propose ProTracker, a novel framework for robust and accurate long-term dense tracking of arbitrary points in videos. The key idea of our method is to incorporate probabilistic integration that refines multiple predictions from both optical flow and semantic features, enabling robust short-term and long-term tracking. Specifically, we integrate optical flow estimates in a probabilistic manner, producing smooth and accurate trajectories by maximizing the likelihood of each prediction. To re-localize challenging points that disappear and reappear due to occlusion, we further incorporate long-term feature correspondence into our flow predictions for continuous trajectory generation. Extensive experiments show that ProTracker achieves state-of-the-art performance among unsupervised and self-supervised approaches, and even outperforms supervised methods on several benchmarks. Our code and model will be publicly available upon publication.
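To make the "probabilistic integration" idea concrete, below is a minimal sketch of how multiple per-point predictions can be fused by maximizing their joint likelihood. It assumes independent isotropic Gaussian observation models, under which maximum-likelihood fusion reduces to inverse-variance (precision) weighting; the function name `fuse_predictions` and the numeric values are illustrative, not the paper's actual implementation.

```python
import numpy as np

def fuse_predictions(positions, variances):
    """Fuse multiple 2D point predictions by maximizing their joint
    Gaussian likelihood. Assuming independent isotropic Gaussians,
    the ML estimate is the inverse-variance weighted mean.

    positions: (N, 2) array of predicted (x, y) locations
    variances: (N,) array of per-prediction variances
    """
    positions = np.asarray(positions, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)  # weight = 1 / sigma^2
    weights = precisions / precisions.sum()
    fused = (weights[:, None] * positions).sum(axis=0)     # weighted mean
    fused_var = 1.0 / precisions.sum()                     # variance of fused estimate
    return fused, fused_var

# Hypothetical example: two optical-flow predictions and one noisier
# long-term feature-correspondence prediction for the same query point.
preds = [(100.2, 50.1), (101.0, 49.5), (98.5, 52.0)]
vars_ = [1.0, 0.5, 4.0]
point, var = fuse_predictions(preds, vars_)
print(point, var)
```

A useful property of this formulation is that a reappearing point recovered by feature correspondence can be folded into the same estimate as flow-based predictions, with its (typically larger) variance naturally down-weighting it.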
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MFTIQ: Multi-Flow Tracker with Independent Matching Quality Estimation (2024)
- DATAP-SfM: Dynamic-Aware Tracking Any Point for Robust Structure from Motion in the Wild (2024)
- TAPTRv3: Spatial and Temporal Context Foster Robust Tracking of Any Point in Long Video (2024)
- Event-Based Tracking Any Point with Motion-Augmented Temporal Consistency (2024)
- RoMo: Robust Motion Segmentation Improves Structure from Motion (2024)
- Event-based Tracking of Any Point with Motion-Robust Correlation Features (2024)
- T-3DGS: Removing Transient Objects for 3D Scene Reconstruction (2024)