DVGFormer: Learning Camera Movement Control from Real-World Drone Videos
Paper | Project Page | GitHub | Data
Official implementation of our paper:
Learning Camera Movement Control from Real-World Drone Videos
Yunzhong Hou, Liang Zheng, Philip Torr
"To record as is, not to create from scratch."
Abstract: This study seeks to automate camera movement control for filming existing subjects into attractive videos, contrasting with the creation of non-existent content by directly generating the pixels. We select drone videos as our test case due to their rich and challenging motion patterns, distinctive viewing angles, and precise controls. Existing AI videography methods struggle with limited appearance diversity in simulation training, high costs of recording expert operations, and difficulties in designing heuristic-based goals to cover all scenarios. To avoid these issues, we propose a scalable method that involves collecting real-world training data to improve diversity, extracting camera trajectories automatically to minimize annotation costs, and training an effective architecture that does not rely on heuristics. Specifically, we collect 99k high-quality trajectories by running 3D reconstruction on online videos, connecting camera poses from consecutive frames to formulate 3D camera paths, and using a Kalman filter to identify and remove low-quality data. Moreover, we introduce DVGFormer, an auto-regressive transformer that leverages the camera path and images from all past frames to predict camera movement in the next frame. We evaluate our system across 38 synthetic natural scenes and 7 real city 3D scans. We show that our system effectively learns to perform challenging camera movements such as navigating through obstacles, maintaining low altitude to increase perceived speed, and orbiting towers and buildings, which are very useful for recording high-quality videos.
The DroneMotion-99k Dataset
We provide the Colmap 3D reconstruction results and the filtered camera movement sequences in our DroneMotion-99k dataset. You can download either a minimal dataset with 10 videos and 129 sequences or the full dataset with 13,653 videos and 99,003 camera trajectories.
Note that due to the file size limit, the full dataset is stored as four tar.gz parts:
dataset_full.part1.tar.gz
dataset_full.part2.tar.gz
dataset_full.part3.tar.gz
dataset_full.part4.tar.gz
You will need to first combine them into a single archive
cat dataset_full.part*.tar.gz > reassembled.tar.gz
and then extract the contents to retrieve the original HDF5 file dataset_full.h5:
tar -xvzf reassembled.tar.gz
After downloading the training data, your folder structure should look like this:
dvgformer/
├── youtube_drone_videos/
│ ├── dataset_full.h5
│ └── dataset_mini.h5
├── src/
├── README.md
...
Due to YouTube's policy, we cannot share the video MP4s or the extracted frames. As an alternative, we include a Python script, download_videos.py, in our GitHub repo that automatically downloads the videos and extracts the frames.
python download_videos.py --hdf5_fpath youtube_drone_videos/dataset_mini.h5
python download_videos.py --hdf5_fpath youtube_drone_videos/dataset_full.h5
This should update your downloaded HDF5 dataset file with the video frames.
For more details, please refer to this guide.