---
license: mit
tags:
- video
- driving
- Bengaluru
- disparity maps
- depth dataset
homepage: https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/
---
# Bengaluru Semantic Occupancy Dataset
<img src="https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/index_files/BDD_Iterator_Demo-2023-08-30_08.25.17.gif" alt="Bengaluru Driving Dataset iterator demo">
## Dataset Summary
We gathered a dataset spanning 114 minutes and 165K frames in Bengaluru, India. The dataset consists of video from a calibrated camera sensor at a resolution of 1920×1080, recorded at 30 Hz. We use a Depth Dataset Generation pipeline that takes only videos as input to produce high-resolution disparity maps.
- Dataset Iterator: https://github.com/AdityaNG/bdd_dataset_iterator
- Project Page: https://adityang.github.io/AdityaNG/BengaluruDrivingDataset/
- Dataset Download: https://huggingface.co/datasets/AdityaNG/BengaluruSemanticOccupancyDataset
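
The sketch below shows one way to fetch the dataset from the Hugging Face Hub and iterate over the raw video frames. It is a minimal example, not the official access path: the file layout inside the repository (MP4 videos discoverable via a glob) is an assumption, so refer to the `bdd_dataset_iterator` repo linked above for the supported iterator.

```python
# Minimal sketch: download the dataset snapshot and loop over video frames.
# Assumption: raw driving videos are stored as *.mp4 files in the snapshot.
from pathlib import Path

import cv2  # pip install opencv-python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download (or reuse a cached copy of) the dataset repository.
local_dir = snapshot_download(
    repo_id="AdityaNG/BengaluruSemanticOccupancyDataset",
    repo_type="dataset",
)

for video_path in sorted(Path(local_dir).rglob("*.mp4")):
    cap = cv2.VideoCapture(str(video_path))
    while True:
        ok, frame = cap.read()  # BGR frame, 1920x1080 at 30 Hz per the summary
        if not ok:
            break
        # ... run your model or visualization on `frame` here ...
    cap.release()
```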
## Paper
[Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios](https://arxiv.org/abs/2307.10934)
## Citation
```bibtex
@misc{analgund2023octran,
  title={Bengaluru Driving Dataset: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios},
  author={Ganesh, Aditya N and Pobbathi Badrinath, Dhruval and Kumar, Harshith Mohan and S, Priya and Narayan, Surabhi},
  year={2023},
  howpublished={Spotlight Presentation at the Transformers for Vision Workshop, CVPR},
  url={https://sites.google.com/view/t4v-cvpr23/papers#h.enx3bt45p649},
  note={Transformers for Vision Workshop, CVPR 2023}
}
```