|
# EPIC-KITCHENS-100 Dataset - Extension Video Release |
|
Release Date: May 2020 |
|
|
|
## Authors |
|
Dima Damen (1) |
|
Hazel Doughty (1) |
|
Giovanni Maria Farinella (2) |
|
Antonino Furnari (2) |
|
Evangelos Kazakos (1) |
|
Jian Ma (1) |
|
Davide Moltisanti (1) |
|
Jonathan Munro (1) |
|
Toby Perrett (1) |
|
Will Price (1) |
|
Michael Wray (1) |
|
|
|
* (1) University of Bristol

* (2) University of Catania
|
|
|
|
|
## Citing |
|
When using the dataset, please cite:
|
|
|
Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray (2020). Rescaling Egocentric Vision. Publication available [here](http://epic-kitchens.github.io).
|
|
|
Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray (2020). The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). |
|
|
|
Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray (2018). Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. European Conference on Computer Vision (ECCV). |
|
|
|
|
|
## Dataset Details |
|
This deposit contains additional videos for the EPIC-KITCHENS-100 dataset. They comprise 45 hours of recordings which, together with the videos released at http://dx.doi.org/10.5523/bris.3h91syskeag572hl6tvuovwv4d, form a total of 100 hours of egocentric footage from 37 participants.
|
|
|
This readme describes the additionally recorded video files. Please see

[GitHub](https://github.com/epic-kitchens/annotations) for the latest annotations

and the [EPIC-KITCHENS website](http://epic-kitchens.github.io) for details and open challenges.
|
|
|
## Folder Details |
|
There is one folder per participant, `PXX` (e.g. `P01`). Within each folder, you can find the following sub-folders:
|
|
|
* **videos:** contains the raw videos. |
|
|
|
Videos are named `P##/P##_1**`, where `##` denotes the participant number and `1**`

the video number. The leading `1` distinguishes extension videos from those released in 2018.
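As an illustration, the naming scheme can be parsed as follows. The helper names here are our own, not part of the official dataset tooling:

```python
# Hypothetical helpers (not from the dataset tooling) that parse the
# P##_1** naming scheme described above.
def parse_video_id(video_id: str) -> tuple:
    """Split an ID such as 'P01_101' into ('P01', 101)."""
    participant, number = video_id.split("_")
    return participant, int(number)

def is_extension_video(video_id: str) -> bool:
    """Extension videos carry a leading 1 in a three-digit video number."""
    _, number = parse_video_id(video_id)
    return 100 <= number < 200

print(parse_video_id("P01_101"))      # ('P01', 101)
print(is_extension_video("P01_101"))  # True
```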
|
|
|
Videos were recorded using a GoPro Hero 7 at 50 FPS with stabilisation enabled.
|
|
|
* **rgb_frames:** contains the RGB frames extracted at 50 FPS.
|
|
|
The frames from each video are grouped into a tar file named `P##_***.tar`

inside the participant directory, e.g. the RGB frames from

`P01_101` can be found in `rgb_frames/P01_101.tar`. Each tar file contains a flat directory

of frames named `frame_xxxxxxxxxx.jpg`.
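For instance, the frames of one video can be listed directly from its tar file with Python's standard library (the path in the usage comment is illustrative):

```python
import tarfile

def list_frames(tar_path):
    """Yield the JPEG frame names inside one tar file, in sorted order."""
    with tarfile.open(tar_path) as tar:
        yield from sorted(m.name for m in tar.getmembers()
                          if m.name.endswith(".jpg"))

# Usage, assuming the tar has been downloaded:
# for name in list_frames("rgb_frames/P01_101.tar"):
#     print(name)  # frame_0000000001.jpg, frame_0000000002.jpg, ...
```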
|
|
|
RGB frames were extracted using the following command:
|
|
|
```
ffmpeg \
    -hwaccel cuvid \
    -c:v "hevc_cuvid" \
    -i PXX_1YY.MP4 \
    -vf 'scale_npp=-2:256:interp_algo=super,hwdownload,format=nv12' \
    -q:v 4 \
    -r 50 \
    PXX_1YY/frame_%010d.jpg
```
|
|
|
* **flow_frames:** contains the optical flow frames used as input for action recognition.
|
|
|
As with `rgb_frames`, flow frames are grouped into per-video tar files and follow the same naming format.
|
|
|
Optical flow was extracted using a fork of |
|
[`gpu_flow`](https://github.com/feichtenhofer/gpu_flow) made |
|
[available on github](https://github.com/dl-container-registry/furnari-flow). |
|
We set the parameters: stride = 1, dilation = 1, bound = 8 and size = 256. |
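The `bound` parameter controls how flow values are stored as 8-bit images: gpu_flow-style tools typically clip each displacement to `[-bound, bound]` and map it linearly onto `[0, 255]`. A minimal sketch of that quantisation, assuming the usual convention (we have not reproduced the tool's exact rounding):

```python
# Sketch of gpu_flow-style quantisation with bound = 8: clip each flow
# value to [-bound, bound], then map linearly onto [0, 255]. This mirrors
# the common convention; the tool's exact rounding may differ.
def quantise_flow(value, bound=8.0):
    """Map a flow displacement (in pixels) to an 8-bit intensity."""
    value = max(-bound, min(bound, value))
    return int(round((value + bound) * 255.0 / (2.0 * bound)))

def dequantise_flow(intensity, bound=8.0):
    """Approximate inverse: recover a flow value from an 8-bit intensity."""
    return intensity * (2.0 * bound) / 255.0 - bound

print(quantise_flow(-10.0))  # 0   (clipped to -8)
print(quantise_flow(8.0))    # 255
```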
|
|
|
* **meta_data:** |
|
|
|
We publish metadata containing accelerometer and gyroscope data, extracted using the public [GoPro-utilities repo](https://github.com/JuanIrache/gopro-utils). Please refer to that repository for the formats of the released meta_data files. Note that GPS information was not recorded, to preserve participants' anonymity.
|
|
|
* **recording_times.csv** |
|
|
|
This file contains the exact recording time of each video (date, hour and minute), based on the local time in the city/country of recording.
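A hedged sketch of loading such a file with Python's `csv` module; the column names used below (`video_id`, `date`, `time`) and the sample row are assumptions for illustration only — check the actual header of `recording_times.csv` first:

```python
import csv
import io

def load_recording_times(text):
    """Map each video ID to its (date, time) pair."""
    return {row["video_id"]: (row["date"], row["time"])
            for row in csv.DictReader(io.StringIO(text))}

# Illustrative data only; real header names and values may differ.
sample = "video_id,date,time\nP01_101,2019-05-20,18:30\n"
print(load_recording_times(sample)["P01_101"])  # ('2019-05-20', '18:30')
```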
|
|
|
## License |
|
All files in this dataset are copyright by us and published under the |
|
Creative Commons Attribution-NonCommercial 4.0 International License, found
|
[here](https://creativecommons.org/licenses/by-nc/4.0/). |
|
This means that you must give appropriate credit, provide a link to the license, |
|
and indicate if changes were made. You may do so in any reasonable manner, |
|
but not in any way that suggests the licensor endorses you or your use. You |
|
may not use the material for commercial purposes. |
|
|