# EPIC-KITCHENS-100 Dataset - Extension Video Release

Release Date: May 2020

## Authors

Dima Damen (1), Hazel Doughty (1), Giovanni Maria Farinella (2), Antonino Furnari (2), Evangelos Kazakos (1), Jian Ma (1), Davide Moltisanti (1), Jonathan Munro (1), Toby Perrett (1), Will Price (1), Michael Wray (1)

* (1) University of Bristol
* (2) University of Catania

## Citing

When using the dataset, kindly cite:

- Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray (2020). Rescaling Egocentric Vision. See the publication [here](http://epic-kitchens.github.io).
- Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray (2020). The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
- Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray (2018). Scaling Egocentric Vision: The EPIC-KITCHENS Dataset. European Conference on Computer Vision (ECCV).

## Erratum

**Important:** We have recently detected an error in our pre-extracted RGB and optical flow frames for two videos in the dataset. This does not affect the videos themselves or any of the annotations on GitHub. However, if you have been using our pre-extracted frames, the steps below describe how to fix the error at your end until we publish replacement frames for download.

Download the videos `P01_109.MP4` and `P27_103.MP4`, then set up the following directory structure:

```
$ mkdir -p rgb/{P01_109,P27_103}
$ mkdir -p flow/{P01_109,P27_103}
$ mkdir videos
$ mv /path/to/{P01_109,P27_103}.MP4 videos
```

You will need Docker set up on your machine to extract the frames and optical flow.

**RGB**

```
$ docker run --gpus "device=0" \
    -it \
    --rm \
    -v "$PWD:/workspace" \
    willprice/nvidia-ffmpeg \
    -hwaccel cuvid \
    -c:v hevc_cuvid \
    -i /workspace/videos/P27_103.MP4 \
    -vf 'scale_npp=-2:256:interp_algo=super,hwdownload,format=nv12' \
    -qscale:v 4 \
    -r 50 \
    /workspace/rgb/P27_103/frame_%010d.jpg

$ docker run --gpus "device=0" \
    -it \
    --rm \
    -v "$PWD:/workspace" \
    willprice/nvidia-ffmpeg \
    -hwaccel cuvid \
    -c:v hevc_cuvid \
    -i /workspace/videos/P01_109.MP4 \
    -vf 'scale_npp=-2:256:interp_algo=super,hwdownload,format=nv12' \
    -qscale:v 4 \
    -r 50 \
    /workspace/rgb/P01_109/frame_%010d.jpg
```

**Flow**

```
$ docker run --gpus "device=0" \
    -it \
    --rm \
    -v "$PWD/rgb/P01_109:/input" \
    -v "$PWD/flow/P01_109:/output" \
    willprice/furnari-flow \
    frame_%010d.jpg -g 0 -s 1 -d 1 -b 8

$ docker run --gpus "device=0" \
    -it \
    --rm \
    -v "$PWD/rgb/P27_103:/input" \
    -v "$PWD/flow/P27_103:/output" \
    willprice/furnari-flow \
    frame_%010d.jpg -g 0 -s 1 -d 1 -b 8
```

## Dataset Details

This deposit contains the additional videos for the EPIC-KITCHENS-100 dataset. It comprises 45 recorded hours which, together with the videos released at http://dx.doi.org/10.5523/bris.3h91syskeag572hl6tvuovwv4d, form a total of 100 hours of egocentric footage recorded by 37 participants.

This README contains information about the additionally recorded video files. Please see the [GitHub repository](https://github.com/epic-kitchens/annotations) for the latest annotations and the [EPIC-KITCHENS website](http://epic-kitchens.github.io) for further details and the open challenges.

## Folder Details

We have one folder per participant, `P##` (e.g. `P01`).
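As a rough illustration only (using `P01_101` as a placeholder video ID; the sub-folders and naming conventions are described in detail below), a participant folder is laid out along these lines:

```
P01/
├── videos/
│   └── P01_101.MP4        # raw GoPro recording
├── rgb_frames/
│   └── P01_101.tar        # flat tar of RGB frames (frame_%010d.jpg)
├── flow_frames/
│   └── P01_101.tar        # tar containing u/ and v/ optical flow directories
└── meta_data/             # accelerometer and gyroscope files
```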
Within each folder, you can find the following sub-folders:

* **videos**: contains the raw videos. We use `P##/P##_1**`, with `##` denoting the participant number and `1**` the video number; the leading `1` indicates an extension video, as opposed to the videos released in 2018. Videos were recorded with a GoPro Hero 7 at 50 FPS with stabilisation.

* **rgb_frames**: contains the RGB frames extracted from the videos using the following command ([Docker container with NVIDIA-accelerated FFmpeg](https://hub.docker.com/r/willprice/nvidia-ffmpeg/)):

  ```
  $ ffmpeg \
      -hwaccel cuvid \
      -c:v "hevc_cuvid" \
      -i "P##_1**.MP4" \
      -vf 'scale_npp=-2:256:interp_algo=super,hwdownload,format=nv12' \
      -q:v 4 \
      -r 50 \
      "P##_1**/frame_%010d.jpg"
  ```

  The frames from each video are grouped into a tar file, labelled `P##_***.tar`, inside the participant directory; e.g. the RGB frames for `P01_101` can be found in `rgb_frames/P01_101.tar`. Each tar file contains a flat directory of frames named `frame_xxxxxxxxxx.jpg` (see the extraction example at the end of this README).

* **flow_frames**: contains the optical flow frames used as input for action recognition. As with `rgb_frames`, the flow frames are grouped into a tar file per video, each containing a `u` and a `v` directory whose frames follow the same naming convention. Optical flow was extracted using a fork of [`gpu_flow`](https://github.com/feichtenhofer/gpu_flow) made [available on GitHub](https://github.com/dl-container-registry/furnari-flow), with the parameters:

  - stride = 1
  - dilation = 1
  - bound = 8
  - size = 256

* **meta_data**: contains the accelerometer and gyroscope data, extracted using the public [GoPro-utilities repo](https://github.com/JuanIrache/gopro-utils). Please refer to that repository for the formats of the released meta-data files. Note that GPS information was not recorded and is suppressed for anonymity.

* **recording_times.csv**: contains the exact recording time of each video file (date, hours and minutes), based on the local time at the city/country of recording.

## License

All files in this dataset are copyright by us and published under the Creative Commons Attribution-NonCommercial 4.0 International License, found [here](https://creativecommons.org/licenses/by-nc/4.0/). This means that you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes.
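## Example: Extracting Frames from the Tar Files

As a quick-start sketch only (not part of the official tooling; the paths assume the per-participant layout described above, with `P01_101` as a placeholder video ID), the frame tars can be unpacked with standard `tar`:

```
$ # Unpack the RGB frames for one video into a directory of the same name
$ mkdir -p P01/rgb_frames/P01_101
$ tar -xf P01/rgb_frames/P01_101.tar -C P01/rgb_frames/P01_101

$ # Flow tars contain u/ and v/ sub-directories with the same frame naming
$ mkdir -p P01/flow_frames/P01_101
$ tar -xf P01/flow_frames/P01_101.tar -C P01/flow_frames/P01_101

$ # Count the extracted RGB frames for a quick sanity check
$ find P01/rgb_frames/P01_101 -name 'frame_*.jpg' | wc -l
```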