---
dataset_info:
features:
- name: video_id
dtype: string
- name: asr_raw
list:
- name: start
dtype: float64
- name: end
dtype: float64
- name: text
dtype: string
- name: words
list:
- name: confidence
dtype: float64
- name: start
dtype: float64
- name: end
dtype: float64
- name: text
dtype: string
- name: asr_grouped
list:
list: string
- name: ocr
list:
list: string
- name: blip2_annotations
struct:
- name: actions
list: string
- name: captions
list: string
- name: objects
list: string
- name: replay_graphs
struct:
- name: original_marker_duration
dtype: float64
- name: processed_marker_duration
dtype: float64
- name: multiplier
dtype: float64
- name: markers
list:
- name: start
dtype: float64
- name: end
dtype: float64
- name: replay_score
dtype: float64
- name: likes
dtype: float64
- name: views
dtype: float64
- name: metadata
struct:
- name: title
dtype: string
- name: description
dtype: string
- name: length
dtype: float64
- name: date
dtype: string
- name: channel_data
struct:
- name: channel_id
dtype: string
- name: company_name
dtype: string
- name: subscribers
dtype: float64
splits:
- name: train
num_bytes: 396758465
num_examples: 22569
- name: test
num_bytes: 35343326
num_examples: 2026
download_size: 135245985
dataset_size: 432101791
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: mit
pretty_name: Content Behavior Corpus
language:
- en
tags:
- youtube
- content
- behavior
- likes
- views
- transcript
- captions
- OCR
- replay
---
# Dataset Card for Content Behavior Corpus
The Content Behavior Corpus (CBC) is a dataset consisting of content and the corresponding receiver behavior.
## Dataset Details
<img src="./content-behavior-five-factors.png" alt="content-behavior-five-factors" width="1000"/>
The progress of Large Language Models (LLMs) has largely been driven by the availability of large-scale unlabeled text data for unsupervised learning. This work focuses on modeling both content and the corresponding receiver behavior in the same space. Although existing datasets have trillions of content tokens (text, images, audio, and videos), they lack information on receiver effects. To address this, the paper utilizes YouTube, a large publicly available source of content-behavior data, which includes:
- **Communicator Data:** Channel name and number of subscribers.
- **Message:** YouTube video IDs, extracted speech, scene-wise captions, on-screen text, video description, video length, and upload date.
- **Receiver Effect:** Video likes, views, and replay graphs.
This covers all five factors of communication, with the channel being fixed (YouTube) and receivers being average channel subscribers and viewers.
- **Website:** https://behavior-in-the-wild.github.io/LCBM
- **Paper:** https://arxiv.org/abs/2309.00359
<!-- - **License:** [More Information Needed] -->
<!-- ## Uses -->
<!-- Address questions around how the dataset is intended to be used. -->
<!-- ### Direct Use -->
<!-- This section describes suitable use cases for the dataset. -->
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
## Dataset Structure
### Fields
- **video_id** (`string`): Unique identifier for each video.
- **asr_raw** (`list of objects`): Raw Automatic Speech Recognition (ASR) data.
- **start** (`float64`): Start time of the ASR segment.
- **end** (`float64`): End time of the ASR segment.
- **text** (`string`): Transcription of the ASR segment.
- **words** (`list of objects`): Word-level ASR details.
- **confidence** (`float64`): Confidence score of the ASR word.
- **start** (`float64`): Start time of the word.
- **end** (`float64`): End time of the word.
- **text** (`string`): Transcription of the word.
- **asr_grouped** (`list of lists of strings`): ASR transcriptions grouped by replay segments.
- **ocr** (`list of lists of strings`): Optical Character Recognition (OCR) data for each replay segment.
- **blip2_annotations** (`object`): BLIP-2 annotations for the video's replay segments.
- **actions** (`list of strings`): List of actions identified in each replay segment.
- **captions** (`list of strings`): List of image captions generated for each replay segment.
- **objects** (`list of strings`): List of objects identified in each replay segment.
- **replay_graphs** (`object`): Data related to video replay behavior.
- **original_marker_duration** (`float64`): Original duration for replay segments.
  - **processed_marker_duration** (`float64`): Processed duration for replay segments.
  - **multiplier** (`float64`): Number of original replay segments combined to create each processed replay segment.
- **markers** (`list of objects`): Replay segments.
- **start** (`float64`): Start time of the replay segment.
- **end** (`float64`): End time of the replay segment.
    - **replay_score** (`float64`): Score in `[0, 1]` indicating replay behavior for the segment.
- **likes** (`float64`): Number of likes the video received.
- **views** (`float64`): Number of views the video received.
- **metadata** (`object`): Metadata associated with the video.
- **title** (`string`): Title of the video.
- **description** (`string`): Description of the video.
- **length** (`float64`): Length of the video in seconds.
- **date** (`string`): Publication date of the video.
- **channel_data** (`object`): Information about the YouTube channel.
- **channel_id** (`string`): Unique identifier for the channel.
- **company_name** (`string`): Name of the company or individual owning the channel.
- **subscribers** (`float64`): Number of subscribers to the channel.
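The nested structure above can be navigated with ordinary dict/list access once an example is loaded. The sketch below is illustrative only (the sample dict is fabricated to mirror the documented schema, not taken from the corpus) and shows how one might rank a video's replay segments by `replay_score`:

```python
# Illustrative sketch: rank one example's replay segments by replay_score.
# The `sample` dict below is fabricated to mirror the schema in this card.

sample = {
    "replay_graphs": {
        "original_marker_duration": 0.5,
        "processed_marker_duration": 1.0,
        "multiplier": 2.0,
        "markers": [
            {"start": 0.0, "end": 1.0, "replay_score": 0.3},
            {"start": 1.0, "end": 2.0, "replay_score": 0.9},
            {"start": 2.0, "end": 3.0, "replay_score": 0.6},
        ],
        "likes": 1200.0,
        "views": 50000.0,
    }
}

def top_replayed(example, k=2):
    """Return the k replay segments with the highest replay_score."""
    markers = example["replay_graphs"]["markers"]
    return sorted(markers, key=lambda m: m["replay_score"], reverse=True)[:k]

best = top_replayed(sample)
print(best[0]["start"], best[0]["end"])  # 1.0 2.0
```

The same access pattern applies to any example yielded by the `datasets` library after loading this corpus.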
### Data Collection and Processing
- **videos**: Videos were downloaded using [pytube](https://github.com/pytube/pytube).
- **asr_raw**: Extracted using [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) and the [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) library.
- **asr_grouped**: Extracted words from **asr_raw** are grouped by the replay segment that they fall into. A word may fall into multiple replay segments if its duration intersects with multiple replay segments.
- **ocr**: OCR extracted using [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR).
- **blip2_annotations**: Annotations extracted using [blip2-flan-t5-xxl](https://huggingface.co/Salesforce/blip2-flan-t5-xxl).
- **replay_graphs**: Extracted by directly parsing a video page's HTML content. Original replay segments are combined until each has a duration >= 1 second, giving the processed replay segments, so that `processed_marker_duration = multiplier * original_marker_duration`.
- **likes**: Extracted by directly parsing a video page's HTML content.
- **views**: Extracted by directly parsing a video page's HTML content.
- **metadata**: Extracted by directly parsing a video page's HTML content.
- **channel_data**: Extracted by directly parsing a video page's HTML content.
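The `asr_grouped` rule above (a word joins every replay segment its time span intersects) can be sketched as follows. This is a hedged reconstruction, not the authors' published pipeline code; the function name and the tuple representation of segments are assumptions:

```python
# Sketch of the asr_grouped rule: assign each word to every replay segment
# whose time span overlaps the word's [start, end] interval.

def group_words_by_segment(words, segments):
    """words: list of {"start", "end", "text"} dicts (as in asr_raw);
    segments: list of (start, end) tuples (as in replay_graphs.markers).
    Returns one list of word texts per segment."""
    grouped = [[] for _ in segments]
    for w in words:
        for i, (seg_start, seg_end) in enumerate(segments):
            # Intervals overlap iff each starts before the other ends.
            if w["start"] < seg_end and w["end"] > seg_start:
                grouped[i].append(w["text"])
    return grouped

words = [
    {"start": 0.2, "end": 0.8, "text": "hello"},
    {"start": 0.9, "end": 1.4, "text": "world"},  # straddles two segments
]
segments = [(0.0, 1.0), (1.0, 2.0)]
print(group_words_by_segment(words, segments))
# [['hello', 'world'], ['world']]
```

Note that "world" appears in both output lists, matching the card's statement that a word may fall into multiple replay segments.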
<!-- #### Who are the source data producers? -->
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
<!-- #### Personal and Sensitive Information -->
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
<!-- ## Bias, Risks, and Limitations -->
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- ### Recommendations -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
## Citation
**BibTeX:**
```bibtex
@inproceedings{
khandelwal2024large,
title={Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior},
author={Ashmit Khandelwal and Aditya Agrawal and Aanisha Bhattacharyya and Yaman Kumar and Somesh Singh and Uttaran Bhattacharya and Ishita Dasgupta and Stefano Petrangeli and Rajiv Ratn Shah and Changyou Chen and Balaji Krishnamurthy},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://arxiv.org/abs/2309.00359}
}
```
**APA:**
Khandelwal, A., Agrawal, A., Bhattacharyya, A., Kumar, Y., Singh, S., Bhattacharya, U., Dasgupta, I., Petrangeli, S., Shah, R. R., Chen, C., & Krishnamurthy, B. (2024). Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior. The Twelfth International Conference on Learning Representations. https://arxiv.org/abs/2309.00359
## Contact
Contact [email protected] for questions and suggestions. |