This dataset contains the first 40k prompts from the LAION/CC/SBU BLIP-Caption Concept-balanced 558K dataset, which we use for rapid testing of the Robin model setup on new compute.
It is based on the data used in LLaVA: https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md
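For reference, a minimal sketch of how such a subset could be produced from the full annotation file, assuming the LLaVA pretraining annotations are available locally (the filenames and paths below are illustrative, not fixed by this dataset):

```python
import json

# Illustrative paths; adjust to wherever the LLaVA pretraining data lives.
SOURCE_JSON = "blip_laion_cc_sbu_558k.json"   # full 558K annotation file (assumed name)
SUBSET_JSON = "blip_laion_cc_sbu_40k.json"    # output subset for quick tests
NUM_SAMPLES = 40_000

with open(SOURCE_JSON, "r") as f:
    records = json.load(f)  # list of caption/prompt entries

# Keep only the first 40k entries, preserving the original order.
subset = records[:NUM_SAMPLES]

with open(SUBSET_JSON, "w") as f:
    json.dump(subset, f)

print(f"Wrote {len(subset)} of {len(records)} records to {SUBSET_JSON}")
```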
Training on this subset runs for roughly 150 iterations at a batch size of 256 (40,000 / 256 ≈ 156), which is enough to exercise checkpointing and the final model save.