Lillianwei committed
Commit 374594b · 1 Parent(s): bdb6c4c

feat: adapt to MMIE

Files changed (1)
1. src/about.py +2 -1
src/about.py CHANGED
@@ -28,7 +28,8 @@ TITLE = """<h1 align="center" id="space-title">MMIE</h1>"""
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
 # MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models
-[Website](https://github.com/richard-peng-xia/MMIE) | [Code](https://github.com/richard-peng-xia/MMIE) | [Dataset](https://huggingface.co/datasets/MMIE/MMIE) | [Results](https://huggingface.co/spaces/MMIE/Leaderboard) | [Eval Model](https://huggingface.co/MMIE/MMIE-Eval) | [Paper]()
+We present MMIE, a Massive Multimodal Interleaved understanding Evaluation benchmark, designed for Large Vision-Language Models (LVLMs). MMIE offers a robust framework for evaluating the interleaved comprehension and generation capabilities of LVLMs across diverse fields, supported by reliable automated metrics.
+[Website](https://mmie-bench.github.io) | [Code](https://github.com/Lillianwei-h/MMIE-Eval) | [Dataset](https://huggingface.co/datasets/MMIE/MMIE) | [Results](https://huggingface.co/spaces/MMIE/Leaderboard) | [Eval Model](https://huggingface.co/MMIE/MMIE-Eval) | [Paper]()
 """
 
 # Which evaluations are you running? how can people reproduce what you have?