---
license: apache-2.0
tags:
- Automated Peer Reviewing
- SFT
datasets:
- ECNU-SEA/SEA_data
---
## Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis
Paper Link: https://arxiv.org/abs/2407.12857
Project Page: https://ecnu-sea.github.io/
## 🔥 News
- 🔥🔥🔥 SEA is accepted by EMNLP 2024!
- 🔥🔥🔥 We have made the SEA series models (7B) public!
## Model Description
⚠️ **_This is the SEA-S model for content standardization, and the review model SEA-E can be found [here](https://huggingface.co/ECNU-SEA/SEA-E)._**
The SEA-S model integrates all reviews of a paper into a single review that eliminates redundancy and errors and focuses on the paper's major strengths and weaknesses. Specifically, we first use GPT-4 to merge the multiple reviews of each paper (from [ECNU-SEA/SEA_data](https://huggingface.co/datasets/ECNU-SEA/SEA_data)) into one review with a unified format, consistent criteria, and constructive content, forming an instruction dataset for SFT. We then fine-tune [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on this dataset to distill the knowledge of GPT-4. SEA-S thus provides a novel paradigm for integrating peer review data into a unified format across various conferences.
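## Usage
Below is a minimal inference sketch using the Hugging Face `transformers` library. The prompt wording and generation settings are illustrative assumptions, not the exact instruction format used during SFT; adapt them to your own review-aggregation prompt.
```python
# Minimal inference sketch for SEA-S with Hugging Face transformers.
# NOTE: the prompt below is an illustrative assumption, not the exact
# instruction format used to fine-tune the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ECNU-SEA/SEA-S"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Multiple raw reviews of the same paper (placeholders).
reviews = [
    "Review 1: The paper proposes ... Strengths: ... Weaknesses: ...",
    "Review 2: The method is novel, but the experiments ...",
]

# Ask the model to merge the reviews into one standardized review
# covering the paper's major strengths and weaknesses.
prompt = (
    "Integrate the following peer reviews into a single standardized review "
    "summarizing the paper's major strengths and weaknesses:\n\n"
    + "\n\n".join(reviews)
)

messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False)
# Strip the prompt tokens and decode only the generated review.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```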
```bibtex
@inproceedings{yu2024automated,
title={Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis},
author={Yu, Jianxiang and Ding, Zichen and Tan, Jiaqi and Luo, Kangyang and Weng, Zhenmin and Gong, Chenghua and Zeng, Long and Cui, RenJing and Han, Chengcheng and Sun, Qiushi and others},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
pages={10164--10184},
year={2024}
}
```