Collections
Collections including paper arxiv:2310.06825
- Qwen2.5 Technical Report (Paper • 2412.15115 • Published • 340)
- Qwen2.5-Coder Technical Report (Paper • 2409.12186 • Published • 140)
- Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement (Paper • 2409.12122 • Published • 3)
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism (Paper • 2401.02954 • Published • 41)

- ReAct: Synergizing Reasoning and Acting in Language Models (Paper • 2210.03629 • Published • 16)
- Attention Is All You Need (Paper • 1706.03762 • Published • 50)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 16)
- Jamba: A Hybrid Transformer-Mamba Language Model (Paper • 2403.19887 • Published • 107)

- Attention Is All You Need (Paper • 1706.03762 • Published • 50)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 16)
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (Paper • 1910.01108 • Published • 14)
- Language Models are Few-Shot Learners (Paper • 2005.14165 • Published • 12)

- Mistral 7B (Paper • 2310.06825 • Published • 47)
- Instruction Tuning with Human Curriculum (Paper • 2310.09518 • Published • 3)
- RAFT: Adapting Language Model to Domain Specific RAG (Paper • 2403.10131 • Published • 69)
- Instruction-tuned Language Models are Better Knowledge Learners (Paper • 2402.12847 • Published • 26)