- SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning
  Paper • 2409.05556 • Published • 2
- Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers
  Paper • 2409.04109 • Published • 43
- A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor?
  Paper • 2409.15277 • Published • 35
- Learning Task Decomposition to Assist Humans in Competitive Programming
  Paper • 2406.04604 • Published • 4

Collections including paper arxiv:2409.15700

- Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever
  Paper • 2408.16672 • Published • 7
- Precise Zero-Shot Dense Retrieval without Relevance Labels
  Paper • 2212.10496 • Published • 2
- Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent
  Paper • 2304.09542 • Published • 4
- Making Text Embedders Few-Shot Learners
  Paper • 2409.15700 • Published • 30

- LLM Pruning and Distillation in Practice: The Minitron Approach
  Paper • 2408.11796 • Published • 57
- TableBench: A Comprehensive and Complex Benchmark for Table Question Answering
  Paper • 2408.09174 • Published • 51
- To Code, or Not To Code? Exploring Impact of Code in Pre-training
  Paper • 2408.10914 • Published • 41
- Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications
  Paper • 2408.11878 • Published • 53

- RLHF Workflow: From Reward Modeling to Online RLHF
  Paper • 2405.07863 • Published • 66
- Chameleon: Mixed-Modal Early-Fusion Foundation Models
  Paper • 2405.09818 • Published • 126
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 53
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 87

- CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data
  Paper • 2404.15653 • Published • 26
- MoDE: CLIP Data Experts via Clustering
  Paper • 2404.16030 • Published • 12
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
  Paper • 2405.12130 • Published • 46
- Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
  Paper • 2405.12981 • Published • 28

- Compression Represents Intelligence Linearly
  Paper • 2404.09937 • Published • 27
- MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies
  Paper • 2404.06395 • Published • 21
- Long-context LLMs Struggle with Long In-context Learning
  Paper • 2404.02060 • Published • 36
- Are large language models superhuman chemists?
  Paper • 2404.01475 • Published • 16

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 82
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 61
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 29
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 56

- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
  Paper • 2403.03507 • Published • 183
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 62
- RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs
  Paper • 2407.02552 • Published • 4
- OpenDevin: An Open Platform for AI Software Developers as Generalist Agents
  Paper • 2407.16741 • Published • 68