- Internal Consistency and Self-Feedback in Large Language Models: A Survey
  Paper • 2407.14507 • Published • 46
- New Desiderata for Direct Preference Optimization
  Paper • 2407.09072 • Published • 10
- Self-Recognition in Language Models
  Paper • 2407.06946 • Published • 24
- MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?
  Paper • 2407.04842 • Published • 53
Collections including paper arxiv:2406.14629
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs
  Paper • 2407.00653 • Published • 11
- Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs
  Paper • 2406.20086 • Published • 5
- UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI
  Paper • 2407.00106 • Published • 5
- MIRAI: Evaluating LLM Agents for Event Forecasting
  Paper • 2407.01231 • Published • 16

- Instruction Pre-Training: Language Models are Supervised Multitask Learners
  Paper • 2406.14491 • Published • 86
- Pre-training Small Base LMs with Fewer Tokens
  Paper • 2404.08634 • Published • 34
- Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training
  Paper • 2405.15319 • Published • 25
- Can LLMs Learn by Teaching? A Preliminary Study
  Paper • 2406.14629 • Published • 19

- Unlocking Continual Learning Abilities in Language Models
  Paper • 2406.17245 • Published • 28
- Can Few-shot Work in Long-Context? Recycling the Context to Generate Demonstrations
  Paper • 2406.13632 • Published • 5
- Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding
  Paper • 2406.19263 • Published • 9
- Can LLMs Learn by Teaching? A Preliminary Study
  Paper • 2406.14629 • Published • 19

- Large Language Model Unlearning via Embedding-Corrupted Prompts
  Paper • 2406.07933 • Published • 7
- Block Transformer: Global-to-Local Language Modeling for Fast Inference
  Paper • 2406.02657 • Published • 37
- Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning
  Paper • 2406.12050 • Published • 19
- How Do Large Language Models Acquire Factual Knowledge During Pretraining?
  Paper • 2406.11813 • Published • 30

- RLHF Workflow: From Reward Modeling to Online RLHF
  Paper • 2405.07863 • Published • 66
- Chameleon: Mixed-Modal Early-Fusion Foundation Models
  Paper • 2405.09818 • Published • 126
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 53
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 87

- Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
  Paper • 2307.15337 • Published • 36
- DiTFastAttn: Attention Compression for Diffusion Transformer Models
  Paper • 2406.08552 • Published • 23
- ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
  Paper • 2406.02540 • Published • 2
- Can LLMs Learn by Teaching? A Preliminary Study
  Paper • 2406.14629 • Published • 19

- A Language Model's Guide Through Latent Space
  Paper • 2402.14433 • Published • 1
- The Hidden Space of Transformer Language Adapters
  Paper • 2402.13137 • Published
- Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models
  Paper • 2402.16438 • Published
- AtP*: An efficient and scalable method for localizing LLM behaviour to components
  Paper • 2403.00745 • Published • 12

- Suppressing Pink Elephants with Direct Principle Feedback
  Paper • 2402.07896 • Published • 9
- Policy Improvement using Language Feedback Models
  Paper • 2402.07876 • Published • 5
- Direct Language Model Alignment from Online AI Feedback
  Paper • 2402.04792 • Published • 29
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
  Paper • 2401.01335 • Published • 64