- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models (Paper • arXiv:2411.14257)
- Distinguishing Ignorance from Error in LLM Hallucinations (Paper • arXiv:2410.22071)
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations (Paper • arXiv:2410.18860)
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (Paper • arXiv:2410.11779)
Collections including paper arXiv:2401.06855
- Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation (Paper • arXiv:2208.05309)
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models (Paper • arXiv:2305.13711)
- Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation (Paper • arXiv:2302.09664)
- BARTScore: Evaluating Generated Text as Text Generation (Paper • arXiv:2106.11520)
- Enhancing Automated Interpretability with Output-Centric Feature Descriptions (Paper • arXiv:2501.08319)
- Open Problems in Machine Unlearning for AI Safety (Paper • arXiv:2501.04952)
- Towards scientific discovery with dictionary learning: Extracting biological concepts from microscopy foundation models (Paper • arXiv:2412.16247)
- Inferring Functionality of Attention Heads from their Parameters (Paper • arXiv:2412.11965)
- vectara/hallucination_evaluation_model (Model • Text Classification)
- notrichardren/HaluEval (Dataset)
- TRUE: Re-evaluating Factual Consistency Evaluation (Paper • arXiv:2204.04991)
- Fine-grained Hallucination Detection and Editing for Language Models (Paper • arXiv:2401.06855)