bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf | unique_id |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=KqK5XcgEhR | @inproceedings{
zhao2024empowering,
title={Empowering Large Language Model Agents through Action Learning},
author={Haiteng Zhao and Chang Ma and Guoyin Wang and Jing Su and Lingpeng Kong and Jingjing Xu and Zhi-Hong Deng and Hongxia Yang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=KqK5XcgEhR}
} | Large Language Model (LLM) Agents have recently garnered increasing interest yet they are limited in their ability to learn from trial and error, a key element of intelligent behavior. In this work, we argue that the capacity to learn new actions from experience is fundamental to the advancement of learning in LLM agents. While humans naturally expand their action spaces and develop skills through experiential learning, LLM agents typically operate within fixed action spaces, limiting their potential for growth. To address these challenges, our study explores open-action learning for language agents. We introduce a framework LearnAct with an iterative learning strategy to create and improve actions in the form of Python functions. In each iteration, LLM revises and updates the currently available actions based on the errors identified in unsuccessful training tasks, thereby enhancing action effectiveness. Our experimental evaluations across Robotic Planning and Alfworld environments reveal that after learning on a few training task instances, our approach to open-action learning markedly improves agent performance for the type of task (by 32 percent in AlfWorld compared to ReAct+Reflexion, for instance) highlighting the importance of experiential action learning in the development of more intelligent LLM agents. | Empowering Large Language Model Agents through Action Learning | [
"Haiteng Zhao",
"Chang Ma",
"Guoyin Wang",
"Jing Su",
"Lingpeng Kong",
"Jingjing Xu",
"Zhi-Hong Deng",
"Hongxia Yang"
] | Conference | Poster | 2402.15809 | [
"https://github.com/zhao-ht/learnact"
] | https://huggingface.co/papers/2402.15809 | 0 | 0 | 0 | 8 | [] | [] | [] | 1 | 200 |
null | https://openreview.net/forum?id=KidynPuLNW | @inproceedings{
peng2024on,
title={On Limitations of the Transformer Architecture},
author={Binghui Peng and Srini Narayanan and Christos Papadimitriou},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=KidynPuLNW}
} | What are the root causes of hallucinations in large language models (LLMs)? We use Communication Complexity to prove that the Transformer layer is incapable of composing functions (e.g., identify a grandparent of a person in a genealogy) if the domains of the functions are large enough; we show through examples that this inability is already empirically present when the domains are quite small. We also point out that several mathematical tasks that are at the core of the so-called compositional tasks thought to be hard for LLMs are unlikely to be solvable by Transformers, for large enough instances and assuming that certain well accepted conjectures in the field of Computational Complexity are true. | On Limitations of the Transformer Architecture | [
"Binghui Peng",
"Srini Narayanan",
"Christos Papadimitriou"
] | Conference | Poster | 2402.08164 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 201 |
|
null | https://openreview.net/forum?id=KZd1EErRJ1 | @inproceedings{
fu2024isobench,
title={IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations},
author={Deqing Fu and Ruohao Guo and Ghazal Khalighinejad and Ollie Liu and Bhuwan Dhingra and Dani Yogatama and Robin Jia and Willie Neiswanger},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=KZd1EErRJ1}
} | Current foundation models exhibit impressive capabilities when prompted either with text only or with both image and text inputs. But do their capabilities change depending on the input modality? In this work, we propose **IsoBench**, a benchmark dataset containing problems from four major areas: math, science, algorithms, and games. Each example is presented with multiple **isomorphic representations** of inputs, such as visual, textual, and mathematical presentations. IsoBench provides fine-grained feedback to diagnose performance gaps caused by the form of the representation. Across various foundation models, we observe that on the same problem, models have a consistent preference towards textual representations. Most prominently, when evaluated on all IsoBench problems, Claude-3 Opus performs 28.66 points worse when provided with images instead of text; similarly, GPT-4 Turbo is 18.71 points worse and Gemini Pro is 14.87 points worse. Finally, we present two prompting techniques, *IsoCombination* and *IsoScratchPad*, which improve model performance by considering combinations of, and translations between, different input representations. | IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations | [
"Deqing Fu",
"Ruohao Guo",
"Ghazal Khalighinejad",
"Ollie Liu",
"Bhuwan Dhingra",
"Dani Yogatama",
"Robin Jia",
"Willie Neiswanger"
] | Conference | Poster | 2404.01266 | [
""
] | https://huggingface.co/papers/2404.01266 | 3 | 1 | 0 | 7 | [] | [
"isobench/IsoBench"
] | [] | 1 | 202 |
null | https://openreview.net/forum?id=K1M3gLW0MX | @inproceedings{
ding2024on,
title={On Fairness of Low-Rank Adaptation of Large Models},
author={Zhoujie Ding and Ken Liu and Pura Peetathawatchai and Berivan Isik and Sanmi Koyejo},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=K1M3gLW0MX}
} | Low-rank adaptation of large models, particularly LoRA, has gained traction due to its computational efficiency. This efficiency, contrasted with the prohibitive costs of full-model fine-tuning, means that practitioners often turn to LoRA without a complete understanding of its ramifications. In this study, we focus on fairness and ask whether LoRA has an unexamined impact on utility, calibration, and resistance to membership inference across different subgroups (e.g., genders, races, religions) compared to a full-model fine-tuning baseline. We present extensive experiments across vision and language domains and across classification and generation tasks using ViT-Base, Swin-v2-Large, Llama-2 7B, and Mistral 7B. Intriguingly, experiments suggest that while one can isolate cases where LoRA exacerbates model bias across subgroups, the pattern is inconsistent---in many cases, LoRA has equivalent or even improved fairness compared to the base model or its full fine-tuning baseline. We also examine the complications of evaluating fine-tuning fairness relating to task design and model token bias, calling for more careful fairness evaluations in future work. | On Fairness of Low-Rank Adaptation of Large Models | [
"Zhoujie Ding",
"Ken Liu",
"Pura Peetathawatchai",
"Berivan Isik",
"Sanmi Koyejo"
] | Conference | Poster | 2405.17512 | [
"https://github.com/kenziyuliu/lora-fairness"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 203 |
|
null | https://openreview.net/forum?id=Jd0bCD12DS | @inproceedings{
chua2024mind,
title={Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning},
author={Lynn Chua and Badih Ghazi and Yangsibo Huang and Pritish Kamath and Ravi Kumar and Daogao Liu and Pasin Manurangsi and Amer Sinha and Chiyuan Zhang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Jd0bCD12DS}
} | Large language models (LLMs) have emerged as powerful tools for tackling complex tasks across diverse domains, but they also raise privacy concerns when fine-tuned on sensitive data due to potential memorization. While differential privacy (DP) offers a promising solution by ensuring models are “almost indistinguishable” with or without any particular privacy unit, current evaluations on LLMs mostly treat each example (text record) as the privacy unit. This leads to uneven user privacy guarantees when contributions per user vary. We therefore study user-level DP motivated by applications where it is necessary to ensure uniform privacy protection across users. We present a systematic evaluation of user-level DP for LLM fine-tuning on natural language generation tasks. Focusing on two mechanisms for achieving user-level DP guarantees, Group Privacy and User-wise DP-SGD, we investigate design choices like data selection strategies and parameter tuning for the best privacy-utility tradeoff. | Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning | [
"Lynn Chua",
"Badih Ghazi",
"Yangsibo Huang",
"Pritish Kamath",
"Ravi Kumar",
"Daogao Liu",
"Pasin Manurangsi",
"Amer Sinha",
"Chiyuan Zhang"
] | Conference | Poster | 2406.14322 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 204 |
|
null | https://openreview.net/forum?id=JXcXnJJSuL | @inproceedings{
jung2024informationtheoretic,
title={Information-Theoretic Distillation for Reference-less Summarization},
author={Jaehun Jung and Ximing Lu and Liwei Jiang and Faeze Brahman and Peter West and Pang Wei Koh and Yejin Choi},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=JXcXnJJSuL}
} | The current winning recipe for automatic summarization is using proprietary large-scale language models (LLMs) such as ChatGPT as is, or imitation learning from them as teacher models. While increasingly ubiquitous dependence on such large-scale language models is convenient, there remains an important question of whether small-scale models could have achieved competitive results, if we were to seek an alternative learning method---that allows for a more cost-efficient, controllable, yet powerful summarizer. We present InfoSumm, a novel framework to distill a powerful summarizer based on the information-theoretic objective for summarization, without relying on either the LLM's capability or human-written references. To achieve this, we first propose a novel formulation of the desiderata of summarization (saliency, faithfulness and brevity) through the lens of mutual information between the original document and the summary. Based on this formulation, we start off from Pythia-2.8B as the teacher model, which is not yet capable of summarization, then self-train the model to optimize for the information-centric measures of ideal summaries. Distilling from the improved teacher, we arrive at a compact but powerful summarizer with only 568M parameters that performs competitively against ChatGPT, without ever relying on ChatGPT's capabilities. Extensive analysis demonstrates that our approach outperforms in-domain supervised models in human evaluation, let alone state-of-the-art unsupervised methods, and wins over ChatGPT in controllable summarization. | Information-Theoretic Distillation for Reference-less Summarization | [
"Jaehun Jung",
"Ximing Lu",
"Liwei Jiang",
"Faeze Brahman",
"Peter West",
"Pang Wei Koh",
"Yejin Choi"
] | Conference | Poster | 2403.13780 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 205 |
|
null | https://openreview.net/forum?id=IW1PR7vEBf | @inproceedings{
behnamghader2024llmvec,
title={{LLM}2Vec: Large Language Models Are Secretly Powerful Text Encoders},
author={Parishad BehnamGhader and Vaibhav Adlakha and Marius Mosbach and Dzmitry Bahdanau and Nicolas Chapados and Siva Reddy},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=IW1PR7vEBf}
} | Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach that can transform any decoder-only LLM into a strong text encoder. LLM2Vec consists of three simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. We demonstrate the effectiveness of LLM2Vec by applying it to 4 popular LLMs ranging from 1.3B to 8B parameters and evaluate the transformed models on English word- and sequence-level tasks. We outperform encoder-only models by a large margin on word-level tasks and reach a new unsupervised state-of-the-art performance on the Massive Text Embeddings Benchmark (MTEB). Moreover, when combining LLM2Vec with supervised contrastive learning, we achieve state-of-the-art performance on MTEB among models that train only on publicly available data (as of May 24, 2024). Our strong empirical results and extensive analysis demonstrate that LLMs can be effectively transformed into universal text encoders in a parameter-efficient manner without the need for expensive adaptation or synthetic GPT-4 generated data. | LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders | [
"Parishad BehnamGhader",
"Vaibhav Adlakha",
"Marius Mosbach",
"Dzmitry Bahdanau",
"Nicolas Chapados",
"Siva Reddy"
] | Conference | Poster | 2404.05961 | [
"https://github.com/mcgill-nlp/llm2vec"
] | https://huggingface.co/papers/2404.05961 | 5 | 64 | 5 | 6 | [
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised",
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-supervised",
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse",
"knowledgator/Qwen-encoder-0.5B",
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp",
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-supervised",
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse",
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-supervised",
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-unsup-simcse",
"knowledgator/Sheared-LLaMA-encoder-1.3B",
"knowledgator/Qwen-encoder-1.5B",
"macadeliccc/dolphin-2.9-llama3-8b-emb",
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp",
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-unsup-simcse",
"knowledgator/Llama-encoder-1.0B",
"uzabase/LLM2Vec-Llama-2-7b-hf-mntp",
"uzabase/LLM2Vec-Llama-2-7b-hf-wikipedia-jp-mntp",
"uzabase/LLM2Vec-Swallow-7b-hf-wikipedia-jp-mntp",
"uzabase/LLM2Vec-Llama-2-7b-hf-mntp-unsup-simcse",
"uzabase/LLM2Vec-Llama-2-7b-hf-wikipedia-jp-mntp-unsup-simcse",
"uzabase/LLM2Vec-Swallow-7b-hf-wikipedia-jp-mntp-unsup-simcse",
"RichardErkhov/knowledgator_-_Llama-encoder-1.0B-gguf",
"RichardErkhov/knowledgator_-_Qwen-encoder-0.5B-gguf",
"RichardErkhov/knowledgator_-_Qwen-encoder-1.5B-gguf"
] | [] | [
"mteb/leaderboard",
"mteb/arena",
"Nymbo/MTEB-Arena",
"Abhijit-192-168-1-1/example_LLM2Vec"
] | 1 | 206 |
null | https://openreview.net/forum?id=IPZ28ZqD4I | @inproceedings{
yee2024faithful,
title={Faithful and Unfaithful Error Recovery in Chain of Thought},
author={Evelyn Yee and Alice Li and Chenyu Tang and Yeon Ho Jung and Ramamohan Paturi and Leon Bergen},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=IPZ28ZqD4I}
} | Large language models (LLMs) often improve their performance in downstream tasks when they generate Chain of Thought reasoning text before producing an answer. We investigate how LLMs recover from errors in Chain of Thought. Through analysis of error recovery behaviors, we find evidence for unfaithfulness in Chain of Thought, which occurs when models arrive at the correct answer despite invalid reasoning text. We identify factors that shift LLM recovery behavior: LLMs recover more frequently from obvious errors and in contexts that provide more evidence for the correct answer. Critically, these factors have divergent effects on faithful and unfaithful recoveries.
Our results indicate that there are distinct mechanisms driving faithful and unfaithful error recoveries. Selective targeting of these mechanisms may be able to drive down the rate of unfaithful reasoning and improve model interpretability. | Faithful and Unfaithful Error Recovery in Chain of Thought | [
"Evelyn Yee",
"Alice Li",
"Chenyu Tang",
"Yeon Ho Jung",
"Ramamohan Paturi",
"Leon Bergen"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 207 |
||
null | https://openreview.net/forum?id=INivcBeIDK | @inproceedings{
zhu2024autodan,
title={Auto{DAN}: Interpretable Gradient-Based Adversarial Attacks on Large Language Models},
author={Sicheng Zhu and Ruiyi Zhang and Bang An and Gang Wu and Joe Barrow and Zichao Wang and Furong Huang and Ani Nenkova and Tong Sun},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=INivcBeIDK}
} | Red-teaming Large Language Models (LLMs) requires jailbreak attacks that can comprehensively characterize the vulnerabilities of LLMs. Current blackbox attacks are limited by predefined jailbreak strategies, while whitebox attacks can only generate gibberish attack prompts detectable by perplexity filters. In this paper, we propose a new whitebox attack, named AutoDAN, that merges gradient-based token-wise optimization with controllable text generation. AutoDAN can generate coherent attack prompts on various LLMs that bypass any perplexity filter while having high attack success rates. Notably, these attack prompts spontaneously exhibit jailbreak strategies commonly seen in manual jailbreaks, such as hypothetical scenarios and non-English languages, without any prior knowledge of them. These interpretable attack prompts also generalize better to unseen harmful behaviors and transfer better to blackbox LLMs than gibberish ones. Moreover, we apply AutoDAN to two other red-teaming tasks: prompt leaking and generating falsely censored harmless user requests, demonstrating its flexibility over blackbox attacks. Our work offers an additional tool for red-teaming and understanding jailbreak mechanisms via interpretability. | AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models | [
"Sicheng Zhu",
"Ruiyi Zhang",
"Bang An",
"Gang Wu",
"Joe Barrow",
"Zichao Wang",
"Furong Huang",
"Ani Nenkova",
"Tong Sun"
] | Conference | Poster | 2310.15140 | [
"https://github.com/rotaryhammer/code-autodan"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 208 |
|
null | https://openreview.net/forum?id=IBCBMeAhmC | @inproceedings{
liu2024evaluating,
title={Evaluating Language Models for Efficient Code Generation},
author={Jiawei Liu and Songrun Xie and Junhao Wang and Yuxiang Wei and Yifeng Ding and LINGMING ZHANG},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=IBCBMeAhmC}
} | We introduce Differential Performance Evaluation (DPE), a framework designed to reliably evaluate Large Language Models (LLMs) for efficient code generation. Traditional coding benchmarks often fail to provide reliable insights into code efficiency, due to their reliance on simplistic test inputs and the absence of effective compound metrics. DPE addresses these issues by focusing on efficiency-demanding programming tasks and establishing an insightful compound metric for performance evaluation. DPE operates in two phases: To curate efficiency datasets, it selects efficiency-demanding tasks from existing coding benchmarks and generates computationally expensive inputs to stress the efficiency of LLM solutions. To assess the code efficiency, DPE profiles the new solution and compares it globally against a set of reference solutions that exhibit distinct efficiency levels, where the matched level defines its efficiency score. As a proof of concept, we use DPE to create EvalPerf, a benchmark with 121 performance-challenging coding tasks. Our comprehensive evaluation draws interesting findings on the efficiency impact of model sizes, instruction tuning, and prompting. For example, while the scaling law fails to account for code efficiency, general instruction tuning benefits both code correctness and efficiency. We also evaluate the evaluation by examining the effectiveness of DPE, showing that EvalPerf is reliable and convenient to use even across platforms. | Evaluating Language Models for Efficient Code Generation | [
"Jiawei Liu",
"Songrun Xie",
"Junhao Wang",
"Yuxiang Wei",
"Yifeng Ding",
"LINGMING ZHANG"
] | Conference | Poster | 2408.06450 | [
""
] | https://huggingface.co/papers/2408.06450 | 1 | 0 | 0 | 6 | [] | [] | [] | 1 | 209 |
null | https://openreview.net/forum?id=IA8CWtNkUr | @inproceedings{
sanyal2024early,
title={Early Weight Averaging meets High Learning Rates for {LLM} Pre-training},
author={Sunny Sanyal and Atula Tejaswi Neerkaje and Jean Kaddour and Abhishek Kumar and sujay sanghavi},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=IA8CWtNkUr}
} | Training Large Language Models (LLMs) incurs significant cost; hence, any strategy that accelerates model convergence is helpful. In this paper, we investigate the ability of a simple idea – checkpoint averaging along the trajectory of a training run – to improve both convergence and generalization quite early during training. Here we show that models trained with high learning rates observe higher gains due to checkpoint averaging. Furthermore, these gains are amplified when checkpoints are sampled with considerable spacing in training steps. Our training recipe outperforms conventional training and popular checkpoint averaging baselines such as exponential moving average (EMA) and stochastic moving average (SWA). We evaluate our training recipe by pre-training LLMs, where high learning rates are inherently preferred due to extremely large batch sizes. Specifically, we pre-trained nanoGPT-2 models of varying sizes—small (125M), medium (335M), and large (770M)—on the OpenWebText dataset, comprised of 9B tokens. Additionally, we present results for publicly available Pythia LLMs, ranging from 1B to 12B, which were trained on the PILE-deduped dataset containing 207B tokens. | Early Weight Averaging meets High Learning Rates for LLM Pre-training | [
"Sunny Sanyal",
"Atula Tejaswi Neerkaje",
"Jean Kaddour",
"Abhishek Kumar",
"sujay sanghavi"
] | Conference | Poster | 2306.03241 | [
"https://github.com/sanyalsunny111/early_weight_avg"
] | https://huggingface.co/papers/2306.03241 | 0 | 2 | 0 | 5 | [] | [] | [] | 1 | 210 |
null | https://openreview.net/forum?id=Hvq9RtSoHG | @inproceedings{
hu2024chainofsymbol,
title={Chain-of-Symbol Prompting For Spatial Reasoning in Large Language Models},
author={Hanxu Hu and Hongyuan Lu and Huajian Zhang and Yun-Ze Song and Wai Lam and Yue Zhang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Hvq9RtSoHG}
} | While conventional Chain-of-Thought prompting shows promising performance on various language tasks for LLMs, the spatial scenarios are nearly unexplored. In this paper, we first investigate the performance of LLMs on complex spatial planning and understanding tasks that require LLMs to understand a virtual spatial environment simulated via natural language and act or reason correspondingly in text. By evaluating on classic spatial planning scenarios through natural language descriptions, we found that current popular LLMs still lack the ability to handle spatial relationships in text. This raises a question: is natural language the best way to represent complex spatial environments for LLMs, or might alternatives such as symbolic representations be both more efficient and more effective? To this end, we propose a novel method called CoS (Chain-of-Symbol Prompting) that represents the spatial relationships with condensed symbols during the chained intermediate thinking steps. CoS is easy to use and does not need additional training on LLMs. Extensive experiments indicate that CoS clearly surpasses the performance of Chain-of-Thought (CoT) Prompting described in natural language on all three spatial planning tasks and an existing spatial QA benchmark, with even fewer tokens used in the inputs compared with CoT. The performance gain is strong, by up to 60.8% accuracy (from 31.8% to 92.6%) on Brick World scenarios for GPT-3.5-Turbo. CoS also markedly reduces the number of tokens in the prompt, by up to 65.8% of the tokens (from 407 to 139) for the intermediate steps from demonstrations on the Brick World task. Interestingly, we also observed an emergent ability to understand abstract symbols as model size scales up. | Chain-of-Symbol Prompting For Spatial Reasoning in Large Language Models | [
"Hanxu Hu",
"Hongyuan Lu",
"Huajian Zhang",
"Yun-Ze Song",
"Wai Lam",
"Yue Zhang"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 211 |
||
null | https://openreview.net/forum?id=Hi8jKh4HE9 | @inproceedings{
he2024what,
title={What is in Your Safe Data? Identifying Benign Data that Breaks Safety},
author={Luxi He and Mengzhou Xia and Peter Henderson},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Hi8jKh4HE9}
} | Current Large Language Models (LLMs), even those tuned for safety and alignment, are susceptible to jailbreaking. Some have found that just further fine-tuning an aligned model with benign data (i.e., data without harmful content) surprisingly leads to substantial degradation in safety. We delve into the data-centric aspects of why benign fine-tuning inadvertently contributes to jailbreaking. First, we represent fine-tuning data through two lenses: representation and gradient spaces. Additionally, we propose a bi-directional anchoring method that, during the selection process, prioritizes data points that are close to harmful examples and far from benign ones. Our approach effectively identifies subsets of benign data that are more likely to degrade the model's safety after fine-tuning.
Training on just 100 of these seemingly benign datapoints surprisingly leads to the fine-tuned model affirmatively responding to >70% of tested harmful requests, compared to <20% after fine-tuning on randomly selected data. We also observe that the selected data frequently appear as lists, bullet points, or math questions, indicating a systematic pattern in fine-tuning data that contributes to jailbreaking. | What is in Your Safe Data? Identifying Benign Data that Breaks Safety | [
"Luxi He",
"Mengzhou Xia",
"Peter Henderson"
] | Conference | Poster | 2404.01099 | [
"https://github.com/princeton-nlp/benign-data-breaks-safety"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 212 |
|
null | https://openreview.net/forum?id=HVK6nl3i97 | @inproceedings{
sun2024triforce,
title={TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding},
author={Hanshi Sun and Zhuoming Chen and Xinyu Yang and Yuandong Tian and Beidi Chen},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=HVK6nl3i97}
} | With large language models (LLMs) widely deployed in long content generation recently, there has emerged an increasing demand for efficient long-sequence inference support. However, key-value (KV) cache, which is stored to avoid re-computation, has emerged as a critical bottleneck by growing linearly in size with the sequence length. Due to the auto-regressive nature of LLMs, the entire KV cache will be loaded for every generated token, resulting in low utilization of computational cores and high latency. While various compression methods for KV cache have been proposed to alleviate this issue, they suffer from degradation in generation quality. We introduce TriForce, a hierarchical speculative decoding system that is scalable for long sequence generation. This approach leverages the original model weights and dynamic sparse KV cache via retrieval as a draft model, which serves as an intermediate layer in the hierarchy and is further speculated by a smaller model to reduce its drafting latency. TriForce not only facilitates impressive speedups for Llama2-7B-128K, achieving up to 2.31$\times$ on an A100 GPU, but also showcases scalability in handling even longer contexts. For the offloading setting on two RTX 4090 GPUs, TriForce achieves 0.108s/token—only half as slow as the auto-regressive baseline on an A100, which attains 7.78$\times$ on our optimized offloading system. Additionally, TriForce performs 4.86$\times$ faster than DeepSpeed-Zero-Inference on a single RTX 4090 GPU. TriForce's robustness is highlighted by its consistently outstanding performance across various temperatures. The code is available at https://github.com/Infini-AI-Lab/TriForce. | TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding | [
"Hanshi Sun",
"Zhuoming Chen",
"Xinyu Yang",
"Yuandong Tian",
"Beidi Chen"
] | Conference | Poster | 2404.11912 | [
"https://github.com/Infini-AI-Lab/TriForce"
] | https://huggingface.co/papers/2404.11912 | 4 | 16 | 1 | 5 | [] | [] | [] | 1 | 213 |
null | https://openreview.net/forum?id=HLoWN6m4fS | @inproceedings{
bordt2024elephants,
title={Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models},
author={Sebastian Bordt and Harsha Nori and Vanessa Cristiny Rodrigues Vasconcelos and Besmira Nushi and Rich Caruana},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=HLoWN6m4fS}
} | While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over. In this work, we address this concern for tabular data. Specifically, we introduce a variety of different techniques to assess whether a language model has seen a tabular dataset during training. This investigation reveals that LLMs have memorized many popular tabular datasets verbatim. We then compare the few-shot learning performance of LLMs on datasets that were seen during training to the performance on datasets released after training. We find that LLMs perform better on datasets seen during training, indicating that memorization leads to overfitting. At the same time, LLMs show non-trivial performance on novel datasets and are surprisingly robust to data transformations. We then investigate the in-context statistical learning abilities of LLMs. While LLMs are significantly better than random at solving statistical classification problems, the sample efficiency of few-shot learning lags behind traditional statistical learning algorithms, especially as the dimension of the problem increases. This suggests that much of the observed few-shot performance on novel real-world datasets is due to the LLM's world knowledge. Overall, our results highlight the importance of testing whether an LLM has seen an evaluation dataset during pre-training. We release the https://github.com/interpretml/LLM-Tabular-Memorization-Checker Python package to test LLMs for memorization of tabular datasets. | Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models | [
"Sebastian Bordt",
"Harsha Nori",
"Vanessa Cristiny Rodrigues Vasconcelos",
"Besmira Nushi",
"Rich Caruana"
] | Conference | Poster | 2404.06209 | [
"https://github.com/interpretml/llm-tabular-memorization-checker"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 214 |
|
null | https://openreview.net/forum?id=HDkNbfLQgu | @inproceedings{
golovneva2024reverse,
title={Reverse Training to Nurse the Reversal Curse},
author={Olga Golovneva and Zeyuan Allen-Zhu and Jason E Weston and Sainbayar Sukhbaatar},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=HDkNbfLQgu}
} | Large language models (LLMs) have a surprising failure: when trained on ``A has a feature B``, they do not generalize to ``B is a feature of A``, which is termed the Reversal Curse. Even when training with trillions of tokens this issue still appears due to Zipf's law -- hence even if we train on the entire internet. This work proposes an alternative training scheme, called $reverse$ $training$, whereby all words are used twice, doubling the amount of available tokens. The LLM is trained in both forward and reverse directions by reversing training strings while preserving (i.e., not reversing) chosen substrings, such as entities. We show that data matched reverse-trained models provide superior performance to standard models on standard tasks, and compute matched reverse-trained models provide far superior performance on reversal tasks, helping resolve the reversal curse issue. | Reverse Training to Nurse the Reversal Curse | [
"Olga Golovneva",
"Zeyuan Allen-Zhu",
"Jason E Weston",
"Sainbayar Sukhbaatar"
] | Conference | Poster | 2403.13799 | [
""
] | https://huggingface.co/papers/2403.13799 | 4 | 13 | 1 | 4 | [] | [] | [] | 1 | 215 |
null | https://openreview.net/forum?id=H1Edd5d2JP | @inproceedings{
jiang2024llmcausal,
title={{LLM}4Causal: Democratized Causal Tools for Everyone via Large Language Model},
author={Haitao Jiang and Lin Ge and Yuhe Gao and Jianian Wang and Rui Song},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=H1Edd5d2JP}
} | Large Language Models (LLMs) have shown their success in language understanding and reasoning on general topics. However, their capability to perform inference based on user-specified structured data and knowledge in corpus-rare concepts, such as causal decision-making, is still limited. In this work, we explore the possibility of fine-tuning an open-sourced LLM into LLM4Causal, which can identify the causal task, execute a corresponding function, and interpret its numerical results based on users’ queries and the provided dataset. Meanwhile, we propose a data generation process for more controllable GPT prompting and present two instruction-tuning datasets: (1) Causal-Retrieval-Bench for causal problem identification and input parameter extraction for causal function calling and (2) Causal-Interpret-Bench for in-context causal interpretation. By conducting end-to-end evaluations and two ablation studies, we showed that LLM4Causal can deliver end-to-end solutions for causal problems and provide easy-to-understand answers, which significantly outperforms the baselines. | LLM4Causal: Democratized Causal Tools for Everyone via Large Language Model | [
"Haitao Jiang",
"Lin Ge",
"Yuhe Gao",
"Jianian Wang",
"Rui Song"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 216 |
||
null | https://openreview.net/forum?id=GqDntYTTbk | @inproceedings{
zhu2024starlingb,
title={Starling-7B: Improving Helpfulness and Harmlessness with {RLAIF}},
author={Banghua Zhu and Evan Frick and Tianhao Wu and Hanlin Zhu and Karthik Ganesan and Wei-Lin Chiang and Jian Zhang and Jiantao Jiao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=GqDntYTTbk}
} | This paper presents Starling-7B, the current best-performing 7B chat model on Chatbot Arena, along with its training dataset Nectar, a high-quality preference dataset collected by prompting GPT-4 to rank responses. We propose an internal pairwise rating technique, where the model considers all pairings before providing a ranking decision, leveraging the proven pairwise rating capability of LLMs without the cost of individual pairwise calls. The resulting Nectar dataset comprises 182,954 chat prompts, each with seven responses from various models, ranked by GPT-4, equating to 3.8 million high-quality pairwise comparisons. We introduce Starling-RM-7B and Starling-RM-34B, the reward model suites trained with a K-wise preference loss on Nectar, outperforming pairwise counterparts. We benchmark reward model training pipelines across metrics such as human preference, truthfulness, and safety. Using Nectar and our new training pipeline, we fine-tuned Openchat-3.5 to create Starling-LM-7B, achieving significant performance enhancements on MT-Bench, AlpacaEval, and human evaluation metrics. To facilitate research and understanding of RLHF mechanisms, we open-source the Nectar dataset, the reward models, and the language models. | Starling-7B: Improving Helpfulness and Harmlessness with RLAIF | [
"Banghua Zhu",
"Evan Frick",
"Tianhao Wu",
"Hanlin Zhu",
"Karthik Ganesan",
"Wei-Lin Chiang",
"Jian Zhang",
"Jiantao Jiao"
] | Conference | Oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 217 |
||
null | https://openreview.net/forum?id=GMalvQu0XL | @inproceedings{
huang2024raven,
title={{RAVEN}: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models},
author={Jie Huang and Wei Ping and Peng Xu and Mohammad Shoeybi and Kevin Chang and Bryan Catanzaro},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=GMalvQu0XL}
} | In this paper, we investigate the in-context learning ability of retrieval-augmented encoder-decoder language models. We first conduct a comprehensive analysis of existing models and identify their limitations in in-context learning, primarily due to a mismatch between pretraining and inference, as well as a restricted context length. To address these issues, we propose RAVEN, a model that combines retrieval-augmented masked language modeling and prefix language modeling. We further introduce Fusion-in-Context Learning to enhance the few-shot performance by enabling the model to leverage more in-context examples without requiring additional training. Through extensive experiments, we demonstrate that our simple yet effective design significantly improves performance, achieving results comparable to the most advanced language models in certain scenarios, despite having substantially fewer parameters. Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning and encourages further research in this direction. | RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models | [
"Jie Huang",
"Wei Ping",
"Peng Xu",
"Mohammad Shoeybi",
"Kevin Chang",
"Bryan Catanzaro"
] | Conference | Poster | 2308.07922 | [
"https://github.com/jeffhj/raven"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 218 |
|
null | https://openreview.net/forum?id=GC4mXVfquq | @inproceedings{
luo2024jailbreakv,
title={JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks},
author={Weidi Luo and Siyuan Ma and Xiaogeng Liu and Xiaoyu Guo and Chaowei Xiao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=GC4mXVfquq}
} | With the rapid advancements in Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge. In this paper, we investigate an important and unexplored question of whether techniques that successfully jailbreak Large Language Models (LLMs) can be equally effective in jailbreaking MLLMs. To explore this issue, we introduce JailBreakV-28K, a pioneering benchmark designed to assess the transferability of LLM jailbreak techniques to MLLMs, thereby evaluating the robustness of MLLMs against diverse jailbreak attacks. Utilizing a dataset of 2,000 malicious queries that is also proposed in this paper, we generate 20,000 text-based jailbreak prompts using advanced jailbreak attacks on LLMs, alongside 8,000 image-based jailbreak inputs from recent MLLM jailbreak attacks; in total, our comprehensive dataset includes 28,000 test cases across a spectrum of adversarial scenarios. Our evaluation of 10 open-source MLLMs reveals a notably high Attack Success Rate (ASR) for attacks transferred from LLMs, highlighting a critical vulnerability in MLLMs that stems from their text-processing capabilities. Our findings underscore the urgent need for future research to address alignment vulnerabilities in MLLMs from both textual and visual inputs. | JailBreakV: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks | [
"Weidi Luo",
"Siyuan Ma",
"Xiaogeng Liu",
"Xiaoyu Guo",
"Chaowei Xiao"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 219 |
||
null | https://openreview.net/forum?id=G8LaO1P0xv | @inproceedings{
singhal2024a,
title={A Long Way to Go: Investigating Length Correlations in {RLHF}},
author={Prasann Singhal and Tanya Goyal and Jiacheng Xu and Greg Durrett},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=G8LaO1P0xv}
} | Great success has been reported using Reinforcement Learning from Human Feedback (RLHF) to align large language models, with open preference datasets enabling wider experimentation, particularly for "helpfulness" in tasks like dialogue and web question answering. Alongside these improvements, however, RLHF also often drives models to produce longer outputs. This paper demonstrates, on three diverse settings, that optimizing for response length is, much more than previously thought, a significant factor behind RLHF. Studying the strategies RL optimization uses to maximize reward, we find improvements in reward to largely be driven by increasing response length, instead of other features. Indeed, we find that even a *purely* length-based reward reproduces most downstream RLHF improvements over supervised fine-tuned models. Testing a comprehensive set of length-countering interventions, we identify the dominant source of these biases to be reward models, which, by studying training dynamics, we find are non-robust and easily influenced by length biases in preference data. | A Long Way to Go: Investigating Length Correlations in RLHF | [
"Prasann Singhal",
"Tanya Goyal",
"Jiacheng Xu",
"Greg Durrett"
] | Conference | Oral | 2310.03716 | [
"https://github.com/prasanns/rlhf-length-biases"
] | https://huggingface.co/papers/2310.03716 | 3 | 9 | 1 | 4 | [] | [] | [] | 1 | 220 |
null | https://openreview.net/forum?id=FmhPg4UJ9K | @inproceedings{
yang2024counting,
title={Counting Like Transformers: Compiling Temporal Counting Logic Into Softmax Transformers},
author={Andy Yang and David Chiang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=FmhPg4UJ9K}
} | Deriving formal bounds on the expressivity of transformers, as well as studying transformers that are constructed to implement known algorithms, are both effective methods for better understanding the computational power of transformers. Towards both ends, we introduce the temporal counting logic $\textbf{K}_t$[#] alongside the RASP variant $\textbf{C-RASP}$. We show they are equivalent to each other, and that together they are the best-known lower bound on the formal expressivity of future-masked soft attention transformers with unbounded input size. We prove this by showing all $\textbf{K}_t$[#] formulas can be compiled into these transformers without any additional positional embeddings. | Counting Like Transformers: Compiling Temporal Counting Logic Into Softmax Transformers | [
"Andy Yang",
"David Chiang"
] | Conference | Poster | 2404.04393 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 221 |
|
null | https://openreview.net/forum?id=Fkr1yVUb9G | @inproceedings{
teehan2024college,
title={Co{LLEG}e: Concept Embedding Generation for Large Language Models},
author={Ryan Teehan and Brenden Lake and Mengye Ren},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Fkr1yVUb9G}
} | Current language models are unable to quickly learn new concepts on the fly, often requiring a more involved finetuning process to learn robustly. Prompting in-context is not robust to context distractions, and often fails to confer much information about the new concepts. Classic methods for few-shot word learning in NLP, relying on global word vectors, are less applicable to large language models. In this paper, we introduce a novel approach named **CoLLEGe** (**Co**ncept **L**earning with **L**anguage **E**mbedding **Ge**neration) to modernize few-shot concept learning. CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions. Our primary meta-learning objective is simply to facilitate a language model to make next word predictions in forthcoming sentences, making it compatible with language model pretraining. We design a series of tasks to test new concept learning in challenging real-world scenarios, including new word acquisition, definition inference, and verbal reasoning, and demonstrate that our method succeeds in each setting **without task-specific training**. Code and data for our project can be found at [https://college-concept-learning.github.io/](https://college-concept-learning.github.io/). | CoLLEGe: Concept Embedding Generation for Large Language Models | [
"Ryan Teehan",
"Brenden Lake",
"Mengye Ren"
] | Conference | Poster | 2403.15362 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 222 |
|
null | https://openreview.net/forum?id=FgHpT6u7pk | @inproceedings{
gao2024coca,
title={Co{CA}: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration},
author={Jiahui Gao and Renjie Pi and Tianyang Han and Han Wu and Lanqing HONG and Lingpeng Kong and Xin Jiang and Zhenguo Li},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=FgHpT6u7pk}
} | The deployment of multimodal large language models (MLLMs) has demonstrated remarkable success in engaging in conversations involving visual inputs, thanks to the superior power of large language models (LLMs). Those MLLMs are typically built based on the LLMs, with an image encoder to process images into the token embedding space of the LLMs. However, the integration of visual modality has introduced a unique vulnerability: the MLLM becomes susceptible to malicious visual inputs and prone to generating sensitive or harmful responses, even though the LLM has been trained on textual datasets to align with human values. In this paper, we first raise the following question: "Do MLLMs possess safety-awareness against malicious image inputs?" We find that after adding a principle that specifies the safety requirement into the input of the MLLM, the model's safety awareness becomes boosted. This phenomenon verifies the existence of the MLLM's safety-awareness against image inputs; it is only weakened by the modality gap. We then introduce a simple yet effective technique termed CoCA, which amplifies the safety-awareness of the MLLM by calibrating its output distribution. Our proposed strategy helps the model reclaim its original safety awareness without losing its original capabilities. We verify the effectiveness of our approach on both multimodal safety and understanding benchmarks. | CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration | [
"Jiahui Gao",
"Renjie Pi",
"Tianyang Han",
"Han Wu",
"Lanqing HONG",
"Lingpeng Kong",
"Xin Jiang",
"Zhenguo Li"
] | Conference | Poster | 2409.11365 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 223 |
|
null | https://openreview.net/forum?id=FbhjirzvJG | @inproceedings{
ankner2024hydra,
title={Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding},
author={Zachary Ankner and Rishab Parthasarathy and Aniruddha Nrusimha and Christopher Rinard and Jonathan Ragan-Kelley and William Brandon},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=FbhjirzvJG}
} | To combat the memory bandwidth-bound nature of autoregressive LLM inference, previous research has proposed the speculative decoding framework. To perform speculative decoding, a small draft model proposes candidate continuations of the input sequence that are then verified in parallel by the base model. One way to specify the draft model, as used in the recent Medusa decoding framework, is as a collection of lightweight heads, called draft heads, that operate on the base model's hidden states. To date, all existing draft heads have been sequentially independent, meaning that they speculate tokens in the candidate continuation independently of any preceding tokens in the candidate continuation. In this work, we propose Hydra heads: a sequentially-dependent drop-in replacement for standard draft heads that significantly improves the accuracy of draft head speculation. We further explore the design space of Hydra head training objectives and architectures, and propose a carefully tuned Hydra head recipe, which we call Hydra++, that improves decoding throughput by up to 1.31x and 2.70x compared to Medusa decoding and autoregressive decoding respectively. Overall, Hydra heads are a simple and well-motivated intervention on standard draft heads that significantly improve the end-to-end speed of draft head-based speculative decoding. We make our code publicly available at https://github.com/zankner/Hydra. | Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding | [
"Zachary Ankner",
"Rishab Parthasarathy",
"Aniruddha Nrusimha",
"Christopher Rinard",
"Jonathan Ragan-Kelley",
"William Brandon"
] | Conference | Poster | 2402.05109 | [
""
] | https://huggingface.co/papers/2402.05109 | 2 | 0 | 1 | 6 | [] | [] | [] | 1 | 224 |
null | https://openreview.net/forum?id=FX4fUThO9H | @inproceedings{
yang2024model,
title={Model Autophagy Analysis to Explicate Self-consumption within Human-{AI} Interactions},
author={Shu Yang and Muhammad Asif Ali and Lu Yu and Lijie Hu and Di Wang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=FX4fUThO9H}
} | The increasing significance of large models and their multi-modal variants in societal information processing has ignited debates on social safety and ethics. However, there exists a paucity of comprehensive analysis for: (i) the interactions between human and artificial intelligence systems, and (ii) understanding and addressing the associated limitations. To bridge this gap, we present Model Autophagy Analysis for large models’ self-consumption explanation. We employ two distinct autophagous loops (referred to as “self-consumption loops”) to elucidate the suppression of human-generated information in the exchange between human and AI systems. Through comprehensive experiments on diverse datasets, we evaluate the capacities of generated models as both creators and disseminators of information. Our key findings reveal (i) A progressive prevalence of model-generated synthetic information over time within training datasets compared to human-generated information; (ii) The discernible tendency of large models, when acting as information transmitters across multiple iterations, to selectively modify or prioritize specific contents; and (iii) The potential for a reduction in the diversity of socially or human-generated information, leading to bottlenecks in the performance enhancement of large models and confining them to local optima. | Model Autophagy Analysis to Explicate Self-consumption within Human-AI Interactions | [
"Shu Yang",
"Muhammad Asif Ali",
"Lu Yu",
"Lijie Hu",
"Di Wang"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 225 |
||
null | https://openreview.net/forum?id=F9tqgOPXH5 | @inproceedings{
zala2024envgen,
title={EnvGen: Generating and Adapting Environments via {LLM}s for Training Embodied Agents},
author={Abhay Zala and Jaemin Cho and Han Lin and Jaehong Yoon and Mohit Bansal},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=F9tqgOPXH5}
} | Recent state-of-the-art approaches for embodied learning via interaction directly employ large language models (LLMs) as agents to determine the next steps in an environment. Due to their world knowledge and reasoning capabilities, LLM agents achieve stronger performance than previous smaller agents based on reinforcement learning (RL); however, frequently calling LLMs is slow and expensive. This begs an interesting question: Instead of directly employing LLMs as embodied agents, can we use LLMs’ reasoning capabilities to adaptively create training environments to help smaller embodied RL agents learn useful skills that they are weak at? In this work, we propose EnvGen, a novel framework to address this question. First, we prompt an LLM to generate training environments that allow agents to quickly learn different tasks in parallel. Concretely, the LLM is given the task description and environment simulator objectives that the agents should learn and is then asked to generate a set of environment configurations (e.g., different terrains, items initially given to agents, chances of finding certain objects, etc.). Next, we train a small RL agent in a mixture of the original and LLM-generated environments. Then, we enable the LLM to continuously adapt the generated environments to progressively improve the skills that the agent is weak at, by providing feedback to the LLM in the form of the agent’s performance. We demonstrate the usefulness of EnvGen with comprehensive experiments in Crafter and Heist game environments. We find that a small RL agent trained with EnvGen can outperform SOTA methods, including a GPT-4 agent, and learns long-horizon tasks significantly faster. We also show that using an LLM to adapt environments dynamically outperforms curriculum learning approaches and how the LLM adapts training environments to help improve RL agents’ weaker skills over time. Additionally, EnvGen is substantially more efficient as it only uses a small number of LLM calls (e.g., 4 in total), whereas LLM agents require one or more LLM calls per step (resulting in thousands of LLM calls per episode). We also present detailed analyses of EnvGen’s design choices. | EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents | [
"Abhay Zala",
"Jaemin Cho",
"Han Lin",
"Jaehong Yoon",
"Mohit Bansal"
] | Conference | Poster | 2403.12014 | [
""
] | https://huggingface.co/papers/2403.12014 | 2 | 0 | 0 | 5 | [] | [] | [] | 1 | 226 |
null | https://openreview.net/forum?id=F7aAhfitX6 | @inproceedings{
sun2024massive,
title={Massive Activations in Large Language Models},
author={Mingjie Sun and Xinlei Chen and J Zico Kolter and Zhuang Liu},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=F7aAhfitX6}
} | We observe an empirical phenomenon in Large Language Models (LLMs) -- very few activations exhibit significantly larger values than others (e.g., 100,000 times larger). We call them massive activations. First, we demonstrate the widespread existence of massive activations across various LLMs and characterize their locations. Second, we find their values largely stay constant regardless of the input, and they function as indispensable bias terms in LLMs. Third, these massive activations lead to the concentration of attention probabilities to their corresponding tokens, and further, implicit bias terms in the self-attention output. Last, we also study massive activations in Vision Transformers. | Massive Activations in Large Language Models | [
"Mingjie Sun",
"Xinlei Chen",
"J Zico Kolter",
"Zhuang Liu"
] | Conference | Poster | 2402.17762 | [
"https://github.com/locuslab/massive-activations"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 227 |
|
null | https://openreview.net/forum?id=F2yGbwXJAi | @inproceedings{
guo2024suspicion,
title={Suspicion Agent: Playing Imperfect Information Games with Theory of Mind Aware {GPT}-4},
author={Jiaxian Guo and Bo Yang and Paul Yoo and Bill Yuchen Lin and Yusuke Iwasawa and Yutaka Matsuo},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=F2yGbwXJAi}
} | Unlike perfect information games, where all elements are known to every player, imperfect information games emulate the real-world complexities of decision-making under uncertain or incomplete information.
GPT-4, the recent breakthrough in large language models (LLMs) trained on massive passive data, is notable for its knowledge retrieval and reasoning abilities. This paper delves into the applicability of GPT-4's learned knowledge for imperfect information games.
To achieve this, we introduce \textbf{Suspicion Agent}, an innovative agent that leverages GPT-4's capabilities for imperfect information games. With proper prompt engineering to achieve different functions, Suspicion Agent based on GPT-4 demonstrates remarkable adaptability across a range of imperfect information card games. Importantly, GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it can understand others and intentionally impact others' behavior. Leveraging this, we design a planning strategy that enables GPT-4 to competently play against different opponents, adapting its gameplay style as needed, while requiring only the game rules and descriptions of observations as input.
In the experiments, we qualitatively showcase the capabilities of Suspicion Agent across three different imperfect information games and then quantitatively evaluate it in Leduc Hold'em. As an exploration study, we show that Suspicion Agent can potentially outperform traditional algorithms without any specialized training or examples, but still cannot beat Nash-Equilibrium algorithms. In order to encourage and foster deeper insights within the community, we make our game-related data publicly available. | Suspicion Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4 | [
"Jiaxian Guo",
"Bo Yang",
"Paul Yoo",
"Bill Yuchen Lin",
"Yusuke Iwasawa",
"Yutaka Matsuo"
] | Conference | Poster | [
"https://github.com/cr-gjx/suspicion-agent"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 228 |
||
null | https://openreview.net/forum?id=Ecgev5ZZpq | @inproceedings{
yu2024evaluating,
title={Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models},
author={Simon Chi Lok Yu and Jie He and Pasquale Minervini and Jeff Z. Pan},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Ecgev5ZZpq}
} | With the emergence of large language models, such as LLaMA and OpenAI GPT-3, In-Context Learning (ICL) gained significant attention due to its effectiveness and efficiency. However, ICL is very sensitive to the choice, order, and verbaliser used to encode the demonstrations in the prompt. \emph{Retrieval-Augmented ICL} methods try to address this problem by leveraging retrievers to extract semantically related examples as demonstrations. While this approach yields more accurate results, its robustness against various types of adversarial attacks, including perturbations on test samples, demonstrations, and retrieved data, remains under-explored. Our study reveals that retrieval-augmented models can enhance robustness against test sample attacks, outperforming vanilla ICL with a 4.87\% reduction in Attack Success Rate (ASR); however, they exhibit overconfidence in the demonstrations, leading to a 2\% increase in ASR for demonstration attacks. Adversarial training can help improve the robustness of ICL methods to adversarial attacks; however, such a training scheme can be too costly in the context of LLMs. As an alternative, we introduce an effective training-free adversarial defence method, \emph{DARD}, which enriches the example pool with those attacked samples. We show that DARD yields improvements in performance and robustness, achieving a 15\% reduction in ASR over the baselines. Code and data are available jointly with this submission as supplementary material. | Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models | [
"Simon Chi Lok Yu",
"Jie He",
"Pasquale Minervini",
"Jeff Z. Pan"
] | Conference | Poster | [
"https://github.com/simonucl/adv-retreival-icl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 229 |
||
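For context on the retrieval-augmented ICL setup studied above, the sketch below selects demonstrations for a test input by embedding similarity and, in the spirit of DARD, enriches the demonstration pool with pre-computed adversarially perturbed copies. The encoder name, toy examples, and perturbations are placeholders, and generating the attacks themselves is out of scope.

```python
# Minimal sketch of retrieval-augmented in-context learning: pick the
# demonstrations most similar to the test input by embedding similarity.
# The DARD idea is approximated by adding perturbed copies to the pool.
from sentence_transformers import SentenceTransformer, util

clean_pool = [
    ("the movie was a delight from start to finish", "positive"),
    ("a tedious, joyless slog", "negative"),
]
attacked_pool = [  # assumption: perturbed variants produced offline by some attack
    ("the m0vie was a del1ght from start to finish", "positive"),
    ("a ted1ous, joy1ess sl0g", "negative"),
]
pool = clean_pool + attacked_pool

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder retriever
pool_emb = encoder.encode([x for x, _ in pool], convert_to_tensor=True)

def build_prompt(test_input: str, k: int = 2) -> str:
    """Retrieve the k nearest demonstrations and format an ICL prompt."""
    q = encoder.encode([test_input], convert_to_tensor=True)
    scores = util.cos_sim(q, pool_emb)[0]
    top = scores.topk(k).indices.tolist()
    demos = "\n".join(f"Review: {pool[i][0]}\nSentiment: {pool[i][1]}" for i in top)
    return f"{demos}\nReview: {test_input}\nSentiment:"

print(build_prompt("an utterly charming film"))
```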
null | https://openreview.net/forum?id=EKBPn7no4y | @inproceedings{
zhuang2024structlm,
title={Struct{LM}: Towards Building Generalist Models for Structured Knowledge Grounding},
author={Alex Zhuang and Ge Zhang and Tianyu Zheng and Xinrun Du and Junjie Wang and Weiming Ren and Wenhao Huang and Jie Fu and Xiang Yue and Wenhu Chen},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=EKBPn7no4y}
} | Structured data sources, such as tables, graphs, and databases, are ubiquitous knowledge sources. Despite the demonstrated capabilities of large language models (LLMs) on plain text, their proficiency in interpreting and utilizing structured data remains limited. Our investigation reveals a notable deficiency in LLMs' ability to process structured data, e.g., ChatGPT lags behind state-of-the-art (SoTA) model by an average of 35\%. To augment the Structured Knowledge Grounding (SKG) capabilities in LLMs, we have developed a comprehensive instruction tuning dataset comprising 1.1 million examples. Utilizing this dataset, we train a series of models, referred to as $\texttt{structlm}$, based on Mistral and the CodeLlama model family, ranging from 7B to 34B parameters. Our $\texttt{structlm}$ series surpasses task-specific models~\citep{UnifiedSKG2022} on 16 out of 18 evaluated datasets and establishes new SoTA performance on 8 SKG tasks. Furthermore, $\texttt{structlm}$ demonstrates strong generalization across 6 novel held-out SKG tasks, outperforming TableLlama by an average of 35\% and Flan-UL2 20B by an average of 10\%. Contrary to expectations, we observe that scaling model size offers marginal benefits, with $\texttt{structlm}$-34B showing only slight improvements over $\texttt{structlm}$-7B. This suggests that structured knowledge grounding is still a challenging task and requires more innovative design to push to a new level. We release the model weights and training dataset to the community, along with relevant code on Github. | StructLM: Towards Building Generalist Models for Structured Knowledge Grounding | [
"Alex Zhuang",
"Ge Zhang",
"Tianyu Zheng",
"Xinrun Du",
"Junjie Wang",
"Weiming Ren",
"Wenhao Huang",
"Jie Fu",
"Xiang Yue",
"Wenhu Chen"
] | Conference | Poster | 2402.16671 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 230 |
|
null | https://openreview.net/forum?id=EIjJ6ykPnh | @inproceedings{
singhal2024dpo,
title={D2{PO}: Discriminator-Guided {DPO} with Response Evaluation Models},
author={Prasann Singhal and Nathan Lambert and Scott Niekum and Tanya Goyal and Greg Durrett},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=EIjJ6ykPnh}
} | Varied approaches for aligning language models have been proposed, including supervised fine-tuning, RLHF, and direct optimization methods such as DPO. Although DPO has rapidly gained popularity due to its straightforward training process and competitive results, there is an open question of whether there remain practical advantages of using a discriminator, such as a reward model, to evaluate responses. We propose D2PO, discriminator-guided DPO, an approach for the online setting where preferences are being collected throughout learning. As we collect gold preferences, we use these not only to train our policy, but to train a discriminative response evaluation model to silver-label even more synthetic data for policy training. We explore this approach across a set of diverse tasks, including a realistic chat setting, and we find that our approach can lead to higher-quality outputs compared to DPO with the same data budget, and greater efficiency in terms of preference data requirements. Furthermore, we show that our silver labeling is most helpful when training the policy with DPO, outperforming traditional PPO, and benefits from maintaining a separate discriminator from the policy model. | D2PO: Discriminator-Guided DPO with Response Evaluation Models | [
"Prasann Singhal",
"Nathan Lambert",
"Scott Niekum",
"Tanya Goyal",
"Greg Durrett"
] | Conference | Poster | 2405.01511 | [
"https://github.com/PrasannS/d2po"
] | https://huggingface.co/papers/2405.01511 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 | 231 |
null | https://openreview.net/forum?id=EHPns3hVkj | @inproceedings{
alves2024tower,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte Miguel Alves and Jos{\'e} Pombal and Nuno M Guerreiro and Pedro Henrique Martins and Jo{\~a}o Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and Jos{\'e} G. C. de Souza and Andre Martins},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=EHPns3hVkj}
} | While general-purpose large language models (LLMs) demonstrate proficiency on multiple tasks within the domain of translation, approaches based on open LLMs are competitive only when specializing on a single task. In this paper, we propose a recipe for tailoring LLMs to multiple tasks present in translation workflows. We perform continued pretraining on a multilingual mixture of monolingual and parallel data, creating TowerBase, followed by finetuning on instructions relevant for translation processes, creating TowerInstruct. Our model surpasses open alternatives on several relevant tasks and is competitive with general-purpose closed LLMs. We will release the Tower models, our specialization dataset, an evaluation framework for LLMs focusing on the translation ecosystem, and a collection of model generations on our benchmark. | Tower: An Open Multilingual Large Language Model for Translation-Related Tasks | [
"Duarte Miguel Alves",
"José Pombal",
"Nuno M Guerreiro",
"Pedro Henrique Martins",
"João Alves",
"Amin Farajian",
"Ben Peters",
"Ricardo Rei",
"Patrick Fernandes",
"Sweta Agrawal",
"Pierre Colombo",
"José G. C. de Souza",
"Andre Martins"
] | Conference | Poster | 2402.17733 | [
"https://github.com/epfllm/megatron-llm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 232 |
|
null | https://openreview.net/forum?id=EEPBOB2Xww | @inproceedings{
zhang2024ferretv,
title={Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models},
author={Haotian Zhang and Haoxuan You and Philipp Dufter and Bowen Zhang and Chen Chen and Hong-You Chen and Tsu-Jui Fu and William Yang Wang and Shih-Fu Chang and Zhe Gan and Yinfei Yang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=EEPBOB2Xww}
} | While Ferret seamlessly integrates regional understanding into the Large Language Model (LLM) to facilitate its referring and grounding capability, it poses certain limitations: constrained by the pre-trained fixed visual encoder and failed to perform well on broader tasks. In this work, we unveil Ferret-v2, a significant upgrade to Ferret, with three key designs. (1) Any resolution grounding and referring: A flexible approach that effortlessly handles higher image resolution, improving the model's ability to process and understand images in greater detail. (2) Multi-granularity visual encoding: By integrating the additional DINOv2 encoder, the model learns better and diverse underlying contexts for global and fine-grained visual information. (3) A three-stage training paradigm: Besides image-caption alignment, an additional stage is proposed for high-resolution dense alignment before the final instruction tuning. Experiments show that Ferret-v2 provides substantial improvements over Ferret and other state-of-the-art methods, thanks to its high-resolution scaling and fine-grained visual processing. | Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models | [
"Haotian Zhang",
"Haoxuan You",
"Philipp Dufter",
"Bowen Zhang",
"Chen Chen",
"Hong-You Chen",
"Tsu-Jui Fu",
"William Yang Wang",
"Shih-Fu Chang",
"Zhe Gan",
"Yinfei Yang"
] | Conference | Poster | 2404.07973 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 233 |
|
null | https://openreview.net/forum?id=Dt6qXZsgaU | @inproceedings{
zhao2024selfguide,
title={Self-Guide: Better Task-Specific Instruction Following via Self-Synthetic Finetuning},
author={Chenyang Zhao and Xueying Jia and Vijay Viswanathan and Graham Neubig and Tongshuang Wu},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Dt6qXZsgaU}
} | Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts. However, prompting often leads models to make predictions with lower accuracy compared to finetuning a model with ample training data. On the other hand, while finetuning LLMs on task-specific data generally improves their performance, abundant annotated datasets are not available for all tasks. Previous work has explored generating task-specific data from state-of-the-art LLMs and using this data to finetune smaller models, but this approach requires access to a language model other than the one being trained, which introduces cost, scalability challenges, and legal hurdles associated with continuously relying on more powerful LLMs. In response to these, we propose Self-Guide, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM, then use these input-output pairs to finetune the student LLM itself. In our empirical evaluation of the Natural Instructions V2 benchmark, we find that Self-Guide improves the performance of LLM by a substantial margin. Specifically, we report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics. This sheds light on the promise of self-synthesized data guiding LLMs towards becoming task-specific experts without any external learning signals. | Self-Guide: Better Task-Specific Instruction Following via Self-Synthetic Finetuning | [
"Chenyang Zhao",
"Xueying Jia",
"Vijay Viswanathan",
"Graham Neubig",
"Tongshuang Wu"
] | Conference | Poster | 2407.12874 | [
"https://github.com/zhaochenyang20/Prompt2Model-Self-Guide"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 234 |
|
null | https://openreview.net/forum?id=DomBynQsqt | @inproceedings{
zhu2024mdiffusion,
title={3M-Diffusion: Latent Multi-Modal Diffusion for Language-Guided Molecular Structure Generation},
author={Huaisheng Zhu and Teng Xiao and Vasant G Honavar},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=DomBynQsqt}
} | Generating molecular structures with desired properties is a critical task with broad applications in drug discovery and materials design. We propose 3M-Diffusion, a novel multi-modal molecular graph generation method, to generate diverse, ideally novel molecular structures with desired properties. 3M-Diffusion encodes molecular graphs into a graph latent space which it then aligns with the text space learned by encoder-based LLMs from textual descriptions. It then reconstructs the molecular structure and atomic attributes based on the given text descriptions using the molecule decoder. It then learns a probabilistic mapping from the text space to the latent molecular graph space using a diffusion model. The results of our extensive experiments on several datasets demonstrate that 3M-Diffusion can generate high-quality, novel and diverse molecular graphs that semantically match the textual description provided. | 3M-Diffusion: Latent Multi-Modal Diffusion for Language-Guided Molecular Structure Generation | [
"Huaisheng Zhu",
"Teng Xiao",
"Vasant G Honavar"
] | Conference | Poster | 2403.07179 | [
"https://github.com/huaishengzhu/3mdiffusion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 235 |
|
null | https://openreview.net/forum?id=DbsLm2KAqP | @inproceedings{
li2024culturegen,
title={{CULTURE}-{GEN}: Revealing Global Cultural Perception in Language Models through Natural Language Prompting},
author={Huihan Li and Liwei Jiang and Nouha Dziri and Xiang Ren and Yejin Choi},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=DbsLm2KAqP}
} | As the utilization of large language models (LLMs) has proliferated world-wide, it is crucial for them to have adequate knowledge and fair representation for diverse global cultures. In this work, we uncover culture perceptions of three SOTA models on 110 countries and regions on 8 culture-related topics through culture-conditioned generations, and extract symbols from these generations that are associated to each culture by the LLM. We discover that culture-conditioned generation consist of linguistic “markers” that distinguish marginalized cultures apart from default cultures. We also discover that LLMs have an uneven degree of diversity in the culture symbols, and that cultures from different geographic regions have different presence in LLMs’ culture-agnostic generation. Our findings promote further research in studying the knowledge and fairness of global culture perception in LLMs. | CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting | [
"Huihan Li",
"Liwei Jiang",
"Nouha Dziri",
"Xiang Ren",
"Yejin Choi"
] | Conference | Poster | 2404.10199 | [
"https://github.com/huihanlhh/culture-gen"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 236 |
|
null | https://openreview.net/forum?id=DRffhKBVlE | @inproceedings{
li2024lite,
title={{LITE}: Modeling Environmental Ecosystems with Multimodal Large Language Models},
author={Haoran Li and Junqi Liu and Zexian Wang and Shiyuan Luo and Xiaowei Jia and Huaxiu Yao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=DRffhKBVlE}
} | The modeling of environmental ecosystems plays a pivotal role in the sustainable management of our planet. Accurate prediction of key environmental variables over space and time can aid in informed policy and decision-making, thus improving people's livelihood. Recently, deep learning-based methods have shown promise in modeling the spatial-temporal relationships for predicting environmental variables. However, these approaches often fall short in handling incomplete features and distribution shifts, which are commonly observed in environmental data due to the substantial cost of data collection and malfunctions in measuring instruments. To address these issues, we propose LITE -- a multimodal large language model for environmental ecosystems modeling. Specifically, LITE unifies different environmental variables by transforming them into natural language descriptions and line graph images. Then, LITE utilizes unified encoders to capture spatial-temporal dynamics and correlations in different modalities. During this step, the incomplete features are imputed by a sparse Mixture-of-Experts framework, and the distribution shift is handled by incorporating multi-granularity information from past observations. Finally, guided by domain instructions, a language model is employed to fuse the multimodal representations for the prediction. Our experiments demonstrate that LITE significantly enhances performance in environmental spatial-temporal prediction across different domains compared to the best baseline, with a 41.25\% reduction in prediction error. This justifies its effectiveness. | LITE: Modeling Environmental Ecosystems with Multimodal Large Language Models | [
"Haoran Li",
"Junqi Liu",
"Zexian Wang",
"Shiyuan Luo",
"Xiaowei Jia",
"Huaxiu Yao"
] | Conference | Poster | 2404.01165 | [
"https://github.com/hrlics/lite"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 237 |
|
null | https://openreview.net/forum?id=DOMP5AgwQz | @inproceedings{
huang2024ctikg,
title={{CTIKG}: {LLM}-Powered Knowledge Graph Construction from Cyber Threat Intelligence},
author={Liangyi Huang and Xusheng Xiao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=DOMP5AgwQz}
} | To gain visibility into the evolving threat landscape, knowledge of cyber threats has been aggressively collected across organizations and is often shared through Cyber Threat Intelligence (CTI). While knowledge of CTI can be shared via structured formats such as Indicators of Compromise (IOC), articles in technical blogs and posts in forums (referred to as CTI articles) provide more comprehensive descriptions of the observed real-world attacks. However, existing works can only analyze standard texts from mainstream cyber threat knowledge bases such as CVE and NVD, and lack the capability to link multiple CTI articles to uncover the relationships among security-related entities such as vulnerabilities. In this paper, we propose a novel approach, CTIKG, that utilizes prompt engineering to efficiently build a security-oriented knowledge graph from CTI articles based on LLMs. To mitigate the challenges of LLMs in randomness, hallucinations and token limitations, CTIKG divides an article into segments and employs multiple LLM agents with a dual memory design to (1) process each text segment separately and (2) summarize the results of the text segments to generate more accurate results. We evaluate CTIKG on two representative benchmarks built from real-world CTI articles, and the results show that CTIKG achieves 86.88% precision in building security-oriented knowledge graphs, achieving at least 30% improvements over the state-of-the-art techniques. We also demonstrate that the retry mechanism makes open-source language models outperform GPT-4 for building knowledge graphs. | CTIKG: LLM-Powered Knowledge Graph Construction from Cyber Threat Intelligence | [
"Liangyi Huang",
"Xusheng Xiao"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 238 |
||
null | https://openreview.net/forum?id=DMUGTMWrKZ | @inproceedings{
zhao2024enhancing,
title={Enhancing Adversarial Robustness of {LLM}s with Analytic Hierarchy Process},
author={Jiahao Zhao and Minzheng Wang and Nan Xu and YinLuo and Wenji Mao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=DMUGTMWrKZ}
} | With the increasing impact of large language models (LLMs) across diverse applications, ensuring the robustness of LLMs has become a pressing concern. Existing defense strategies are tailored to specific attack scenarios, which typically require high-cost model training and cannot rapidly respond to new threats. To tackle this issue, we conceptualize the defense strategy in LLMs as a cognitive process for dealing with complex user queries. Intuitively, faced with a spectrum of queries that potentially contain malicious perturbations, LLMs need human-like discernment to avoid being misled. Drawing inspiration from cognitive theory, we introduce an innovative Analytic Hierarchy Process (AHP) inference framework. Our methodology involves decomposing intricate tasks into manageable subtasks, prioritizing them, and systematically addressing each step. Our framework is based on AI feedback, eliminating the necessity for training and optimization. We evaluate the effectiveness of our framework in jailbreak attacks and robustness in downstream tasks using representative LLMs, including GPT-3.5 and Llama2. The experimental results demonstrate that our proposed framework significantly enhances the adversarial robustness of LLMs. | Enhancing Adversarial Robustness of LLMs with Analytic Hierarchy Process | [
"Jiahao Zhao",
"Minzheng Wang",
"Nan Xu",
"YinLuo",
"Wenji Mao"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 239 |
||
null | https://openreview.net/forum?id=D06yk3DBas | @inproceedings{
cassano2024can,
title={Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions},
author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Jacob Ginesin and Edward Berman and George Chakhnashvili and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=D06yk3DBas}
} | A significant amount of research is focused on developing and evaluating
large language models for a variety of code synthesis tasks. These include
synthesizing code from natural language, synthesizing tests from
code, and synthesizing explanations of code. In contrast, the behavior of
instructional code editing with LLMs is understudied.
These are tasks in which the model is provided a block of code and an instruction to modify the code.
The editing instruction may ask for a feature to be added or removed, describe a bug and ask
for a fix, or ask for a different kind of solution.
We introduce a carefully crafted benchmark of code editing tasks and use it
to evaluate several cutting edge LLMs. Our evaluation exposes a significant gap
between the capabilities of state-of-the-art open and closed models. For
example, even GPT-3.5-Turbo is better than the best open model at
code editing tasks. We also introduce a new, carefully curated, permissively licensed training dataset of code editing tasks
coupled with natural language instructions.
Using this training dataset, we show that we can fine-tune open Code LLMs to significantly
improve their code editing capabilities,
closing the gap between open and closed models.
All code, data, and models are available at https://github.com/nuprl/CanItEdit. | Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions | [
"Federico Cassano",
"Luisa Li",
"Akul Sethi",
"Noah Shinn",
"Abby Brennan-Jones",
"Jacob Ginesin",
"Edward Berman",
"George Chakhnashvili",
"Anton Lozhkov",
"Carolyn Jane Anderson",
"Arjun Guha"
] | Conference | Poster | 2312.12450 | [
"https://github.com/nuprl/canitedit"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 240 |
|
null | https://openreview.net/forum?id=CybBmzWBX0 | @inproceedings{
dubois2024lengthcontrolled,
title={Length-Controlled AlpacaEval: A Simple Debiasing of Automatic Evaluators},
author={Yann Dubois and Percy Liang and Tatsunori Hashimoto},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=CybBmzWBX0}
} | LLM-based auto-annotators have become a key component of the LLM development process due to their cost-effectiveness and scalability compared to human-based evaluation.
However, these auto-annotators can introduce complex biases that are hard to remove. Even simple, known confounders such as a preference for longer outputs remain in existing automated evaluation metrics.
We propose a simple regression analysis approach for controlling biases in auto-evaluations.
As a real case study, we focus on reducing the length bias of AlpacaEval, a fast and affordable benchmark for instruction-following LLMs that uses LLMs to estimate response quality.
Despite being highly correlated with human preferences, AlpacaEval is known to favor models that generate longer outputs.
We introduce a length-controlled AlpacaEval that aims to answer the counterfactual question: "What would the preference be if the model's and baseline's output had the same length?"
To achieve this, we first fit a GLM to predict the biased output of interest (auto-annotator preferences) based on the mediators we want to control for (length difference) and other relevant features.
We then obtain length-controlled preferences by predicting preferences while conditioning the GLM with a zero difference in lengths.
Length-controlling not only improves the robustness of the metric to manipulations in model verbosity; we also find that it increases the Spearman correlation with LMSYS' Chatbot Arena from 0.94 to 0.98.
We release our code and leaderboard. | Length-Controlled AlpacaEval: A Simple Debiasing of Automatic Evaluators | [
"Yann Dubois",
"Percy Liang",
"Tatsunori Hashimoto"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 241 |
||
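The debiasing recipe in the abstract above (fit a GLM on the mediator, then predict the preference at a zero length difference) can be illustrated in a few lines. The sketch below uses synthetic preference data and a plain logistic regression; the real AlpacaEval implementation has its own features and fitting code, so treat this only as a picture of the counterfactual question being asked.

```python
# Minimal sketch of length-controlled preference: regress the annotator's
# choice on the length difference, then read off the win rate at zero
# length difference. Data below is simulated and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
len_diff = rng.normal(0.0, 1.0, size=n)                # standardized (model_len - baseline_len)
quality_edge = 0.3                                      # latent quality advantage (illustrative)
p_win = 1 / (1 + np.exp(-(1.5 * len_diff + quality_edge)))  # simulated judge favors longer outputs
wins = rng.binomial(1, p_win)

glm = LogisticRegression(C=1e6).fit(len_diff.reshape(-1, 1), wins)  # large C ~ no regularization

raw_winrate = wins.mean()
# Counterfactual: preference predicted if outputs had equal length.
lc_winrate = glm.predict_proba(np.array([[0.0]]))[0, 1]
print(f"raw win rate: {raw_winrate:.3f}   length-controlled: {lc_winrate:.3f}")
```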
null | https://openreview.net/forum?id=CrzAj0kZjR | @inproceedings{
andukuri2024stargate,
title={{ST}aR-{GATE}: Teaching Language Models to Ask Clarifying Questions},
author={Chinmaya Andukuri and Jan-Philipp Fr{\"a}nken and Tobias Gerstenberg and Noah Goodman},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=CrzAj0kZjR}
} | When prompting language models to complete a task, users often leave important aspects unsaid. While asking questions could resolve this ambiguity (GATE; Li et al., 2023), models often struggle to ask good questions. We explore a language model's ability to self-improve (STaR; Zelikman et al., 2022) by rewarding the model for generating useful questions—a simple method we dub STaR-GATE. We generate a synthetic dataset of 25,500 unique persona-task prompts to simulate conversations between a pretrained language model—the $\texttt{Questioner}$—and a $\texttt{Roleplayer}$ whose preferences are unknown to the $\texttt{Questioner}$. By asking questions, the $\texttt{Questioner}$ elicits preferences from the $\texttt{Roleplayer}$. The $\texttt{Questioner}$ is iteratively finetuned on questions that increase the probability of high-quality responses to the task, which are generated by an $\texttt{Oracle}$ with access to the $\texttt{Roleplayer}$'s latent preferences. After two iterations of self-improvement, the $\texttt{Questioner}$ asks better questions, allowing it to generate responses that are preferred over responses from the initial model on $\textbf{72}$% of tasks. Our results indicate that teaching a language model to ask better questions leads to better personalized responses. | STaR-GATE: Teaching Language Models to Ask Clarifying Questions | [
"Chinmaya Andukuri",
"Jan-Philipp Fränken",
"Tobias Gerstenberg",
"Noah Goodman"
] | Conference | Poster | 2403.19154 | [
"https://github.com/scandukuri/assistant-gate"
] | https://huggingface.co/papers/2403.19154 | 2 | 0 | 0 | 4 | [
"scandukuri/mistral-stargate",
"scandukuri/mistral-stargate-m1",
"scandukuri/llama3-8b-stargate-m1",
"RichardErkhov/scandukuri_-_llama3-8b-stargate-m1-gguf"
] | [] | [] | 1 | 242 |
null | https://openreview.net/forum?id=CI7D2kiih1 | @inproceedings{
zayed2024should,
title={Should We Attend More or Less? Modulating Attention for Fairness},
author={Abdelrahman Zayed and Goncalo Mordido and Samira Shabanian and Sarath Chandar},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=CI7D2kiih1}
} | The advances in natural language processing (NLP) pose both opportunities and challenges. While recent progress enables the development of high-performing models for a variety of tasks, it also poses the risk of models learning harmful biases from the data, such as gender stereotypes. In this work, we investigate the role of attention, a widely-used technique in current state-of-the-art NLP models, in the propagation of social biases. Specifically, we study the relationship between the entropy of the attention distribution and the model's performance and fairness. We then propose a novel method for modulating attention weights to improve model fairness after training. Since our method is only applied post-training and pre-inference, it is an intra-processing method and is, therefore, less computationally expensive than existing in-processing and pre-processing approaches. Our results show an increase in fairness and minimal performance loss on different text classification and generation tasks using language models of varying sizes. | Should We Attend More or Less? Modulating Attention for Fairness | [
"Abdelrahman Zayed",
"Goncalo Mordido",
"Samira Shabanian",
"Sarath Chandar"
] | Conference | Poster | 2305.13088 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 243 |
|
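As a rough picture of what "attending more or less" means mechanically, the sketch below modulates the entropy of an attention distribution by putting a temperature on the attention logits. This is a generic illustration of the entropy knob, not the authors' exact intra-processing procedure.

```python
# Minimal sketch: a temperature on attention logits raises or lowers the
# entropy of the attention weights (attend more uniformly vs. more sharply).
import torch
import torch.nn.functional as F

def modulated_attention(q, k, v, temperature=1.0):
    """Scaled dot-product attention with an entropy-controlling temperature."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / (d ** 0.5)
    weights = F.softmax(scores / temperature, dim=-1)
    return weights @ v, weights

torch.manual_seed(0)
q = torch.randn(1, 4, 8)   # (batch, query_len, dim)
k = torch.randn(1, 6, 8)   # (batch, key_len, dim)
v = torch.randn(1, 6, 8)

for t in (0.5, 1.0, 2.0):
    _, w = modulated_attention(q, k, v, temperature=t)
    entropy = -(w * w.clamp_min(1e-9).log()).sum(-1).mean()
    print(f"temperature {t}: mean attention entropy {entropy.item():.3f}")
```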
null | https://openreview.net/forum?id=C0j44uRPcl | @inproceedings{
ko2024on,
title={On Robustness-Accuracy Characterization of Language Models using Synthetic Datasets},
author={Ching-Yun Ko and Pin-Yu Chen and Payel Das and Yung-Sung Chuang and Luca Daniel},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=C0j44uRPcl}
} | In recent years, language models (LMs) that were pretrained at scale on diverse data have proven to be a successful approach for solving different downstream tasks. However, new concerns about proper performance evaluation have been raised, especially for test-data leakage caused by accidentally including them during pretraining, or by indirectly exposing them through API calls for evaluation. Motivated by these, in this paper, we propose a new evaluation workflow that generates steerable synthetic language datasets and proxy tasks for benchmarking the performance of pre-trained LMs on sentence classification tasks. This approach allows for better characterization of the joint analysis on the robustness and accuracy of LMs without risking sensitive information leakage. It also provides a more controlled and private way to evaluate LMs that avoids overfitting specific test sets. Verified on various pretrained LMs, the proposed approach demonstrates promising high correlation with real downstream performance. | On Robustness-Accuracy Characterization of Language Models using Synthetic Datasets | [
"Ching-Yun Ko",
"Pin-Yu Chen",
"Payel Das",
"Yung-Sung Chuang",
"Luca Daniel"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 244 |
||
null | https://openreview.net/forum?id=BgvgMxY8s5 | @inproceedings{
hasan2024handling,
title={Handling Open-Vocabulary Constructs in Formalizing Specifications: Retrieval Augmented Parsing with Expert Knowledge},
author={Mohammad Saqib Hasan and Sayontan Ghosh and Dhruv Verma and Geoff Kuenning and Erez Zadok and Scott Smolka and Niranjan Balasubramanian},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=BgvgMxY8s5}
} | We study the problem of Open-vocabulary constructs (OVCs), ones that are not known beforehand, in the context of converting natural
language (NL) specification sentences into formal languages (e.g., LTL or code). Models tend to fare poorly on such OVCs, since they do
not have the necessary knowledge a priori. In such settings, a domain expert can provide the correct constructs based on their
preference or domain knowledge at inference time. Our goal is to effectively reuse this inference-time, expert-provided knowledge in future specification sentences without having to retrain the model. To this end, we first present a new parsing setting---\emph{dynamic knowledge-augmented parsing} (DKAP)---where, in addition to the input sentence, the model is given (dynamically growing) expert knowledge in the form of a key-value lexicon that associates NL phrases with correct OVC constructs. To address the DKAP problem, we propose ROLex, a retrieval-augmented parsing approach that uses the dynamic expert lexicon. ROLex consists of a retriever and a generator that are trained to find and use the relevant subset of the key-value store to produce the correct parse. One key challenge in realizing this solution is the lack of training data for the retrieval-augmented parsing. We show how we can make use of synthetic data generation, along with original task-level training data---i.e., the (NL sentence, FL statement) pairs---to carry out the requisite training for the retrieval-augmented parsing setting. Further, to improve training effectiveness, we have devised multiple strategies for focusing the model on the relevant subset of retrieved knowledge. Finally, we introduce a new evaluation paradigm designed to address the DKAP problem by simulating the dynamic expert-provided knowledge in three different formalization settings (NL2LTL, NL2Code, and NL2CMD). Our evaluations show that DKAP is a difficult challenge, and ROLex helps improve the performance of baseline models by using dynamic expert knowledge effectively. | Handling Open-Vocabulary Constructs in Formalizing Specifications: Retrieval Augmented Parsing with Expert Knowledge | [
"Mohammad Saqib Hasan",
"Sayontan Ghosh",
"Dhruv Verma",
"Geoff Kuenning",
"Erez Zadok",
"Scott Smolka",
"Niranjan Balasubramanian"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 245 |
||
null | https://openreview.net/forum?id=BaOAvPUyBO | @inproceedings{
wu2024do,
title={Do Language Models Plan Ahead for Future Tokens?},
author={Wilson Wu and John Xavier Morris and Lionel Levine},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=BaOAvPUyBO}
} | Do transformers ``think ahead'' during inference at a given position? It is known transformers prepare information in the hidden states of the forward pass at time step $t$ that is then used in future forward passes $t+\tau$. We posit two explanations for this phenomenon: pre-caching, in which off-diagonal gradient terms present during training result in the model computing features at $t$ irrelevant to the present inference task but useful for the future, and breadcrumbs, in which features most relevant to time step $t$ are already the same as those that would most benefit inference at time $t+\tau$. We test these hypotheses by training language models without propagating gradients to past timesteps, a scheme we formalize as myopic training. In a constructed synthetic data setting, we find clear evidence for pre-caching. In the autoregressive language modeling setting, our experiments are more suggestive of the breadcrumbs hypothesis, though pre-caching increases with model scale. | Do Language Models Plan Ahead for Future Tokens? | [
"Wilson Wu",
"John Xavier Morris",
"Lionel Levine"
] | Conference | Poster | 2404.00859 | [
"https://github.com/wiwu2390/futuregpt2-public"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 246 |
|
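The myopic-training scheme above (no gradients propagated to past timesteps) is easiest to see in a recurrent toy model, where a single detach on the carried state blocks exactly those gradients. The paper applies the idea to transformer training; this sketch only illustrates the stop-gradient mechanics, under that simplifying substitution.

```python
# Minimal sketch of "myopic training": detach the carried state at every step
# so no gradient flows back to computations at earlier timesteps.
import torch
import torch.nn as nn

class TinyRNNLM(nn.Module):
    def __init__(self, vocab=50, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.cell = nn.GRUCell(dim, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, myopic=False):
        h = torch.zeros(tokens.size(0), self.cell.hidden_size)
        logits = []
        for t in range(tokens.size(1)):
            if myopic:
                h = h.detach()          # gradients stop here: past steps get none
            h = self.cell(self.emb(tokens[:, t]), h)
            logits.append(self.head(h))
        return torch.stack(logits, dim=1)

torch.manual_seed(0)
model = TinyRNNLM()
tokens = torch.randint(0, 50, (2, 8))
loss_fn = nn.CrossEntropyLoss()

for myopic in (False, True):
    model.zero_grad()
    logits = model(tokens[:, :-1], myopic=myopic)
    loss = loss_fn(logits.reshape(-1, 50), tokens[:, 1:].reshape(-1))
    loss.backward()
    g = model.emb.weight.grad.norm().item()
    print(f"myopic={myopic}: loss {loss.item():.3f}, embedding grad norm {g:.3f}")
```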
null | https://openreview.net/forum?id=BDBdblmyzY | @inproceedings{
koo2024automatabased,
title={Automata-based constraints for language model decoding},
author={Terry Koo and Frederick Liu and Luheng He},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=BDBdblmyzY}
} | Language models (LMs) are often expected to generate strings in some formal language; for example, structured data, API calls, or code snippets.
Although LMs can be tuned to improve their adherence to formal syntax, this does not *guarantee* conformance, especially with smaller LMs suitable for large-scale deployment.
In addition, tuning requires significant resources, making it impractical for uncommon or task-specific formats.
To prevent downstream parsing errors we would ideally *constrain* the LM to only produce valid output, but this is severely complicated by tokenization, which is typically both ambiguous and misaligned with the formal grammar.
We solve these issues through the application of automata theory, deriving an efficient closed-form solution for the *regular languages*, a broad class of formal languages with many practical applications, including API calls or schema-guided JSON and YAML.
We also discuss pragmatic extensions for coping with the issue of high branching factor, and extend our techniques to *deterministic context-free languages*, which similarly admit an efficient closed-form solution.
Previous work on this topic (Willard and Louf, 2023) layers bespoke solutions onto automata, leading to problems with speed, correctness, and extensibility.
Instead, we reformulate the entire task in terms of automata so we can leverage well-studied and well-optimized algorithms.
Our system compiles constraints ~7,000x faster, is provably correct, and can be extended in a modular fashion. | Automata-based constraints for language model decoding | [
"Terry Koo",
"Frederick Liu",
"Luheng He"
] | Conference | Poster | 2407.08103 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 247 |
|
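The core operation in the abstract above is deciding, from an automaton state, which tokens may be emitted next. The sketch below does this for a toy regular language and a toy string vocabulary; the paper's contribution is performing the same masking efficiently for real tokenizers, which this sketch does not attempt.

```python
# Minimal sketch of automaton-constrained decoding: a character-level DFA
# decides which tokens keep the partially generated string inside the
# regular language "one or more digits, optionally followed by 'px'".
from typing import Optional

ACCEPTING = {1, 3}  # states: 0 start, 1 saw digits, 2 saw 'p', 3 saw 'px'

def step(state: int, ch: str) -> Optional[int]:
    if state == 0 and ch.isdigit():
        return 1
    if state == 1 and ch.isdigit():
        return 1
    if state == 1 and ch == "p":
        return 2
    if state == 2 and ch == "x":
        return 3
    return None  # dead: no valid continuation

def run(state: Optional[int], text: str) -> Optional[int]:
    for ch in text:
        if state is None:
            return None
        state = step(state, ch)
    return state

vocab = ["1", "2", "12", "p", "px", "x", "abc", "<eos>"]

def allowed_tokens(prefix: str):
    state = run(0, prefix)
    allowed = []
    for tok in vocab:
        if tok == "<eos>":
            if state in ACCEPTING:            # may stop only in an accepting state
                allowed.append(tok)
        elif run(state, tok) is not None:      # token must keep the DFA alive
            allowed.append(tok)
    return allowed

for prefix in ["", "12", "12p", "12px"]:
    print(f"prefix {prefix!r}: allowed -> {allowed_tokens(prefix)}")
```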
null | https://openreview.net/forum?id=BAakY1hNKS | @inproceedings{
wu2024autogen,
title={AutoGen: Enabling Next-Gen {LLM} Applications via Multi-Agent Conversations},
author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zhang and Shaokun Zhang and Jiale Liu and Ahmed Hassan Awadallah and Ryen W White and Doug Burger and Chi Wang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=BAakY1hNKS}
} | We present AutoGen, an open-source framework that allows developers to build LLM applications by composing multiple agents to converse with each other to accomplish tasks. AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools. It also enables developers to create flexible agent behaviors and conversation patterns for different applications using both natural language and code. AutoGen serves as a generic infrastructure and is widely used by AI practitioners and researchers to build diverse applications of various complexities and LLM capacities. We demonstrate the framework’s effectiveness with several pilot applications, with domains ranging from mathematics and coding to question-answering, supply-chain optimization, online decision-making, and entertainment. | AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversations | [
"Qingyun Wu",
"Gagan Bansal",
"Jieyu Zhang",
"Yiran Wu",
"Beibin Li",
"Erkang Zhu",
"Li Jiang",
"Xiaoyun Zhang",
"Shaokun Zhang",
"Jiale Liu",
"Ahmed Hassan Awadallah",
"Ryen W White",
"Doug Burger",
"Chi Wang"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 248 |
||
null | https://openreview.net/forum?id=B41hNBoWLo | @inproceedings{
maini2024tofu,
title={{TOFU}: A Task of Fictitious Unlearning for {LLM}s},
author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary Chase Lipton and J Zico Kolter},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=B41hNBoWLo}
} | Large language models trained on massive corpora of data from the web can memorize and reproduce sensitive or private data,
raising both legal and ethical concerns. Unlearning, or tuning models to forget information present in their training data, provides us with a way to protect private data after training. Although several methods exist for such unlearning, it is unclear to what extent they result in models equivalent to those where the data to be forgotten was never learned in the first place. To address this challenge, we present TOFU, a Task of Fictitious Unlearning, as a benchmark aimed at helping deepen our understanding of unlearning. We offer a dataset of $200$ diverse synthetic author profiles, each consisting of 20 question-answer pairs, and a subset of these profiles called the forget set that serves as the target for unlearning. We compile a suite of metrics that work together to provide a holistic picture of unlearning efficacy. Finally, we provide a set of baseline results from existing unlearning algorithms. Importantly, none of the baselines we consider show effective unlearning, motivating continued efforts to develop approaches for unlearning that effectively tune models so that they truly behave as if they were never trained on the forget data at all. | TOFU: A Task of Fictitious Unlearning for LLMs | [
"Pratyush Maini",
"Zhili Feng",
"Avi Schwarzschild",
"Zachary Chase Lipton",
"J Zico Kolter"
] | Conference | Poster | 2401.06121 | [
"https://github.com/ucsb-nlp-chang/uld"
] | https://huggingface.co/papers/2401.06121 | 3 | 14 | 0 | 5 | [
"locuslab/tofu_ft_phi-1.5",
"locuslab/tofu_ft_llama2-7b",
"RichardErkhov/locuslab_-_tofu_ft_llama2-7b-4bits",
"RichardErkhov/locuslab_-_tofu_ft_llama2-7b-8bits"
] | [
"locuslab/TOFU",
"LZ12DH/unlearning",
"kimperyang/TOFU-C",
"an1118/TOFU-C",
"an1118/TOFU-Cf",
"an1118/TOFU-Cr",
"kimperyang/TOFUCr1",
"kimperyang/TOFUCrP",
"kimperyang/TOFU-C-Shuffle",
"Gyikoo/TOFU-C-single",
"an1118/TOFU-Cbin",
"kimperyang/TOFU-C-Direct",
"Gyikoo/TOFU-C-All",
"an1118/TOFU-C-All"
] | [
"locuslab/tofu_leaderboard"
] | 1 | 249 |
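The TOFU data is hosted on the Hugging Face Hub, so a baseline unlearning experiment can start from something like the sketch below; the "forget10" configuration name and the question/answer column names are assumptions to verify against the dataset card at huggingface.co/datasets/locuslab/TOFU.

```python
# Minimal sketch of pulling the TOFU forget split from the Hub.
# Config and column names are assumptions; check the dataset card.
from datasets import load_dataset

forget = load_dataset("locuslab/TOFU", "forget10", split="train")
print(forget)                        # QA pairs targeted for unlearning
print(forget[0]["question"])
print(forget[0]["answer"])
```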
null | https://openreview.net/forum?id=Aaz6R4Tlwv | @inproceedings{
edwards2024synergpt,
title={Syner{GPT}: In-Context Learning for Personalized Drug Synergy Prediction and Drug Design},
author={Carl Edwards and Aakanksha Naik and Tushar Khot and Martin D. Burke and Heng Ji and Tom Hope},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=Aaz6R4Tlwv}
} | Predicting synergistic drug combinations can help accelerate discovery of cancer treatments, particularly therapies personalized to a patient's specific tumor via biopsied cells. In this paper, we propose a novel setting and models for *in-context drug synergy learning*. We are given a small "personalized dataset" of 10-20 drug synergy relationships in the context of specific cancer cell targets. Our goal is to predict additional drug synergy relationships in that context. Inspired by recent work that pre-trains a GPT language model (LM) to "in-context learn" common function classes, we devise novel pre-training schemes that enable a GPT model to in-context learn "drug synergy functions". Our model---which does not use any textual corpora, molecular fingerprints, protein interaction or any other domain-specific knowledge--- is able to achieve competitive results. We further integrate our in-context approach with a genetic algorithm to optimize model prompts and select synergy candidates to test after conducting a patient biopsy. Finally, we explore a novel task of inverse drug design which can potentially enable the design of drugs that synergize specifically to target a given patient's "personalized dataset'". Our findings could have an important impact on precision cancer medicine, and also raise intriguing questions on non-textual pre-training for LMs. | SynerGPT: In-Context Learning for Personalized Drug Synergy Prediction and Drug Design | [
"Carl Edwards",
"Aakanksha Naik",
"Tushar Khot",
"Martin D. Burke",
"Heng Ji",
"Tom Hope"
] | Conference | Poster | 2307.11694 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 250 |
|
null | https://openreview.net/forum?id=ADtL6fgNRv | @inproceedings{
hernandez2024inspecting,
title={Inspecting and Editing Knowledge Representations in Language Models},
author={Evan Hernandez and Belinda Z. Li and Jacob Andreas},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=ADtL6fgNRv}
} | Neural language models (LMs) represent facts about the world described by text. Sometimes these facts derive from training data (in most LMs, a representation of the word *banana* encodes the fact that bananas are fruits). Sometimes facts derive from input text itself (a representation of the sentence *I poured out the bottle* encodes the fact that the bottle became empty). We describe REMEDI, a method for learning to map statements in natural language to fact encodings in an LM's internal representation system. REMEDI encodings can be used as *knowledge editors*: when added to LM hidden representations, they modify downstream generation to be consistent with new facts. REMEDI encodings may also be used as *probes*: when compared to LM representations, they reveal which properties LMs already attribute to mentioned entities, in some cases making it possible to predict when LMs will generate outputs that conflict with background knowledge or input text. REMEDI thus links work on probing, prompting, and LM editing, and offers steps toward general tools for fine-grained inspection and control of knowledge in LMs. | Inspecting and Editing Knowledge Representations in Language Models | [
"Evan Hernandez",
"Belinda Z. Li",
"Jacob Andreas"
] | Conference | Poster | 2304.00740 | [
"https://github.com/evandez/remedi"
] | https://huggingface.co/papers/2304.00740 | 0 | 0 | 0 | 3 | [] | [] | [] | 1 | 251 |
null | https://openreview.net/forum?id=9gdZI7c6yr | @inproceedings{
liu2024aligning,
title={Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators},
author={Yinhong Liu and Han Zhou and Zhijiang Guo and Ehsan Shareghi and Ivan Vuli{\'c} and Anna Korhonen and Nigel Collier},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=9gdZI7c6yr}
} | Large Language Models (LLMs) have demonstrated promising capabilities as automatic evaluators in assessing the quality of generated natural language. However, LLMs still exhibit biases in evaluation and often struggle to generate coherent evaluations that align with human assessments. In this work, we first conduct a systematic study of the misalignment between LLM evaluators and human evaluation, revealing that existing calibration methods aimed at mitigating biases of LLMs are insufficient for effectively aligning LLM evaluators. Inspired by the use of preference data in RLHF, we formulate the evaluation as a ranking problem and introduce Pairwise-preference Search (PairS), an uncertainty-guided search method that employs LLMs to conduct pairwise comparisons locally and efficiently ranks candidate texts globally. PairS achieves state-of-the-art performance on representative evaluation tasks in long-form generations and demonstrates significant improvements over direct scoring. Furthermore, we provide insights into the role of pairwise preference in quantifying the
transitivity of LLMs and demonstrate how PairS benefits from calibration using debiased pairwise evaluations. | Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators | [
"Yinhong Liu",
"Han Zhou",
"Zhijiang Guo",
"Ehsan Shareghi",
"Ivan Vulić",
"Anna Korhonen",
"Nigel Collier"
] | Conference | Poster | 2403.16950 | [
"https://github.com/cambridgeltl/pairs"
] | https://huggingface.co/papers/2403.16950 | 1 | 4 | 0 | 7 | [] | [] | [] | 1 | 252 |
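The basic operation behind PairS, ranking candidates from pairwise preference queries, can be sketched with an ordinary merge sort whose comparator would be an LLM judge. The placeholder comparator below simply prefers longer strings, and the paper's uncertainty-guided search is not reproduced here.

```python
# Minimal sketch: rank candidate texts using only pairwise preference queries
# (O(n log n) comparisons via merge sort). The judge is a placeholder for an
# LLM prompted with the two candidates.
from typing import Callable, List

def prefer_longer(a: str, b: str) -> bool:
    """Placeholder pairwise judge: True if `a` is preferred over `b`."""
    return len(a) >= len(b)

def rank(candidates: List[str], prefer: Callable[[str, str], bool]) -> List[str]:
    """Merge sort driven entirely by pairwise preferences, best first."""
    if len(candidates) <= 1:
        return candidates
    mid = len(candidates) // 2
    left, right = rank(candidates[:mid], prefer), rank(candidates[mid:], prefer)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if prefer(left[i], right[j]):
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

summaries = ["ok", "a longer, more detailed summary", "short one", "mid length text"]
print(rank(summaries, prefer_longer))   # best-first under the placeholder judge
```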
null | https://openreview.net/forum?id=9Wmdk94oKF | @inproceedings{
shi2024chops,
title={{CHOPS}: {CH}at with custOmer Profile Systems for Customer Service with {LLM}s},
author={Jingzhe Shi and Jialuo Li and Qinwei Ma and Zaiwen Yang and Huan Ma and Lei Li},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=9Wmdk94oKF}
} | Businesses and software platforms are increasingly utilizing Large Language Models (LLMs) like GPT-3.5, GPT-4, GLM-3, and LLaMa-2 as chat assistants with file access or as reasoning agents for custom service. Current LLM-based customer service models exhibit limited integration with customer profiles and lack operational capabilities, while existing API integrations prioritize diversity over precision and error avoidance that are crucial in real-world scenarios for Customer Service. We propose an LLMs agent called **CHOPS** (**CH**at with cust**O**mer **P**rofile in existing **S**ystem) that: (1) efficiently utilizes existing databases or systems to access user information or interact with these systems based on existing guidance; (2) provides accurate and reasonable responses or executing required operations in the system while avoiding harmful operations; and (3) leverages the combination of small and large LLMs together to provide satisfying performance while having decent inference cost. We introduce a practical dataset, *CPHOS-dataset*, including a database, some guiding files, and QA pairs collected from *CPHOS*, which employs an online platform to facilitate the organization of simulated Physics Olympiads for high school teachers and students. We conduct extensive experiments to validate the performance of our proposed **CHOPS** architecture using the *CPHOS-dataset*, aiming to demonstrate how LLMs can enhance or serve as alternatives to human customer service. | CHOPS: CHat with custOmer Profile Systems for Customer Service with LLMs | [
"Jingzhe Shi",
"Jialuo Li",
"Qinwei Ma",
"Zaiwen Yang",
"Huan Ma",
"Lei Li"
] | Conference | Poster | 2404.01343 | [
"https://github.com/jingzheshi/chops"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 253 |
|
null | https://openreview.net/forum?id=9JY1QLVFPZ | @inproceedings{
zhang2024forcing,
title={Forcing Diffuse Distributions out of Language Models},
author={Yiming Zhang and Avi Schwarzschild and Nicholas Carlini and J Zico Kolter and Daphne Ippolito},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=9JY1QLVFPZ}
} | Despite being trained specifically to follow user instructions, today’s instruction-tuned
language models perform poorly when instructed to produce random outputs.
For example, when prompted to pick a number uniformly between one and
ten Llama-2-13B-chat disproportionately favors the number five, and when tasked
with picking a first name at random, Mistral-7B-Instruct chooses Avery 40 times
more often than we would expect based on the U.S. population. When these language
models are used for real-world tasks where diversity of outputs is crucial,
such as language model assisted dataset construction, their inability to produce
diffuse distributions over valid choices is a major hurdle. In this work, we propose
a fine-tuning method that encourages language models to output distributions that
are diffuse over valid outcomes. The methods we introduce generalize across a
variety of tasks and distributions and make large language models practical for
synthetic dataset generation with little human intervention. | Forcing Diffuse Distributions out of Language Models | [
"Yiming Zhang",
"Avi Schwarzschild",
"Nicholas Carlini",
"J Zico Kolter",
"Daphne Ippolito"
] | Conference | Poster | 2404.10859 | [
"https://github.com/y0mingzhang/diffuse-probabilities"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 254 |
|
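One way to make the training objective in the abstract above concrete is a KL term that pulls the model's next-token distribution, restricted to the valid outcomes, toward uniform. The sketch below computes such a loss on stand-in logits; the authors' actual fine-tuning recipe may differ.

```python
# Minimal sketch: KL(uniform over valid answer tokens || model distribution
# renormalized over those tokens), a loss that encourages diffuse outputs.
import torch
import torch.nn.functional as F

def diffuse_loss(logits: torch.Tensor, valid_token_ids: torch.Tensor) -> torch.Tensor:
    valid_logits = logits[..., valid_token_ids]              # (batch, n_valid)
    log_p = F.log_softmax(valid_logits, dim=-1)              # renormalize over valid options
    uniform = torch.full_like(log_p, 1.0 / valid_token_ids.numel())
    return F.kl_div(log_p, uniform, reduction="batchmean")   # sum u * (log u - log_p)

torch.manual_seed(0)
vocab_size, batch = 100, 4
logits = torch.randn(batch, vocab_size, requires_grad=True)  # stand-in for model outputs
valid = torch.tensor([10, 11, 12, 13, 14])                   # ids of the valid outcomes

loss = diffuse_loss(logits, valid)
loss.backward()                                              # gradients would update the LM
print(f"diffuse loss: {loss.item():.4f}")
```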
null | https://openreview.net/forum?id=9Ik05cycLq | @inproceedings{
kumar2024certifying,
title={Certifying {LLM} Safety against Adversarial Prompting},
author={Aounon Kumar and Chirag Agarwal and Suraj Srinivas and Aaron Jiaxun Li and Soheil Feizi and Himabindu Lakkaraju},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=9Ik05cycLq}
} | Large language models (LLMs) are vulnerable to adversarial attacks, which add maliciously designed token sequences to bypass the model’s safety guardrails and cause it to produce harmful content. In this work, we introduce erase-and-check, the first framework to defend against adversarial prompts with certifiable safety guarantees. Given a prompt, our erase-and-check method erases tokens individually and inspects the resulting subsequences using a safety filter, declaring it harmful if any of the subsequences are detected as harmful. Our safety filters are implemented by leveraging Llama 2 and DistilBERT. We theoretically demonstrate that our method detects harmful prompts with accuracy at least as high as the safety filter. Additionally, we propose three efficient empirical defenses inspired by our erase-and-check (EC) method: i) RandEC, a randomized subsampling version of erase-and-check; ii) GreedyEC, which greedily erases tokens that maximize the softmax score of the harmful class; and iii) GradEC, which uses gradient information to optimize the tokens to erase. Extensive empirical evaluation with real-world datasets demonstrates the effectiveness of the proposed methods in defending against state-of-the-art adversarial prompting attacks. | Certifying LLM Safety against Adversarial Prompting | [
"Aounon Kumar",
"Chirag Agarwal",
"Suraj Srinivas",
"Aaron Jiaxun Li",
"Soheil Feizi",
"Himabindu Lakkaraju"
] | Conference | Poster | 2309.02705 | [
"https://github.com/aounon/certified-llm-safety"
] | https://huggingface.co/papers/2309.02705 | 0 | 0 | 0 | 6 | [] | [] | [
"TrustSafeAI/GradientCuff-Jailbreak-Defense"
] | 1 | 255 |
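The erase-and-check procedure above is simple to state and to implement. The sketch below erases every subset of up to d tokens and flags the prompt if any resulting subsequence trips the safety filter; the keyword filter is a placeholder for the Llama-2 / DistilBERT filters used in the paper, and exhaustive subset erasure is only practical for small d.

```python
# Minimal sketch of erase-and-check: flag a prompt as harmful if the safety
# filter flags the prompt or any subsequence obtained by erasing up to
# `max_erase` tokens.
from itertools import combinations
from typing import List

def is_harmful(tokens: List[str]) -> bool:
    """Placeholder safety filter; the paper uses an LLM or DistilBERT classifier."""
    return "bomb" in tokens

def erase_and_check(tokens: List[str], max_erase: int = 2) -> bool:
    n = len(tokens)
    for k in range(max_erase + 1):
        for erased in combinations(range(n), k):
            subseq = [t for i, t in enumerate(tokens) if i not in erased]
            if is_harmful(subseq):
                return True
    return False

benign = "how to bake bread at home".split()
attacked = "how to build a bomb xqzt".split()   # adversarial suffix appended
print(erase_and_check(benign))    # False
print(erase_and_check(attacked))  # True
```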
null | https://openreview.net/forum?id=98ekcwQqb7 | @inproceedings{
jin2024latent,
title={Latent Causal Probing: A Formal Perspective on Probing with Causal Models of Data},
author={Charles Jin},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=98ekcwQqb7}
} | As language models (LMs) deliver increasing performance on a range of NLP tasks, *probing classifiers* have become an indispensable technique in the effort to better understand their inner workings. A typical setup involves (1) defining an auxiliary task consisting of a dataset of text annotated with labels, then (2) supervising small classifiers to predict the labels from the representations of a pretrained LM as it processes the dataset. A high probing accuracy is interpreted as evidence that the LM has learned to perform the auxiliary task as an unsupervised byproduct of its original pretraining objective. Despite the widespread usage of probes, however, the robust design and analysis of probing experiments remains a challenge. We develop a formal perspective on probing using *structural causal models* (SCM). Specifically, given an SCM which explains the distribution of tokens observed during training, we frame the central hypothesis as whether the LM has learned to represent the latent variables of the SCM. Empirically, we extend a recent study of LMs in the context of a synthetic grid-world navigation task, where having an exact model of the underlying causal structure allows us to draw strong inferences from the result of probing experiments. Our techniques provide robust empirical evidence for the ability of LMs to induce the latent concepts underlying text. | Latent Causal Probing: A Formal Perspective on Probing with Causal Models of Data | [
"Charles Jin"
] | Conference | Poster | 2407.13765 | [
"https://github.com/charlesjin/emergent-semantics"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 256 |
|
null | https://openreview.net/forum?id=95TayIeqJ4 | @inproceedings{
tam2024tmmlu,
title={{TMMLU}+: An Improved Traditional Chinese Evaluation Suite for Foundation Models},
author={Zhi Rui Tam and Ya Ting Pai and Yen-Wei Lee and Hong-Han Shuai and Jun-Da Chen and Wei Min Chu and Sega Cheng},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=95TayIeqJ4}
} | We present TMMLU+, a new benchmark designed for Traditional Chinese language understanding. TMMLU+ is a multi-choice question-answering dataset with 66 subjects from elementary to professional level. It is six times larger and boasts a more balanced subject distribution than its predecessor, Taiwan Massive Multitask Language Understanding (TMMLU). We also benchmark closed-source models and 26 open-weight Chinese large language models (LLMs) of parameters ranging from 1.8B to 72B on the proposed TMMLU+. Our findings reveal that (1.) Traditional Chinese models still trail behind their Simplified Chinese counterparts, highlighting a need for more focused advancements in LLMs catering to Traditional Chinese. (2.) Current LLMs still fall short of human performance in average scores, indicating a potential need for future research to delve deeper into social science and humanities subjects. (3.) Among all the tokenization compression metrics examined, we identify that only the fertility score uniquely demonstrates strong correlations with our benchmark results. We foresee that TMMLU+ will pinpoint areas for future model improvement, thereby narrowing the gap between machine and human linguistic capabilities and supporting researchers in developing Traditional Chinese LLMs. Our dataset, along with the benchmark source code, is accessible at huggingface.co/datasets/ikala/tmmluplus. | TMMLU+: An Improved Traditional Chinese Evaluation Suite for Foundation Models | [
"Zhi Rui Tam",
"Ya Ting Pai",
"Yen-Wei Lee",
"Hong-Han Shuai",
"Jun-Da Chen",
"Wei Min Chu",
"Sega Cheng"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 257 |
||
null | https://openreview.net/forum?id=8w0RApM5yG | @inproceedings{
kumari2024bumblebee,
title={BumbleBee: Dynamic {KV}-Cache Streaming Submodular Summarization for Infinite-Context Transformers},
author={Lilly Kumari and Shengjie Wang and Tianyi Zhou and Nikhil Sarda and Anthony Rowe and Jeff Bilmes},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=8w0RApM5yG}
} | Transformer-based Large Language Models (LLMs) have shown tremendous advancements across various domains. However, their need to maintain key-value representations (a KV cache) of previously seen tokens in the GPU memory leads to a significant memory overhead that scales linearly with the sequence length and batch size. With the advent of extremely long context LLMs, efficiently modeling long-range dependencies becomes challenging. In this work, we focus on the problem of long context summarization by formulating it as a subset selection problem. Specifically, we propose a novel submodular optimization framework called BumbleBee that uses a mixture of submodular functions to balance the diversity amongst the context tokens in the key embedding space and their importance computed using accumulated attention attributed to them across different input tokens. Our framework can work for both the LLM prefill and decoding phases, utilizing offline or online versions of our submodular algorithm, respectively. While the context sizes grow to be only as large as the summary size, the temporal extent of the contexts may grow unboundedly, justifying the moniker ‘‘Infinite-Context Transformers.’’ Empirically, we validate the effectiveness of our framework across 13 different datasets using the LLaMA 7B and 13B models. Our results show that BumbleBee improves accuracy compared to state-of-the-art techniques at comparable context reduction ratios. | BumbleBee: Dynamic KV-Cache Streaming Submodular Summarization for Infinite-Context Transformers | [
"Lilly Kumari",
"Shengjie Wang",
"Tianyi Zhou",
"Nikhil Sarda",
"Anthony Rowe",
"Jeff Bilmes"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 258 |
||
null | https://openreview.net/forum?id=8tKjqqMM5z | @inproceedings{
luohe2024keep,
title={Keep the Cost Down: A Review on Methods to Optimize {LLM}{\textquoteright}s {KV}-Cache Consumption},
author={Shi Luohe and Hongyi Zhang and Yao Yao and Zuchao Li and hai zhao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=8tKjqqMM5z}
} | Large Language Models (LLMs), epitomized by ChatGPT's release in late 2022, have revolutionized various industries with their advanced language comprehension. However, their efficiency is challenged by the Transformer architecture's struggle with handling long texts. KV-Cache has emerged as a pivotal solution to this issue, converting the time complexity of token generation from quadratic to linear, albeit with increased GPU memory overhead proportional to conversation length. With the development of the LLM community and academia, various KV-Cache compression methods have been proposed. In this review, we dissect the various properties of KV-Cache and elaborate on various methods currently used to optimize the KV-Cache space usage of LLMs. These methods span the pre-training phase, deployment phase, and inference phase, and we summarize the commonalities and differences among these methods. Additionally, we list some metrics for evaluating the long-text capabilities of large language models, from both efficiency and capability perspectives. Our review thus sheds light on the evolving landscape of LLM optimization, offering insights into future advancements in this dynamic field. | Keep the Cost Down: A Review on Methods to Optimize LLM’s KV-Cache Consumption | [
"Shi Luohe",
"Hongyi Zhang",
"Yao Yao",
"Zuchao Li",
"hai zhao"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 259 |
||
null | https://openreview.net/forum?id=8TdcXwfNRB | @inproceedings{
mishra-sharma2024paperclip,
title={{PAPERCLIP}: Associating Astronomical Observations and Natural Language with Multi-Modal Models},
author={Siddharth Mishra-Sharma and YIDING SONG and Jesse Thaler},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=8TdcXwfNRB}
} | We present PAPERCLIP (Proposal Abstracts Provide an Effective Representation for Contrastive Language-Image Pre-training), a method which associates astronomical observations imaged by telescopes with natural language using a neural network model. The model is fine-tuned from a pre-trained Contrastive Language-Image Pre-training (CLIP) model using successful observing proposal abstracts and corresponding downstream observations, with the abstracts optionally summarized via guided generation using large language models (LLMs). Using observations from the Hubble Space Telescope (HST) as an example, we show that the fine-tuned model embodies a meaningful joint representation between observations and natural language through tests targeting image retrieval (i.e., finding the most relevant observations using natural language queries) and description retrieval (i.e., querying for astrophysical object classes and use cases most relevant to a given observation). Our study demonstrates the potential for using generalist foundation models rather than task-specific models for interacting with astronomical data by leveraging text as an interface. | PAPERCLIP: Associating Astronomical Observations and Natural Language with Multi-Modal Models | [
"Siddharth Mishra-Sharma",
"YIDING SONG",
"Jesse Thaler"
] | Conference | Poster | 2403.08851 | [
"https://github.com/smsharma/paperclip-hubble"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 260 |
|
null | https://openreview.net/forum?id=7ysaJGs7zY | @inproceedings{
shahgir2024illusionvqa,
title={Illusion{VQA}: A Challenging Optical Illusion Dataset for Vision Language Models},
author={Haz Sameen Shahgir and Khondker Salman Sayeed and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yue Dong and Rifat Shahriyar},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7ysaJGs7zY}
} | The advent of Vision Language Models (VLM) has allowed researchers to investigate the visual understanding of a neural network using natural language. Beyond object classification and detection, VLMs are capable of visual comprehension and common-sense reasoning. This naturally led to the question: How do VLMs respond when the image itself is inherently *unreasonable*? To this end, we present IllusionVQA: a diverse dataset of challenging optical illusions and hard-to-interpret scenes to test the capability of VLMs in two distinct multiple-choice VQA tasks - comprehension and soft localization. GPT4V, the best performing VLM, achieves 62.99\% accuracy (4-shot) on the comprehension task and 49.7\% on the localization task (4-shot and Chain-of-Thought). Human evaluation reveals that humans achieve 91.03\% and 100\% accuracy in comprehension and localization. We discover that In-Context Learning (ICL) and Chain-of-Thought reasoning substantially degrade the performance of Gemini-Pro on the localization task. Tangentially, we discover a potential weakness in the ICL capabilities of VLMs: they fail to locate optical illusions even when the correct answer is in the context window as a few-shot example. | IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models | [
"Haz Sameen Shahgir",
"Khondker Salman Sayeed",
"Abhik Bhattacharjee",
"Wasi Uddin Ahmad",
"Yue Dong",
"Rifat Shahriyar"
] | Conference | Poster | 2403.15952 | [
"https://github.com/csebuetnlp/illusionvqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 261 |
|
null | https://openreview.net/forum?id=7xUtka9ck9 | @inproceedings{
haller2024yes,
title={Yes, no, maybe? Revisiting language models' response stability under paraphrasing for the assessment of political leaning},
author={Patrick Haller and Jannis Vamvas and Lena Ann J{\"a}ger},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7xUtka9ck9}
} | An increasing number of studies are aimed at uncovering characteristics such as personality traits or political leanings of language models (LMs), using questionnaires developed for human respondents. From this previous body of work, it is evident that models are highly sensitive to prompt design, including the phrasing of questions and statements, as well as the format of the expected response (e.g., forced choice vs. open-ended). These sensitivities then often lead to inconsistent responses. However, most studies assess response stability on a small scale with low statistical power, e.g., using fewer than ten paraphrases of the same question.
In this work, we investigate the stability of responses to binary forced-choice questions using a large number of paraphrases. Specifically, we probe both masked language models (MLMs) and left-to-right generative language models (GLMs) on the political compass test, assessing response validity (i.e., the proportion of valid responses to a prompt) and response stability (i.e., the variability under paraphrasing) across 500 paraphrases of each statement. This large-scale assessment allows us to approximate the underlying distribution of model responses more precisely, both in terms of the overall stability of a model under paraphrasing as well as the stability of specific items (i.e., the intended meaning of a question). In addition, to investigate whether there are structural biases that drive model responses into a certain direction, we test the association between different word- and sentence-level features, and the models' responses.
We find that while all MLMs exhibit a high degree of response validity, GLMs do not consistently produce valid responses when assessed via forced choice. In terms of response stability, we show that even models that exhibit high overall stability scores flip their responses given certain paraphrases. Crucially, even within-model, response stability can vary considerably between items. We also find that models tend to agree more with statements that show high positive sentiment scores.
Based on our results, we argue that human-centered questionnaires might not be appropriate in the context of probing LMs as both their response validity and stability differ considerably between items. Moreover, although stability metrics represent useful descriptions of model properties, it should be emphasized that even for models exhibiting fairly high stability, specific paraphrases can lead to substantially different model responses. | Yes, no, maybe? Revisiting language models' response stability under paraphrasing for the assessment of political leaning | [
"Patrick Haller",
"Jannis Vamvas",
"Lena Ann Jäger"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 262 |
||
null | https://openreview.net/forum?id=7jSMMvXLri | @inproceedings{
chen2024measuring,
title={Measuring Taiwanese Mandarin Language Understanding},
author={Po-Heng Chen and Sijia Cheng and Wei-Lin Chen and Yen-Ting Lin and Yun-Nung Chen},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7jSMMvXLri}
} | The evaluation of large language models (LLMs) has drawn substantial attention in the field recently.
This work focuses on evaluating LLMs in a Chinese context, specifically for Traditional Chinese, which has been largely underrepresented in existing benchmarks.
We present TMLU, a comprehensive evaluation suite tailored for assessing the advanced knowledge and reasoning capabilities of LLMs in the context of Taiwanese Mandarin.
TMLU consists of an array of 37 subjects across social science, STEM, humanities, Taiwan-specific content, and others, ranging from middle school to professional levels.
In addition, we curate chain-of-thought-like few-shot explanations for each subject to facilitate the evaluation of complex reasoning skills.
To establish a comprehensive baseline, we conduct extensive experiments and analysis on 24 advanced LLMs.
The results suggest that Chinese open-weight models demonstrate inferior performance compared to multilingual proprietary ones, and that open-weight models tailored for Taiwanese Mandarin lag behind their Simplified-Chinese counterparts.
The findings indicate considerable headroom for improvement and emphasize the goal of TMLU: to foster the development of localized Taiwanese-Mandarin LLMs.
We release the benchmark and evaluation scripts for the community to promote future research. | Measuring Taiwanese Mandarin Language Understanding | [
"Po-Heng Chen",
"Sijia Cheng",
"Wei-Lin Chen",
"Yen-Ting Lin",
"Yun-Nung Chen"
] | Conference | Poster | 2403.20180 | [
"https://github.com/miulab/taiwan-llama"
] | https://huggingface.co/papers/2403.20180 | 1 | 4 | 0 | 5 | [
"yentinglin/Llama-3-Taiwan-70B-Instruct",
"yentinglin/Llama-3-Taiwan-8B-Instruct",
"yentinglin/Llama-3-Taiwan-8B-Instruct-128k",
"yentinglin/Llama-3-Taiwan-70B-Instruct-DPO",
"yentinglin/Llama-3-Taiwan-70B-Instruct-128k",
"chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF",
"chienweichang/Llama-3-Taiwan-70B-Instruct-GGUF",
"nihaomur/Llama-3-Taiwan-8B-Instruct-AWQ-4bit",
"yentinglin/Llama-3-Taiwan-8B-Instruct-DPO",
"chienweichang/Llama-3-Taiwan-8B-Instruct-DPO-GGUF",
"chienweichang/Llama-3-Taiwan-8B-Instruct-GGUF",
"RichardErkhov/yentinglin_-_Llama-3-Taiwan-8B-Instruct-gguf",
"pigfoot/Llama-3-Taiwan-8B-Instruct-V1-5bpw-exl2"
] | [] | [
"yentinglin/Taiwan-LLaMa2",
"Chiuzu/yentinglin-Llama-3-Taiwan-70B-Instruct",
"kevindomo/yentinglin-Llama-3-Taiwan-70B-Instruct",
"kevindomo/yentinglin-Llama-3-Taiwan-70B-Instruct-DPO",
"rubengtsui/yentinglin-Llama-3-Taiwan-8B-Instruct",
"chienweichang/lmdeploy"
] | 1 | 263 |
null | https://openreview.net/forum?id=7iaAlIlV2H | @inproceedings{
wu2024pairwise,
title={Pairwise Proximal Policy Optimization: Language Model Alignment with Comparative {RL}},
author={Tianhao Wu and Banghua Zhu and Ruoyu Zhang and Zhaojin Wen and Kannan Ramchandran and Jiantao Jiao},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7iaAlIlV2H}
} | LLMs may exhibit harmful behavior without aligning with human values. The dominant approach for steering LLMs towards beneficial behavior is Reinforcement Learning with Human Feedback (RLHF). This involves training a reward model with a human-labeled ranking dataset and fine-tuning the LLM with the reward signal using RL. Despite the fact that the reward is learned from comparing different responses, the RL stage doesn't involve direct comparisons. This inconsistency between the reward learning and reinforcement learning stages exacerbates RL's instability. An example would be that the well-adopted RL optimizer, Proximal Policy Optimization (PPO), could perform different gradient updates even for batches with identical human preference information. To address this, we propose a new framework, reinforcement learning with comparative feedback, and a simple policy gradient algorithm, Pairwise Proximal Policy Optimization (P3O), that learns to improve from direct comparison. Theoretically, P3O has the nice property of being invariant to any reward function that contains identical preference information, while not requiring learning a value function. Empirical evaluations demonstrate that P3O can align with human preferences better than existing methods. This suggests that comparative RL is a strong candidate for aligning LLMs with preference data. | Pairwise Proximal Policy Optimization: Language Model Alignment with Comparative RL | [
"Tianhao Wu",
"Banghua Zhu",
"Ruoyu Zhang",
"Zhaojin Wen",
"Kannan Ramchandran",
"Jiantao Jiao"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 264 |
||
null | https://openreview.net/forum?id=7VPKtz8CHN | @inproceedings{
zhao2024beyond,
title={Beyond Relevance: Evaluate and Improve Retrievers on Perspective Awareness},
author={Xinran Zhao and Tong Chen and Sihao Chen and Hongming Zhang and Tongshuang Wu},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7VPKtz8CHN}
} | The task of Information Retrieval (IR) requires a system to identify relevant documents based on users' information needs. In real-world scenarios, retrievers are expected to not only rely on the semantic relevance between the documents and the queries but also recognize the nuanced intents or perspectives behind a user query. For example, when asked to verify a claim, a retrieval system is expected to identify evidence from both supporting vs. contradicting perspectives, for the downstream system to make a fair judgment call.
In this work, we study whether retrievers can recognize and respond to different perspectives of the queries --- beyond finding relevant documents for a claim, can retrievers distinguish supporting vs. opposing documents? We reform and extend six existing tasks to create a benchmark for retrieval, where we have diverse perspectives described in free-form text, besides root, neutral queries. We show that current retrievers covered in our experiments have limited awareness of subtly different perspectives in queries and can also be biased toward certain perspectives. Motivated by the observation, we further explore the potential to leverage geometric features of retriever representation space to improve the perspective awareness of retrievers in a zero-shot manner. We demonstrate the efficiency and effectiveness of our projection-based methods on the same set of tasks. Further analysis also shows how perspective awareness improves performance on various downstream tasks, with 4.2% higher accuracy on AmbigQA and 29.9% more correlation with designated viewpoints on essay writing, compared to non-perspective-aware baselines. | Beyond Relevance: Evaluate and Improve Retrievers on Perspective Awareness | [
"Xinran Zhao",
"Tong Chen",
"Sihao Chen",
"Hongming Zhang",
"Tongshuang Wu"
] | Conference | Poster | 2405.02714 | [
"https://github.com/colinzhaoust/pir"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 265 |
|
null | https://openreview.net/forum?id=7QaEO9WYMa | @inproceedings{
fan2024polyvisualexpert,
title={Poly-Visual-Expert Vision-Language Models},
author={Xiaoran Fan and Tao Ji and 江常皓 and Shuo Li and Senjie Jin and Sirui Song and Junke Wang and Boyang Hong and Lu Chen and Guodong Zheng and Ming Zhang and Huangcaishuang and Rui Zheng and Zhiheng Xi and Yuhao Zhou and Shihan Dou and Junjie Ye and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang and Zuxuan Wu and Yu-Gang Jiang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7QaEO9WYMa}
} | Current large vision-language models (VLMs) frequently face challenges such as the limited capabilities of a single visual component and the excessive length of visual tokens. These issues can limit the model's ability to interpret complex visual information and over-lengthy contextual information accurately. Tackling these challenges is crucial for enhancing the performance and applicability of VLMs. This paper proposes leveraging the ensemble experts technique to synergize the capabilities of individual visual encoders, including those skilled in image-text matching, image segmentation, OCR, etc. This method introduces a fusion network that consolidates the outputs from different visual experts while bridging the gap between image encoders and pre-trained LLMs. In addition, we explore different positional encoding schemes to mitigate the waste of positional encoding caused by lengthy image feature sequences, effectively addressing the issue of position overflow and length limitations. For instance, in our implementation, this technique significantly reduces the positional occupancy in models like SAM, from a substantial 4096 to a more efficient 64 or even down to 1. Experimental results show that VLMs with multiple experts consistently outperform isolated visual encoders, with notable performance improvements as more experts are integrated. Our codes are available on our project website. | Poly-Visual-Expert Vision-Language Models | [
"Xiaoran Fan",
"Tao Ji",
"江常皓",
"Shuo Li",
"Senjie Jin",
"Sirui Song",
"Junke Wang",
"Boyang Hong",
"Lu Chen",
"Guodong Zheng",
"Ming Zhang",
"Huangcaishuang",
"Rui Zheng",
"Zhiheng Xi",
"Yuhao Zhou",
"Shihan Dou",
"Junjie Ye",
"Hang Yan",
"Tao Gui",
"Qi Zhang",
"Xipeng Qiu",
"Xuanjing Huang",
"Zuxuan Wu",
"Yu-Gang Jiang"
] | Conference | Poster | [
"https://github.com/fudannlplab/mousi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 266 |
||
null | https://openreview.net/forum?id=7BCmIWVT0V | @inproceedings{
sun2024corex,
title={Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration},
author={Qiushi Sun and Zhangyue Yin and Xiang Li and Zhiyong Wu and Xipeng Qiu and Lingpeng Kong},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=7BCmIWVT0V}
} | Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in the realm of natural language processing (NLP) with world knowledge. Benefiting from ultra-large-scale training corpora, a single LLM can manage typical NLP tasks competently. However, its performance in executing complex tasks is still confined by the limitations of its internal representation. To push this boundary further, we introduce Corex, a suite of novel general-purpose strategies that transform LLMs into autonomous agents, pioneering multi-model collaborations for task-solving. Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Discuss, Review, and Retrieve modes, which collectively work towards enhancing the reasoning process. These paradigms foster task-agnostic approaches that enable LLMs to “think outside the box,” thereby overcoming common errors and providing better solutions. Through extensive experiments across four different types of reasoning tasks, we demonstrate that orchestrating multiple LLM-based agents to work in concert yields better results compared to well-established existing baselines. Further analysis reveals the advantages of Corex over other multi-model methods, synergies produced among different LLMs, and the effectiveness across various aspects. | Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration | [
"Qiushi Sun",
"Zhangyue Yin",
"Xiang Li",
"Zhiyong Wu",
"Xipeng Qiu",
"Lingpeng Kong"
] | Conference | Poster | 2310.00280 | [
"https://github.com/qiushisun/corex"
] | https://huggingface.co/papers/2310.00280 | 1 | 3 | 0 | 6 | [] | [] | [] | 1 | 267 |
null | https://openreview.net/forum?id=6vEfyp0o68 | @inproceedings{
ding2024mango,
title={{MANGO}: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models},
author={Peng Ding and Jiading Fang and Peng Li and Kangrui Wang and Xiaochen Zhou and Mo Yu and Jing Li and Hongyuan Mei and Matthew Walter},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=6vEfyp0o68}
} | Large language models such as ChatGPT and GPT-4 have recently achieved astonishing performance on a variety of natural language processing tasks. In this paper, we propose MANGO, a benchmark to evaluate their capabilities to perform text-based mapping and navigation. Our benchmark includes 53 mazes taken from a suite of textgames: each maze is paired with a walkthrough that visits every location but does not cover all possible paths. The task is question-answering: for each maze, a large language model reads the walkthrough and answers hundreds of mapping and navigation questions such as "How should you go to Attic from West of House?" and "Where are we if we go north and east from Cellar?". Although these questions are easy to humans, it turns out that even GPT-4, the best-to-date language model, performs poorly at answering them. Further, our experiments suggest that a strong mapping and navigation ability would benefit large language models in performing relevant downstream tasks, such as playing textgames. Our MANGO benchmark will facilitate future research on methods that improve the mapping and navigation capabilities of language models. We host our leaderboard, data, code, and evaluation program at https://mango.ttic.edu and https://github.com/oaklight/mango/. | MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models | [
"Peng Ding",
"Jiading Fang",
"Peng Li",
"Kangrui Wang",
"Xiaochen Zhou",
"Mo Yu",
"Jing Li",
"Hongyuan Mei",
"Matthew Walter"
] | Conference | Poster | 2403.19913 | [
"https://github.com/oaklight/mango"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 268 |
|
null | https://openreview.net/forum?id=6U1FEKP7Ar | @inproceedings{
wang2024exovip,
title={ExoViP: Step-by-step Verification and Exploration with Exoskeleton Modules for Compositional Visual Reasoning},
author={Yuxuan Wang and Alan Yuille and Zhuowan Li and Zilong Zheng},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=6U1FEKP7Ar}
} | Compositional visual reasoning methods, which translate a complex query into a structured composition of feasible visual tasks, have exhibited a strong potential in complicated multi-modal tasks. Empowered by recent advances in large language models (LLMs), this multi-modal challenge has been brought to a new stage by treating LLMs as few-shot/zero-shot planners, i.e., vision-language (VL) programming.
Such methods, despite their numerous merits, suffer from challenges due to LLM planning mistakes or inaccuracy of visual execution modules, lagging behind the non-compositional models.
In this work, we devise a "plug-and-play" method, ExoViP, to correct errors in both the planning and execution stages through introspective verification. We employ verification modules as "exoskeletons" to enhance current VL programming schemes. Specifically, our proposed verification module utilizes a mixture of three sub-verifiers to validate predictions after each reasoning step, subsequently calibrating the visual module predictions and refining the reasoning trace planned by LLMs.
Experimental results on two representative VL programming methods showcase consistent improvements on five compositional reasoning tasks on standard benchmarks. In light of this, we believe that ExoViP can foster better performance and generalization on open-domain multi-modal challenges. | ExoViP: Step-by-step Verification and Exploration with Exoskeleton Modules for Compositional Visual Reasoning | [
"Yuxuan Wang",
"Alan Yuille",
"Zhuowan Li",
"Zilong Zheng"
] | Conference | Poster | 2408.02210 | [
""
] | https://huggingface.co/papers/2408.02210 | 3 | 7 | 2 | 4 | [] | [] | [] | 1 | 269 |
null | https://openreview.net/forum?id=60a1SAtH4e | @inproceedings{
li2024measuring,
title={Measuring and Controlling Instruction (In)Stability in Language Model Dialogs},
author={Kenneth Li and Tianle Liu and Naomi Bashkansky and David Bau and Fernanda Vi{\'e}gas and Hanspeter Pfister and Martin Wattenberg},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=60a1SAtH4e}
} | System-prompting is a standard tool for customizing language-model chatbots, enabling them to follow a specific instruction. An implicit assumption in the use of system prompts is that they will be _stable_, so the chatbot will continue to generate text according to the stipulated instructions for the duration of a conversation. We propose a quantitative benchmark to test this assumption, evaluating instruction stability via self-chats between two instructed chatbots. Testing popular models like LLaMA2-chat-70B and GPT-3.5, we reveal a significant _instruction drift_ within eight rounds of conversations. An empirical and theoretical analysis of this phenomenon suggests the transformer attention mechanism plays a role, due to _attention decay_ over long exchanges. To combat attention decay and instruction drift, we propose a lightweight method called split-softmax, which compares favorably against two strong baselines.
Code: [https://github.com/likenneth/persona_drift](https://github.com/likenneth/persona_drift). | Measuring and Controlling Instruction (In)Stability in Language Model Dialogs | [
"Kenneth Li",
"Tianle Liu",
"Naomi Bashkansky",
"David Bau",
"Fernanda Viégas",
"Hanspeter Pfister",
"Martin Wattenberg"
] | Conference | Poster | 2402.10962 | [
"https://github.com/likenneth/persona_drift"
] | https://huggingface.co/papers/2402.10962 | 0 | 0 | 0 | 7 | [] | [
"Naomibas/llm-system-prompts-benchmark"
] | [] | 1 | 270 |
null | https://openreview.net/forum?id=5u1GpUkKtG | @inproceedings{
eisenstein2024helping,
title={Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking},
author={Jacob Eisenstein and Chirag Nagpal and Alekh Agarwal and Ahmad Beirami and Alexander Nicholas D'Amour and Krishnamurthy Dj Dvijotham and Adam Fisch and Katherine A Heller and Stephen Robert Pfohl and Deepak Ramachandran and Peter Shaw and Jonathan Berant},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=5u1GpUkKtG}
} | Reward models play a key role in aligning language model applications towards human preferences.
However, this setup creates an incentive for the language model to exploit errors in the reward model to achieve high estimated reward, a phenomenon often termed \emph{reward hacking}.
A natural mitigation is to train an ensemble of reward models, aggregating over model outputs to obtain a more robust reward estimate.
We explore the application of reward ensembles to alignment at both training time (through reinforcement learning) and inference time (through reranking).
First, we show that reward models are \emph{underspecified}: reward models that perform similarly in-distribution can yield very different rewards when used in alignment, due to distribution shift.
Second, underspecification results in overoptimization, where alignment to one reward model does not improve reward as measured by another reward model trained on the same data.
Third, overoptimization is mitigated by the use of reward ensembles, and ensembles that vary by their \emph{pretraining} seeds lead to better generalization than ensembles that differ only by their \emph{fine-tuning} seeds, with both outperforming individual reward models.
However, even pretrain reward ensembles do not eliminate reward hacking: we show several qualitative reward hacking phenomena that are not mitigated by ensembling because all reward models in the ensemble exhibit similar error patterns. | Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking | [
"Jacob Eisenstein",
"Chirag Nagpal",
"Alekh Agarwal",
"Ahmad Beirami",
"Alexander Nicholas D'Amour",
"Krishnamurthy Dj Dvijotham",
"Adam Fisch",
"Katherine A Heller",
"Stephen Robert Pfohl",
"Deepak Ramachandran",
"Peter Shaw",
"Jonathan Berant"
] | Conference | Poster | 2312.09244 | [
"https://github.com/google-deepmind/reward-ensembles"
] | https://huggingface.co/papers/2312.09244 | 3 | 5 | 1 | 12 | [] | [
"taesiri/arxiv_qa"
] | [] | 1 | 271 |
null | https://openreview.net/forum?id=5fg0VtRxgi | @inproceedings{
sodhi2024step,
title={SteP: Stacked {LLM} Policies for Web Actions},
author={Paloma Sodhi and S.R.K Branavan and Yoav Artzi and Ryan McDonald},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=5fg0VtRxgi}
} | Performing tasks on the web presents fundamental challenges to large language models (LLMs), including combinatorially large open-world tasks and variations across web interfaces. Simply specifying a large prompt to handle all possible behaviors and states is extremely complex, and results in behavior leaks between unrelated behaviors. Decomposition to distinct policies can address this challenge but requires carefully handing off control between policies. We propose Stacked LLM Policies for Web Actions (SteP), an approach to dynamically compose policies to solve a diverse set of web tasks. SteP defines a Markov Decision Process where the state is a stack of policies representing the control state, i.e., the chain of policy calls. Unlike traditional methods that are restricted to static hierarchies, SteP enables dynamic control that adapts to the complexity of the task. We evaluate SteP against multiple baselines and web environments including WebArena, MiniWoB++, and a CRM. On WebArena, SteP improves (14.9\% to 33.5\%) over SOTA that use GPT-4 policies, while on MiniWob++, SteP is competitive with prior works while using significantly less data. Our code and data are available at https://asappresearch.github.io/webagents-step. | SteP: Stacked LLM Policies for Web Actions | [
"Paloma Sodhi",
"S.R.K Branavan",
"Yoav Artzi",
"Ryan McDonald"
] | Conference | Poster | 2310.03720 | [
""
] | https://huggingface.co/papers/2310.03720 | 2 | 6 | 1 | 3 | [] | [] | [] | 1 | 272 |
null | https://openreview.net/forum?id=5RdIMlGLXL | @inproceedings{
ostendorff2024llmdatasets,
title={{LLM}-Datasets: An Open Framework for Pretraining Datasets of Large Language Models},
author={Malte Ostendorff and Pedro Ortiz Suarez and Lucas Fonseca Lage and Georg Rehm},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=5RdIMlGLXL}
} | Large language models have become the cornerstone of today's natural language processing research. To facilitate the training, evaluation, and deployment of language models, the community has developed a series of tools and frameworks and made them openly available. This joint community effort has led to more collaboration, standardization, and overall more progress in language model research. However, one crucial aspect of large language models has been neglected so far: the pretraining datasets. To address this gap, we present an open framework for the collection and systematic compilation of pretraining datasets, called LLM-Datasets. With LLM-Datasets, we make a community-effort and collaborate with experts from the individual languages to collect and systematically compile datasets suitable in terms of data quantity and quality for pretraining language models in a multilingual setting. The framework provides a unified interface to pretraining datasets enabling the download, text extraction, filtering, and sampling of the pretraining data. It is modular and extensible with new datasets and designed with high-performance-computing requirements in mind that are needed to achieve the scale of today's language models. Users of the framework can focus on the actual data composition and reuse existing datasets from the community while ensuring reproducibility. To showcase LLM-Datasets, we compiled a pretraining dataset with 2.3 trillion tokens for a large language model covering 32 European languages. | LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models | [
"Malte Ostendorff",
"Pedro Ortiz Suarez",
"Lucas Fonseca Lage",
"Georg Rehm"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 273 |
||
null | https://openreview.net/forum?id=5Nsl0nlStc | @inproceedings{
mavromatis2024pack,
title={Pack of {LLM}s: Model Fusion at Test-Time via Perplexity Optimization},
author={Costas Mavromatis and Petros Karypis and George Karypis},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=5Nsl0nlStc}
} | Fusing knowledge from multiple Large Language Models (LLMs) can combine their diverse strengths to achieve improved performance on a given task. However, current fusion approaches either rely on learning-based fusers that do not generalize to new LLMs, or do not take into account how well each LLM understands the input. In this work, we study LLM fusion at test-time, which enables leveraging knowledge from arbitrary user-specified LLMs during inference. We introduce Pack of LLMs (PackLLM), an effective method for test-time fusion that leverages each LLM’s expertise, given an input prompt. PackLLM performs model fusion by solving an optimization problem for determining each LLM’s importance, so that perplexity over the input prompt is minimized. First, our simple PackLLM-sim variant validates that perplexity is a good indicator for measuring each LLM’s expertise. Second, our PackLLM-opt variant approximately solves the perplexity minimization problem via a greedy algorithm. The derived importance weights are used to combine the LLMs during inference. We conduct experiments with over 100 total LLMs on a diverse set of tasks. Experimental results show that (i) perplexity is a reliable measure for LLM fusion, (ii) PackLLM outperforms test-time fusion baselines by 1.89% accuracy points, (iii) PackLLM can leverage new LLMs to improve performance over learning-based fusion approaches by 3.92–11.94% accuracy points, and (iv) PackLLM benefits over selecting the best or largest model and model merging in certain cases. Our code is provided at [https://github.com/cmavro/PackLLM](https://github.com/cmavro/PackLLM). | Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization | [
"Costas Mavromatis",
"Petros Karypis",
"George Karypis"
] | Conference | Poster | 2404.11531 | [
"https://github.com/cmavro/packllm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 274 |
|
null | https://openreview.net/forum?id=5Evv4tIjUI | @inproceedings{
lee2024exploiting,
title={Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners},
author={Jihyeon Lee and Dain Kim and Doohae Jung and Boseop Kim and Kyoung-Woon On},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=5Evv4tIjUI}
} | In-context learning, which offers substantial advantages over fine-tuning, is predominantly observed in decoder-only models, while encoder-decoder (i.e., seq2seq) models excel in methods that rely on weight updates. Recently, a few studies have demonstrated the feasibility of few-shot learning with seq2seq models; however, this has been limited to tasks that align well with the seq2seq architecture, such as summarization and translation. Inspired by these initial studies, we provide a first-ever extensive experiment comparing the in-context few-shot learning capabilities of decoder-only and encoder-decoder models on a broad range of tasks. Furthermore, we propose two methods to more effectively elicit in-context learning ability in seq2seq models: objective-aligned prompting and a fusion-based approach. Remarkably, our approach outperforms a decoder-only model that is six times larger and exhibits significant performance improvements compared to conventional seq2seq models across a variety of settings. We posit that, with the right configuration and prompt design, seq2seq models can be highly effective few-shot learners for a wide spectrum of applications. | Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners | [
"Jihyeon Lee",
"Dain Kim",
"Doohae Jung",
"Boseop Kim",
"Kyoung-Woon On"
] | Conference | Poster | 2307.14856 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 275 |
|
null | https://openreview.net/forum?id=5B2K4LRgmz | @inproceedings{
gerstgrasser2024is,
title={Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data},
author={Matthias Gerstgrasser and Rylan Schaeffer and Apratim Dey and Rafael Rafailov and Tomasz Korbak and Henry Sleight and Rajashree Agrawal and John Hughes and Dhruv Bhandarkar Pai and Andrey Gromov and Dan Roberts and Diyi Yang and David L. Donoho and Sanmi Koyejo},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=5B2K4LRgmz}
} | The proliferation of generative models, combined with pretraining on web-scale data, raises a timely question: what happens when future models are trained on model-generated data? Recent investigations found that such model-data feedback loops cause performance to degrade progressively with each model-data iteration until the fitted models become useless, a phenomenon termed model collapse. However, those studies largely assumed that new data replace old data over time, whereas a more realistic assumption is that data accumulate over time. In this paper, we ask: what effect does accumulating data have on model collapse?
We first empirically study this question by pretraining sequences of language models on text corpora. After confirming that replacing the original real data by each generation's synthetic data does indeed tend towards model collapse, we demonstrate that accumulating synthetic data with real data avoids model collapse; these results hold across a range of sizes, architectures, and hyperparameters. We obtain similar results for other deep generative models: diffusion models for molecule conformation generation and variational autoencoders for image generation. To understand why accumulating data can avoid model collapse, we use an analytically tractable framework introduced by prior work in which a sequence of linear models are fit to previous models' outputs. Previous work used this framework to show that if data are replaced, the test error increases with the number of model-fitting iterations; we extend this argument to prove that if data instead accumulate, the test error has a finite upper bound independent of the number of iterations, meaning model collapse is avoided.
Our work provides consistent empirical and theoretical evidence that data accumulation avoids model collapse. | Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data | [
"Matthias Gerstgrasser",
"Rylan Schaeffer",
"Apratim Dey",
"Rafael Rafailov",
"Tomasz Korbak",
"Henry Sleight",
"Rajashree Agrawal",
"John Hughes",
"Dhruv Bhandarkar Pai",
"Andrey Gromov",
"Dan Roberts",
"Diyi Yang",
"David L. Donoho",
"Sanmi Koyejo"
] | Conference | Poster | 2404.01413 | [
""
] | https://huggingface.co/papers/2404.01413 | 0 | 0 | 0 | 14 | [] | [] | [] | 1 | 276 |
null | https://openreview.net/forum?id=4aqq9xTtih | @inproceedings{
dong2024promptprompted,
title={Prompt-prompted Adaptive Structured Pruning for Efficient {LLM} Generation},
author={Harry Dong and Beidi Chen and Yuejie Chi},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=4aqq9xTtih}
} | With the development of transformer-based large language models (LLMs), they have been applied to many fields due to their remarkable utility, but this comes at a considerable computational cost at deployment. Fortunately, some methods such as pruning or constructing a mixture of experts (MoE) aim at exploiting sparsity in transformer feedforward (FF) blocks to gain boosts in speed and reduction in memory requirements. However, these techniques can be very costly and inflexible in practice, as they often require training or are restricted to specific types of architectures. To address this, we introduce GRIFFIN, a novel training-free and calibration-free method that selects unique FF experts at the sequence level for efficient generation across a plethora of LLMs with different non-ReLU activation functions. This is possible due to a critical observation that many trained LLMs naturally produce highly structured FF activation patterns within a sequence, which we call flocking. Despite our method's simplicity, we show with 50% of the FF parameters, GRIFFIN maintains the original model's performance with little to no degradation on a variety of classification and generation tasks, all while improving latency (e.g. 1.29$\times$ and 1.25$\times$ speed-ups in Gemma 7B and Llama 2 13B, respectively, on an NVIDIA L40). Code is available at https://github.com/hdong920/GRIFFIN. | Prompt-prompted Adaptive Structured Pruning for Efficient LLM Generation | [
"Harry Dong",
"Beidi Chen",
"Yuejie Chi"
] | Conference | Poster | 2404.01365 | [
"https://github.com/hdong920/griffin"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 277 |
|
null | https://openreview.net/forum?id=4HNAwZFDcH | @inproceedings{
styles2024workbench,
title={WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting},
author={Olly Styles and Sam Miller and Patricio Cerda-Mardini and Tanaya Guha and Victor Sanchez and Bertie Vidgen},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=4HNAwZFDcH}
} | We introduce WorkBench: a benchmark dataset for evaluating agents’ ability to execute tasks in a workplace setting. WorkBench contains a sandbox environment with five databases, 26 tools, and 690 tasks. These tasks represent common business activities, such as sending emails and scheduling meetings. The tasks in WorkBench are challenging as they require planning, tool selection, and often multiple actions. If a task has been successfully executed, one (or more) of the database values may change. The correct outcome for each task is unique and unambiguous, which allows for robust, automated evaluation. We call this key contribution outcome-centric evaluation. We evaluate five existing ReAct agents on WorkBench, finding they successfully complete as few as 3% of tasks (Llama2-70B), and just 43% for the best-performing (GPT-4). We further find that agents’ errors can result in the wrong action being taken, such as an email being sent to the wrong person. WorkBench reveals weaknesses in agents’ ability to undertake common business activities, raising questions about their use in high-stakes workplace settings. WorkBench is publicly available as a free resource at https://github.com/link_updated_upon_acceptance | WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting | [
"Olly Styles",
"Sam Miller",
"Patricio Cerda-Mardini",
"Tanaya Guha",
"Victor Sanchez",
"Bertie Vidgen"
] | Conference | Poster | 2405.00823 | [
"https://github.com/olly-styles/workbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 278 |
|
null | https://openreview.net/forum?id=46Zgqo4QIU | @inproceedings{
zelikman2024selftaught,
title={Self-Taught Optimizer ({STOP}): Recursively Self-Improving Code Generation},
author={Eric Zelikman and Eliana Lorch and Lester Mackey and Adam Tauman Kalai},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=46Zgqo4QIU}
} | Several recent advances in AI systems solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. A variety of self-improvement strategies are proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our experiments, is capable of writing code that can call itself to improve itself. We consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox. | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | [
"Eric Zelikman",
"Eliana Lorch",
"Lester Mackey",
"Adam Tauman Kalai"
] | Conference | Poster | 2310.02304 | [
"https://github.com/microsoft/stop"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 279 |
|
null | https://openreview.net/forum?id=3ypWPhMGhV | @inproceedings{
chu2024cohesive,
title={Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues},
author={KuanChao Chu and Yi-Pei Chen and Hideki Nakayama},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=3ypWPhMGhV}
} | This paper investigates the quality of multi-agent dialogues in simulations powered by Large Language Models (LLMs). Analyzing dialogues and memory over multiple sessions revealed significant issues such as repetition, inconsistency, and hallucination, exacerbated by the propagation of erroneous information. To combat these challenges, we propose a novel Screening, Diagnosis, and Regeneration (SDR) framework that detects and corrects utterance errors through a comprehensive process involving immediate issue identification, evidence gathering from past dialogues, and LLM analysis for utterance revision. By incorporating our SDR framework to Generative Agents (Park et al., 2023), we enhance the diversity, consistency, and factualness of the generated dialogues. This work presents a pioneering approach to enhancing dialogue quality in multi-agent simulations, establishing a new standard for future research in the field. | Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues | [
"KuanChao Chu",
"Yi-Pei Chen",
"Hideki Nakayama"
] | Conference | Poster | 2407.09897 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 280 |
|
null | https://openreview.net/forum?id=3nTbuygoop | @inproceedings{
wu2024stateflow,
title={StateFlow: Enhancing {LLM} Task-Solving through State-Driven Workflows},
author={Yiran Wu and Tianwei Yue and Shaokun Zhang and Chi Wang and Qingyun Wu},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=3nTbuygoop}
} | It is a notable trend to use Large Language Models (LLMs) to tackle complex tasks, e.g., tasks that require a sequence of actions and dynamic interaction with tools and external environments.
In this paper, we propose StateFlow, a novel LLM-based task-solving paradigm that conceptualizes complex task-solving processes as state machines.
In StateFlow, we distinguish between "process grounding” (via state and state transitions) and "sub-task solving” (through actions within a state), enhancing control and interpretability of the task-solving procedure. A state represents the status of a running process. The transitions between states are controlled by heuristic rules or decisions made by the LLM, allowing for a dynamic and adaptive progression.
Upon entering a state, a series of actions is executed, involving not only calling LLMs guided by different prompts, but also the utilization of external tools as needed.
Our results show that StateFlow significantly enhances LLMs' efficiency. For instance, StateFlow achieves 13\% and 28\% higher success rates compared to ReAct on the InterCode SQL and ALFWorld benchmarks, with 5$\times$ and 3$\times$ lower cost, respectively.
We also show that StateFlow can be combined with iterative refining methods like Reflexion to further improve performance. | StateFlow: Enhancing LLM Task-Solving through State-Driven Workflows | [
"Yiran Wu",
"Tianwei Yue",
"Shaokun Zhang",
"Chi Wang",
"Qingyun Wu"
] | Conference | Poster | 2403.11322 | [
"https://github.com/yiranwu0/stateflow"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 281 |
|
null | https://openreview.net/forum?id=3X2L2TFr0f | @inproceedings{
hu2024minicpm,
title={Mini{CPM}: Unveiling the Potential of Small Language Models with Scalable Training Strategies},
author={Shengding Hu and Yuge Tu and Xu Han and Ganqu Cui and Chaoqun He and Weilin Zhao and Xiang Long and Zhi Zheng and Yewei Fang and Yuxiang Huang and Xinrong Zhang and Zhen Leng Thai and Chongyi Wang and Yuan Yao and Chenyang Zhao and Jie Zhou and Jie Cai and Zhongwu Zhai and Ning Ding and Chao Jia and Guoyang Zeng and dahai li and Zhiyuan Liu and Maosong Sun},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=3X2L2TFr0f}
} | The burgeoning interest in developing Large Language Models (LLMs) with up to a trillion parameters has been met with concerns regarding resource efficiency and practical expense, particularly given the immense cost of experimentation. This scenario underscores the importance of exploring the potential of Small Language Models (SLMs) as a resource-efficient alternative. In this context, we introduce MiniCPM, specifically the 1.2B and 2.4B non-embedding parameter variants, which not only excel in their respective categories but also demonstrate capabilities on par with 7B-13B LLMs. While focusing on SLMs, our approach exhibits scalability in both model and data dimensions for future LLM research. Regarding model scaling, we employ extensive model wind tunnel experiments for stable and optimal scaling. For data scaling, we introduce a Warmup-Stable-Decay (WSD) learning rate scheduler (LRS), conducive to continuous training and domain adaptation. We present an in-depth analysis of the intriguing training dynamics that occurred in the WSD LRS. With the WSD LRS, we are now able to efficiently study the data-model scaling law without extensive retraining experiments on both axes of model and data, from which we derive a much higher compute-optimal data-model ratio than Chinchilla Optimal. Additionally, we introduce the MiniCPM family, including MiniCPM-DPO, MiniCPM-MoE and MiniCPM-128K, whose excellent performance further cements MiniCPM's foundation in diverse SLM applications. MiniCPM models are available publicly~\footnote{\url{https://github.com/OpenBMB/MiniCPM}}. | MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies | [
"Shengding Hu",
"Yuge Tu",
"Xu Han",
"Ganqu Cui",
"Chaoqun He",
"Weilin Zhao",
"Xiang Long",
"Zhi Zheng",
"Yewei Fang",
"Yuxiang Huang",
"Xinrong Zhang",
"Zhen Leng Thai",
"Chongyi Wang",
"Yuan Yao",
"Chenyang Zhao",
"Jie Zhou",
"Jie Cai",
"Zhongwu Zhai",
"Ning Ding",
"Chao Jia",
"Guoyang Zeng",
"dahai li",
"Zhiyuan Liu",
"Maosong Sun"
] | Conference | Oral | 2404.06395 | [
"https://github.com/openbmb/minicpm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 282 |
|
null | https://openreview.net/forum?id=3TzGD95Jw1 | @inproceedings{
su2024timo,
title={Timo: Towards Better Temporal Reasoning for Language Models},
author={Zhaochen Su and Jun Zhang and Tong Zhu and Xiaoye Qu and Juntao Li and Min zhang and Yu Cheng},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=3TzGD95Jw1}
} | Reasoning about time is essential for Large Language Models (LLMs) to understand the world. Previous works focus on solving specific tasks, primarily on time-sensitive question answering.
While these methods have proven effective, they cannot generalize to a wider spectrum of temporal reasoning tasks.
Therefore, we propose a crucial question: Can we build a universal framework to handle a variety of temporal reasoning tasks?
To that end, we systematically study 38 temporal reasoning tasks.
Based on the observation that 19 tasks are directly related to mathematics, we first leverage the available mathematical dataset to set a solid foundation for temporal reasoning.
However, the in-depth study indicates that focusing solely on mathematical enhancement falls short of addressing pure temporal reasoning tasks. To mitigate this limitation, we propose a simple but effective self-critic temporal optimization method to enhance the model's temporal reasoning capabilities without sacrificing general task abilities.
Finally, we develop Timo, a model designed to excel in temporal reasoning at the 7B and 13B scales. Notably, Timo outperforms the counterpart LLMs by 10.0 and 7.6 in average accuracy scores and achieves the new state-of-the-art (SOTA) performance of comparable size. Extensive experiments further validate our framework's effectiveness and its generalization across diverse temporal tasks. The code is available at https://github.com/zhaochen0110/Timo. | Timo: Towards Better Temporal Reasoning for Language Models | [
"Zhaochen Su",
"Jun Zhang",
"Tong Zhu",
"Xiaoye Qu",
"Juntao Li",
"Min zhang",
"Yu Cheng"
] | Conference | Poster | 2406.14192 | [
"https://github.com/zhaochen0110/timo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 283 |
|
null | https://openreview.net/forum?id=3HTVP34WWE | @inproceedings{
wang2024bot,
title={Bot or Human? Detecting Chat{GPT} Imposters with A Single Question},
author={Hong Wang and Xuan Luo and Weizhi Wang and Melody Yu and Xifeng Yan},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=3HTVP34WWE}
} | Large language models (LLMs) like GPT-4 have recently demonstrated impressive capabilities in natural language understanding and generation. However, there is a concern that they can be misused for malicious purposes, such as fraud or denial-of-service attacks. Therefore, it is crucial to develop methods for detecting whether the party involved in a conversation is a bot or a human. In this paper, we propose a framework named **FLAIR**, Finding Large Language Model Authenticity via a Single Inquiry and Response, to detect conversational bots in an online manner. Specifically, we target a single question scenario that can effectively differentiate human users from bots. The questions are divided into two categories: those that are easy for humans but difficult for bots (e.g., counting, substitution, searching, and ASCII art reasoning), and those that are easy for bots but difficult for humans (e.g., memorization and computation). Our approach shows different strengths of these questions in their effectiveness, providing a new way for online service providers to protect themselves against nefarious activities. Our code and question set are available at https://github.com/hongwang600/FLAIR. | Bot or Human? Detecting ChatGPT Imposters with A Single Question | [
"Hong Wang",
"Xuan Luo",
"Weizhi Wang",
"Melody Yu",
"Xifeng Yan"
] | Conference | Poster | 2305.06424 | [
"https://github.com/hongwang600/flair"
] | https://huggingface.co/papers/2305.06424 | 1 | 1 | 0 | 4 | [] | [] | [] | 1 | 284 |
null | https://openreview.net/forum?id=3GhOWfSLrD | @inproceedings{
wang2024will,
title={Will the Real Linda Please Stand up...to Large Language Models? Examining the Representativeness Heuristic in {LLM}s},
author={Pengda Wang and Zilin Xiao and Hanjie Chen and Frederick L. Oswald},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=3GhOWfSLrD}
} | Although large language models (LLMs) have demonstrated remarkable proficiency in modeling text and generating human-like text, they may exhibit biases acquired from training data in doing so. Specifically, LLMs may be susceptible to a common cognitive trap in human decision-making called the representativeness heuristic. This is a concept in psychology that refers to judging the likelihood of an event based on how closely it resembles a well-known prototype or typical example, versus considering broader facts or statistical evidence. This research investigates the impact of the representativeness heuristic on LLM reasoning. We created ReHeAT (Representativeness Heuristic AI Testing), a dataset containing a series of problems spanning six common types of representativeness heuristics. Experiments reveal that four LLMs applied to ReHeAT all exhibited representativeness heuristic biases. We further identify that the model's reasoning steps are often incorrectly based on a stereotype rather than on the problem's description. Interestingly, the performance improves when adding a hint in the prompt to remind the model to use its knowledge. This suggests the uniqueness of the representativeness heuristic compared to traditional biases. It can occur even when LLMs possess the correct knowledge while falling into a cognitive trap. This highlights the importance of future research focusing on the representativeness heuristic in model reasoning and decision-making and on developing solutions to address it. | Will the Real Linda Please Stand up...to Large Language Models? Examining the Representativeness Heuristic in LLMs | [
"Pengda Wang",
"Zilin Xiao",
"Hanjie Chen",
"Frederick L. Oswald"
] | Conference | Oral | 2404.01461 | [
"https://github.com/mrzilinxiao/llmheuristicreheat"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 285 |
|
null | https://openreview.net/forum?id=2wtj0up8rv | @inproceedings{
zhou2024enhancing,
title={Enhancing Language Models with Idiomatic Reasoning},
author={Jianing Zhou and Ziheng Zeng and Hongyu Gong and Suma Bhat},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=2wtj0up8rv}
} | Advancements in Large Language Models (LLMs) have significantly propelled the field of Natural Language Processing (NLP); however, nuanced reasoning in the presence of non-canonical language forms, such as figurative language, remains an intricate challenge. These language forms, integral to human communication, elude standard LLM comprehension due to their inherent non-compositionality, contextual ambiguity, and sparse representation in text corpora. Addressing these challenges, this paper introduces an innovative approach to seamlessly incorporate idiomatic knowledge into pre-trained language models (PTLMs). Our methodology first employs a multi-view data augmentation strategy that uses idiomatic instances representing one property to generate training data for various idiom-related tasks. When combined with a novel parameter-efficient tuning mechanism that accounts for the unique attributes of idiomatic language, we embed task-specific and idiomaticity-aware inductive biases within a PTLM. Integrating a meta-pretraining protocol based on meta-learning principles further equips the model with enhanced adaptability to diverse downstream idiom-aware tasks. Empirical validation on diverse benchmarks centered around idiom comprehension and reasoning demonstrates the efficacy of our approach. Notably, our model surpasses various parameter-efficient fine-tuning baselines and even outperforms conventional full fine-tuning paradigms, thereby creating more contextually aware and linguistically robust language models. | Enhancing Language Models with Idiomatic Reasoning | [
"Jianing Zhou",
"Ziheng Zeng",
"Hongyu Gong",
"Suma Bhat"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 286 |
||
null | https://openreview.net/forum?id=2oHnsM9M9D | @inproceedings{
brassard2024acorn,
title={{ACORN}: Aspect-wise Commonsense Reasoning Explanation Evaluation},
author={Ana Brassard and Benjamin Heinzerling and Keito Kudo and Keisuke Sakaguchi and Kentaro Inui},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=2oHnsM9M9D}
} | Evaluating the quality of free-text explanations is a multifaceted, subjective, and labor-intensive task. Large language models (LLMs) present an appealing alternative due to their potential for consistency, scalability, and cost-efficiency. In this work, we present ACORN, a new dataset of 3,500 free-text explanations and aspect-wise quality ratings, and use it to evaluate how LLMs rate explanations. We observed that larger models outputted labels that maintained or increased the inter-annotator agreement, suggesting that they are within the expected variance between human raters. However, their correlation with majority-voted human ratings varied across different quality aspects, indicating that they are not a complete replacement. In turn, using LLMs as a supplement to a smaller group of human raters in some cases improved the correlation with the original majority labels. However, the effect was limited to cases where human raters were scarce, and an additional human rater had a more pronounced effect in all cases. Overall, we recommend against using LLMs as a complete replacement for human raters but encourage using them in configurations that end with targeted human involvement. | ACORN: Aspect-wise Commonsense Reasoning Explanation Evaluation | [
"Ana Brassard",
"Benjamin Heinzerling",
"Keito Kudo",
"Keisuke Sakaguchi",
"Kentaro Inui"
] | Conference | Poster | 2405.04818 | [
"https://github.com/a-brassard/acorn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 287 |
|
null | https://openreview.net/forum?id=2nTzomzjjb | @inproceedings{
jin2024prollm,
title={Pro{LLM}: Protein Chain-of-Thoughts Enhanced {LLM} for Protein-Protein Interaction Prediction},
author={Mingyu Jin and Haochen Xue and Zhenting Wang and Boming Kang and Ruosong Ye and Kaixiong Zhou and Mengnan Du and Yongfeng Zhang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=2nTzomzjjb}
} | The prediction of protein-protein interactions (PPIs) is crucial for understanding biological functions and diseases. Previous machine learning approaches to PPI prediction mainly focus on direct physical interactions, ignoring the broader context of nonphysical connections through intermediate proteins, thus limiting their effectiveness. The emergence of Large Language Models (LLMs) provides a new opportunity for addressing this complex biological challenge. By transforming structured data into natural language prompts, we can map the relationships between proteins into texts. This approach allows LLMs to identify indirect connections between proteins, tracing the path from upstream to downstream. Therefore, we propose a novel framework ProLLM that employs an LLM tailored for PPI for the first time. Specifically, we propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as natural language prompts. ProCoT considers a signaling pathway as a protein reasoning process, which starts from upstream proteins and passes through several intermediate proteins to transmit biological signals to downstream proteins. Thus, we can use ProCoT to predict the interaction between upstream proteins and downstream proteins. The training of ProLLM employs the ProCoT format, which enhances the model's understanding of complex biological problems. In addition to ProCoT, this paper also contributes to the exploration of embedding replacement of protein sites in natural language prompts, and instruction fine-tuning in protein knowledge datasets. We demonstrate the efficacy of ProLLM through rigorous validation against benchmark datasets, showing significant improvement over existing methods in terms of prediction accuracy and generalizability. Our results highlight the potential of LLMs to transform the field of PPI, serving as a robust potential tool for various categories of biological and medical research. The code is available at: https://anonymous.4open.science/r/ProLLM-AB04. | ProLLM: Protein Chain-of-Thoughts Enhanced LLM for Protein-Protein Interaction Prediction | [
"Mingyu Jin",
"Haochen Xue",
"Zhenting Wang",
"Boming Kang",
"Ruosong Ye",
"Kaixiong Zhou",
"Mengnan Du",
"Yongfeng Zhang"
] | Conference | Poster | 2405.06649 | [
"https://github.com/mingyuj666/prollm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 288 |
|
null | https://openreview.net/forum?id=2cop2jmQVL | @inproceedings{
gandhi2024stream,
title={Stream of Search (SoS): Learning to Search in Language},
author={Kanishk Gandhi and Denise H J Lee and Gabriel Grand and Muxin Liu and Winson Cheng and Archit Sharma and Noah Goodman},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=2cop2jmQVL}
} | Language models are rarely shown fruitful mistakes while training. They then struggle to look beyond the next token, suffering from a snowballing of errors and struggling to predict the consequence of their actions several steps ahead. In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string --- stream of search (SoS). We propose a unified language for search that captures an array of different symbolic search strategies. We demonstrate our approach using the simple yet difficult game of Countdown, where the goal is to combine input numbers with arithmetic operations to reach a target number. We pretrain a transformer-based language model from scratch on a dataset of streams of search generated by heuristic solvers. We find that SoS pretraining increases search accuracy by 25\% over models trained to predict only the optimal search trajectory. We further finetune this model with two policy improvement methods: Advantage-Induced Policy Alignment (APA) and Self-Taught Reasoner (STaR). The finetuned SoS models solve 36\% of previously unsolved problems, including problems that cannot be solved by any of the heuristic solvers. Our results indicate that language models can learn to solve problems via search, self-improve to flexibly use different search strategies, and potentially discover new ones. | Stream of Search (SoS): Learning to Search in Language | [
"Kanishk Gandhi",
"Denise H J Lee",
"Gabriel Grand",
"Muxin Liu",
"Winson Cheng",
"Archit Sharma",
"Noah Goodman"
] | Conference | Oral | 2404.03683 | [
"https://github.com/kanishkg/stream-of-search"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 289 |
|
null | https://openreview.net/forum?id=1pgfvZj0Rx | @inproceedings{
grabb2024risks,
title={Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation},
author={Declan Grabb and Max Lamparth and Nina Vasan},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=1pgfvZj0Rx}
} | Amidst the growing interest in developing task-autonomous AI for automated mental health care, this paper addresses the ethical and practical challenges associated with the issue and proposes a structured framework that delineates levels of autonomy, outlines ethical requirements, and defines beneficial default behaviors for AI agents in the context of mental health support. We also evaluate fourteen state-of-the-art language models (ten off-the-shelf, four fine-tuned) using 16 mental health-related questions designed to reflect various mental health conditions, such as psychosis, mania, depression, suicidal thoughts, and homicidal tendencies. The question design and response evaluations were conducted by mental health clinicians (M.D.s). We find that existing language models are insufficient to match the standard provided by human professionals who can navigate nuances and appreciate context. This is due to a range of issues, including overly cautious or sycophantic responses and the absence of necessary safeguards. Alarmingly, we find that most of the tested models could cause harm if accessed in mental health emergencies, failing to protect users and potentially exacerbating existing symptoms. We explore solutions to enhance the safety of current models. Before the release of increasingly task-autonomous AI systems in mental health, it is crucial to ensure that these models can reliably detect and manage symptoms of common psychiatric disorders to prevent harm to users. This involves aligning with the ethical framework and default behaviors outlined in our study. We contend that model developers are responsible for refining their systems per these guidelines to safeguard against the risks posed by current AI technologies to user mental health and safety.
Trigger warning: Contains and discusses examples of sensitive mental health topics, including suicide and self-harm. | Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation | [
"Declan Grabb",
"Max Lamparth",
"Nina Vasan"
] | Conference | Poster | 2406.11852 | [
"https://github.com/maxlampe/taimh_eval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 290 |
|
null | https://openreview.net/forum?id=1eg6UnpYu7 | @inproceedings{
wu2024prompt,
title={Prompt Public Large Language Models to Synthesize Data for Private On-device Applications},
author={Shanshan Wu and Zheng Xu and Yanxiang Zhang and Yuanbo Zhang and Daniel Ramage},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=1eg6UnpYu7}
} | Pre-training on public data is an effective method to improve the performance for federated learning (FL) with differential privacy (DP). This paper investigates how large language models (LLMs) trained on public data can improve the quality of pre-training data for the on-device language models trained with DP and FL. We carefully design LLM prompts to filter and transform existing public data, and generate new data to resemble the real user data distribution. The model pre-trained on our synthetic dataset achieves relative improvement of 19.0\% and 22.8\% in next word prediction accuracy compared to the baseline model pre-trained on a standard public dataset, when evaluated over the real user data in Gboard (Google Keyboard, a production mobile keyboard application). Furthermore, our method achieves evaluation accuracy better than or comparable to the baseline during the DP FL fine-tuning over the user data from millions of mobile devices, and our final model outperforms the baseline in production A/B testing. Our experiments demonstrate the strengths of LLMs in synthesizing data close to the private distribution even without accessing the private data, and also suggest future research directions to further reduce the distribution gap. | Prompt Public Large Language Models to Synthesize Data for Private On-device Applications | [
"Shanshan Wu",
"Zheng Xu",
"Yanxiang Zhang",
"Yuanbo Zhang",
"Daniel Ramage"
] | Conference | Poster | 2404.04360 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 291 |
|
null | https://openreview.net/forum?id=1ba209BACA | @inproceedings{
wu2024agentdocedit,
title={Agent-DocEdit: Language-Instructed {LLM} Agent for Content-Rich Document Editing},
author={Te-Lin Wu and Rajiv Jain and Yufan Zhou and Puneet Mathur and Vlad I Morariu},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=1ba209BACA}
} | Editing content-rich and multimodal documents, such as posters, flyers, and slides, can be tedious if the edits are complex, repetitive, or require subtle skills and deep knowledge of the editing software.
Motivated by recent advancements in both Large Language Model (LLM) agents and multimodal modeling, we propose a framework that automates document editing, which takes as input a linguistic edit request from the user and then performs sequential editing actions on the document that satisfy the request.
Our proposed method, Agent-DocEdit, first grounds the edit request directly in the underlying document structure to identify the elements that need to be manipulated. Then, we rely on the agent capabilities of LLMs to generate an edit program which calls a set of pre-defined APIs to modify the underlying structure of the document.
To improve the generated edit program, we leverage a feedback mechanism incorporating a deterministic code executor and a multimodal LLM.
We demonstrate the effectiveness of our proposed modularized LLM editing agent on the DocEdit dataset, where Agent-DocEdit outperforms existing state-of-the-art methods by 70+% in document element grounding and 16+% on final rendition generation. | Agent-DocEdit: Language-Instructed LLM Agent for Content-Rich Document Editing | [
"Te-Lin Wu",
"Rajiv Jain",
"Yufan Zhou",
"Puneet Mathur",
"Vlad I Morariu"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 292 |
||
null | https://openreview.net/forum?id=1Tny4KgGO2 | @inproceedings{
xie2024from,
title={From Strategic Narratives to Code-Like Cognitive Models: An {LLM}-Based Approach in A Sorting Task},
author={Hanbo Xie and Hua-Dong Xiong and Robert Wilson},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=1Tny4KgGO2}
} | One of the goals of Cognitive Science is to understand the cognitive processes underlying human behavior. Traditionally, this goal has been approached by analyzing simple behaviors, such as choices and response times, to try to indirectly infer mental processes. However, a more direct approach is to simply ask people to report their thoughts - for example, by having them introspect after the fact about the thought processes they used to complete a task. However, the data generated by such verbal reports have been hard to analyze, and whether the reported thoughts are an accurate reflection of the underlying cognitive processes has been difficult to test. Here we take a first stab at addressing these questions by using large language models to analyze verbally reported strategies in a sorting task. In the task, participants sort lists of pictures with unknown orders by pairwise comparison. After completing the task, participants wrote a description of their strategy for completing the task. To test whether these strategy descriptions contained information about people’s actual strategies, we compared their choice behavior with their descriptions of the task. First, we compared the descriptions and choices at the level of strategy, finding that people who used similar sorting algorithms (based on their choices) provided similar verbal descriptions (based on the embeddings of these descriptions in the LLM). Next, we generated code based on their strategy descriptions using GPT-4-Turbo and compared the simulated behaviors from the code to their actual choice behavior, showing that the LLM-generated code predicts choice more accurately than chance and other, more stringent, controls. Finally, we also compare the simulated behaviors of generated codes with those from standard algorithms and infer the strategies that this code internally represents. In sum, our study offers a novel approach to modeling human cognitive processes by building code-like cognitive models from introspections, shedding light on the intersection of Artificial Intelligence and Cognitive Sciences. | From Strategic Narratives to Code-Like Cognitive Models: An LLM-Based Approach in A Sorting Task | [
"Hanbo Xie",
"Hua-Dong Xiong",
"Robert Wilson"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 293 |
||
null | https://openreview.net/forum?id=18iNTRPx8c | @inproceedings{
chen2024see,
title={See What {LLM}s Cannot Answer: A Self-Challenge Framework for Uncovering {LLM} Weaknesses},
author={Yulong Chen and Yang Liu and Jianhao Yan and Xuefeng Bai and Ming Zhong and Yinghao Yang and Ziyi Yang and Chenguang Zhu and Yue Zhang},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=18iNTRPx8c}
} | The impressive performance of Large Language Models (LLMs) has consistently surpassed numerous human-designed benchmarks, presenting new challenges in assessing the shortcomings of LLMs.
Designing tasks and finding LLMs' limitations are becoming increasingly important.
In this paper, we investigate the question of whether an LLM can discover its own limitations from the errors it makes.
To this end, we propose a Self-Challenge evaluation framework with human-in-the-loop.
Starting from seed instances that GPT-4 fails to answer, we prompt GPT-4 to summarize error patterns that can be used to generate new instances and incorporate human feedback on them to refine these patterns for generating more challenging data, iteratively.
We end up with 8 diverse patterns, such as text manipulation and questions with assumptions.
We then build a benchmark, SC-G4, consisting of 1,835 instances generated by GPT-4 using these patterns, with human-annotated gold responses.
The SC-G4 serves as a challenging benchmark that allows for a detailed assessment of LLMs' abilities.
Our results show that only 44.96\% of instances in SC-G4 can be answered correctly by GPT-4.
Interestingly, our pilot study indicates that these error patterns also challenge other LLMs, such as Claude-3 and Llama-3, and cannot be fully resolved through fine-tuning. Our work takes the first step to demonstrate that LLMs can autonomously identify their inherent flaws and provide insights for future dynamic and automatic evaluation. | See What LLMs Cannot Answer: A Self-Challenge Framework for Uncovering LLM Weaknesses | [
"Yulong Chen",
"Yang Liu",
"Jianhao Yan",
"Xuefeng Bai",
"Ming Zhong",
"Yinghao Yang",
"Ziyi Yang",
"Chenguang Zhu",
"Yue Zhang"
] | Conference | Poster | 2408.08978 | [
"https://github.com/cylnlp/Self-Challenge-GPT4"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 294 |
|
null | https://openreview.net/forum?id=0oiG1KigYN | @inproceedings{
adams2024speer,
title={{SPEER}: Sentence-Level Planning of Long Clinical Summaries via Embedded Entity Retrieval},
author={Griffin Thomas Adams and Jason Zucker and No{\'e}mie Elhadad},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=0oiG1KigYN}
} | Clinicians must write a lengthy summary each time a patient is discharged from the hospital. This task is time-consuming due to the sheer number of unique clinical concepts covered in the admission. Identifying and covering salient entities is vital for the summary to be clinically useful. We fine-tune open-source LLMs (Mistral-7B-Instruct and Zephyr-7B-$\beta$) on the task and find that they generate incomplete and unfaithful summaries. To increase entity coverage, we train a smaller, encoder-only model to predict salient entities, which are treated as content-plans to guide the LLM. To encourage the LLM to focus on specific mentions in the source notes, we propose SPEER: Sentence-level Planning via Embedded Entity Retrieval. Specifically, we mark each salient entity span with special "{{ }}" boundary tags and instruct the LLM to retrieve marked spans before generating each sentence. Sentence-level planning acts as a form of state tracking in that the model is explicitly recording the entities it uses. We fine-tune Mistral and Zephyr variants on a large-scale, diverse dataset of ~167k in-patient hospital admissions and evaluate on 3 datasets. SPEER shows gains in both coverage and faithfulness metrics over non-guided and guided baselines. | SPEER: Sentence-Level Planning of Long Clinical Summaries via Embedded Entity Retrieval | [
"Griffin Thomas Adams",
"Jason Zucker",
"Noémie Elhadad"
] | Conference | Poster | 2401.02369 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 295 |
|
null | https://openreview.net/forum?id=0o95CVdNuz | @inproceedings{
zhang2024effective,
title={Effective Prompt Extraction from Language Models},
author={Yiming Zhang and Nicholas Carlini and Daphne Ippolito},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=0o95CVdNuz}
} | The text generated by large language models is commonly controlled by prompting, where a prompt prepended to a user’s query guides the model’s output. The prompts used by companies to guide their models are often treated as secrets, to be hidden from the user making the query. They have even been treated as commodities to be bought and sold on marketplaces. However, anecdotal reports have shown adversarial users employing prompt extraction attacks to recover these prompts. In this paper, we present a framework for systematically measuring the effectiveness of these attacks. In experiments with 3 different sources of prompts and 11 underlying large language models, we find that simple text-based attacks can in fact reveal prompts with high probability. Our framework determines with high precision whether an extracted prompt is the actual secret prompt, rather than a model hallucination. Prompt extraction from real systems such as Claude 3 and ChatGPT further suggests that system prompts can be revealed by an adversary despite existing defenses in place. | Effective Prompt Extraction from Language Models | [
"Yiming Zhang",
"Nicholas Carlini",
"Daphne Ippolito"
] | Conference | Poster | 2307.06865 | [
"https://github.com/y0mingzhang/prompt-extraction"
] | https://huggingface.co/papers/2307.06865 | 0 | 0 | 0 | 2 | [] | [] | [] | 1 | 296 |
null | https://openreview.net/forum?id=0VLBwQGWpA | @inproceedings{
yang2024react,
title={ReAct Meets ActRe: Autonomous Annotation of Agent Trajectories for Contrastive Self-Training},
author={Zonghan Yang and Peng Li and Ming Yan and Ji Zhang and Fei Huang and Yang Liu},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=0VLBwQGWpA}
} | Language agents have demonstrated autonomous decision-making abilities by reasoning with foundation models. Recently, efforts have been made to train language agents for performance improvement, with multi-step reasoning and action trajectories as the training data. However, collecting such trajectories still requires considerable human effort, by either artificial annotation or implementations of diverse prompting frameworks. In this work, we propose A$^\mathbf{3}$T, a framework that enables the Autonomous Annotation of Agent Trajectories in the style of ReAct. The central role is an ActRe prompting agent, which explains the reason for an arbitrary action. When randomly sampling an external action, the ReAct-style agent could query the ActRe agent with the action to obtain its textual rationales. Novel trajectories are then synthesized by prepending the posterior reasoning from ActRe to the sampled action. In this way, the ReAct-style agent executes multiple trajectories for the failed tasks, and selects the successful ones to supplement its failed trajectory for contrastive self-training. Realized by policy gradient methods with binarized rewards, the contrastive self-training with accumulated trajectories facilitates a closed loop for multiple rounds of language agent self-improvement. We conduct experiments using QLoRA fine-tuning with the open-sourced Mistral-7B-Instruct-v0.2. In AlfWorld, the agent trained with A$^3$T obtains a 1-shot success rate of 96\%, and 100\% success with 4 iterative rounds. In WebShop, the 1-shot performance of the A$^3$T agent matches human average, and 4 rounds of iterative refinement lead to the performance approaching human experts. A$^3$T agents significantly outperform existing techniques, including prompting with GPT-4, advanced agent frameworks, and fully fine-tuned LLMs. | ReAct Meets ActRe: Autonomous Annotation of Agent Trajectories for Contrastive Self-Training | [
"Zonghan Yang",
"Peng Li",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Yang Liu"
] | Conference | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 297 |
||
null | https://openreview.net/forum?id=0UK8c2kg7c | @inproceedings{
hu2024instructav,
title={Instruct{AV}: Instruction Fine-tuning Large Language Models for Authorship Verification},
author={Yujia Hu and Zhiqiang Hu and Chun Wei Seah and Roy Ka-Wei Lee},
booktitle={First Conference on Language Modeling},
year={2024},
url={https://openreview.net/forum?id=0UK8c2kg7c}
} | Large Language Models (LLMs) have demonstrated remarkable proficiency in a wide range of NLP tasks. However, when it comes to authorship verification (AV) tasks, which involve determining whether two given texts share the same authorship, even advanced models like ChatGPT exhibit notable limitations. This paper introduces a novel approach, termed InstructAV, for authorship verification. This approach utilizes LLMs in conjunction with a parameter-efficient fine-tuning (PEFT) method to simultaneously improve accuracy and explainability. The distinctiveness of InstructAV lies in its ability to align classification decisions with transparent and understandable explanations, representing a significant progression in the field of authorship verification. Through comprehensive experiments conducted across various datasets, InstructAV demonstrates its state-of-the-art performance on the AV task, offering high classification accuracy coupled with enhanced explanation reliability. | InstructAV: Instruction Fine-tuning Large Language Models for Authorship Verification | [
"Yujia Hu",
"Zhiqiang Hu",
"Chun Wei Seah",
"Roy Ka-Wei Lee"
] | Conference | Poster | 2407.12882 | [
"https://github.com/Social-AI-Studio/InstructAV"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | 298 |