Dataset: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
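The Formats and Libraries tags above indicate that the data is stored as JSON Lines and can be read with the Hugging Face Datasets library. A minimal, hypothetical loading sketch (it assumes a local clone of the repository and uses the file path that appears in the diff below):

    # Hypothetical sketch: load the JSONL file touched by this commit with the
    # Hugging Face `datasets` library. Assumes the script runs from the root of
    # a local clone of the dataset repository.
    from datasets import load_dataset

    ds = load_dataset(
        "json",
        data_files="data/retrieval_individual-25bfac43-43c3-4e03-a7e4-e33df32ce74f.jsonl",
        split="train",
    )
    print(ds[0]["model_name"], ds[0]["prompt"])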
Muennighoff committed
Commit be5ab5d · verified · 1 parent: 06d5a64

Scheduled Commit
data/retrieval_individual-25bfac43-43c3-4e03-a7e4-e33df32ce74f.jsonl CHANGED
@@ -5,3 +5,5 @@
  {"tstamp": 1722570505.8326, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722570505.662, "finish": 1722570505.8326, "ip": "", "conv_id": "dda318fa76fd443ab16a1186bfa8465c", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "French language model trained on web crawled data attaining state of the art", "output": [["French language model trained on web crawled data attaining state of the art", "Title: CamemBERT: a Tasty French Language Model\n\nAbstract: Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models --in all languages except English-- very limited. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks. We show that the use of web crawled data is preferable to the use of Wikipedia data. More surprisingly, we show that a relatively small web crawled dataset (4GB) leads to results that are as good as those obtained using larger datasets (130+GB). Our best performing model CamemBERT reaches or improves the state of the art in all four downstream tasks."]], "corpus": "arxiv"}
  {"tstamp": 1722570512.7902, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722570512.5754, "finish": 1722570512.7902, "ip": "", "conv_id": "e7b6e9f57d374e85af90476a503221fb", "model_name": "GritLM/GritLM-7B", "prompt": "A paper introducing GLUCOSE, a dataset for modeling implicit commonsense causal knowledge in narratives.", "output": [["A paper introducing GLUCOSE, a dataset for modeling implicit commonsense causal knowledge in narratives.", "Title: GLUCOSE: GeneraLized and COntextualized Story Explanations\n\nAbstract: When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. This paper details two concrete contributions. First, we present our platform for effectively crowdsourcing GLUCOSE data at scale, which uses semi-structured templates to elicit causal explanations. Using this platform, we collected a total of ~670K specific statements and general rules that capture implicit commonsense knowledge about everyday situations. Second, we show that existing knowledge resources and pretrained language models do not include or readily predict GLUCOSE's rich inferential content. However, when state-of-the-art neural models are trained on this knowledge, they can start to make commonsense inferences on unseen stories that match humans' mental models."]], "corpus": "arxiv"}
  {"tstamp": 1722570512.7902, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722570512.5754, "finish": 1722570512.7902, "ip": "", "conv_id": "fa9bcaad1a8342d199e634f1689f6762", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A paper introducing GLUCOSE, a dataset for modeling implicit commonsense causal knowledge in narratives.", "output": [["A paper introducing GLUCOSE, a dataset for modeling implicit commonsense causal knowledge in narratives.", "Title: GLUCOSE: GeneraLized and COntextualized Story Explanations\n\nAbstract: When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. This paper details two concrete contributions. First, we present our platform for effectively crowdsourcing GLUCOSE data at scale, which uses semi-structured templates to elicit causal explanations. Using this platform, we collected a total of ~670K specific statements and general rules that capture implicit commonsense knowledge about everyday situations. Second, we show that existing knowledge resources and pretrained language models do not include or readily predict GLUCOSE's rich inferential content. However, when state-of-the-art neural models are trained on this knowledge, they can start to make commonsense inferences on unseen stories that match humans' mental models."]], "corpus": "arxiv"}
+ {"tstamp": 1722570531.5163, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722570531.3633, "finish": 1722570531.5163, "ip": "", "conv_id": "ef959b98a3944f128ee5903cf8240661", "model_name": "GritLM/GritLM-7B", "prompt": "A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "output": [["A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "Title: Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers\n\nAbstract: Automatic prompt optimization is an important approach to improving the performance of large language models (LLMs). Recent research demonstrates the potential of using LLMs as prompt optimizers, which can generate improved task prompts via iterative refinement. In this paper, we propose a novel perspective to investigate the design of LLM-based prompt optimizers, by drawing an analogy with gradient-based model optimizers. To connect these two approaches, we identify two pivotal factors in model parameter learning: update direction and update method. Focused on the two aspects, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies for LLM-based prompt optimizers. By systematically analyzing a rich set of improvement strategies, we further develop a capable Gradient-inspired LLM-based Prompt Optimizer called GPO. At each step, it first retrieves relevant prompts from the optimization trajectory as the update direction. Then, it utilizes the generation-based refinement strategy to perform the update, while controlling the edit distance through a cosine-based decay strategy. Extensive experiments demonstrate the effectiveness and efficiency of GPO. In particular, GPO brings an additional improvement of up to 56.8% on Big-Bench Hard and 55.3% on MMLU compared to baseline methods."]], "corpus": "arxiv"}
+ {"tstamp": 1722570531.5163, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722570531.3633, "finish": 1722570531.5163, "ip": "", "conv_id": "b2246019be7c456b939d67fbd7234a37", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "output": [["A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "Title: Efficient Prompting Methods for Large Language Models: A Survey\n\nAbstract: Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks. While this approach opens the door to in-context learning of LLMs, it brings the additional computational burden of model inference and human effort of manual-designed prompts, particularly when using lengthy and complex prompts to guide and control the behavior of LLMs. As a result, the LLM field has seen a remarkable surge in efficient prompting methods. In this paper, we present a comprehensive overview of these methods. At a high level, efficient prompting methods can broadly be categorized into two approaches: prompting with efficient computation and prompting with efficient design. The former involves various ways of compressing prompts, and the latter employs techniques for automatic prompt optimization. We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions."]], "corpus": "arxiv"}
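Each appended line is a single retrieval-arena record: a timestamp, the embedding model that was queried, the user prompt, and the retrieved (query, passage) pairs in the "output" field. A minimal parsing sketch using only the standard library, assuming a local copy of the file at the path shown above:

    # Sketch: iterate over the JSONL records and print, for each one, the model
    # name and the title line of every retrieved passage.
    import json

    path = "data/retrieval_individual-25bfac43-43c3-4e03-a7e4-e33df32ce74f.jsonl"
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            for query, passage in record["output"]:
                print(record["model_name"], "->", passage.splitlines()[0])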