Datasets: mteb /
Muennighoff committed
Commit 896dd0b · verified · 1 Parent(s): 32ba689

Scheduled Commit
data/retrieval_battle-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -20,3 +20,5 @@
  {"tstamp": 1722359302.8775, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f851a7c74e674aa2962426b5a7b47a4b", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "History of robots\n\nIt features a female humanoid automaton standing by a basin filled with water.When the user pulls the lever, the water drains and the female automaton refills the basin. Furthermore, he created a robotic musical band. According to Mark Rosheim, unlike Greek designs Arab automata worked with dramatic illusion and manipulated the human perception for practical application. The segmental gears described in The Book of Knowledge of Ingenious Mechanical Devices, published by Al-Jazari shortly before his death in 1206, appeared 100 years later in the most advanced European clocks. Al-Jazari also published instructions on the construction of humanoid automata. The first water clocks modeled on Arabic designs were constructed in Europe about 1000 CE, possibly on the basis of the information that was transmitted during Muslim-Christian contact in Sicily and Spain. Among the first recorded European water clocks is that of Gerbert of Aurillac, built in 985 CE. Hero's works on automata were translated into Latin amid the 12th century Renaissance. The early 13th-century artist-engineer Villard de Honnecourt sketched plans for several automata. "]], "0_corpus": "wikipedia", "1_conv_id": "487ff743d431410c884ef2cb7b8e972f", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Cognitive test\n\nCognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence."]], "1_corpus": "wikipedia"}
  {"tstamp": 1722359487.5637, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "85bfbae8a8b04f2690900a739f4a2a2c", "0_model_name": "embed-english-v3.0", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "0_corpus": "arxiv", "1_conv_id": "bf0dc2d5a59f4c348d1534b4b4121ac7", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"}
  {"tstamp": 1722359560.756, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "a0556f3c9cad49b99d474ca023c060e7", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: Chiral Solitons in Generalized Korteweg-de Vries Equations\n\nAbstract: Generalizations of the Korteweg-de Vries equation are considered, and some explicit solutions are presented. There are situations where solutions engender the interesting property of being chiral, that is, of having velocity determined in terms of the parameters that define the generalized equation, with a definite sign."]], "0_corpus": "arxiv", "1_conv_id": "fbeacc6314704f0c9a33a8f327ceb770", "1_model_name": "embed-english-v3.0", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722359624.4824, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fdd19901cd4547e3815c9d79d5751118", "0_model_name": "BM25", "0_prompt": "How metadata is used with RAG", "0_output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "0_corpus": "arxiv", "1_conv_id": "c628bf57e2434ce999a71c814ebac2e8", "1_model_name": "embed-english-v3.0", "1_prompt": "How metadata is used with RAG", "1_output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722359625.838, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "cfd7524090934e63a83ff9284cc4d54c", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "0_output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: StructCoder: Structure-Aware Transformer for Code Generation\n\nAbstract: There has been a recent surge of interest in automating software engineering tasks using deep learning. This paper addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most state-of-the-art deep learning models for code generation use training strategies primarily designed for natural language. However, understanding and generating code requires a more rigorous comprehension of the code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model where both the encoder and decoder are explicitly trained to recognize the syntax and data flow in the source and target codes, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but we also support the decoder in preserving the syntax and data flow of the target code by introducing two novel auxiliary tasks: AST (Abstract Syntax Tree) paths prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that models both syntax and data flow to enhance the quality of generated code. The proposed StructCoder model achieves state-of-the-art performance on code translation and text-to-code generation tasks in the CodeXGLUE benchmark, and improves over baselines of similar size on the APPS code generation benchmark. Our code is publicly available at https://github.com/reddy-lab-code-research/StructCoder/."]], "0_corpus": "arxiv", "1_conv_id": "1e72b400d20445c1a5f905e301166625", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "1_output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: Empirical Study of Transformers for Source Code\n\nAbstract: Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model."]], "1_corpus": "arxiv"}
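Each battle record above is a single JSON object per line, with a `type` field carrying the vote (`tievote`, `rightvote`, `bothbadvote`, …). As a minimal sketch of consuming this file (the helper name and the inlined sample records are illustrative, not part of the dataset tooling), the vote types can be tallied like this:

```python
import json
from collections import Counter

def tally_votes(lines):
    """Count vote types across retrieval battle records (one JSON object per line)."""
    votes = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines between records
        rec = json.loads(line)
        if rec.get("task_type") == "retrieval":
            votes[rec["type"]] += 1
    return dict(votes)

# Two abbreviated records shaped like those in the diff above:
sample = [
    '{"tstamp": 1722359302.8775, "task_type": "retrieval", "type": "tievote"}',
    '{"tstamp": 1722359560.756, "task_type": "retrieval", "type": "bothbadvote"}',
]
print(tally_votes(sample))  # {'tievote': 1, 'bothbadvote': 1}
```

In the full file the same function would run over `open(path)` directly, since file iteration yields one line per record.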
data/retrieval_individual-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -73,3 +73,5 @@
  {"tstamp": 1722359551.7443, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722359549.5858, "finish": 1722359551.7443, "ip": "", "conv_id": "fbeacc6314704f0c9a33a8f327ceb770", "model_name": "embed-english-v3.0", "prompt": "What is InPars-v2 made to do?", "output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "corpus": "arxiv"}
  {"tstamp": 1722359580.4895, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722359580.242, "finish": 1722359580.4895, "ip": "", "conv_id": "fdd19901cd4547e3815c9d79d5751118", "model_name": "BM25", "prompt": "How metadata is used with RAG", "output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "corpus": "arxiv"}
  {"tstamp": 1722359580.4895, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722359580.242, "finish": 1722359580.4895, "ip": "", "conv_id": "c628bf57e2434ce999a71c814ebac2e8", "model_name": "embed-english-v3.0", "prompt": "How metadata is used with RAG", "output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "corpus": "arxiv"}
+ {"tstamp": 1722359599.5112, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722359599.2813, "finish": 1722359599.5112, "ip": "", "conv_id": "cfd7524090934e63a83ff9284cc4d54c", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: StructCoder: Structure-Aware Transformer for Code Generation\n\nAbstract: There has been a recent surge of interest in automating software engineering tasks using deep learning. This paper addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most state-of-the-art deep learning models for code generation use training strategies primarily designed for natural language. However, understanding and generating code requires a more rigorous comprehension of the code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model where both the encoder and decoder are explicitly trained to recognize the syntax and data flow in the source and target codes, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but we also support the decoder in preserving the syntax and data flow of the target code by introducing two novel auxiliary tasks: AST (Abstract Syntax Tree) paths prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that models both syntax and data flow to enhance the quality of generated code. The proposed StructCoder model achieves state-of-the-art performance on code translation and text-to-code generation tasks in the CodeXGLUE benchmark, and improves over baselines of similar size on the APPS code generation benchmark. Our code is publicly available at https://github.com/reddy-lab-code-research/StructCoder/."]], "corpus": "arxiv"}
+ {"tstamp": 1722359599.5112, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722359599.2813, "finish": 1722359599.5112, "ip": "", "conv_id": "1e72b400d20445c1a5f905e301166625", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: Empirical Study of Transformers for Source Code\n\nAbstract: Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model."]], "corpus": "arxiv"}
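The individual-retrieval records carry `start` and `finish` timestamps alongside `model_name`, so per-model retrieval latency can be derived directly from the log. A minimal sketch (the function name is illustrative, and the sample record is abbreviated to just the fields used):

```python
import json

def latencies_by_model(lines):
    """Map model_name -> list of retrieval latencies in seconds (finish - start)."""
    out = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        rec = json.loads(line)
        if rec.get("type") != "chat":
            continue  # only per-model retrieval events carry timing fields
        out.setdefault(rec["model_name"], []).append(rec["finish"] - rec["start"])
    return out

# Abbreviated record shaped like line 74 above:
sample = [
    '{"type": "chat", "model_name": "BM25", "start": 1722359580.242, "finish": 1722359580.4895}',
]
print(latencies_by_model(sample))  # BM25 answered in roughly a quarter second
```

Note that battle records (the `retrieval_battle-*.jsonl` file) lack `start`/`finish`; timing lives only in the `retrieval_individual-*.jsonl` log.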