Muennighoff committed
Commit 2df4202 (verified) · 1 parent: daf63cb

Scheduled Commit

data/retrieval_battle-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -22,3 +22,4 @@
  {"tstamp": 1722359560.756, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "a0556f3c9cad49b99d474ca023c060e7", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: Chiral Solitons in Generalized Korteweg-de Vries Equations\n\nAbstract: Generalizations of the Korteweg-de Vries equation are considered, and some explicit solutions are presented. There are situations where solutions engender the interesting property of being chiral, that is, of having velocity determined in terms of the parameters that define the generalized equation, with a definite sign."]], "0_corpus": "arxiv", "1_conv_id": "fbeacc6314704f0c9a33a8f327ceb770", "1_model_name": "embed-english-v3.0", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "1_corpus": "arxiv"}
  {"tstamp": 1722359624.4824, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fdd19901cd4547e3815c9d79d5751118", "0_model_name": "BM25", "0_prompt": "How metadata is used with RAG", "0_output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "0_corpus": "arxiv", "1_conv_id": "c628bf57e2434ce999a71c814ebac2e8", "1_model_name": "embed-english-v3.0", "1_prompt": "How metadata is used with RAG", "1_output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "1_corpus": "arxiv"}
  {"tstamp": 1722359625.838, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "cfd7524090934e63a83ff9284cc4d54c", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "0_output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: StructCoder: Structure-Aware Transformer for Code Generation\n\nAbstract: There has been a recent surge of interest in automating software engineering tasks using deep learning. This paper addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most state-of-the-art deep learning models for code generation use training strategies primarily designed for natural language. However, understanding and generating code requires a more rigorous comprehension of the code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model where both the encoder and decoder are explicitly trained to recognize the syntax and data flow in the source and target codes, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but we also support the decoder in preserving the syntax and data flow of the target code by introducing two novel auxiliary tasks: AST (Abstract Syntax Tree) paths prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that models both syntax and data flow to enhance the quality of generated code. The proposed StructCoder model achieves state-of-the-art performance on code translation and text-to-code generation tasks in the CodeXGLUE benchmark, and improves over baselines of similar size on the APPS code generation benchmark. Our code is publicly available at https://github.com/reddy-lab-code-research/StructCoder/."]], "0_corpus": "arxiv", "1_conv_id": "1e72b400d20445c1a5f905e301166625", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "1_output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: Empirical Study of Transformers for Source Code\n\nAbstract: Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model."]], "1_corpus": "arxiv"}
 
 
+ {"tstamp": 1722360183.5049, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "8e4d32f5269142e68ec26538741715ea", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Why is the sky blue?", "0_output": [["Why is the sky blue?", "Sky\n\nEtymology The word sky comes from the Old Norse , meaning 'cloud, abode of God'. The Norse term is also the source of the Old English , which shares the same Indo-European base as the classical Latin , meaning 'obscure'.\nIn Old English, the term heaven was used to describe the observable expanse above the earth. During the period of Middle English, \"heaven\" began shifting toward its current, religious meaning.\nDuring daytime\nExcept for direct sunlight, most of the light in the daytime sky is caused by scattering, which is dominated by a small-particle limit called Rayleigh scattering. The scattering due to molecule-sized particles (as in air) is greater in the directions both toward and away from the source of light than it is in directions perpendicular to the incident path. Scattering is significant for light at all visible wavelengths, but is stronger at the shorter (bluer) end of the visible spectrum, meaning that the scattered light is bluer than its source: the Sun. The remaining direct sunlight, having lost some of its shorter-wavelength components, appears slightly less blue.\nScattering also occurs even more strongly in clouds. Individual water droplets refract white light into a set of colored rings. If a cloud is thick enough, scattering from multiple water droplets will wash out the set of colored rings and create a washed-out white color."]], "0_corpus": "wikipedia", "1_conv_id": "00cd735a722943bca9ff50386cee6ee3", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "Why is the sky blue?", "1_output": [["Why is the sky blue?", "Rayleigh scattering\n\nRayleigh scattering ( ), named after the 19th-century British physicist Lord Rayleigh (John William Strutt), is the predominantly elastic scattering of light, or other electromagnetic radiation, by particles with a size much smaller than the wavelength of the radiation. For light frequencies well below the resonance frequency of the scattering medium (normal dispersion regime), the amount of scattering is inversely proportional to the fourth power of the wavelength, e.g., a blue color is scattered much more than a red color as light propagates through air.\nRayleigh scattering results from the electric polarizability of the particles. The oscillating electric field of a light wave acts on the charges within a particle, causing them to move at the same frequency. The particle, therefore, becomes a small radiating dipole whose radiation we see as scattered light. The particles may be individual atoms or molecules; it can occur when light travels through transparent solids and liquids, but is most prominently seen in gases.\nRayleigh scattering of sunlight in Earth's atmosphere causes diffuse sky radiation, which is the reason for the blue color of the daytime and twilight sky, as well as the yellowish to reddish hue of the low Sun. Sunlight is also subject to Raman scattering, which changes the rotational state of the molecules and gives rise to polarization effects."]], "1_corpus": "wikipedia"}
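Records in this file are newline-delimited JSON, one battle per line, so they can be processed with standard tooling. A minimal sketch of tallying vote types (`tally_votes` is a hypothetical helper; the `task_type` and `type` field names are taken from the records above):

```python
import json

def tally_votes(lines):
    """Count vote types ('tievote', 'rightvote', 'bothbadvote', ...)
    across retrieval-battle JSONL records."""
    counts = {}
    for line in lines:
        rec = json.loads(line)
        if rec.get("task_type") != "retrieval":
            continue  # skip records from other arenas
        vote = rec["type"]
        counts[vote] = counts.get(vote, 0) + 1
    return counts

# Tiny inline sample mirroring the record shape above:
sample = [
    '{"task_type": "retrieval", "type": "rightvote"}',
    '{"task_type": "retrieval", "type": "bothbadvote"}',
]
print(tally_votes(sample))  # {'rightvote': 1, 'bothbadvote': 1}
```

In practice one would stream the actual `data/retrieval_battle-*.jsonl` file line by line rather than an inline list.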