Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Muennighoff committed · Commit bc4d754 · verified · 1 Parent(s): b68f640

Scheduled Commit

data/retrieval_battle-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -5,3 +5,8 @@
5
  {"tstamp": 1722357588.6391, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "d4f10aa5072847e6823253fa571550a1", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Suomen startup skene", "0_output": [["Suomen startup skene", "Startup company\n\nA startup or start-up is a company or project undertaken by an entrepreneur to seek, develop, and validate a scalable business model. While entrepreneurship includes all new businesses including self-employment and businesses that do not intend to go public, startups are new businesses that intend to grow large beyond the solo-founder. During the beginning, startups face high uncertainty and have high rates of failure, but a minority of them do go on to become successful and influential.\nActions\nStartups typically begin by a founder (solo-founder) or co-founders who have a way to solve a problem. The founder of a startup will do the market validation by problem interview, solution interview, and building a minimum viable product (MVP), i.e. a prototype, to develop and validate their business models. The startup process can take a long period of time; hence, sustaining effort is required. Over the long term, sustaining effort is especially challenging because of the high failure rates and uncertain outcomes. Having a business plan in place outlines what to do and how to plan and achieve an idea in the future. Typically, these plans outline the first three to five years of your business strategy.\nDesign principles\nModels behind startups presenting as ventures are usually associated with design science. Design science uses design principles considered to be a coherent set of normative ideas and propositions to design and construct the company's backbone. For example, one of the initial design principles is affordable loss.\nHeuristics and biases in startup actions"]], "0_corpus": "wikipedia", "1_conv_id": "7d2dccb966ad4902a22b492ec1a01a3b", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Suomen startup skene", "1_output": [["Suomen startup skene", "Suseok\n\nHistory Suseok began as votive art over 3000 years ago and became a fixture of Korean society during the Joseon Dynasty, when Confucian scholars displayed them on their writing desks. From here is where the English name \"scholar's rock\" originates. Suseok regained popularity amongst nouveau riche businessmen in the 1980s during Korea's economic boom.\nThe art usually works on three scales: large installations of monumental shaped stones as ornamental gates; medium-sized shaped stones for landscape decoration within Korean gardens; and the smaller shaped stones for scholar's tables, the most important of these scales.\nChinese gongshi influenced the development of suseok in Korea.\nEvaluation\nEarly on, important sites within landscape were marked with shaped stones, similarly to distance markers on post roads. Burial sites were also given permanent marking by large scale tumuli or mounds, often surrounded by anthropomorphic shaped stones much akin to that of Inuit or First Nations' memory markers. 
The animistic belief of nature being alive, and large-scaled elements of nature having souls, has led to the continued use of massive sculpted stone in natural forms throughout Korean traditional entranceways, as the firstgrowth cedarwood traditionally used for gates is now rare.\nAs Confucian scholarship ascended into the golden age of the Joseon dynasty, scholar rocks became an essential fixture of the writing tables of the yangban class of scholars, and a brilliant example of Confucian art."]], "1_corpus": "wikipedia"}
6
  {"tstamp": 1722357673.547, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "40adf2ecfae142f690cf15572d1c2a39", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "0_output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "0_corpus": "arxiv", "1_conv_id": "5b7d23084cf84e698b8ae3ef7e10bdc3", "1_model_name": "embed-english-v3.0", "1_prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "1_output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "1_corpus": "arxiv"}
7
  {"tstamp": 1722358076.7701, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "ba39a43bd1d845a29bb436a813e124f3", "0_model_name": "text-embedding-004", "0_prompt": "inpars", "0_output": [["inpars", "Title: Gamma-Ray Bursts observed by INTEGRAL\n\nAbstract: During the first six months of operations, six Gamma Ray Bursts (GRBs) have been detected in the field of view of the INTEGRAL instruments and localized by the INTEGRAL Burst Alert System (IBAS): a software for the automatic search of GRBs and the rapid distribution of their coordinates. I describe the current performances of IBAS and review the main results obtained so far. The coordinates of the latest burst localized by IBAS, GRB 031203, have been distributed within 20 s from the burst onset and with an uncertainty radius of only 2.7 arcmin."]], "0_corpus": "arxiv", "1_conv_id": "613710642ec9426ca02056c89cf5fbf7", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "inpars", "1_output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions ."]], "1_corpus": "arxiv"}
8
+ {"tstamp": 1722358092.6242, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "d192f963eb874f30877d73a20a3386d7", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "are meet or are met?", "0_output": [["are meet or are met?", "Stackoverflow Stackexchange\n\nQ: Get values from TempData into view I want to get value of TempData into View\nController\n[NonAction]\n public ActionResult RedirectToMessagePage(string title, string message)\n {\n TempData[\"SuccessModel\"] = new Models.SuccessModel { Title = title, Message = message };\n\n return RedirectToAction(\"Result\", \"Common\");\n }\n\nView\nif (TempData[\"SuccessModel\"] != null)\n {\n\n <div id=\"validationMessages\">\n @{\n var errors =TempData.Values;\n }\n @if (errors != null && errors.Count() > 0)\n {\n <div style=\"position:absolute; background:Black; color:White; top:250px; left:550px;\">\n @foreach (var error in errors)\n { \n @error \n }\n </div>\n }\n </div>\n }\n\nI want to get value of Message of TempData[\"SuccessModel\"] into view and want to display it.\nHow can I display it?\n\nA: under\nvar errors =TempData.Values;\n\nyou can cast TempData[\"SuccessModel\"] and get it's message\nvar message = (TempData[\"SuccessModel\"] as Models.SuccessModel).Message;\n\nand use this message variable.\n"]], "0_corpus": "stackexchange", "1_conv_id": "f3eef428ee4847acbdadd37fb9b77a6e", "1_model_name": "BM25", "1_prompt": "are meet or are met?", "1_output": [["are meet or are met?", "Ell Stackexchange\n\nQ: \"Meet my boss\" or \"meet with my boss\"? What is the difference between \"meet\" and \"meet with\"?\nFor example, which of these should I say?\n\n \n*\n \n*I met my boss yesterday \n \n*I met with my boss yesterday\n \n*I'm meeting my friends tonight \n \n*I'm meeting with my friends tonight\n \n\n\nA: \nI met my boss yesterday.\n\nWithout elaboration, this implies that you met your boss for the first time.\n\nI met my boss at Spago* yesterday.\n\nYou had lunch or dinner with your boss. (It was not your first encounter.)\n\nI met with my boss yesterday.\n\nYou got together with your boss for a specific purpose. It was likely to be a working meeting of some sort.\n\nI'm meeting my friends tonight.\n\nYou're going out to have fun with your friends.\n\nI'm meeting with my friends tonight.\n\nYou're getting together with friends for some shared purpose. It might be for fun, but the \"with\" is more likely to be used to suggest a special purpose to the meeting: planning, hobby, whatever.\n* Spago is an upscale restaurant, suitable for business luncheons or important dinners.\n"]], "1_corpus": "stackexchange"}
9
+ {"tstamp": 1722358096.4098, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "2412f917aa0241cfa42543f4dcf47ff5", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "When did american astronauts first walk on the moon", "0_output": [["When did american astronauts first walk on the moon", "Exploration of the Moon\n\nIn September 1968 the Soviet Union's Zond 5 sent tortoises on a circumlunar mission, followed by turtles aboard Zond 6 in November. On December 24, 1968, the crew of Apollo 8—Frank Borman, James Lovell and William Anders—became the first human beings to enter lunar orbit and see the far side of the Moon in person. Humans first landed on the Moon on July 20, 1969. The first humans to walk on the lunar surface were Neil Armstrong, commander of the U.S. mission Apollo 11 and his fellow astronaut Buzz Aldrin.\nThe first robot lunar rover to land on the Moon was the Soviet vessel Lunokhod 1 on November 17, 1970, as part of the Lunokhod programme. To date, the last human to stand on the Moon was Eugene Cernan, who as part of the Apollo 17 mission, walked on the Moon in December 1972.\nMoon rock samples were brought back to Earth by three Luna missions (Luna 16, 20, and 24) and the Apollo missions 11 through 17 (except Apollo 13, which aborted its planned lunar landing). Luna 24 in 1976 was the last Lunar mission by either the Soviet Union or the U.S. until Clementine in 1994. Focus shifted to probes to other planets, space stations, and the Shuttle program.\nBefore the \"Moon race,\" the U.S. had pre-projects for scientific and military moonbases: the Lunex Project and Project Horizon. Besides crewed landings, the abandoned Soviet crewed lunar programs included the building of a multipurpose moonbase \"Zvezda\", the first detailed project, complete with developed mockups of expedition vehicles and surface modules.\nAfter 1990"]], "0_corpus": "wikipedia", "1_conv_id": "4ca696c1ad1e44dd83714298585835db", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "When did american astronauts first walk on the moon", "1_output": [["When did american astronauts first walk on the moon", "Walking on the Moon\n\n\"Walking on the Moon\" is a reggae song by British rock band the Police, released as the second single from their second studio album, Reggatta de Blanc (1979). The song was written by the band's lead vocalist and bassist Sting. It went on to become the band's second hit in the UK.\nBackground\nSting has said that he wrote the song when he was drunk one night after a concert in Munich. The following morning, he remembered the song and wrote it down.\nIn his autobiography, Sting implies that the song was partially inspired by an early girlfriend:\nAccording to Sting, the song was originally recorded \"as a rocker\" in early versions, but it was reworked. The riff, which is played on the bass, was described as \"weird\" and \"jazzy\" by Sting. Guitarist Andy Summers came up with the chord \"which hits after the bass notes\" throughout the song.\n\"Walking on the Moon\" was released as the follow-up single to the British single \"Message in a Bottle\" in late 1979. The song was the Police's second number-one hit single in the United Kingdom. It also reached in Ireland and in Australia but did not chart in the United States."]], "1_corpus": "wikipedia"}
10
+ {"tstamp": 1722358118.4642, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "6e69d5cf6ca64ba795f4d1b52f734ef6", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "inpars", "0_output": [["inpars", "Title: ParPaRaw: Massively Parallel Parsing of Delimiter-Separated Raw Data\n\nAbstract: Parsing is essential for a wide range of use cases, such as stream processing, bulk loading, and in-situ querying of raw data. Yet, the compute-intense step often constitutes a major bottleneck in the data ingestion pipeline, since parsing of inputs that require more involved parsing rules is challenging to parallelise. This work proposes a massively parallel algorithm for parsing delimiter-separated data formats on GPUs. Other than the state-of-the-art, the proposed approach does not require an initial sequential pass over the input to determine a thread's parsing context. That is, how a thread, beginning somewhere in the middle of the input, should interpret a certain symbol (e.g., whether to interpret a comma as a delimiter or as part of a larger string enclosed in double-quotes). Instead of tailoring the approach to a single format, we are able to perform a massively parallel FSM simulation, which is more flexible and powerful, supporting more expressive parsing rules with general applicability. Achieving a parsing rate of as much as 14.2 GB/s, our experimental evaluation on a GPU with 3584 cores shows that the presented approach is able to scale to thousands of cores and beyond. With an end-to-end streaming approach, we are able to exploit the full-duplex capabilities of the PCIe bus and hide latency from data transfers. Considering the end-to-end performance, the algorithm parses 4.8 GB in as little as 0.44 seconds, including data transfers."]], "0_corpus": "arxiv", "1_conv_id": "f4c133383bd5449c9636e530f8d4fdeb", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "inpars", "1_output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions ."]], "1_corpus": "arxiv"}
11
+ {"tstamp": 1722358165.7939, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "78dfb52f27f64b61929bcde3dcc9504b", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "inpars", "0_output": [["inpars", "Title: Phonon Assisted Multimagnon Optical Absorption and Long Lived Two-Magnon States in Undoped Lamellar Copper Oxides\n\nAbstract: We calculate the effective charge for multimagnon infrared (IR) absorption assisted by phonons in the parent insulating compounds of cuprate superconductors and the spectra for two-magnon absorption using interacting spin-wave theory. Recent measured bands in the mid IR [Perkins et al. Phys. Rev. Lett. {\\bf 71} 1621 (1993)] are interpreted as involving one phonon plus a two-magnon virtual bound state, and one phonon plus higher multimagnon absorption processes. The virtual bound state consists of a narrow resonance occurring when the magnon pair has total momentum close to $(\\pi,0)$."]], "0_corpus": "arxiv", "1_conv_id": "59aba873f4524f3298bd6dc89efec421", "1_model_name": "BM25", "1_prompt": "inpars", "1_output": [["inpars", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"}
12
+ {"tstamp": 1722358254.6555, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "620b3c01c65e40488aebdee13adb9ab4", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "when does the stock market open pacific time", "0_output": [["when does the stock market open pacific time", "Tokyo Stock Exchange\n\nHours The exchange's normal trading sessions are from 9:00 a.m. to 11:30 a.m. and from 12:30 p.m. to 3:00 p.m. on all days of the week except Saturdays, Sundays and holidays declared by the Exchange in advance. The exchange is closed for the following holidays: New Year's Day, Coming of Age Day, National Foundation Day, Vernal Equinox Day, Shōwa Day, Constitution Memorial Day, Greenery Day, Children's Day, Marine Day, Respect for the Aged Day, Autumnal Equinox, Health and Sports Day, Culture Day, Labour Thanksgiving Day, and The Emperor's Birthday.\nStock market\nFirst, Second and other Sections until April 4, 2022\nUntil April 4, 2022, corporate shares are listed and traded at Tokyo Stock Exchange in five market sections: the First Section which started when Tokyo Stock Exchange was re-established in 1949 and includes mainly large companies; the Second Section which started in 1961 and includes mainly mid-sized companies; JASDAQ (established in 1991, acquired by Osaka Stock Exchange in 2010, and absorbed into TSE in 2013) and Mothers (Market of the high-growth and emerging stocks, established at TSE in 1999) which are both for emerging companies; and TOKYO PRO Market which was established in 2009 jointly with London Stock Exchange as an Alternative Investment Market (Tokyo AIM) for small companies.\nThere were a total of 3,821 companies listed in Tokyo Stock Exchange, as of March 31, 2022.\nPrime, Standard and Growth markets since April 4, 2022"]], "0_corpus": "wikipedia", "1_conv_id": "0d61a1651b204d21ad4de10b5702eaf5", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "when does the stock market open pacific time", "1_output": [["when does the stock market open pacific time", "Philadelphia Stock Exchange\n\nIn March 2020, the PHLX announced plans to temporarily move to all-electronic trading on March 23, 2020, due to the COVID-19 pandemic. Along with the NYSE and the BSE, the PHLX reopened on May 26, 2020.\nLocations\n1790-1834 – Merchants Coffee House (City Tavern), 138 South 2nd Street at Walnut Street\n1834-1876 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1876-1888 – Girard Bank Building, 120 South 3rd Street, between Walnut and Chestnut Streets\n1888-1902 – Drexel Building, near Chestnut and South 5th Streets\n1902-1912 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1913-1951 – 1411 Walnut Street, between South Broad and South 15th Streets\n1951-1966 – Central Penn Bank Building, 1401 Walnut Street, between South Broad and South 15th Streets\n1966-1981 – 120 South 17th Street, between Sansom Street and Stock Exchange Place (Ionic Street)\nJan-Feb 1969 – Drecker Building, Bala Cynwyd, Pennsylvania (trading floor only)\n1981-2017 – Market and South 19th Streets\n2017-now – FMC Tower at Cira Centre South, 2929 Walnut Street, between Schuylkill Expressway and South 30th Street\nHours\nThe exchange's normal trading sessions are from 9:30 am to 4:00 pm on all days of the week except Saturdays, Sundays and holidays declared by the exchange in advance."]], "1_corpus": "wikipedia"}
data/retrieval_individual-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -30,3 +30,16 @@
30
  {"tstamp": 1722357648.0654, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722357647.8007, "finish": 1722357648.0654, "ip": "", "conv_id": "5b7d23084cf84e698b8ae3ef7e10bdc3", "model_name": "embed-english-v3.0", "prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "corpus": "arxiv"}
31
  {"tstamp": 1722358062.7324, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722358060.516, "finish": 1722358062.7324, "ip": "", "conv_id": "ba39a43bd1d845a29bb436a813e124f3", "model_name": "text-embedding-004", "prompt": "inpars", "output": [["inpars", "Title: Gamma-Ray Bursts observed by INTEGRAL\n\nAbstract: During the first six months of operations, six Gamma Ray Bursts (GRBs) have been detected in the field of view of the INTEGRAL instruments and localized by the INTEGRAL Burst Alert System (IBAS): a software for the automatic search of GRBs and the rapid distribution of their coordinates. I describe the current performances of IBAS and review the main results obtained so far. The coordinates of the latest burst localized by IBAS, GRB 031203, have been distributed within 20 s from the burst onset and with an uncertainty radius of only 2.7 arcmin."]], "corpus": "arxiv"}
32
  {"tstamp": 1722358062.7324, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722358060.516, "finish": 1722358062.7324, "ip": "", "conv_id": "613710642ec9426ca02056c89cf5fbf7", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "inpars", "output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions ."]], "corpus": "arxiv"}
33
+ {"tstamp": 1722358085.3103, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722358085.114, "finish": 1722358085.3103, "ip": "", "conv_id": "2412f917aa0241cfa42543f4dcf47ff5", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "When did american astronauts first walk on the moon", "output": [["When did american astronauts first walk on the moon", "Exploration of the Moon\n\nIn September 1968 the Soviet Union's Zond 5 sent tortoises on a circumlunar mission, followed by turtles aboard Zond 6 in November. On December 24, 1968, the crew of Apollo 8—Frank Borman, James Lovell and William Anders—became the first human beings to enter lunar orbit and see the far side of the Moon in person. Humans first landed on the Moon on July 20, 1969. The first humans to walk on the lunar surface were Neil Armstrong, commander of the U.S. mission Apollo 11 and his fellow astronaut Buzz Aldrin.\nThe first robot lunar rover to land on the Moon was the Soviet vessel Lunokhod 1 on November 17, 1970, as part of the Lunokhod programme. To date, the last human to stand on the Moon was Eugene Cernan, who as part of the Apollo 17 mission, walked on the Moon in December 1972.\nMoon rock samples were brought back to Earth by three Luna missions (Luna 16, 20, and 24) and the Apollo missions 11 through 17 (except Apollo 13, which aborted its planned lunar landing). Luna 24 in 1976 was the last Lunar mission by either the Soviet Union or the U.S. until Clementine in 1994. Focus shifted to probes to other planets, space stations, and the Shuttle program.\nBefore the \"Moon race,\" the U.S. had pre-projects for scientific and military moonbases: the Lunex Project and Project Horizon. Besides crewed landings, the abandoned Soviet crewed lunar programs included the building of a multipurpose moonbase \"Zvezda\", the first detailed project, complete with developed mockups of expedition vehicles and surface modules.\nAfter 1990"]], "corpus": "wikipedia"}
34
+ {"tstamp": 1722358085.3103, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722358085.114, "finish": 1722358085.3103, "ip": "", "conv_id": "4ca696c1ad1e44dd83714298585835db", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "When did american astronauts first walk on the moon", "output": [["When did american astronauts first walk on the moon", "Walking on the Moon\n\n\"Walking on the Moon\" is a reggae song by British rock band the Police, released as the second single from their second studio album, Reggatta de Blanc (1979). The song was written by the band's lead vocalist and bassist Sting. It went on to become the band's second hit in the UK.\nBackground\nSting has said that he wrote the song when he was drunk one night after a concert in Munich. The following morning, he remembered the song and wrote it down.\nIn his autobiography, Sting implies that the song was partially inspired by an early girlfriend:\nAccording to Sting, the song was originally recorded \"as a rocker\" in early versions, but it was reworked. The riff, which is played on the bass, was described as \"weird\" and \"jazzy\" by Sting. Guitarist Andy Summers came up with the chord \"which hits after the bass notes\" throughout the song.\n\"Walking on the Moon\" was released as the follow-up single to the British single \"Message in a Bottle\" in late 1979. The song was the Police's second number-one hit single in the United Kingdom. It also reached in Ireland and in Australia but did not chart in the United States."]], "corpus": "wikipedia"}
35
+ {"tstamp": 1722358086.9464, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722358086.7263, "finish": 1722358086.9464, "ip": "", "conv_id": "d192f963eb874f30877d73a20a3386d7", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "are meet or are met?", "output": [["are meet or are met?", "Stackoverflow Stackexchange\n\nQ: Get values from TempData into view I want to get value of TempData into View\nController\n[NonAction]\n public ActionResult RedirectToMessagePage(string title, string message)\n {\n TempData[\"SuccessModel\"] = new Models.SuccessModel { Title = title, Message = message };\n\n return RedirectToAction(\"Result\", \"Common\");\n }\n\nView\nif (TempData[\"SuccessModel\"] != null)\n {\n\n <div id=\"validationMessages\">\n @{\n var errors =TempData.Values;\n }\n @if (errors != null && errors.Count() > 0)\n {\n <div style=\"position:absolute; background:Black; color:White; top:250px; left:550px;\">\n @foreach (var error in errors)\n { \n @error \n }\n </div>\n }\n </div>\n }\n\nI want to get value of Message of TempData[\"SuccessModel\"] into view and want to display it.\nHow can I display it?\n\nA: under\nvar errors =TempData.Values;\n\nyou can cast TempData[\"SuccessModel\"] and get it's message\nvar message = (TempData[\"SuccessModel\"] as Models.SuccessModel).Message;\n\nand use this message variable.\n"]], "corpus": "stackexchange"}
36
+ {"tstamp": 1722358086.9464, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722358086.7263, "finish": 1722358086.9464, "ip": "", "conv_id": "f3eef428ee4847acbdadd37fb9b77a6e", "model_name": "BM25", "prompt": "are meet or are met?", "output": [["are meet or are met?", "Ell Stackexchange\n\nQ: \"Meet my boss\" or \"meet with my boss\"? What is the difference between \"meet\" and \"meet with\"?\nFor example, which of these should I say?\n\n \n*\n \n*I met my boss yesterday \n \n*I met with my boss yesterday\n \n*I'm meeting my friends tonight \n \n*I'm meeting with my friends tonight\n \n\n\nA: \nI met my boss yesterday.\n\nWithout elaboration, this implies that you met your boss for the first time.\n\nI met my boss at Spago* yesterday.\n\nYou had lunch or dinner with your boss. (It was not your first encounter.)\n\nI met with my boss yesterday.\n\nYou got together with your boss for a specific purpose. It was likely to be a working meeting of some sort.\n\nI'm meeting my friends tonight.\n\nYou're going out to have fun with your friends.\n\nI'm meeting with my friends tonight.\n\nYou're getting together with friends for some shared purpose. It might be for fun, but the \"with\" is more likely to be used to suggest a special purpose to the meeting: planning, hobby, whatever.\n* Spago is an upscale restaurant, suitable for business luncheons or important dinners.\n"]], "corpus": "stackexchange"}
37
+ {"tstamp": 1722358112.9885, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722358112.8187, "finish": 1722358112.9885, "ip": "", "conv_id": "6e69d5cf6ca64ba795f4d1b52f734ef6", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "inpars", "output": [["inpars", "Title: ParPaRaw: Massively Parallel Parsing of Delimiter-Separated Raw Data\n\nAbstract: Parsing is essential for a wide range of use cases, such as stream processing, bulk loading, and in-situ querying of raw data. Yet, the compute-intense step often constitutes a major bottleneck in the data ingestion pipeline, since parsing of inputs that require more involved parsing rules is challenging to parallelise. This work proposes a massively parallel algorithm for parsing delimiter-separated data formats on GPUs. Other than the state-of-the-art, the proposed approach does not require an initial sequential pass over the input to determine a thread's parsing context. That is, how a thread, beginning somewhere in the middle of the input, should interpret a certain symbol (e.g., whether to interpret a comma as a delimiter or as part of a larger string enclosed in double-quotes). Instead of tailoring the approach to a single format, we are able to perform a massively parallel FSM simulation, which is more flexible and powerful, supporting more expressive parsing rules with general applicability. Achieving a parsing rate of as much as 14.2 GB/s, our experimental evaluation on a GPU with 3584 cores shows that the presented approach is able to scale to thousands of cores and beyond. With an end-to-end streaming approach, we are able to exploit the full-duplex capabilities of the PCIe bus and hide latency from data transfers. Considering the end-to-end performance, the algorithm parses 4.8 GB in as little as 0.44 seconds, including data transfers."]], "corpus": "arxiv"}
38
+ {"tstamp": 1722358112.9885, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722358112.8187, "finish": 1722358112.9885, "ip": "", "conv_id": "f4c133383bd5449c9636e530f8d4fdeb", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "inpars", "output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions ."]], "corpus": "arxiv"}
39
+ {"tstamp": 1722358147.5955, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722358147.3814, "finish": 1722358147.5955, "ip": "", "conv_id": "fca24e7cbfba4ce78613bc51183a257c", "model_name": "BM25", "prompt": "inpars", "output": [["inpars", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "corpus": "arxiv"}
40
+ {"tstamp": 1722358161.5695, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722358161.3915, "finish": 1722358161.5695, "ip": "", "conv_id": "78dfb52f27f64b61929bcde3dcc9504b", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "inpars", "output": [["inpars", "Title: Phonon Assisted Multimagnon Optical Absorption and Long Lived Two-Magnon States in Undoped Lamellar Copper Oxides\n\nAbstract: We calculate the effective charge for multimagnon infrared (IR) absorption assisted by phonons in the parent insulating compounds of cuprate superconductors and the spectra for two-magnon absorption using interacting spin-wave theory. Recent measured bands in the mid IR [Perkins et al. Phys. Rev. Lett. {\\bf 71} 1621 (1993)] are interpreted as involving one phonon plus a two-magnon virtual bound state, and one phonon plus higher multimagnon absorption processes. The virtual bound state consists of a narrow resonance occurring when the magnon pair has total momentum close to $(\\pi,0)$."]], "corpus": "arxiv"}
41
+ {"tstamp": 1722358161.5695, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722358161.3915, "finish": 1722358161.5695, "ip": "", "conv_id": "59aba873f4524f3298bd6dc89efec421", "model_name": "BM25", "prompt": "inpars", "output": [["inpars", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "corpus": "arxiv"}
42
+ {"tstamp": 1722358196.1882, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722358195.9486, "finish": 1722358196.1882, "ip": "", "conv_id": "620b3c01c65e40488aebdee13adb9ab4", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "when does the stock market open pacific time", "output": [["when does the stock market open pacific time", "Tokyo Stock Exchange\n\nHours The exchange's normal trading sessions are from 9:00 a.m. to 11:30 a.m. and from 12:30 p.m. to 3:00 p.m. on all days of the week except Saturdays, Sundays and holidays declared by the Exchange in advance. The exchange is closed for the following holidays: New Year's Day, Coming of Age Day, National Foundation Day, Vernal Equinox Day, Shōwa Day, Constitution Memorial Day, Greenery Day, Children's Day, Marine Day, Respect for the Aged Day, Autumnal Equinox, Health and Sports Day, Culture Day, Labour Thanksgiving Day, and The Emperor's Birthday.\nStock market\nFirst, Second and other Sections until April 4, 2022\nUntil April 4, 2022, corporate shares are listed and traded at Tokyo Stock Exchange in five market sections: the First Section which started when Tokyo Stock Exchange was re-established in 1949 and includes mainly large companies; the Second Section which started in 1961 and includes mainly mid-sized companies; JASDAQ (established in 1991, acquired by Osaka Stock Exchange in 2010, and absorbed into TSE in 2013) and Mothers (Market of the high-growth and emerging stocks, established at TSE in 1999) which are both for emerging companies; and TOKYO PRO Market which was established in 2009 jointly with London Stock Exchange as an Alternative Investment Market (Tokyo AIM) for small companies.\nThere were a total of 3,821 companies listed in Tokyo Stock Exchange, as of March 31, 2022.\nPrime, Standard and Growth markets since April 4, 2022"]], "corpus": "wikipedia"}
43
+ {"tstamp": 1722358196.1882, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722358195.9486, "finish": 1722358196.1882, "ip": "", "conv_id": "0d61a1651b204d21ad4de10b5702eaf5", "model_name": "GritLM/GritLM-7B", "prompt": "when does the stock market open pacific time", "output": [["when does the stock market open pacific time", "Philadelphia Stock Exchange\n\nIn March 2020, the PHLX announced plans to temporarily move to all-electronic trading on March 23, 2020, due to the COVID-19 pandemic. Along with the NYSE and the BSE, the PHLX reopened on May 26, 2020.\nLocations\n1790-1834 – Merchants Coffee House (City Tavern), 138 South 2nd Street at Walnut Street\n1834-1876 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1876-1888 – Girard Bank Building, 120 South 3rd Street, between Walnut and Chestnut Streets\n1888-1902 – Drexel Building, near Chestnut and South 5th Streets\n1902-1912 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1913-1951 – 1411 Walnut Street, between South Broad and South 15th Streets\n1951-1966 – Central Penn Bank Building, 1401 Walnut Street, between South Broad and South 15th Streets\n1966-1981 – 120 South 17th Street, between Sansom Street and Stock Exchange Place (Ionic Street)\nJan-Feb 1969 – Drecker Building, Bala Cynwyd, Pennsylvania (trading floor only)\n1981-2017 – Market and South 19th Streets\n2017-now – FMC Tower at Cira Centre South, 2929 Walnut Street, between Schuylkill Expressway and South 30th Street\nHours\nThe exchange's normal trading sessions are from 9:30 am to 4:00 pm on all days of the week except Saturdays, Sundays and holidays declared by the exchange in advance."]], "corpus": "wikipedia"}
44
+ {"tstamp": 1722358255.9393, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722358255.3124, "finish": 1722358255.9393, "ip": "", "conv_id": "03dd0aa3bdf54262bdffef0f851db34b", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Research on combining sentiment analysis and market data for volatility forecasting.", "output": [["Research on combining sentiment analysis and market data for volatility forecasting.", "Title: Solitons, Surfaces, Curves, and the Spin Description of Nonlinear Evolution Equations\n\nAbstract: The briefly review on the common spin description of the nonlinear evolution equations."]], "corpus": "arxiv"}
45
+ {"tstamp": 1722358255.9393, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722358255.3124, "finish": 1722358255.9393, "ip": "", "conv_id": "bd60351f21534ac9acfcf7ee77f8eba3", "model_name": "text-embedding-004", "prompt": "Research on combining sentiment analysis and market data for volatility forecasting.", "output": [["Research on combining sentiment analysis and market data for volatility forecasting.", "Title: A Sentiment Analysis Approach to the Prediction of Market Volatility\n\nAbstract: Prediction and quantification of future volatility and returns play an important role in financial modelling, both in portfolio optimization and risk management. Natural language processing today allows to process news and social media comments to detect signals of investors' confidence. We have explored the relationship between sentiment extracted from financial news and tweets and FTSE100 movements. We investigated the strength of the correlation between sentiment measures on a given day and market volatility and returns observed the next day. The findings suggest that there is evidence of correlation between sentiment and stock market movements: the sentiment captured from news headlines could be used as a signal to predict market returns; the same does not apply for volatility. Also, in a surprising finding, for the sentiment found in Twitter comments we obtained a correlation coefficient of -0.7, and p-value below 0.05, which indicates a strong negative correlation between positive sentiment captured from the tweets on a given day and the volatility observed the next day. We developed an accurate classifier for the prediction of market volatility in response to the arrival of new information by deploying topic modelling, based on Latent Dirichlet Allocation, to extract feature vectors from a collection of tweets and financial news. The obtained features were used as additional input to the classifier. Thanks to the combination of sentiment and topic modelling our classifier achieved a directional prediction accuracy for volatility of 63%."]], "corpus": "arxiv"}
data/retrieval_single_choice-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl ADDED
@@ -0,0 +1 @@
1
+ {"tstamp": 1722358150.7236, "task_type": "retrieval", "type": "upvote", "models": "BM25", "ip": "", "conv_id": "fca24e7cbfba4ce78613bc51183a257c", "model_name": "BM25", "prompt": "inpars", "output": [["inpars", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "corpus": "arxiv"}