{"tstamp": 1722356200.3701, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "2719b38b30a44cb08b58c23cfaaac5fe", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Research improving code generation models using CommitPack and HumanEvalPack datasets.", "0_output": [["Research improving code generation models using CommitPack and HumanEvalPack datasets.", "Title: Self-Edit: Fault-Aware Code Editor for Code Generation\n\nAbstract: Large language models (LLMs) have demonstrated an impressive ability to generate codes on competitive programming tasks. However, with limited sample numbers, LLMs still suffer from poor accuracy. Inspired by the process of human programming, we propose a generate-and-edit approach named Self-Edit that utilizes execution results of the generated code from LLMs to improve the code quality on the competitive programming task. We execute the generated code on the example test case provided in the question and wrap execution results into a supplementary comment. Utilizing this comment as guidance, our fault-aware code editor is employed to correct errors in the generated code. We perform extensive evaluations across two competitive programming datasets with nine different LLMs. Compared to directly generating from LLMs, our approach can improve the average of pass@1 by 89\\% on APPS-dev, 31\\% on APPS-test, and 48\\% on HumanEval over nine popular code generation LLMs with parameter sizes ranging from 110M to 175B. 
Compared to other post-processing methods, our method demonstrates superior accuracy and efficiency."]], "0_corpus": "arxiv", "1_conv_id": "9f392c55ce0242e699bcfa03f3da8923", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Research improving code generation models using CommitPack and HumanEvalPack datasets.", "1_output": [["Research improving code generation models using CommitPack and HumanEvalPack datasets.", "Title: Enhancement of the superconducting transition temperature in La2-xSrxCuO4 bilayers: Role of pairing and phase stiffness\n\nAbstract: The superconducting transition temperature, Tc, of bilayers comprising underdoped La2-xSrxCuO4 films capped by a thin heavily overdoped metallic La1.65Sr0.35CuO4 layer, is found to increase with respect to Tc of the bare underdoped films. The highest Tc is achieved for x = 0.12, close to the 'anomalous' 1/8 doping level, and exceeds that of the optimally-doped bare film. Our data suggest that the enhanced superconductivity is confined to the interface between the layers. 
We attribute the effect to a combination of the high pairing scale in the underdoped layer with an enhanced phase stiffness induced by the overdoped film."]], "1_corpus": "arxiv"} {"tstamp": 1722356865.2174, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "b3ad6f75148142cca47ea52be709f00e", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "0_output": [["Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "Title: Diffusion in normal and critical transient chaos\n\nAbstract: In this paper we investigate deterministic diffusion in systems which are spatially extended in certain directions but are restricted in size and open in other directions, consequently particles can escape. We introduce besides the diffusion coefficient D on the chaotic repeller a coefficient ${\\hat D}$ which measures the broadening of the distribution of trajectories during the transient chaotic motion. Both coefficients are explicitly computed for one-dimensional models, and they are found to be different in most cases. 
We show furthermore that a jump develops in both of the coefficients for most of the initial distributions when we approach the critical borderline where the escape rate equals the Liapunov exponent of a periodic orbit."]], "0_corpus": "arxiv", "1_conv_id": "bf0d0a6ae2d04641ac0d1978086695e0", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "1_output": [["Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "Title: BERT Meets Chinese Word Segmentation\n\nAbstract: Chinese word segmentation (CWS) is a fundamental task for Chinese language understanding. Recently, neural network-based models have attained superior performance in solving the in-domain CWS task. Last year, Bidirectional Encoder Representation from Transformers (BERT), a new language representation model, has been proposed as a backbone model for many natural language tasks and redefined the corresponding performance. The excellent performance of BERT motivates us to apply it to solve the CWS task. By conducting intensive experiments in the benchmark datasets from the second International Chinese Word Segmentation Bake-off, we obtain several keen observations. BERT can slightly improve the performance even when the datasets contain the issue of labeling inconsistency. When applying sufficiently learned features, Softmax, a simpler classifier, can attain the same performance as that of a more complicated classifier, e.g., Conditional Random Field (CRF). The performance of BERT usually increases as the model size increases. 
The features extracted by BERT can be also applied as good candidates for other neural network models."]], "1_corpus": "arxiv"} {"tstamp": 1722357180.6325, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "42394a8aba7e4beda1c9a6de11159583", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "How do people think about the current Iran president?", "0_output": [["How do people think about the current Iran president?", "President of Iran\n\nThe president of Iran () is the head of government of the Islamic Republic of Iran and the second highest-ranking official, after the Supreme Leader. The first election was held in 1980 and was won by Abulhassan Banisadr. Mohammad Mokhber currently serves as the acting President of Iran following the death of Ebrahim Raisi, the former president, on 19 May 2024. He is expected to be succeeded by Masoud Pezeshkian on July 30, who won the 2024 Iranian presidential election.\nHistory\nAfter the Iranian Revolution of 1979 and 1979 Iranian Islamic Republic referendum on March 29 and 30, the new government needed to craft a new constitution. Supreme Leader Ruhollah Khomeini, ordered an election for the Assembly of Experts, the body tasked with writing the constitution. The assembly presented the constitution on October 24, 1979, and Supreme Leader Ruhollah Khomeini and Prime Minister Mehdi Bazargan approved it."]], "0_corpus": "wikipedia", "1_conv_id": "52214b21d0284944897b648b75b8af91", "1_model_name": "voyage-multilingual-2", "1_prompt": "How do people think about the current Iran president?", "1_output": [["How do people think about the current Iran president?", "History of the Islamic Republic of Iran\n\n2009 election controversy Ahmadinejad's 2009 election victory was hotly disputed and marred by large protests that formed the \"greatest domestic challenge\" to the leadership of the Islamic Republic \"in 30 years\". 
Despite high turnout and large enthusiastic crowds for reformist opponent Mir-Hossein Mousavi, Ahmadinejad was officially declared to have won by a 2–1 margin against three opponents. Allegations of voting irregularities and protest by Mousavi and his supporters were immediate, and by 1 July 2009, 1,000 people had been arrested and 20 killed in street demonstrations. Supreme Leader Ali Khamenei and other Islamic officials blamed foreign powers for fomenting the protest. However, according to World Public Opinion (a United States poll), the protest does not mean Iran is in a \"pre-revolutionary\" situation as a WPO poll of Iranians taken in early September 2009 found high levels of satisfaction with the regime. 80% of the Iranian respondents said President Ahmadinejad was honest, 64% expressed a lot of confidence in him, and nine in ten said they were satisfied with Iran's system of government.\nPublic opinion\nAccording to the (U.S.) International Peace Institute's 2010 poll conducted in Persian by a representative sample of the Iranian people:\nIranians are divided on the government's performance.\nDissatisfied with the economy.\nWorry over sanctions and isolation.\nWant to focus on domestic affairs.\nFavor closer ties to the West.\nRising tensions sparked hostility toward the US, Europe, and U.N.\nFavor nuclear arms and do not want to back deals to halt enrichment.\nIndependent polls do not contradict the official turnout of the 2009 election, which gave around 60% of the vote to Ahmadinejad.\nRouhani Administration\nHassan Rouhani was elected as President of Iran on 12 June 2013 and took office on 3 August. He is known as a moderate left-leaner, supported by reformists in the election. He has open ideas in the area of economics and a high-level foreign policy, as he served as a diplomat before his election. 
He has moved quickly to engage in diplomatic negotiations with Western countries, seeking the lifting of crippling economic sanctions on oil exports in exchange for Iran's cooperation with UN treaties regarding the development of nuclear weapons. Economic conditions improved during Rouhani's first term and led to his reelection in 2017 with his reformist agenda."]], "1_corpus": "wikipedia"} {"tstamp": 1722357524.4041, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "3d09c01e7dfb4691b24743a8201671cf", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Winter war", "0_output": [["Winter war", "Winter War\n\nOn 30 November 1939, Soviet forces invaded Finland with 21 divisions, totalling 450,000 men, and bombed Helsinki, killing about 100 citizens and destroying more than 50 buildings. In response to international criticism, Soviet Foreign Minister Vyacheslav Molotov stated that the Soviet Air Force was not bombing Finnish cities but rather dropping humanitarian aid to the starving Finnish population; the bombs were sarcastically dubbed Molotov bread baskets by Finns. The Finnish statesman J. K. Paasikivi commented that the Soviet attack without a declaration of war violated three separate non-aggression pacts: the Treaty of Tartu, which was signed in 1920, the non-aggression pact between Finland and the Soviet Union, which was signed in 1932 and again in 1934; and also the Covenant of the League of Nations, which the Soviet Union signed in 1934. Field Marshal C.G.E. Mannerheim was appointed Commander-in-Chief of the Finnish Defence Forces after the Soviet attack. In a further reshuffling, Aimo Cajander's caretaker cabinet was replaced by Risto Ryti and his cabinet, with Väinö Tanner as foreign minister because of opposition to Cajander's prewar politics. Finland brought the matter of the Soviet invasion before the League of Nations. 
The League expelled the Soviet Union on 14 December 1939 and exhorted its members to aid Finland.\nHeaded by Otto Wille Kuusinen, the Finnish Democratic Republic puppet government operated in the parts of Finnish Karelia occupied by the Soviets, and was also referred to as the \"Terijoki Government\", after the village of Terijoki, the first settlement captured by the advancing Red Army. After the war, the puppet government was reabsorbed into the Soviet Union. From the very outset of the war, working-class Finns stood behind the legitimate government in Helsinki. Finnish national unity against the Soviet invasion was later called the spirit of the Winter War.\nFirst battles and Soviet advance to Mannerheim Line"]], "0_corpus": "wikipedia", "1_conv_id": "b082f7b0fd6a44e9b625a592b31c17dc", "1_model_name": "BM25", "1_prompt": "Winter war", "1_output": [["Winter war", "Arab Winter\n\nThe Arab Winter () is a term referring to the resurgence of authoritarianism and Islamic extremism in some Arab countries in the 2010s in the aftermath of the Arab Spring protests. The term \"Arab Winter\" refers to the events across Arab League countries in the Middle East and North Africa, including the Syrian civil war, the Iraqi insurgency and the subsequent War in Iraq, the Egyptian Crisis, the First Libyan Civil War and the subsequent Second Libyan Civil War, and the Yemeni civil war. Events referred to as the Arab Winter include those in Egypt that led to the removal from office in 2013 of Mohamed Morsi and the subsequent election in 2014 of Abdel Fattah el-Sisi.\nThe term was first coined by Chinese political scientist Zhang Weiwei during a debate with American political scientist Francis Fukuyama on June 27, 2011. 
Fukuyama believed the Arab Spring movement would inevitably spread to China, while Zhang predicted the Arab Spring would soon turn into an Arab Winter.\nAccording to scholars of the University of Warsaw, the Arab Spring fully devolved into the Arab Winter four years after its onset, in 2014. The Arab Winter is characterized by the emergence of multiple regional wars, mounting regional instability, economic and demographic decline of Arab countries, and ethno-religious sectarian strife. According to a study by the American University of Beirut, by the summer of 2014, the Arab Winter had resulted in nearly a quarter of a million deaths and millions of refugees. Perhaps the most significant event in the Arab Winter was the rise of the extremist group Islamic State of Iraq and the Levant, which controlled swathes of land in the region from 2014 to 2019."]], "1_corpus": "wikipedia"} {"tstamp": 1722357588.6391, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "d4f10aa5072847e6823253fa571550a1", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Suomen startup skene", "0_output": [["Suomen startup skene", "Startup company\n\nA startup or start-up is a company or project undertaken by an entrepreneur to seek, develop, and validate a scalable business model. While entrepreneurship includes all new businesses including self-employment and businesses that do not intend to go public, startups are new businesses that intend to grow large beyond the solo-founder. During the beginning, startups face high uncertainty and have high rates of failure, but a minority of them do go on to become successful and influential.\nActions\nStartups typically begin by a founder (solo-founder) or co-founders who have a way to solve a problem. The founder of a startup will do the market validation by problem interview, solution interview, and building a minimum viable product (MVP), i.e. 
a prototype, to develop and validate their business models. The startup process can take a long period of time; hence, sustaining effort is required. Over the long term, sustaining effort is especially challenging because of the high failure rates and uncertain outcomes. Having a business plan in place outlines what to do and how to plan and achieve an idea in the future. Typically, these plans outline the first three to five years of your business strategy.\nDesign principles\nModels behind startups presenting as ventures are usually associated with design science. Design science uses design principles considered to be a coherent set of normative ideas and propositions to design and construct the company's backbone. For example, one of the initial design principles is affordable loss.\nHeuristics and biases in startup actions"]], "0_corpus": "wikipedia", "1_conv_id": "7d2dccb966ad4902a22b492ec1a01a3b", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Suomen startup skene", "1_output": [["Suomen startup skene", "Suseok\n\nHistory Suseok began as votive art over 3000 years ago and became a fixture of Korean society during the Joseon Dynasty, when Confucian scholars displayed them on their writing desks. From here is where the English name \"scholar's rock\" originates. Suseok regained popularity amongst nouveau riche businessmen in the 1980s during Korea's economic boom.\nThe art usually works on three scales: large installations of monumental shaped stones as ornamental gates; medium-sized shaped stones for landscape decoration within Korean gardens; and the smaller shaped stones for scholar's tables, the most important of these scales.\nChinese gongshi influenced the development of suseok in Korea.\nEvaluation\nEarly on, important sites within landscape were marked with shaped stones, similarly to distance markers on post roads. 
Burial sites were also given permanent marking by large scale tumuli or mounds, often surrounded by anthropomorphic shaped stones much akin to that of Inuit or First Nations' memory markers. The animistic belief of nature being alive, and large-scaled elements of nature having souls, has led to the continued use of massive sculpted stone in natural forms throughout Korean traditional entranceways, as the firstgrowth cedarwood traditionally used for gates is now rare.\nAs Confucian scholarship ascended into the golden age of the Joseon dynasty, scholar rocks became an essential fixture of the writing tables of the yangban class of scholars, and a brilliant example of Confucian art."]], "1_corpus": "wikipedia"} {"tstamp": 1722357673.547, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "40adf2ecfae142f690cf15572d1c2a39", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "0_output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. 
Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "0_corpus": "arxiv", "1_conv_id": "5b7d23084cf84e698b8ae3ef7e10bdc3", "1_model_name": "embed-english-v3.0", "1_prompt": "A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "1_output": [["A paper with a dataset and methods for Chinese LFQA including real-time web search interaction ", "Title: WebCPM: Interactive Web Search for Chinese Long-form Question Answering\n\nAbstract: Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 14,315 supporting facts and 121,330 web search actions. 
We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader, respectively."]], "1_corpus": "arxiv"} {"tstamp": 1722358076.7701, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "ba39a43bd1d845a29bb436a813e124f3", "0_model_name": "text-embedding-004", "0_prompt": "inpars", "0_output": [["inpars", "Title: Gamma-Ray Bursts observed by INTEGRAL\n\nAbstract: During the first six months of operations, six Gamma Ray Bursts (GRBs) have been detected in the field of view of the INTEGRAL instruments and localized by the INTEGRAL Burst Alert System (IBAS): a software for the automatic search of GRBs and the rapid distribution of their coordinates. I describe the current performances of IBAS and review the main results obtained so far. 
The coordinates of the latest burst localized by IBAS, GRB 031203, have been distributed within 20 s from the burst onset and with an uncertainty radius of only 2.7 arcmin."]], "0_corpus": "arxiv", "1_conv_id": "613710642ec9426ca02056c89cf5fbf7", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "inpars", "1_output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions ."]], "1_corpus": "arxiv"} {"tstamp": 1722358092.6242, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "d192f963eb874f30877d73a20a3386d7", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "are meet or are met?", "0_output": [["are meet or are met?", "Stackoverflow Stackexchange\n\nQ: Get values from TempData into view I want to get value of TempData into View\nController\n[NonAction]\n public ActionResult RedirectToMessagePage(string title, string message)\n {\n TempData[\"SuccessModel\"] = new Models.SuccessModel { Title = title, Message = message };\n\n return RedirectToAction(\"Result\", \"Common\");\n }\n\nView\nif (TempData[\"SuccessModel\"] != null)\n {\n\n
\n @{\n var errors =TempData.Values;\n }\n @if (errors != null && errors.Count() > 0)\n {\n
\n @foreach (var error in errors)\n { \n @error \n }\n
\n }\n
\n }\n\nI want to get value of Message of TempData[\"SuccessModel\"] into view and want to display it.\nHow can I display it?\n\nA: under\nvar errors =TempData.Values;\n\nyou can cast TempData[\"SuccessModel\"] and get it's message\nvar message = (TempData[\"SuccessModel\"] as Models.SuccessModel).Message;\n\nand use this message variable.\n"]], "0_corpus": "stackexchange", "1_conv_id": "f3eef428ee4847acbdadd37fb9b77a6e", "1_model_name": "BM25", "1_prompt": "are meet or are met?", "1_output": [["are meet or are met?", "Ell Stackexchange\n\nQ: \"Meet my boss\" or \"meet with my boss\"? What is the difference between \"meet\" and \"meet with\"?\nFor example, which of these should I say?\n\n \n*\n \n*I met my boss yesterday \n \n*I met with my boss yesterday\n \n*I'm meeting my friends tonight \n \n*I'm meeting with my friends tonight\n \n\n\nA: \nI met my boss yesterday.\n\nWithout elaboration, this implies that you met your boss for the first time.\n\nI met my boss at Spago* yesterday.\n\nYou had lunch or dinner with your boss. (It was not your first encounter.)\n\nI met with my boss yesterday.\n\nYou got together with your boss for a specific purpose. It was likely to be a working meeting of some sort.\n\nI'm meeting my friends tonight.\n\nYou're going out to have fun with your friends.\n\nI'm meeting with my friends tonight.\n\nYou're getting together with friends for some shared purpose. 
It might be for fun, but the \"with\" is more likely to be used to suggest a special purpose to the meeting: planning, hobby, whatever.\n* Spago is an upscale restaurant, suitable for business luncheons or important dinners.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722358096.4098, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "2412f917aa0241cfa42543f4dcf47ff5", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "When did american astronauts first walk on the moon", "0_output": [["When did american astronauts first walk on the moon", "Exploration of the Moon\n\nIn September 1968 the Soviet Union's Zond 5 sent tortoises on a circumlunar mission, followed by turtles aboard Zond 6 in November. On December 24, 1968, the crew of Apollo 8—Frank Borman, James Lovell and William Anders—became the first human beings to enter lunar orbit and see the far side of the Moon in person. Humans first landed on the Moon on July 20, 1969. The first humans to walk on the lunar surface were Neil Armstrong, commander of the U.S. mission Apollo 11 and his fellow astronaut Buzz Aldrin.\nThe first robot lunar rover to land on the Moon was the Soviet vessel Lunokhod 1 on November 17, 1970, as part of the Lunokhod programme. To date, the last human to stand on the Moon was Eugene Cernan, who as part of the Apollo 17 mission, walked on the Moon in December 1972.\nMoon rock samples were brought back to Earth by three Luna missions (Luna 16, 20, and 24) and the Apollo missions 11 through 17 (except Apollo 13, which aborted its planned lunar landing). Luna 24 in 1976 was the last Lunar mission by either the Soviet Union or the U.S. until Clementine in 1994. Focus shifted to probes to other planets, space stations, and the Shuttle program.\nBefore the \"Moon race,\" the U.S. had pre-projects for scientific and military moonbases: the Lunex Project and Project Horizon. 
Besides crewed landings, the abandoned Soviet crewed lunar programs included the building of a multipurpose moonbase \"Zvezda\", the first detailed project, complete with developed mockups of expedition vehicles and surface modules.\nAfter 1990"]], "0_corpus": "wikipedia", "1_conv_id": "4ca696c1ad1e44dd83714298585835db", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "When did american astronauts first walk on the moon", "1_output": [["When did american astronauts first walk on the moon", "Walking on the Moon\n\n\"Walking on the Moon\" is a reggae song by British rock band the Police, released as the second single from their second studio album, Reggatta de Blanc (1979). The song was written by the band's lead vocalist and bassist Sting. It went on to become the band's second hit in the UK.\nBackground\nSting has said that he wrote the song when he was drunk one night after a concert in Munich. The following morning, he remembered the song and wrote it down.\nIn his autobiography, Sting implies that the song was partially inspired by an early girlfriend:\nAccording to Sting, the song was originally recorded \"as a rocker\" in early versions, but it was reworked. The riff, which is played on the bass, was described as \"weird\" and \"jazzy\" by Sting. Guitarist Andy Summers came up with the chord \"which hits after the bass notes\" throughout the song.\n\"Walking on the Moon\" was released as the follow-up single to the British single \"Message in a Bottle\" in late 1979. The song was the Police's second number-one hit single in the United Kingdom. 
It also reached in Ireland and in Australia but did not chart in the United States."]], "1_corpus": "wikipedia"} {"tstamp": 1722358118.4642, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "6e69d5cf6ca64ba795f4d1b52f734ef6", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "inpars", "0_output": [["inpars", "Title: ParPaRaw: Massively Parallel Parsing of Delimiter-Separated Raw Data\n\nAbstract: Parsing is essential for a wide range of use cases, such as stream processing, bulk loading, and in-situ querying of raw data. Yet, the compute-intense step often constitutes a major bottleneck in the data ingestion pipeline, since parsing of inputs that require more involved parsing rules is challenging to parallelise. This work proposes a massively parallel algorithm for parsing delimiter-separated data formats on GPUs. Other than the state-of-the-art, the proposed approach does not require an initial sequential pass over the input to determine a thread's parsing context. That is, how a thread, beginning somewhere in the middle of the input, should interpret a certain symbol (e.g., whether to interpret a comma as a delimiter or as part of a larger string enclosed in double-quotes). Instead of tailoring the approach to a single format, we are able to perform a massively parallel FSM simulation, which is more flexible and powerful, supporting more expressive parsing rules with general applicability. Achieving a parsing rate of as much as 14.2 GB/s, our experimental evaluation on a GPU with 3584 cores shows that the presented approach is able to scale to thousands of cores and beyond. With an end-to-end streaming approach, we are able to exploit the full-duplex capabilities of the PCIe bus and hide latency from data transfers. 
Considering the end-to-end performance, the algorithm parses 4.8 GB in as little as 0.44 seconds, including data transfers."]], "0_corpus": "arxiv", "1_conv_id": "f4c133383bd5449c9636e530f8d4fdeb", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "inpars", "1_output": [["inpars", "Title: Matrix Model Fixed Point of Noncommutative Phi-Four\n\nAbstract: In this article we exhibit explicitly the matrix model ($\\theta=\\infty$) fixed point of phi-four theory on noncommutative spacetime with only two noncommuting directions using the Wilson renormalization group recursion formula and the 1/N expansion of the zero dimensional reduction and then calculate the mass critical exponent $\\nu$ and the anomalous dimension $\\eta$ in various dimensions ."]], "1_corpus": "arxiv"} {"tstamp": 1722358165.7939, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "78dfb52f27f64b61929bcde3dcc9504b", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "inpars", "0_output": [["inpars", "Title: Phonon Assisted Multimagnon Optical Absorption and Long Lived Two-Magnon States in Undoped Lamellar Copper Oxides\n\nAbstract: We calculate the effective charge for multimagnon infrared (IR) absorption assisted by phonons in the parent insulating compounds of cuprate superconductors and the spectra for two-magnon absorption using interacting spin-wave theory. Recent measured bands in the mid IR [Perkins et al. Phys. Rev. Lett. {\\bf 71} 1621 (1993)] are interpreted as involving one phonon plus a two-magnon virtual bound state, and one phonon plus higher multimagnon absorption processes. 
The virtual bound state consists of a narrow resonance occurring when the magnon pair has total momentum close to $(\\pi,0)$."]], "0_corpus": "arxiv", "1_conv_id": "59aba873f4524f3298bd6dc89efec421", "1_model_name": "BM25", "1_prompt": "inpars", "1_output": [["inpars", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"} {"tstamp": 1722358254.6555, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "620b3c01c65e40488aebdee13adb9ab4", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "when does the stock market open pacific time", "0_output": [["when does the stock market open pacific time", "Tokyo Stock Exchange\n\nHours The exchange's normal trading sessions are from 9:00 a.m. to 11:30 a.m. and from 12:30 p.m. to 3:00 p.m. on all days of the week except Saturdays, Sundays and holidays declared by the Exchange in advance. 
The exchange is closed for the following holidays: New Year's Day, Coming of Age Day, National Foundation Day, Vernal Equinox Day, Shōwa Day, Constitution Memorial Day, Greenery Day, Children's Day, Marine Day, Respect for the Aged Day, Autumnal Equinox, Health and Sports Day, Culture Day, Labour Thanksgiving Day, and The Emperor's Birthday.\nStock market\nFirst, Second and other Sections until April 4, 2022\nUntil April 4, 2022, corporate shares are listed and traded at Tokyo Stock Exchange in five market sections: the First Section which started when Tokyo Stock Exchange was re-established in 1949 and includes mainly large companies; the Second Section which started in 1961 and includes mainly mid-sized companies; JASDAQ (established in 1991, acquired by Osaka Stock Exchange in 2010, and absorbed into TSE in 2013) and Mothers (Market of the high-growth and emerging stocks, established at TSE in 1999) which are both for emerging companies; and TOKYO PRO Market which was established in 2009 jointly with London Stock Exchange as an Alternative Investment Market (Tokyo AIM) for small companies.\nThere were a total of 3,821 companies listed in Tokyo Stock Exchange, as of March 31, 2022.\nPrime, Standard and Growth markets since April 4, 2022"]], "0_corpus": "wikipedia", "1_conv_id": "0d61a1651b204d21ad4de10b5702eaf5", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "when does the stock market open pacific time", "1_output": [["when does the stock market open pacific time", "Philadelphia Stock Exchange\n\nIn March 2020, the PHLX announced plans to temporarily move to all-electronic trading on March 23, 2020, due to the COVID-19 pandemic. 
Along with the NYSE and the BSE, the PHLX reopened on May 26, 2020.\nLocations\n1790-1834 – Merchants Coffee House (City Tavern), 138 South 2nd Street at Walnut Street\n1834-1876 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1876-1888 – Girard Bank Building, 120 South 3rd Street, between Walnut and Chestnut Streets\n1888-1902 – Drexel Building, near Chestnut and South 5th Streets\n1902-1912 – Merchants' Exchange Building, 143 South 3rd Streets at Dock Street\n1913-1951 – 1411 Walnut Street, between South Broad and South 15th Streets\n1951-1966 – Central Penn Bank Building, 1401 Walnut Street, between South Broad and South 15th Streets\n1966-1981 – 120 South 17th Street, between Sansom Street and Stock Exchange Place (Ionic Street)\nJan-Feb 1969 – Drecker Building, Bala Cynwyd, Pennsylvania (trading floor only)\n1981-2017 – Market and South 19th Streets\n2017-now – FMC Tower at Cira Centre South, 2929 Walnut Street, between Schuylkill Expressway and South 30th Street\nHours\nThe exchange's normal trading sessions are from 9:30 am to 4:00 pm on all days of the week except Saturdays, Sundays and holidays declared by the exchange in advance."]], "1_corpus": "wikipedia"} {"tstamp": 1722358305.208, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "bcf54224c20940c9be3f305f57cbea90", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "how many indian prime ministers were born in independent india", "0_output": [["how many indian prime ministers were born in independent india", "List of prime ministers of India\n\nSince 1947, India has had 14 prime ministers. Jawaharlal Nehru was India's first prime minister, serving as prime minister of the Dominion of India from 15 August 1947 until 26 January 1950, and thereafter of the Republic of India until his death in May 1964. (India conducted its first post-independence general elections in 1952). 
Earlier, Nehru had served as prime minister of the Interim Government of India during the British Raj from 2 September 1946 until 14 August 1947, his party, the Indian National Congress having won the 1946 Indian provincial elections. Nehru was succeeded by Lal Bahadur Shastri, whose 1 year 7-month term ended in his death in Tashkent, then in the USSR, where he had signed the Tashkent Declaration between India and Pakistan. Indira Gandhi, Nehru's daughter, succeeded Shastri in 1966 to become the country's first female prime minister. Eleven years later, her party the Indian National Congress lost the 1977 Indian general election to the Janata Party, whose leader Morarji Desai became the first non-Congress prime minister. After Desai resigned in 1979, his former associate Charan Singh briefly held office until the Congress won the 1980 Indian general election and Indira Gandhi returned as prime minister. Her second term as prime minister ended five years later on 31 October 1984, when she was assassinated by her bodyguards. Her son Rajiv Gandhi was sworn in as India's youngest premier. Members of NehruGandhi family have been prime minister for approximately 38 years."]], "0_corpus": "wikipedia", "1_conv_id": "b780dab7e4904028b9644772aa8f3c0c", "1_model_name": "text-embedding-004", "1_prompt": "how many indian prime ministers were born in independent india", "1_output": [["how many indian prime ministers were born in independent india", "List of prime ministers of India\n\nSince 1947, India has had 14 prime ministers. Jawaharlal Nehru was India's first prime minister, serving as prime minister of the Dominion of India from 15 August 1947 until 26 January 1950, and thereafter of the Republic of India until his death in May 1964. (India conducted its first post-independence general elections in 1952). 
Earlier, Nehru had served as prime minister of the Interim Government of India during the British Raj from 2 September 1946 until 14 August 1947, his party, the Indian National Congress having won the 1946 Indian provincial elections. Nehru was succeeded by Lal Bahadur Shastri, whose 1 year 7-month term ended in his death in Tashkent, then in the USSR, where he had signed the Tashkent Declaration between India and Pakistan. Indira Gandhi, Nehru's daughter, succeeded Shastri in 1966 to become the country's first female prime minister. Eleven years later, her party the Indian National Congress lost the 1977 Indian general election to the Janata Party, whose leader Morarji Desai became the first non-Congress prime minister. After Desai resigned in 1979, his former associate Charan Singh briefly held office until the Congress won the 1980 Indian general election and Indira Gandhi returned as prime minister. Her second term as prime minister ended five years later on 31 October 1984, when she was assassinated by her bodyguards. Her son Rajiv Gandhi was sworn in as India's youngest premier. Members of NehruGandhi family have been prime minister for approximately 38 years."]], "1_corpus": "wikipedia"} {"tstamp": 1722358344.5843, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "db7e05b430d14921bc34cc60599755eb", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "what is the difference between phd and pdf?", "0_output": [["what is the difference between phd and pdf?", "Math Stackexchange\n\nQ: Difference between a PDF and PMF What is the difference between a probability density function and a probability mass function?\n\nA: One is for discrete variables, the other for continuous.\nThey are also interpreted differently. The PDF is a probability density. If f(x) is a PDF, f(x) doesn't tell you the probability of getting x (In fact, the probability of getting precisely x is 0).\nThe way you use a PDF is through integration. 
So, the probability of getting a value that is in a dx area around x is approximately f(x)*dx.\n"]], "0_corpus": "stackexchange", "1_conv_id": "0f78bc246a144b02832e702ee547b20a", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "what is the difference between phd and pdf?", "1_output": [["what is the difference between phd and pdf?", "Stackoverflow Stackexchange\n\nQ: Can't find kubeadm token after initializing master When i run kubeadm init it shows a token that I should use from the other hosts to connect to this host. Where is that token stored? I'm trying to figure out more ways to automate this by adding kubeadm inside of Terraform. \nThanks for any help!\n\nA: \n--token\nBy default, kubeadm init automatically generates the token used to\n initialise each new node. If you would like to manually specify this\n token, you can use the --token flag. The token must be of the format\n ‘<6 character string>.<16 character string>’.\n\nyou can see token value on master node using command:\ncat /etc/kubernetes/pki/tokens.csv\n\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722358379.1948, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "fe1ce490872a499fb54784a3637c1683", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "are all achiral molecules meso?", "0_output": [["are all achiral molecules meso?", "Stackoverflow Stackexchange\n\nQ: Test If An NSString Contains a Letter I'm working on an iPhone project and I need to check if the user's input in a UITextfield contains a letter. More generally if an NSString contains a letter.\nI tried this with a giant if loop with the rangeofstring:@\"A\".location == NSNotFound and then did OR rangeofstring:@\"B\".location == NSNotFound\nand so on....\nBut:\n\n\n*\n\n*It doesn't seem to work\n\n*There has to be a simple line of code to check if the NSString contains letters.\n\n\nI have been searching this for hours... 
Can someone please answer this question???\n\nA: If you want to make sure the text has a certain letter in it (as opposed to just ANY letter), use the rangeOfString: message. For example, to ensure the text contains the letter \"Q\":\nNSString *string = @\"poQduu\";\n\n\nif ([string rangeOfString:@\"Q\"].location != NSNotFound) {\n DLog (@\"Yes, we have a Q at location %i\", [string rangeOfString:@\"Q\"].location );\n}\n\nAs others (Rob Napier) note, if you want to find ANY letter, use the rangeOfCharacterFromSet: message.\nif ([string rangeOfCharacterFromSet:[NSCharacterSet letterCharacterSet]].location != NSNotFound) ...\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "8090cc0cc0b946bc8818f151b0cfcdfd", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "are all achiral molecules meso?", "1_output": [["are all achiral molecules meso?", "Physics Stackexchange\n\nQ: Why can a solution show optical rotation? Why can a solution show optical rotation? A solution, as a liquid, is rotationally isotropic, right? So, even if the molecules are chiral, because of the random orientation of the molecules, shouldn't the effect cancel out? \n\nA: Your claim:\nSo, even if the molecules are chiral, because of the random orientation of the molecules, should not the effect cancel out?\nis not correct.\nOne should think in a way that molecules have different dispersion for left and right circularly polarized light. 
Hence one of the circular polarization will be retarded with respect to another which will result in rotated linear polarization.\nIf molecules also have different absorption the exiting light will be elliptic\nEDIT:\nTo visualize the effect take a spring and rotate it - you will see - no matter from which side you look it will be still left (or right), in a sense of clock or -counterclockwise rotation of the spring.\nThus if you have only left springs - you will have more retardation for the left light.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722358420.2689, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "31f178365d4f46f68c5dc6cfef04f2c0", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "do tonics stack in bioshock 2?", "0_output": [["do tonics stack in bioshock 2?", "Gaming Stackexchange\n\nQ: Can multiple gene tonics stack in Bioshock 2? If I have a specific gene tonic in level 1 and level 2, is there any benefit to equipping both of them at the same time? Will I get the full 1+2 bonuses? Or will I get just somewhat over the 2nd bonus? Or is the level 1 tonic completely useless?\n\nA: The bonuses from the gene tonics stack, so if you equip a level 1 and a level 2 tonic, you will get the full bonuses from both.\n"]], "0_corpus": "stackexchange", "1_conv_id": "35ba2fd72c234df5816ca5c243d4e6de", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "do tonics stack in bioshock 2?", "1_output": [["do tonics stack in bioshock 2?", "Gaming Stackexchange\n\nQ: Can multiple gene tonics stack in Bioshock 2? If I have a specific gene tonic in level 1 and level 2, is there any benefit to equipping both of them at the same time? Will I get the full 1+2 bonuses? Or will I get just somewhat over the 2nd bonus? 
Or is the level 1 tonic completely useless?\n\nA: The bonuses from the gene tonics stack, so if you equip a level 1 and a level 2 tonic, you will get the full bonuses from both.\n"]], "1_corpus": "stackexchange"} {"tstamp": 1722358443.207, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "051c9eaffd8c4a1f88d6a7b072e3a3da", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. 
Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "bf3763ae7d234c86b92a4acbf172612e", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Like Sparrows on a Clothes Line: The Self-Organization of Random Number Sequences\n\nAbstract: We study sequences of random numbers {Z[1],Z[2],Z[3],...,Z[n]} -- which can be considered random walks with reflecting barriers -- and define their \"types\" according to whether Z[i] > Z[i+1], (a down-movement), or Z[i] < Z[i+1] (up-movement). This paper examines the means, xi, to which the Zi converge, when a large number of sequences of the same type is considered. It is shown that these means organize themselves in such a way that, between two turning points of the sequence, they are equidistant from one another. We also show that m steps in one direction tend to offset one step in the other direction, as m -> infinity. Key words:random number sequence, self-organization, random walk, reflecting barriers."]], "1_corpus": "arxiv"} {"tstamp": 1722358779.586, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5f897a82cc1c4bb6877d9de92b903c30", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Who was temujin's greatest rival before 1207?", "0_output": [["Who was temujin's greatest rival before 1207?", "Rise of Genghis Khan\n\nThe period of Temüjin's life from 1177 to 1191 is largely unknown except that Temüjin often fought the Taichiud, Salji'ut, Khadagin, and Tatars with mixed results. One of the clans following Temüjin eventually left and was defeated by the Taichiud, after which they joined Jamukha. 
During the 1180s there was a drought in Mongolia which increased conflict between the tribes but Temüjin only played a limited part in these affairs.\nAttack on the Merkit (1191)\nThe previous attack on the Merkit that resulted in the rescue of Börte may have been a separate campaign that occurred in 1191, and was confused in the sources. In 1191, Jamukha, Temüjin, and Toghrul and his brother Jakha Gambhu decided to attack the Merkit. However Temüjin and Toghrul were late to the rendezvous area by three days, which greatly angered Jamukha. Together the 40,000 strong allied forces made a detour east to attack the enemy camp from the north-east. A fisherman spotted them and warned the Merkit of the impending attack, but the Merkit failed to make the necessary preparations, and dispersed.\nBattle of Dalan Baljut (1187/1193)\nTemüjin and Jamukha stayed together for a while after the battle. At one point they sent an embassy to the Khadagin and Salji'ut to try to win them over against the Taichiud. In 1187 or 1193, Temüjin and Jamukha split when retainers of the two camps clashed over horse thievery. Temüjin took with him 41 tribal leaders and 10,000 men from Jamukha and was elected khan by his followers in the following days. The Taichiud became frightened by Temüjin's power and joined Jamukha."]], "0_corpus": "wikipedia", "1_conv_id": "8b89df9e59e0464fbb8397b16371fca3", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Who was temujin's greatest rival before 1207?", "1_output": [["Who was temujin's greatest rival before 1207?", "Emperor Ōjin\n\n, also known as (alternatively spelled ) or , was the 15th (possibly legendary) Emperor of Japan, according to the traditional order of succession. Both the Kojiki, and the Nihon Shoki (collectively known as the Kiki) record events that took place during Ōjin's alleged lifetime. 
Ōjin is traditionally listed as the first emperor of the Kofun period, and is primarily known for being the controversial son of Empress Jingū. Historians have mixed views on his factual existence, if Ōjin was indeed a historical figure then it's assumed he reigned much later than he is attested.\nNo firm dates can be assigned to Ōjin's life or reign, but he is traditionally considered to have reigned from 270 to 310. According to the Shinto religion and Buddhism in Japan, Emperor Ōjin is the divine spirit of the deity Hachiman (八幡神). While the location of Ōjin's grave (if any) is unknown, he is traditionally venerated at a memorial Shinto tomb. Modern historians have come to the conclusion that the title of \"Emperor\" and the name \"Ōjin\" was used by later generations to describe this possibly legendary Emperor."]], "1_corpus": "wikipedia"} {"tstamp": 1722358884.9334, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "b0d36170b64947a5b92f47145da1c137", "0_model_name": "voyage-multilingual-2", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. 
Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "f5ce1163f07c459f918462330844dde2", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Progress in artificial intelligence\n\nSub-human Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)\nObject recognition\nVarious robotics tasks that may require advances in robot hardware as well as AI, including:\nStable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)\nHumanoid soccer\nSpeech recognition: \"nearly equal to human performance\" (2017)\nExplainability. 
Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.\nMany tests of fluid intelligence (2020)\nBongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)\nVisual Commonsense Reasoning (VCR) benchmark (as of 2020)\nStock market prediction: Financial data collection and processing using Machine Learning algorithms\nAngry Birds video game, as of 2020\nVarious tasks that are difficult to solve without contextual knowledge, including:\nTranslation\nWord-sense disambiguation\nProposed tests of artificial intelligence\nIn his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.\nThe Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior."]], "1_corpus": "wikipedia"} {"tstamp": 1722359302.8775, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f851a7c74e674aa2962426b5a7b47a4b", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "History of robots\n\nIt features a female humanoid automaton standing by a basin filled with water.When the user pulls the lever, the water drains and the female automaton refills the basin. Furthermore, he created a robotic musical band. According to Mark Rosheim, unlike Greek designs Arab automata worked with dramatic illusion and manipulated the human perception for practical application. 
The segmental gears described in The Book of Knowledge of Ingenious Mechanical Devices, published by Al-Jazari shortly before his death in 1206, appeared 100 years later in the most advanced European clocks. Al-Jazari also published instructions on the construction of humanoid automata. The first water clocks modeled on Arabic designs were constructed in Europe about 1000 CE, possibly on the basis of the information that was transmitted during Muslim-Christian contact in Sicily and Spain. Among the first recorded European water clocks is that of Gerbert of Aurillac, built in 985 CE. Hero's works on automata were translated into Latin amid the 12th century Renaissance. The early 13th-century artist-engineer Villard de Honnecourt sketched plans for several automata. "]], "0_corpus": "wikipedia", "1_conv_id": "487ff743d431410c884ef2cb7b8e972f", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Cognitive test\n\nCognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). 
Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence."]], "1_corpus": "wikipedia"} {"tstamp": 1722359487.5637, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "85bfbae8a8b04f2690900a739f4a2a2c", "0_model_name": "embed-english-v3.0", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "0_corpus": "arxiv", "1_conv_id": "bf0dc2d5a59f4c348d1534b4b4121ac7", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. 
In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"} {"tstamp": 1722359560.756, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "a0556f3c9cad49b99d474ca023c060e7", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: Chiral Solitons in Generalized Korteweg-de Vries Equations\n\nAbstract: Generalizations of the Korteweg-de Vries equation are considered, and some explicit solutions are presented. There are situations where solutions engender the interesting property of being chiral, that is, of having velocity determined in terms of the parameters that define the generalized equation, with a definite sign."]], "0_corpus": "arxiv", "1_conv_id": "fbeacc6314704f0c9a33a8f327ceb770", "1_model_name": "embed-english-v3.0", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). 
Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "1_corpus": "arxiv"} {"tstamp": 1722359624.4824, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fdd19901cd4547e3815c9d79d5751118", "0_model_name": "BM25", "0_prompt": "How metadata is used with RAG", "0_output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. 
The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "0_corpus": "arxiv", "1_conv_id": "c628bf57e2434ce999a71c814ebac2e8", "1_model_name": "embed-english-v3.0", "1_prompt": "How metadata is used with RAG", "1_output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "1_corpus": "arxiv"} {"tstamp": 1722359625.838, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "cfd7524090934e63a83ff9284cc4d54c", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "0_output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: StructCoder: Structure-Aware Transformer for Code Generation\n\nAbstract: There has been a recent surge of interest in automating software engineering tasks using deep learning. 
This paper addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most state-of-the-art deep learning models for code generation use training strategies primarily designed for natural language. However, understanding and generating code requires a more rigorous comprehension of the code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model where both the encoder and decoder are explicitly trained to recognize the syntax and data flow in the source and target codes, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but we also support the decoder in preserving the syntax and data flow of the target code by introducing two novel auxiliary tasks: AST (Abstract Syntax Tree) paths prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that models both syntax and data flow to enhance the quality of generated code. The proposed StructCoder model achieves state-of-the-art performance on code translation and text-to-code generation tasks in the CodeXGLUE benchmark, and improves over baselines of similar size on the APPS code generation benchmark. 
Our code is publicly available at https://github.com/reddy-lab-code-research/StructCoder/."]], "0_corpus": "arxiv", "1_conv_id": "1e72b400d20445c1a5f905e301166625", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "1_output": [["Paper that improves code prediction accuracy using a syntactically-aware Transformer model.", "Title: Empirical Study of Transformers for Source Code\n\nAbstract: Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model."]], "1_corpus": "arxiv"} {"tstamp": 1722360183.5049, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "8e4d32f5269142e68ec26538741715ea", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Why is the sky blue?", "0_output": [["Why is the sky blue?", "Sky\n\nEtymology The word sky comes from the Old Norse , meaning 'cloud, abode of God'. 
The Norse term is also the source of the Old English scēo, which shares the same Indo-European base as the classical Latin obscūrus, meaning 'obscure'.\nIn Old English, the term heaven was used to describe the observable expanse above the earth. During the period of Middle English, \"heaven\" began shifting toward its current, religious meaning.\nDuring daytime\nExcept for direct sunlight, most of the light in the daytime sky is caused by scattering, which is dominated by a small-particle limit called Rayleigh scattering. The scattering due to molecule-sized particles (as in air) is greater in the directions both toward and away from the source of light than it is in directions perpendicular to the incident path. Scattering is significant for light at all visible wavelengths, but is stronger at the shorter (bluer) end of the visible spectrum, meaning that the scattered light is bluer than its source: the Sun. The remaining direct sunlight, having lost some of its shorter-wavelength components, appears slightly less blue.\nScattering also occurs even more strongly in clouds. Individual water droplets refract white light into a set of colored rings. If a cloud is thick enough, scattering from multiple water droplets will wash out the set of colored rings and create a washed-out white color."]], "0_corpus": "wikipedia", "1_conv_id": "00cd735a722943bca9ff50386cee6ee3", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "Why is the sky blue?", "1_output": [["Why is the sky blue?", "Rayleigh scattering\n\nRayleigh scattering, named after the 19th-century British physicist Lord Rayleigh (John William Strutt), is the predominantly elastic scattering of light, or other electromagnetic radiation, by particles with a size much smaller than the wavelength of the radiation. 
For light frequencies well below the resonance frequency of the scattering medium (normal dispersion regime), the amount of scattering is inversely proportional to the fourth power of the wavelength, e.g., a blue color is scattered much more than a red color as light propagates through air.\nRayleigh scattering results from the electric polarizability of the particles. The oscillating electric field of a light wave acts on the charges within a particle, causing them to move at the same frequency. The particle, therefore, becomes a small radiating dipole whose radiation we see as scattered light. The particles may be individual atoms or molecules; it can occur when light travels through transparent solids and liquids, but is most prominently seen in gases.\nRayleigh scattering of sunlight in Earth's atmosphere causes diffuse sky radiation, which is the reason for the blue color of the daytime and twilight sky, as well as the yellowish to reddish hue of the low Sun. Sunlight is also subject to Raman scattering, which changes the rotational state of the molecules and gives rise to polarization effects."]], "1_corpus": "wikipedia"} {"tstamp": 1722360252.5966, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "1b169b7653ab4c8091ecb4bf186579e2", "0_model_name": "embed-english-v3.0", "0_prompt": "Are there grass or fiber eating birds?", "0_output": [["Are there grass or fiber eating birds?", "Bird\n\nDiet and feeding are varied and often include nectar, fruit, plants, seeds, carrion, and various small animals, including other birds. The digestive system of birds is unique, with a crop for storage and a gizzard that contains swallowed stones for grinding food to compensate for the lack of teeth. Some species such as pigeons and some psittacine species do not have a gallbladder. Most birds are highly adapted for rapid digestion to aid with flight. 
Some migratory birds have adapted to use protein stored in many parts of their bodies, including protein from the intestines, as additional energy during migration.\nBirds that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists. Avian foraging strategies can vary widely by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological control agents' and their presence encouraged in biological pest control programmes. Combined, insectivorous birds eat 400–500 million metric tons of arthropods annually.\nNectar feeders such as hummingbirds, sunbirds, lories, and lorikeets amongst others have specially adapted brushy tongues and in many cases bills designed to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill lengths and feeding methods result in the separation of ecological niches. Loons, diving ducks, penguins and auks pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. Geese and dabbling ducks are primarily grazers."]], "0_corpus": "wikipedia", "1_conv_id": "bf7123c3bf574070873b3438c66a7145", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Are there grass or fiber eating birds?", "1_output": [["Are there grass or fiber eating birds?", "Bird\n\nDiet and feeding are varied and often include nectar, fruit, plants, seeds, carrion, and various small animals, including other birds. 
The digestive system of birds is unique, with a crop for storage and a gizzard that contains swallowed stones for grinding food to compensate for the lack of teeth. Some species such as pigeons and some psittacine species do not have a gallbladder. Most birds are highly adapted for rapid digestion to aid with flight. Some migratory birds have adapted to use protein stored in many parts of their bodies, including protein from the intestines, as additional energy during migration.\nBirds that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists. Avian foraging strategies can vary widely by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological control agents' and their presence encouraged in biological pest control programmes. Combined, insectivorous birds eat 400–500 million metric tons of arthropods annually.\nNectar feeders such as hummingbirds, sunbirds, lories, and lorikeets amongst others have specially adapted brushy tongues and in many cases bills designed to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill lengths and feeding methods result in the separation of ecological niches. Loons, diving ducks, penguins and auks pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. 
Geese and dabbling ducks are primarily grazers."]], "1_corpus": "wikipedia"} {"tstamp": 1722360320.5742, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "b454985e65da4a54967a4a82ac91170c", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Can plants get cancer?", "0_output": [["Can plants get cancer?", "Blue agave\n\nTequila production Tequila is made only from a specific cultivar of Agave tequilana called 'Weber Azul'. The plant is from the Asparagaceae family. This cultivar is larger and blue-gray in color compared to the smaller and green normal A. tequilana. It is a rapid grower and prolific offsetter in keeping with its agricultural advantages. Tequila is produced by removing the heart (piña) of the plant in its seventh to fourteenth year (depending on growth rate and whims of harvester). Harvested piñas normally weigh . This heart is stripped of its leaves and heated to convert the inulin to sugars. Then the roasted core is pressed or crushed to release the sugary clear liquid called aguamiel, which is, in turn, fermented and distilled into alcohol. Tequila is also made with a sugar formulation of 51% agave and 49% other sugars. These tequilas are referred to as Mixtos.\nPathogens affecting the plant\nAs agave production has moved to an industrial scale since the end of the 1980s, diseases and pests, collectively referred to as TMA (tristeza y muerte de agave, \"wilting and death of agave\"), have hit the crops. Through the 1990s, diseases spread, particularly Fusarium fungi and Erwinia bacteria, exacerbated by the low genetic diversity of the agave plants. Other problems include the agave weevil, Scyphophorus acupunctatus, and a fungus, Thielaviopsis paradoxa.\nAccording to a 2004 study, additional pathogens, Erwinia carotovora, Enterobacter agglomerans, Pseudomonas mendocina, and Serratia spp. 
are responsible for continued rot."]], "0_corpus": "wikipedia", "1_conv_id": "8e0a2106198b416899abf77d45576c31", "1_model_name": "voyage-multilingual-2", "1_prompt": "Can plants get cancer?", "1_output": [["Can plants get cancer?", "List of unproven and disproven cancer treatments\n\nVenus flytrap – a carnivorous plant, the extract of which is promoted as a treatment for a variety of human ailments including skin cancer. According to the American Cancer Society, \"available scientific evidence does not support claims that extract from the Venus flytrap plant is effective in treating skin cancer or any other type of cancer\".\nWalnuts – large, hard edible seeds of any tree of the genus Juglans. Black walnut has been promoted as a cancer cure on the basis it kills a \"parasite\" responsible for the disease. However, according to the American Cancer Society, \"available scientific evidence does not support claims that hulls from black walnuts remove parasites from the intestinal tract or that they are effective in treating cancer or any other disease\".\nWheatgrass – a food made from grains of wheat. According to the American Cancer Society, although some wheatgrass champions claim it can \"shrink\" cancer tumors, \"available scientific evidence does not support the idea that wheatgrass or the wheatgrass diet can cure or prevent disease\".\nWild yam (or Chinese yam) – types of yam, the roots of which are made into creams and dietary supplements that are promoted for a variety of medicinal purposes, including cancer prevention. 
The American Cancer Society says of these products, \"available scientific evidence does not support claims that they are safe or effective.\""]], "1_corpus": "wikipedia"} {"tstamp": 1722360986.7523, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "0cf03b7dcc2d4c42aaec89dce08c87b5", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "What is inpars?", "0_output": [["What is inpars?", "Title: On the conical refraction of hydromagnetic waves in plasma with anisotropic thermal pressure\n\nAbstract: A phenomenon analogous to the conical refraction widely known in the crystalooptics and crystaloacoustics is discovered for the magnetohydrodynamical waves in the collisionless plasma with anisotropic thermal pressure. Angle of the conical refraction is calculated for the medium under study which is predicted to be $18^{\\circ}26^{\\prime}$. Possible experimental corroborating of the discovered phenomenon is discussed."]], "0_corpus": "arxiv", "1_conv_id": "ea89ad4b67514cca9ee93d3de356758b", "1_model_name": "BM25", "1_prompt": "What is inpars?", "1_output": [["What is inpars?", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. 
To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"}