Scheduled Commit
data/retrieval_battle-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl
CHANGED
@@ -18,3 +18,5 @@
|
|
18 |
{"tstamp": 1722358779.586, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "5f897a82cc1c4bb6877d9de92b903c30", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Who was temujin's greatest rival before 1207?", "0_output": [["Who was temujin's greatest rival before 1207?", "Rise of Genghis Khan\n\nThe period of Temüjin's life from 1177 to 1191 is largely unknown except that Temüjin often fought the Taichiud, Salji'ut, Khadagin, and Tatars with mixed results. One of the clans following Temüjin eventually left and was defeated by the Taichiud, after which they joined Jamukha. During the 1180s there was a drought in Mongolia which increased conflict between the tribes but Temüjin only played a limited part in these affairs.\nAttack on the Merkit (1191)\nThe previous attack on the Merkit that resulted in the rescue of Börte may have been a separate campaign that occurred in 1191, and was confused in the sources. In 1191, Jamukha, Temüjin, and Toghrul and his brother Jakha Gambhu decided to attack the Merkit. However Temüjin and Toghrul were late to the rendezvous area by three days, which greatly angered Jamukha. Together the 40,000 strong allied forces made a detour east to attack the enemy camp from the north-east. A fisherman spotted them and warned the Merkit of the impending attack, but the Merkit failed to make the necessary preparations, and dispersed.\nBattle of Dalan Baljut (1187/1193)\nTemüjin and Jamukha stayed together for a while after the battle. At one point they sent an embassy to the Khadagin and Salji'ut to try to win them over against the Taichiud. In 1187 or 1193, Temüjin and Jamukha split when retainers of the two camps clashed over horse thievery. Temüjin took with him 41 tribal leaders and 10,000 men from Jamukha and was elected khan by his followers in the following days. 
The Taichiud became frightened by Temüjin's power and joined Jamukha."]], "0_corpus": "wikipedia", "1_conv_id": "8b89df9e59e0464fbb8397b16371fca3", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Who was temujin's greatest rival before 1207?", "1_output": [["Who was temujin's greatest rival before 1207?", "Emperor Ōjin\n\n, also known as (alternatively spelled ) or , was the 15th (possibly legendary) Emperor of Japan, according to the traditional order of succession. Both the Kojiki, and the Nihon Shoki (collectively known as the Kiki) record events that took place during Ōjin's alleged lifetime. Ōjin is traditionally listed as the first emperor of the Kofun period, and is primarily known for being the controversial son of Empress Jingū. Historians have mixed views on his factual existence, if Ōjin was indeed a historical figure then it's assumed he reigned much later than he is attested.\nNo firm dates can be assigned to Ōjin's life or reign, but he is traditionally considered to have reigned from 270 to 310. According to the Shinto religion and Buddhism in Japan, Emperor Ōjin is the divine spirit of the deity Hachiman (八幡神). While the location of Ōjin's grave (if any) is unknown, he is traditionally venerated at a memorial Shinto tomb. Modern historians have come to the conclusion that the title of \"Emperor\" and the name \"Ōjin\" was used by later generations to describe this possibly legendary Emperor."]], "1_corpus": "wikipedia"}
|
19 |
{"tstamp": 1722358884.9334, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "b0d36170b64947a5b92f47145da1c137", "0_model_name": "voyage-multilingual-2", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. 
Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "f5ce1163f07c459f918462330844dde2", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Progress in artificial intelligence\n\nSub-human Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)\nObject recognition\nVarious robotics tasks that may require advances in robot hardware as well as AI, including:\nStable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)\nHumanoid soccer\nSpeech recognition: \"nearly equal to human performance\" (2017)\nExplainability. Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.\nMany tests of fluid intelligence (2020)\nBongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)\nVisual Commonsense Reasoning (VCR) benchmark (as of 2020)\nStock market prediction: Financial data collection and processing using Machine Learning algorithms\nAngry Birds video game, as of 2020\nVarious tasks that are difficult to solve without contextual knowledge, including:\nTranslation\nWord-sense disambiguation\nProposed tests of artificial intelligence\nIn his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.\nThe Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior."]], "1_corpus": "wikipedia"}
|
20 |
{"tstamp": 1722359302.8775, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f851a7c74e674aa2962426b5a7b47a4b", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "History of robots\n\nIt features a female humanoid automaton standing by a basin filled with water.When the user pulls the lever, the water drains and the female automaton refills the basin. Furthermore, he created a robotic musical band. According to Mark Rosheim, unlike Greek designs Arab automata worked with dramatic illusion and manipulated the human perception for practical application. The segmental gears described in The Book of Knowledge of Ingenious Mechanical Devices, published by Al-Jazari shortly before his death in 1206, appeared 100 years later in the most advanced European clocks. Al-Jazari also published instructions on the construction of humanoid automata. The first water clocks modeled on Arabic designs were constructed in Europe about 1000 CE, possibly on the basis of the information that was transmitted during Muslim-Christian contact in Sicily and Spain. Among the first recorded European water clocks is that of Gerbert of Aurillac, built in 985 CE. Hero's works on automata were translated into Latin amid the 12th century Renaissance. The early 13th-century artist-engineer Villard de Honnecourt sketched plans for several automata. "]], "0_corpus": "wikipedia", "1_conv_id": "487ff743d431410c884ef2cb7b8e972f", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Cognitive test\n\nCognitive tests are assessments of the cognitive capabilities of humans and other animals. 
Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence."]], "1_corpus": "wikipedia"}
|
21 |
+
{"tstamp": 1722359487.5637, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "85bfbae8a8b04f2690900a739f4a2a2c", "0_model_name": "embed-english-v3.0", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "0_corpus": "arxiv", "1_conv_id": "bf0dc2d5a59f4c348d1534b4b4121ac7", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. 
A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "1_corpus": "arxiv"}
|
22 |
+
{"tstamp": 1722359560.756, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "a0556f3c9cad49b99d474ca023c060e7", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "What is InPars-v2 made to do?", "0_output": [["What is InPars-v2 made to do?", "Title: Chiral Solitons in Generalized Korteweg-de Vries Equations\n\nAbstract: Generalizations of the Korteweg-de Vries equation are considered, and some explicit solutions are presented. There are situations where solutions engender the interesting property of being chiral, that is, of having velocity determined in terms of the parameters that define the generalized equation, with a definite sign."]], "0_corpus": "arxiv", "1_conv_id": "fbeacc6314704f0c9a33a8f327ceb770", "1_model_name": "embed-english-v3.0", "1_prompt": "What is InPars-v2 made to do?", "1_output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "1_corpus": "arxiv"}
|
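The battle file above is standard JSON Lines: one vote record per line, each with a `task_type` and a vote `type` (`leftvote`, `rightvote`, `tievote`, or `bothbadvote`). A minimal sketch of tallying these outcomes (field names are taken from the records shown; the `tally_votes` helper is hypothetical, not part of any published tooling):

```python
import json
from collections import Counter

def tally_votes(lines):
    """Count vote outcomes per (task_type, vote type) from JSONL lines."""
    counts = Counter()
    for line in lines:
        rec = json.loads(line)
        counts[(rec["task_type"], rec["type"])] += 1
    return counts

# Example mirroring the four vote records above (fields trimmed for brevity)
sample = [
    '{"tstamp": 1722358779.586, "task_type": "retrieval", "type": "leftvote"}',
    '{"tstamp": 1722359302.8775, "task_type": "retrieval", "type": "tievote"}',
    '{"tstamp": 1722359487.5637, "task_type": "retrieval", "type": "rightvote"}',
    '{"tstamp": 1722359560.756, "task_type": "retrieval", "type": "bothbadvote"}',
]
print(tally_votes(sample))
```

In the full file one would read `data/retrieval_battle-*.jsonl` line by line and pass those lines in unchanged.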
data/retrieval_individual-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl
CHANGED
@@ -69,3 +69,7 @@
|
|
69 |
{"tstamp": 1722359293.0069, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722359292.8075, "finish": 1722359293.0069, "ip": "", "conv_id": "487ff743d431410c884ef2cb7b8e972f", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Cognitive test\n\nCognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence."]], "corpus": "wikipedia"}
|
70 |
{"tstamp": 1722359469.9302, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722359467.6719, "finish": 1722359469.9302, "ip": "", "conv_id": "85bfbae8a8b04f2690900a739f4a2a2c", "model_name": "embed-english-v3.0", "prompt": "What is InPars-v2 made to do?", "output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "corpus": "arxiv"}
|
71 |
{"tstamp": 1722359469.9302, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722359467.6719, "finish": 1722359469.9302, "ip": "", "conv_id": "bf0dc2d5a59f4c348d1534b4b4121ac7", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "What is InPars-v2 made to do?", "output": [["What is InPars-v2 made to do?", "Title: InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval\n\nAbstract: Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. These synthetic query-document pairs can then be used to train a retriever. However, InPars and, more recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to generate such datasets. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. To allow researchers to further improve our method, we open source the code, synthetic data, and finetuned models: https://github.com/zetaalphavector/inPars/tree/master/tpu"]], "corpus": "arxiv"}
|
72 |
+
{"tstamp": 1722359551.7443, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722359549.5858, "finish": 1722359551.7443, "ip": "", "conv_id": "a0556f3c9cad49b99d474ca023c060e7", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "What is InPars-v2 made to do?", "output": [["What is InPars-v2 made to do?", "Title: Chiral Solitons in Generalized Korteweg-de Vries Equations\n\nAbstract: Generalizations of the Korteweg-de Vries equation are considered, and some explicit solutions are presented. There are situations where solutions engender the interesting property of being chiral, that is, of having velocity determined in terms of the parameters that define the generalized equation, with a definite sign."]], "corpus": "arxiv"}
|
73 |
+
{"tstamp": 1722359551.7443, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722359549.5858, "finish": 1722359551.7443, "ip": "", "conv_id": "fbeacc6314704f0c9a33a8f327ceb770", "model_name": "embed-english-v3.0", "prompt": "What is InPars-v2 made to do?", "output": [["What is InPars-v2 made to do?", "Title: The INFN-FBK Phase-2 R{\\&}D Program\n\nAbstract: We report on the 3-year INFN ATLAS-CMS joint research activity in collaboration with FBK, started in 2014, and aimed at the development of new thin pixel detectors for the High Luminosity LHC Phase-2 upgrades. The program is concerned with both 3D and planar active-edge pixel sensors to be made on 6-inch p-type wafers. The technology and the design will be optimized and qualified for extreme radiation hardness (2e16 neq cm-2). Pixel layouts compatible with present (for testing) and future (RD53 65nm) front-end chips of ATLAS and CMS are considered. The paper covers the main aspects of the research program, from the sensor design and fabrication technology, to the results of initial tests performed on the first prototypes."]], "corpus": "arxiv"}
|
74 |
+
{"tstamp": 1722359580.4895, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722359580.242, "finish": 1722359580.4895, "ip": "", "conv_id": "fdd19901cd4547e3815c9d79d5751118", "model_name": "BM25", "prompt": "How metadata is used with RAG", "output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "corpus": "arxiv"}
|
75 |
+
{"tstamp": 1722359580.4895, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722359580.242, "finish": 1722359580.4895, "ip": "", "conv_id": "c628bf57e2434ce999a71c814ebac2e8", "model_name": "embed-english-v3.0", "prompt": "How metadata is used with RAG", "output": [["How metadata is used with RAG", "Title: Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata\n\nAbstract: The retrieval-augmented generation (RAG) enables retrieval of relevant information from an external knowledge source and allows large language models (LLMs) to answer queries over previously unseen document collections. However, it was demonstrated that traditional RAG applications perform poorly in answering multi-hop questions, which require retrieving and reasoning over multiple elements of supporting evidence. We introduce a new method called Multi-Meta-RAG, which uses database filtering with LLM-extracted metadata to improve the RAG selection of the relevant documents from various sources, relevant to the question. While database filtering is specific to a set of questions from a particular domain and format, we found out that Multi-Meta-RAG greatly improves the results on the MultiHop-RAG benchmark. The code is available at https://github.com/mxpoliakov/Multi-Meta-RAG."]], "corpus": "arxiv"}
|
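Each record in the individual-retrieval file carries `start` and `finish` Unix timestamps, so per-request latency can be recovered directly. A minimal sketch (field names from the records above; `latency_seconds` is a hypothetical helper):

```python
import json

def latency_seconds(record):
    """Per-request latency from the start/finish timestamps in a record."""
    return record["finish"] - record["start"]

# Timestamps taken from record 72 above, other fields trimmed for brevity
rec = json.loads(
    '{"start": 1722359549.5858, "finish": 1722359551.7443,'
    ' "model": "jinaai/jina-embeddings-v2-base-en"}'
)
print(round(latency_seconds(rec), 4))  # → 2.1585
```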