Muennighoff
committed on
Scheduled Commit
data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -80,3 +80,4 @@
80 |
{"tstamp": 1722575077.532, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "9245f99d83c74c4d9a4dbf5b87dc5535", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "dialog systems that use embeddings, for LLM-based chatbots", "0_output": [["dialog systems that use embeddings, for LLM-based chatbots", "Title: Exploring and Characterizing Large Language Models For Embedded System Development and Debugging\n\nAbstract: Large language models (LLMs) have shown remarkable abilities to generate code, however their ability to develop software for embedded systems, which requires cross-domain knowledge of hardware and software has not been studied. In this paper we develop an extensible, open source hardware-in-the-loop framework to systematically evaluate leading LLMs (GPT-3.5, GPT-4, PaLM 2) to assess their capabilities and limitations for embedded system development. We observe through our study that even when these tools fail to produce working code, they consistently generate helpful reasoning about embedded design tasks. We leverage this finding to study how human programmers interact with these tools, and develop an human-AI based software engineering workflow for building embedded systems. Our evaluation platform for verifying LLM generated programs uses sensor actuator pairs for physical evaluation. We compare all three models with N=450 experiments and find surprisingly that GPT-4 especially shows an exceptional level of cross-domain understanding and reasoning, in some cases generating fully correct programs from a single prompt. In N=50 trials, GPT-4 produces functional I2C interfaces 66% of the time. GPT-4 also produces register-level drivers, code for LoRa communication, and context-specific power optimizations for an nRF52 program resulting in over 740x current reduction to 12.2uA. We also characterize the models' limitations to develop a generalizable human-AI workflow for using LLMs in embedded system development. We evaluate our workflow with 15 users including novice and expert programmers. We find that our workflow improves productivity for all users and increases the success rate for building a LoRa environmental sensor from 25% to 100%, including for users with zero hardware or C/C++ experience."]], "0_corpus": "arxiv", "1_conv_id": "61900cd85b4241d8a19119f6f1b16464", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "dialog systems that use embeddings, for LLM-based chatbots", "1_output": [["dialog systems that use embeddings, for LLM-based chatbots", "Title: Domain-specific ChatBots for Science using Embeddings\n\nAbstract: Large language models (LLMs) have emerged as powerful machine-learning systems capable of handling a myriad of tasks. Tuned versions of these systems have been turned into chatbots that can respond to user queries on a vast diversity of topics, providing informative and creative replies. However, their application to physical science research remains limited owing to their incomplete knowledge in these areas, contrasted with the needs of rigor and sourcing in science domains. Here, we demonstrate how existing methods and software tools can be easily combined to yield a domain-specific chatbot. The system ingests scientific documents in existing formats, and uses text embedding lookup to provide the LLM with domain-specific contextual information when composing its reply. We similarly demonstrate that existing image embedding methods can be used for search and retrieval across publication figures. 
These results confirm that LLMs are already suitable for use by physical scientists in accelerating their research efforts."]], "1_corpus": "arxiv"}
|
81 |
{"tstamp": 1722579503.8059, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c6a86245ccb44e83a15f4767cef256fc", "0_model_name": "voyage-multilingual-2", "0_prompt": "who helped them recapture mycenae once they were old enough to fight", "0_output": [["who helped them recapture mycenae once they were old enough to fight", "Mycenae\n\nAtreid dynasty The people of Mycenae had received advice from an oracle that they should choose a new king from among the Pelopids. The two contenders were Atreus and his brother, Thyestes. The latter was chosen at first. At this moment nature intervened and the sun appeared to reverse direction by setting in the east. Atreus argued that because the sun had reversed its path, the election of Thyestes should be reversed. The argument was heeded, and Atreus became king. His first move was to pursue Thyestes and all his family – that is, his own kin – but Thyestes managed to escape from Mycenae.\nIn legend, Atreus had two sons, Agamemnon and Menelaus, the Atreids. Aegisthus, the son of Thyestes, killed Atreus and restored Thyestes to the throne. With the help of King Tyndareus of Sparta, the Atreids drove Thyestes again into exile. Tyndareus had two ill-starred daughters, Helen and Clytemnestra, whom Menelaus and Agamemnon married, respectively. Agamemnon inherited Mycenae and Menelaus became king of Sparta.\nHomeric Poems\nSoon, Helen eloped with Paris of Troy. Agamemnon conducted a 10-year war against Troy to get her back for his brother. Because of lack of wind, the warships could not sail to Troy. In order to please the gods so that they might make the winds start to blow, Agamemnon sacrificed his daughter Iphigenia. According to some versions of the legend, the hunting goddess Artemis replaced her at the very last moment with a deer on the altar, and took Iphigenia to Tauris (see Iphigenia in Tauris by Euripides). The deities, having been satisfied by such a sacrifice, made the winds blow and the Greek fleet departed."]], "0_corpus": "wikipedia", "1_conv_id": "732db75064584027960d547439a205ca", "1_model_name": "text-embedding-004", "1_prompt": "who helped them recapture mycenae once they were old enough to fight", "1_output": [["who helped them recapture mycenae once they were old enough to fight", "Mycenae\n\nLegend tells us that the long and arduous Trojan War, although nominally a Greek victory, brought anarchy, piracy, and ruin; already before the Greek fleet set sail for Troy, the conflict had divided the gods as well, and this contributed to curses and acts of vengeance following many of the Greek heroes. After the war Agamemnon returned to Mycenae and was greeted royally with a red carpet rolled out for him. Shortly thereafter, he was slain by Clytemnestra, who hated him bitterly for having ordered the sacrifice of their daughter Iphigenia in order to gain favorable winds to Troy. Clytemnestra was aided in her crime by Aegistheus, her lover, who reigned subsequently, but Orestes, her son by Agamemnon, was smuggled out to Phocis. He returned as an adult with his sister Electra to slay Clytemnestra and Aegistheus. He then fled to Athens to evade justice and a matricide, and became insane for a time. Meanwhile, the throne of Mycenae went to Aletes, son of Aegistheus, but not for long. Recovering, Orestes returned to Mycenae with Electra to kill Aletes and took the throne. 
This story is told in numerous plays, including the Oresteia, Sophocles' Electra, and Euripides' Electra.\nEnd of the Atreids\nOrestes then built a larger state in the Peloponnese, but he died in Arcadia from a snake bite. His son, Tisamenus, the last of the Atreid dynasty, was killed by the Heracleidae on their return to the Peloponnesus. They claimed the right of the Perseids to inherit the various kingdoms of the Peloponnese and cast lots for the dominion of them, thus leaving the Atreids as the final rulers of Legendary Mycenae."]], "1_corpus": "wikipedia"}
|
82 |
{"tstamp": 1722580525.0932, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "4c79178fcee5443abea551c4946d3806", "0_model_name": "BM25", "0_prompt": "what are the best tools for extracting clinical trial data from pdf research papers?", "0_output": [["what are the best tools for extracting clinical trial data from pdf research papers?", "Stats Stackexchange\n\nQ: Books on designing clinical trials for psychotherapy research I'm designing a clinical trial that will evaluate a behavioral therapy against a control condition, and I'm looking for some good texts on this topic. There are a lot of books on clinical trials in medicine, but I've had a hard time finding one that also covers the topics specific to psychotherapy research. \nFor instance, topics like choosing an appropriate control group and evaluating the therapist's adherence and competence are different in psychotherapy research than in drug trials. Any tips on a comprehensive book on this topic would be welcome. \n\nA: I think that much of the methodology of general books on clinical trial design would apply to psychotherapy research. But in hunting on the internet I did find this one book on methodology for medical research in clinical psychology.\nResearch Design in Clinical Psychology\n"]], "0_corpus": "stackexchange", "1_conv_id": "43d39fdb5a8e45f3aea2d9c14a9ead1d", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "what are the best tools for extracting clinical trial data from pdf research papers?", "1_output": [["what are the best tools for extracting clinical trial data from pdf research papers?", "Stackoverflow Stackexchange\n\nQ: Extracting information from PDFs of research papers I need a mechanism for extracting bibliographic metadata from PDF documents, to save people entering it by hand or cut-and-pasting it.\nAt the very least, the title and abstract. The list of authors and their affiliations would be good. Extracting out the references would be amazing.\nIdeally this would be an open source solution. \nThe problem is that not all PDF's encode the text, and many which do fail to preserve the logical order of the text, so just doing pdf2text gives you line 1 of column 1, line 1 of column 2, line 2 of column 1 etc.\nI know there's a lot of libraries. It's identifying the abstract, title authors etc. on the document that I need to solve. This is never going to be possible every time, but 80% would save a lot of human effort.\n\nA: We ran a contest to solve this problem at Dev8D in London, Feb 2010 and we got a nice little GPL tool created as a result. We've not yet integrated it into our systems but it's there in the world.\nhttps://code.google.com/p/pdfssa4met/\n"]], "1_corpus": "stackexchange"}
|
83 |
+
{"tstamp": 1722582404.1801, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "fc5e2cb264fc4ccf8b5db6c172e6d356", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "What distinguished global and specialized tuberculosis strains?", "0_output": [["What distinguished global and specialized tuberculosis strains?", "Mycobacterium tuberculosis\n\nLineages 2, 3 and 4 all share a unique deletion event (tbD1) and thus form a monophyletic group. Types 5 and 6 are closely related to the animal strains of MTBC, which do not normally infect humans. Lineage 3 has been divided into two clades: CAS-Kili (found in Tanzania) and CAS-Delhi (found in India and Saudi Arabia).\nLineage 4 is also known as the Euro-American lineage. Subtypes within this type include Latin American Mediterranean, Uganda I, Uganda II, Haarlem, X, and Congo.\nA much cited study reported that M. tuberculosis has co-evolved with human populations, and that the most recent common ancestor of the M. tuberculosis complex evolved between 40,000 and 70,000 years ago. However, a later study that included genome sequences from M. tuberculosis complex members extracted from three 1,000-year-old Peruvian mummies, came to quite different conclusions. If the most recent common ancestor of the M. tuberculosis complex were 40,000 to 70,000 years old, this would necessitate an evolutionary rate much lower than any estimates produced by genomic analyses of heterochronous samples, suggesting a far more recent common ancestor of the M. tuberculosis complex as little as 6000 years ago.\nAn analysis of over 3000 strains of M. bovis from 35 countries suggested an Africa origin for this species."]], "0_corpus": "wikipedia", "1_conv_id": "09f738a5f93f485f94f313b688846bbd", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "What distinguished global and specialized tuberculosis strains?", "1_output": [["What distinguished global and specialized tuberculosis strains?", "Mycobacterium tuberculosis\n\nUntil the early 2000s, M. tuberculosis strains were typed by pulsed field gel electrophoresis. This has now been superseded by variable numbers of tandem repeats (VNTR), which is technically easier to perform and allows better discrimination between strains. This method makes use of the presence of repeated DNA sequences within the M. tuberculosis genome.\nThree generations of VNTR typing for M. tuberculosis are noted. The first scheme, called exact tandem repeat, used only five loci, but the resolution afforded by these five loci was not as good as PFGE. The second scheme, called mycobacterial interspersed repetitive unit, had discrimination as good as PFGE. The third generation (mycobacterial interspersed repetitive unit – 2) added a further nine loci to bring the total to 24. This provides a degree of resolution greater than PFGE and is currently the standard for typing M. tuberculosis. However, with regard to archaeological remains, additional evidence may be required because of possible contamination from related soil bacteria.\nAntibiotic resistance in M. tuberculosis typically occurs due to either the accumulation of mutations in the genes targeted by the antibiotic or a change in titration of the drug. M. tuberculosis is considered to be multidrug-resistant (MDR TB) if it has developed drug resistance to both rifampicin and isoniazid, which are the most important antibiotics used in treatment. Additionally, extensively drug-resistant M. 
tuberculosis (XDR TB) is characterized by resistance to both isoniazid and rifampin, plus any fluoroquinolone and at least one of three injectable second-line drugs (i.e., amikacin, kanamycin, or capreomycin)."]], "1_corpus": "wikipedia"}
|
data/retrieval_individual-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -301,3 +301,7 @@
301 |
{"tstamp": 1722580477.1041, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722580476.8653, "finish": 1722580477.1041, "ip": "", "conv_id": "43d39fdb5a8e45f3aea2d9c14a9ead1d", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "what are the best tools for extracting clinical trial data from pdf research papers?", "output": [["what are the best tools for extracting clinical trial data from pdf research papers?", "Stackoverflow Stackexchange\n\nQ: Extracting information from PDFs of research papers I need a mechanism for extracting bibliographic metadata from PDF documents, to save people entering it by hand or cut-and-pasting it.\nAt the very least, the title and abstract. The list of authors and their affiliations would be good. Extracting out the references would be amazing.\nIdeally this would be an open source solution. \nThe problem is that not all PDF's encode the text, and many which do fail to preserve the logical order of the text, so just doing pdf2text gives you line 1 of column 1, line 1 of column 2, line 2 of column 1 etc.\nI know there's a lot of libraries. It's identifying the abstract, title authors etc. on the document that I need to solve. This is never going to be possible every time, but 80% would save a lot of human effort.\n\nA: We ran a contest to solve this problem at Dev8D in London, Feb 2010 and we got a nice little GPL tool created as a result. We've not yet integrated it into our systems but it's there in the world.\nhttps://code.google.com/p/pdfssa4met/\n"]], "corpus": "stackexchange"}
|
302 |
{"tstamp": 1722580558.6454, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722580556.4212, "finish": 1722580558.6454, "ip": "", "conv_id": "87ef1a3903964a95b8c075110c4c7ce8", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "what are the best tools for extracting clinical trial data from pdf research papers?", "output": [["what are the best tools for extracting clinical trial data from pdf research papers?", "Title: Trialstreamer: Mapping and Browsing Medical Evidence in Real-Time\n\nAbstract: We introduce Trialstreamer, a living database of clinical trial reports. Here we mainly describe the evidence extraction component; this extracts from biomedical abstracts key pieces of information that clinicians need when appraising the literature, and also the relations between these. Specifically, the system extracts descriptions of trial participants, the treatments compared in each arm (the interventions), and which outcomes were measured. The system then attempts to infer which interventions were reported to work best by determining their relationship with identified trial outcome measures. In addition to summarizing individual trials, these extracted data elements allow automatic synthesis of results across many trials on the same topic. We apply the system at scale to all reports of randomized controlled trials indexed in MEDLINE, powering the automatic generation of evidence maps, which provide a global view of the efficacy of different interventions combining data from all relevant clinical trials on a topic. We make all code and models freely available alongside a demonstration of the web interface."]], "corpus": "arxiv"}
|
303 |
{"tstamp": 1722580558.6454, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722580556.4212, "finish": 1722580558.6454, "ip": "", "conv_id": "876460b3fbb54299a26baca5870d9ddd", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "what are the best tools for extracting clinical trial data from pdf research papers?", "output": [["what are the best tools for extracting clinical trial data from pdf research papers?", "Title: A Benchmark of PDF Information Extraction Tools using a Multi-Task and Multi-Domain Evaluation Framework for Academic Documents\n\nAbstract: Extracting information from academic PDF documents is crucial for numerous indexing, retrieval, and analysis use cases. Choosing the best tool to extract specific content elements is difficult because many, technically diverse tools are available, but recent performance benchmarks are rare. Moreover, such benchmarks typically cover only a few content elements like header metadata or bibliographic references and use smaller datasets from specific academic disciplines. We provide a large and diverse evaluation framework that supports more extraction tasks than most related datasets. Our framework builds upon DocBank, a multi-domain dataset of 1.5M annotated content elements extracted from 500K pages of research papers on arXiv. Using the new framework, we benchmark ten freely available tools in extracting document metadata, bibliographic references, tables, and other content elements from academic PDF documents. GROBID achieves the best metadata and reference extraction results, followed by CERMINE and Science Parse. For table extraction, Adobe Extract outperforms other tools, even though the performance is much lower than for other content elements. All tools struggle to extract lists, footers, and equations. We conclude that more research on improving and combining tools is necessary to achieve satisfactory extraction quality for most content elements. Evaluation datasets and frameworks like the one we present support this line of research. We make our data and code publicly available to contribute toward this goal."]], "corpus": "arxiv"}
|
304 |
+
{"tstamp": 1722582384.337, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722582381.9846, "finish": 1722582384.337, "ip": "", "conv_id": "fc5e2cb264fc4ccf8b5db6c172e6d356", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "What distinguished global and specialized tuberculosis strains?", "output": [["What distinguished global and specialized tuberculosis strains?", "Mycobacterium tuberculosis\n\nLineages 2, 3 and 4 all share a unique deletion event (tbD1) and thus form a monophyletic group. Types 5 and 6 are closely related to the animal strains of MTBC, which do not normally infect humans. Lineage 3 has been divided into two clades: CAS-Kili (found in Tanzania) and CAS-Delhi (found in India and Saudi Arabia).\nLineage 4 is also known as the Euro-American lineage. Subtypes within this type include Latin American Mediterranean, Uganda I, Uganda II, Haarlem, X, and Congo.\nA much cited study reported that M. tuberculosis has co-evolved with human populations, and that the most recent common ancestor of the M. tuberculosis complex evolved between 40,000 and 70,000 years ago. However, a later study that included genome sequences from M. tuberculosis complex members extracted from three 1,000-year-old Peruvian mummies, came to quite different conclusions. If the most recent common ancestor of the M. tuberculosis complex were 40,000 to 70,000 years old, this would necessitate an evolutionary rate much lower than any estimates produced by genomic analyses of heterochronous samples, suggesting a far more recent common ancestor of the M. tuberculosis complex as little as 6000 years ago.\nAn analysis of over 3000 strains of M. bovis from 35 countries suggested an Africa origin for this species."]], "corpus": "wikipedia"}
|
305 |
+
{"tstamp": 1722582384.337, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722582381.9846, "finish": 1722582384.337, "ip": "", "conv_id": "09f738a5f93f485f94f313b688846bbd", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "What distinguished global and specialized tuberculosis strains?", "output": [["What distinguished global and specialized tuberculosis strains?", "Mycobacterium tuberculosis\n\nUntil the early 2000s, M. tuberculosis strains were typed by pulsed field gel electrophoresis. This has now been superseded by variable numbers of tandem repeats (VNTR), which is technically easier to perform and allows better discrimination between strains. This method makes use of the presence of repeated DNA sequences within the M. tuberculosis genome.\nThree generations of VNTR typing for M. tuberculosis are noted. The first scheme, called exact tandem repeat, used only five loci, but the resolution afforded by these five loci was not as good as PFGE. The second scheme, called mycobacterial interspersed repetitive unit, had discrimination as good as PFGE. The third generation (mycobacterial interspersed repetitive unit – 2) added a further nine loci to bring the total to 24. This provides a degree of resolution greater than PFGE and is currently the standard for typing M. tuberculosis. However, with regard to archaeological remains, additional evidence may be required because of possible contamination from related soil bacteria.\nAntibiotic resistance in M. tuberculosis typically occurs due to either the accumulation of mutations in the genes targeted by the antibiotic or a change in titration of the drug. M. tuberculosis is considered to be multidrug-resistant (MDR TB) if it has developed drug resistance to both rifampicin and isoniazid, which are the most important antibiotics used in treatment. Additionally, extensively drug-resistant M. tuberculosis (XDR TB) is characterized by resistance to both isoniazid and rifampin, plus any fluoroquinolone and at least one of three injectable second-line drugs (i.e., amikacin, kanamycin, or capreomycin)."]], "corpus": "wikipedia"}
|
306 |
+
{"tstamp": 1722582435.1448, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722582432.9495, "finish": 1722582435.1448, "ip": "", "conv_id": "ee929ecb7cb64c23a3e872496a2b3001", "model_name": "GritLM/GritLM-7B", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
|
307 |
+
{"tstamp": 1722582435.1448, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722582432.9495, "finish": 1722582435.1448, "ip": "", "conv_id": "0daa498a214247d685dee2a2f7ed1ffa", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
|
data/retrieval_side_by_side-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -30,3 +30,4 @@
30 |
{"tstamp": 1722555573.1267, "task_type": "retrieval", "type": "leftvote", "models": ["text-embedding-004", "Salesforce/SFR-Embedding-2_R"], "ip": "", "0_conv_id": "2bafea3673fa4828b4bcd4b45ea6adbc", "0_model_name": "text-embedding-004", "0_prompt": "important papers about retrieval", "0_output": [["important papers about retrieval", "Title: Information Retrieval: Recent Advances and Beyond\n\nAbstract: In this paper, we provide a detailed overview of the models used for information retrieval in the first and second stages of the typical processing chain. We discuss the current state-of-the-art models, including methods based on terms, semantic retrieval, and neural. Additionally, we delve into the key topics related to the learning process of these models. This way, this survey offers a comprehensive understanding of the field and is of interest for for researchers and practitioners entering/working in the information retrieval domain."]], "0_corpus": "arxiv", "1_conv_id": "28a597fbaeba494db60a25995982638f", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "important papers about retrieval", "1_output": [["important papers about retrieval", "Title: A Survey on Retrieval of Mathematical Knowledge\n\nAbstract: We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques."]], "1_corpus": "arxiv"}
|
31 |
{"tstamp": 1722558555.9393, "task_type": "retrieval", "type": "tievote", "models": ["voyage-multilingual-2", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "f2e8a7c6306d4a5eb9a5e0709139d9a5", "0_model_name": "voyage-multilingual-2", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "2b7cb43b22264837ac37be95f960bd56", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
|
32 |
{"tstamp": 1722564545.322, "task_type": "retrieval", "type": "tievote", "models": ["BAAI/bge-large-en-v1.5", "intfloat/e5-mistral-7b-instruct"], "ip": "", "0_conv_id": "e4696176d791415eb284fe9aa16fec17", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "1b84753b45cf4c20b12fba0945e20d68", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
|
33 |
+
{"tstamp": 1722582440.3159, "task_type": "retrieval", "type": "tievote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "ee929ecb7cb64c23a3e872496a2b3001", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "0daa498a214247d685dee2a2f7ed1ffa", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
|
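
A minimal sketch of how vote records like the ones in this commit could be tallied. It assumes the battle and side-by-side JSONL files use the fields visible above ("type" in leftvote/rightvote/tievote plus "0_model_name" and "1_model_name"); the function name and example path are illustrative, not part of this commit.

import json
from collections import Counter

def tally_votes(path):
    # Count wins per model from a retrieval battle JSONL file.
    # Assumes one JSON record per line with 'type', '0_model_name', '1_model_name',
    # as in the records shown in the diff above.
    wins = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            left, right = rec.get("0_model_name"), rec.get("1_model_name")
            vote = rec.get("type")
            if vote == "leftvote":
                wins[left] += 1
            elif vote == "rightvote":
                wins[right] += 1
            elif vote == "tievote":
                # split credit on ties
                wins[left] += 0.5
                wins[right] += 0.5
    return wins

# Example usage (hypothetical local path):
# print(tally_votes("data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl"))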