Scheduled Commit
data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl
CHANGED
@@ -144,3 +144,5 @@
144 |
{"tstamp": 1722497499.2227, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "d407fb754a174d19b8610877eb5274ec", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "a paper which evaluates LLMs abilities to roleplay as a character", "0_output": [["a paper which evaluates LLMs abilities to roleplay as a character", "Title: Virtual signatures of dark sectors in Higgs couplings\n\nAbstract: Where collider searches for resonant invisible particles loose steam, dark sectors might leave their trace as virtual effects in precision observables. Here we explore this option in the framework of Higgs portal models, where a sector of dark fermions interacts with the standard model through a strong renormalizable coupling to the Higgs boson. We show that precise measurements of Higgs-gauge and triple Higgs interactions can probe dark fermions up to the TeV scale through virtual corrections. Observation prospects at the LHC and future lepton colliders are discussed for the so-called singlet-doublet model of Majorana fermions, a generalization of the bino-higgsino scenario in supersymmetry. We advocate a two-fold search strategy for dark sectors through direct and indirect observables."]], "0_corpus": "arxiv", "1_conv_id": "7f72633aeb9b4ed6b74dde06a6dbc22e", "1_model_name": "embed-english-v3.0", "1_prompt": "a paper which evaluates LLMs abilities to roleplay as a character", "1_output": [["a paper which evaluates LLMs abilities to roleplay as a character", "Title: Character-LLM: A Trainable Agent for Role-Playing\n\nAbstract: Large language models (LLMs) can be used to serve as agents to simulate human behaviors, given the powerful ability to understand human instructions and provide high-quality generated texts. Such ability stimulates us to wonder whether LLMs can simulate a person in a higher form than simple human behaviors. Therefore, we aim to train an agent with the profile, experience, and emotional states of a specific person instead of using limited prompts to instruct ChatGPT API. In this work, we introduce Character-LLM that teach LLMs to act as specific people such as Beethoven, Queen Cleopatra, Julius Caesar, etc. Our method focuses on editing profiles as experiences of a certain character and training models to be personal simulacra with these experiences. To assess the effectiveness of our approach, we build a test playground that interviews trained agents and evaluates whether the agents \\textit{memorize} their characters and experiences. Experimental results show interesting observations that help build future simulacra of humankind."]], "1_corpus": "arxiv"}
145 |
{"tstamp": 1722500138.3172, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "4b176b81cf88472caae1cfc0d81ce685", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: html dangerous tags to avoid while developing a chat application I am developing a chat application using PHP and jQuery... all messages sent by the chat pair are appended to a <div>. While doing this, I found that tags such as <script> can be entered by users. What tags should I avoid while appending the user's message?\n\nA: Avoid tags that will inject content into the page:\n\n\n*\n\n*applet\n\n*script\n\n*style\n\n*link\n\n*iframe\n\n\nOther tags may be necessary to remove, as well. Maybe you don't want to embed images. These are things that you will need to determine. At the bare minimum, allow <p> and <br />.\n"]], "0_corpus": "stackexchange", "1_conv_id": "303956372a97407f85c1fc9575894a34", "1_model_name": "BM25", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What is the purpose of __name__? What does __name__ do? I have only seen it paired with __main__ and nothing else.\nI know that the classic if __name__ == __main__: defines the behavior when acting as a package vs running as stand-alone.\nHowever what other usages are there for __name__?\n\nA: __name__ is \"__main__\" if you're executing the script directly. If you're importing a module, __name__ is the name of the module.\nfoo.py:\nprint(__name__)\n\nbar.py \nimport foo\n\nRun the scripts:\n$ python foo.py\n__main__\n$ python bar.py \nfoo\n\n"]], "1_corpus": "stackexchange"}
146 |
{"tstamp": 1722501793.7073, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "294c96e39b34429fbee16330a5214453", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "b3de429db3074ed59b294d0313690229", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "1_corpus": "wikipedia"}
+ 147 |
{"tstamp": 1722501902.3938, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "ef7b20735cfa42188eacf5ac21411146", "0_model_name": "text-embedding-004", "0_prompt": "Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "0_output": [["Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "Title: Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks\n\nAbstract: There are two approaches for pairwise sentence scoring: Cross-encoders, which perform full-attention over the input pair, and Bi-encoders, which map each input independently to a dense vector space. While cross-encoders often achieve higher performance, they are too slow for many practical use cases. Bi-encoders, on the other hand, require substantial training data and fine-tuning over the target task to achieve competitive performance. We present a simple yet efficient data augmentation strategy called Augmented SBERT, where we use the cross-encoder to label a larger set of input pairs to augment the training data for the bi-encoder. We show that, in this process, selecting the sentence pairs is non-trivial and crucial for the success of the method. We evaluate our approach on multiple tasks (in-domain) as well as on a domain adaptation task. Augmented SBERT achieves an improvement of up to 6 points for in-domain and of up to 37 points for domain adaptation tasks compared to the original bi-encoder performance."]], "0_corpus": "arxiv", "1_conv_id": "b7bb44f3417f4b2ab25e2480a5a2867e", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "1_output": [["Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "Title: Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring\n\nAbstract: The use of deep pre-trained bidirectional transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on three existing tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks."]], "1_corpus": "arxiv"}
+ 148 |
{"tstamp": 1722501976.821, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "3efd808ac452493cab96bf4561169945", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "0_output": [["What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "Sausage\n\nGreece Loukániko (Greek: λουκάνικο) is the common Greek word for pork sausage.\nThe name 'loukaniko' is derived from ancient Roman cuisine.\nNordic countries\nNordic sausages (, , , ) are usually made of 60–80% very finely ground pork, very sparsely spiced with pepper, nutmeg, allspice or similar sweet spices (ground mustard seed, onion and sugar may also be added). Water, lard, rind, potato starch flour and soy or milk protein are often added for binding and filling. In southern Norway, grilled and wiener sausages are often wrapped in a lompe, a potato flatbread somewhat similar to a lefse.\nVirtually all sausages will be industrially precooked and either fried or warmed in hot water by the consumer or at the hot dog stand. Since hot dog stands are ubiquitous in Denmark (known as Pølsevogn) some people regard pølser as one of the national dishes, perhaps along with medisterpølse, a fried, finely ground pork and bacon sausage. The most noticeable aspect of Danish boiled sausages (never the fried ones) is that the casing often contains a traditional bright-red dye. They are also called wienerpølser and legend has it they originate from Vienna where it was once ordered that day-old sausages be dyed as a means of warning."]], "0_corpus": "wikipedia", "1_conv_id": "9edd7da096c64b5babe284081ff1035c", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "1_output": [["What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "Hot dog variations\n\nGermany Even though the type of sausage that led to the creation of the modern hot dog in the United States is undeniably German, German hot dog culture is mainly influenced by Scandinavian—mostly Danish—elements. This does not affect the type of sausage (unlike the Danish rød pølse, German hot dogs usually have frankfurt- or wiener-style sausages) but the condiments—ketchup, mustard or Danish-style remoulade depending on individual preference as well as roasted onions and thinly sliced pickles—are usually marketed as a Danish influence. While sauerkraut is a common, distinctively German, topping in the US, it is not typical in Germany. Grilling the sausage, which is common in America, is largely unknown in Germany where it is steamed like its Scandinavian counterparts.\nIceland\nThe Iceland Monitor writes that \"locals, expatriates and holiday-makers alike must all be familiar with Iceland’s national dish—hot-dogs with all the trimmings (‘pylsa með öllu’ in Icelandic). The hot-dog sausage is served in bread and liberally accompanied with fried onion, raw onion, mustard, ketchup and remoulade (a cold sauce made with mayonnaise and various condiments and herbs). This delicious snack is traditionally washed down with copious amounts of Coca-Cola.\"\nIn August 2006, the British newspaper The Guardian selected Iceland's Bæjarins beztu as the best hot dog stand in Europe. Bæjarins beztu pylsur (English: The best hot dog in town) often shortened to simply \"Bæjarins beztu,\" is a popular hot dog stand in central Reykjavík. Hot dogs from this stand are derived from the Danish hot dog. 
They are often ordered with \"the works,\" i.e., all condiments, or in Icelandic \"eina með öllu\". Icelandic hot dogs are made from Icelandic lamb, pork and beef and served with a mix of raw white and fried onion, ketchup, sweet brown mustard, and remoulade."]], "1_corpus": "wikipedia"}
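
The battle file above logs pairwise votes: each record names two retrieval models ("0_model_name", "1_model_name") and a verdict ("leftvote", "rightvote", or "bothbadvote"). Below is a minimal Python sketch for tallying wins per model, assuming only the record layout shown above; the field names are taken from those records and nothing else about the file is guaranteed.

import json
from collections import Counter

# Tally wins per retrieval model from the battle log shown above.
# "bothbadvote" records credit neither side.
wins = Counter()
with open("data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if rec["type"] == "leftvote":
            wins[rec["0_model_name"]] += 1
        elif rec["type"] == "rightvote":
            wins[rec["1_model_name"]] += 1

for model, count in wins.most_common():
    print(f"{count:4d}  {model}")
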
data/retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl
CHANGED
@@ -378,3 +378,5 @@
378 |
{"tstamp": 1722501772.8711, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722501770.6133, "finish": 1722501772.8711, "ip": "", "conv_id": "b3de429db3074ed59b294d0313690229", "model_name": "GritLM/GritLM-7B", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
379 |
{"tstamp": 1722501810.1243, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722501809.8364, "finish": 1722501810.1243, "ip": "", "conv_id": "ef7b20735cfa42188eacf5ac21411146", "model_name": "text-embedding-004", "prompt": "Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "output": [["Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "Title: Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks\n\nAbstract: There are two approaches for pairwise sentence scoring: Cross-encoders, which perform full-attention over the input pair, and Bi-encoders, which map each input independently to a dense vector space. While cross-encoders often achieve higher performance, they are too slow for many practical use cases. Bi-encoders, on the other hand, require substantial training data and fine-tuning over the target task to achieve competitive performance. We present a simple yet efficient data augmentation strategy called Augmented SBERT, where we use the cross-encoder to label a larger set of input pairs to augment the training data for the bi-encoder. We show that, in this process, selecting the sentence pairs is non-trivial and crucial for the success of the method. We evaluate our approach on multiple tasks (in-domain) as well as on a domain adaptation task. Augmented SBERT achieves an improvement of up to 6 points for in-domain and of up to 37 points for domain adaptation tasks compared to the original bi-encoder performance."]], "corpus": "arxiv"}
380 |
{"tstamp": 1722501810.1243, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722501809.8364, "finish": 1722501810.1243, "ip": "", "conv_id": "b7bb44f3417f4b2ab25e2480a5a2867e", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "output": [["Research on improving bi-encoder performance using a cross-encoder for generating additional training data.", "Title: Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring\n\nAbstract: The use of deep pre-trained bidirectional transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on three existing tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks."]], "corpus": "arxiv"}
+ 381 |
{"tstamp": 1722501942.3173, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722501940.0534, "finish": 1722501942.3173, "ip": "", "conv_id": "3efd808ac452493cab96bf4561169945", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "output": [["What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "Sausage\n\nGreece Loukániko (Greek: λουκάνικο) is the common Greek word for pork sausage.\nThe name 'loukaniko' is derived from ancient Roman cuisine.\nNordic countries\nNordic sausages (, , , ) are usually made of 60–80% very finely ground pork, very sparsely spiced with pepper, nutmeg, allspice or similar sweet spices (ground mustard seed, onion and sugar may also be added). Water, lard, rind, potato starch flour and soy or milk protein are often added for binding and filling. In southern Norway, grilled and wiener sausages are often wrapped in a lompe, a potato flatbread somewhat similar to a lefse.\nVirtually all sausages will be industrially precooked and either fried or warmed in hot water by the consumer or at the hot dog stand. Since hot dog stands are ubiquitous in Denmark (known as Pølsevogn) some people regard pølser as one of the national dishes, perhaps along with medisterpølse, a fried, finely ground pork and bacon sausage. The most noticeable aspect of Danish boiled sausages (never the fried ones) is that the casing often contains a traditional bright-red dye. They are also called wienerpølser and legend has it they originate from Vienna where it was once ordered that day-old sausages be dyed as a means of warning."]], "corpus": "wikipedia"}
+ 382 |
{"tstamp": 1722501942.3173, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722501940.0534, "finish": 1722501942.3173, "ip": "", "conv_id": "9edd7da096c64b5babe284081ff1035c", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "output": [["What will I get, if I orde a 'kradser' from a Danish hot dog stand?", "Hot dog variations\n\nGermany Even though the type of sausage that led to the creation of the modern hot dog in the United States is undeniably German, German hot dog culture is mainly influenced by Scandinavian—mostly Danish—elements. This does not affect the type of sausage (unlike the Danish rød pølse, German hot dogs usually have frankfurt- or wiener-style sausages) but the condiments—ketchup, mustard or Danish-style remoulade depending on individual preference as well as roasted onions and thinly sliced pickles—are usually marketed as a Danish influence. While sauerkraut is a common, distinctively German, topping in the US, it is not typical in Germany. Grilling the sausage, which is common in America, is largely unknown in Germany where it is steamed like its Scandinavian counterparts.\nIceland\nThe Iceland Monitor writes that \"locals, expatriates and holiday-makers alike must all be familiar with Iceland’s national dish—hot-dogs with all the trimmings (‘pylsa með öllu’ in Icelandic). The hot-dog sausage is served in bread and liberally accompanied with fried onion, raw onion, mustard, ketchup and remoulade (a cold sauce made with mayonnaise and various condiments and herbs). This delicious snack is traditionally washed down with copious amounts of Coca-Cola.\"\nIn August 2006, the British newspaper The Guardian selected Iceland's Bæjarins beztu as the best hot dog stand in Europe. Bæjarins beztu pylsur (English: The best hot dog in town) often shortened to simply \"Bæjarins beztu,\" is a popular hot dog stand in central Reykjavík. Hot dogs from this stand are derived from the Danish hot dog. They are often ordered with \"the works,\" i.e., all condiments, or in Icelandic \"eina með öllu\". Icelandic hot dogs are made from Icelandic lamb, pork and beef and served with a mix of raw white and fried onion, ketchup, sweet brown mustard, and remoulade."]], "corpus": "wikipedia"}
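
The individual file stores one record per model response, keyed by "conv_id", and the battle records reference these via "0_conv_id" and "1_conv_id" (for instance, conv_id ef7b20735cfa42188eacf5ac21411146 appears in both files above). A minimal sketch, under the same assumptions about the layout, that joins the two logs:

import json

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

# Index individual responses by conv_id, then resolve both sides of each battle.
individual = {
    rec["conv_id"]: rec
    for rec in load_jsonl("data/retrieval_individual-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl")
}
for battle in load_jsonl("data/retrieval_battle-407c4836-37e2-4f9f-8e9b-06706cc4440c.jsonl"):
    left = individual.get(battle["0_conv_id"])
    right = individual.get(battle["1_conv_id"])
    if left and right:
        print(battle["type"], "|", left["model_name"], "vs", right["model_name"])
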