Scheduled Commit
data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -117,3 +117,11 @@
117 |
{"tstamp": 1722602898.0651, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "85d5a43ad1af4aaf889301aace640303", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Incorporating WordNet glosses into BERT for superior WSD performance.", "0_output": [["Incorporating WordNet glosses into BERT for superior WSD performance.", "Title: GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge\n\nAbstract: Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context. Traditional supervised methods rarely take into consideration the lexical resources like WordNet, which are widely utilized in knowledge-based methods. Recent studies have shown the effectiveness of incorporating gloss (sense definition) into neural networks for WSD. However, compared with traditional word expert supervised methods, they have not achieved much improvement. In this paper, we focus on how to better leverage gloss knowledge in a supervised neural WSD system. We construct context-gloss pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained BERT model on SemCor3.0 training corpus and the experimental results on several English all-words WSD benchmark datasets show that our approach outperforms the state-of-the-art systems."]], "0_corpus": "arxiv", "1_conv_id": "dc4ce6d44afb43fa9912cd2d73b33150", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Incorporating WordNet glosses into BERT for superior WSD performance.", "1_output": [["Incorporating WordNet glosses into BERT for superior WSD performance.", "Title: Order Preservation in Limit Algebras\n\nAbstract: The matrix units of a digraph algebra, A, induce a relation, known as the diagonal order, on the projections in a masa in the algebra. Normalizing partial isometries in A act on these projections by conjugation; they are said to be order preserving when they respect the diagonal order. Order preserving embeddings, in turn, are those embeddings which carry order preserving normalizers to order preserving normalizers. This paper studies operator algebras which are direct limits of finite dimensional algebras with order preserving embeddings. We give a complete classification of direct limits of full triangular matrix algebras with order preserving embeddings. We also investigate the problem of characterizing algebras with order preserving embeddings."]], "1_corpus": "arxiv"}
118 |
{"tstamp": 1722602908.0151, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "10cf7ab193db40edb8773c13f994b75b", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "0_output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "0_corpus": "arxiv", "1_conv_id": "d69db2236f7048f1bdcb2f1fa82510f5", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "1_output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. 
We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "1_corpus": "arxiv"}
119 |
{"tstamp": 1722602919.4472, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f3f9752c34f5402f8178b4cd5b8a31fa", "0_model_name": "BM25", "0_prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "0_output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "0_corpus": "arxiv", "1_conv_id": "87f840669a704ace9c11b13737d46ad0", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "1_output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "1_corpus": "arxiv"}
120 | +
{"tstamp": 1722602938.3167, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "ed5b98c14ac34cc5b904e9f66474f7ea", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "0_output": [["I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "Title: Simple and Superlattice Turing Patterns in Reaction-Diffusion Systems: Bifurcation, Bistability, and Parameter Collapse\n\nAbstract: This paper investigates the competition between both simple (e.g. stripes, hexagons) and ``superlattice'' (super squares, super hexagons) Turing patterns in two-component reaction-diffusion systems. ``Superlattice'' patterns are formed from eight or twelve Fourier modes, and feature structure at two different length scales. Using perturbation theory, we derive simple analytical expressions for the bifurcation equation coefficients on both rhombic and hexagonal lattices. These expressions show that, no matter how complicated the reaction kinectics, the nonlinear reaction terms reduce to just four effective terms within the bifurcation equation coefficients. Moreover, at the hexagonal degeneracy -- when the quadratic term in the hexagonal bifurcation equation disappears -- the number of effective system parameters drops to two, allowing a complete characterization of the possible bifurcation results at this degeneracy. The general results are then applied to specific model equations, to investigate the stability of different patterns within those models."]], "0_corpus": "arxiv", "1_conv_id": "8695bc0922cc428ea1b6b4fdf9b11e4a", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "1_output": [["I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "Title: Language Models are Multilingual Chain-of-Thought Reasoners\n\nAbstract: We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at https://github.com/google-research/url-nlp."]], "1_corpus": "arxiv"}
121 | +
{"tstamp": 1722602950.6974, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "4fda6bc1d6fd4287ab4ed56a82d5d721", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "what is difference between ff and latch?", "0_output": [["what is difference between ff and latch?", "Electronics Stackexchange\n\nQ: Difference between latch and flip-flop? What is the difference between a latch and a flip-flop?\nWhat I am thinking is that a latch is equivalent to a flip-flop since it is used to store bits and is also equivalent to a register which is also used to store data. But after reading some articles on the internet I found differences between latches and flip-flops based on edge triggered and level sensitive functionality?\nWhat does that mean? Is a flip-flop the same as a latch or not? \n\nA: A latch passes the input data thru directly in the open state, and freezes the output in the latched state. The latch responds to the level of the control signal.\nThere are various types of flip-flops, but basically these change state on the edge of the control signal, and in some cases the data input(s). A classic D flip-flip is most like a latch, except it only looks at the input on a particular edge of the clock and freezes the output all the remaining time.\n"]], "0_corpus": "stackexchange", "1_conv_id": "ea25b34d60f44d8dbf34041d1c2ae60c", "1_model_name": "voyage-multilingual-2", "1_prompt": "what is difference between ff and latch?", "1_output": [["what is difference between ff and latch?", "Electronics Stackexchange\n\nQ: Difference between latch and flip-flop? What is the difference between a latch and a flip-flop?\nWhat I am thinking is that a latch is equivalent to a flip-flop since it is used to store bits and is also equivalent to a register which is also used to store data. But after reading some articles on the internet I found differences between latches and flip-flops based on edge triggered and level sensitive functionality?\nWhat does that mean? Is a flip-flop the same as a latch or not? \n\nA: A latch passes the input data thru directly in the open state, and freezes the output in the latched state. The latch responds to the level of the control signal.\nThere are various types of flip-flops, but basically these change state on the edge of the control signal, and in some cases the data input(s). A classic D flip-flip is most like a latch, except it only looks at the input on a particular edge of the clock and freezes the output all the remaining time.\n"]], "1_corpus": "stackexchange"}
122 | +
{"tstamp": 1722602967.9425, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "617582837ef34430831d11932519bfda", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Tool that outperforms existing baselines in Wikidata entity and relation linking", "0_output": [["Tool that outperforms existing baselines in Wikidata entity and relation linking", "Title: Time dependence and (non)commutativity of symmetries of evolution equations\n\nAbstract: We present easily verifiable sufficient conditions of time-independence and commutativity for local and nonlocal symmetries for a large class of homogeneous (1+1)-dimensional evolution systems. In contrast with the majority of known results, the verification of our conditions does not require the existence of master symmetry or hereditary recursion operator for the system in question. We also give simple sufficient conditions for the existence of infinite sets of time-independent symmetries for homogeneous (1+1)-dimensional evolution systems within slightly modified master symmetry approach."]], "0_corpus": "arxiv", "1_conv_id": "ee2f041a4bbe49869341f547fe5d7006", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Tool that outperforms existing baselines in Wikidata entity and relation linking", "1_output": [["Tool that outperforms existing baselines in Wikidata entity and relation linking", "Title: Falcon 2.0: An Entity and Relation Linking Tool over Wikidata\n\nAbstract: The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from the text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, still, there are limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, first joint entity, and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is public and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository. We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/."]], "1_corpus": "arxiv"}
123 | +
{"tstamp": 1722602981.2728, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "3c6848e69d6546b1ab845c55a5c5829a", "0_model_name": "BM25", "0_prompt": "what is the difference between a homegroup and network?", "0_output": [["what is the difference between a homegroup and network?", "Superuser Stackexchange\n\nQ: Is it secure to run Windows 7 HomeGroup over an unsecured wifi network? Just wondering what the security implications are of running Windows 7 HomeGroup over an unsecured (open) WiFi network.\nAre there known hacks on the HomeGroup password or is it possible to otherwise break in to HomeGroup files, resources, printers, and computers without knowing the password?\nCan someone \"sniff\" the files that are being shared?\nMy gut tells me that there's no encryption of HomeGroup traffic, but I don't know for sure.\n\nA: This is very unsecure and a very bad idea. There is no encryption of traffic passing between computers on a HomeGroup network and by passing it over an unencrypted network makes it very easy to listen in on that data. While even an encrypted wifi is not foolproof, it's far, far better than leaving it open and makes it much harder for hackers to find an in to your network. \n"]], "0_corpus": "stackexchange", "1_conv_id": "8041d3fd3b964738a0ef325790ffebe6", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "what is the difference between a homegroup and network?", "1_output": [["what is the difference between a homegroup and network?", "Superuser Stackexchange\n\nQ: Difference between Public networks and Home networks What is the difference exactly between a public network and a private network in windows, i.e., when switching network type from public to home in windows networking center, are there more ports open? Do firewall rules change and how? Does that allow a hacker in my network to perform more actions than before?\n\nA: The main difference is this:\n\nComputers on a home network can belong to a homegroup. Network discovery is turned on for home networks, which allows you to see other computers and devices on the network and allows other network users to see your computer.\n\nSee: Windows: Choosing a network location\nIn Windows Firewall, there may be different rules for Home and Public networks so you could possibly have more open ports.\n"]], "1_corpus": "stackexchange"}
124 | +
{"tstamp": 1722603055.531, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "50076083067c400a9e743c4716c4b03a", "0_model_name": "embed-english-v3.0", "0_prompt": "leader of carthage in the first punic war", "0_output": [["leader of carthage in the first punic war", "History of Tunisia\n\nMago (6th century) was King of Carthage, Punic MLK or malik (Greek basileus), not merely a SFT or Suffet, which then was only a minor official. Mago as MLK was head of state and war leader; being MLK was also a religious office. His family was considered to possess a sacred quality. Mago's office was somewhat similar to that of Pharaoh, but although kept in a family it was not hereditary, it was limited by legal consent; however, the council of elders and the popular assembly are late institutions. Carthage was founded by the King of Tyre who had a royal monopoly on this trading venture. Accordingly, royal authority was the traditional source of power the MLK of Carthage possessed. Later, as other Phoenician ship companies entered the trading region, and so associated with the city-state, the MLK of Carthage had to keep order among a rich variety of powerful merchants in their negotiations over risky commerce across the seas. The office of MLK began to be transformed, yet it was not until the aristocrats of Carthage became landowners that a council of elders was institutionalized.\nPunic Wars with Rome\nThe emergence of the Roman Republic and its developing foreign interests led to sustained rivalry with Carthage for dominion of the western Mediterranean. As early as 509 BC Carthage and Rome had entered into treaty status, but eventually their opposing positions led to disagreement, alienation, and conflict.\nThe First Punic War (264–241 BC) started in Sicily. It developed into a naval war in which the Romans learned how to fight at sea and prevailed. Carthage lost Sardinia and its western portion of Sicily. Following their defeat, the Mercenary revolt threatened the social order of Carthage, which they survived under their opposing leaders Hanno II the Great, and Hamilcar Barca, father of Hannibal."]], "0_corpus": "wikipedia", "1_conv_id": "d1316680113045cbbd0c7b783255ffb1", "1_model_name": "voyage-multilingual-2", "1_prompt": "leader of carthage in the first punic war", "1_output": [["leader of carthage in the first punic war", "Hannibal\n\nHannibal (; ; 247 – between 183 and 181 BC) was a Carthaginian general and statesman who commanded the forces of Carthage in their battle against the Roman Republic during the Second Punic War.\nHannibal's father, Hamilcar Barca, was a leading Carthaginian general during the First Punic War. His younger brothers were Mago and Hasdrubal; his brother-in-law was Hasdrubal the Fair, who commanded other Carthaginian armies. Hannibal lived during a period of great tension in the Mediterranean Basin, triggered by the emergence of the Roman Republic as a great power with its defeat of Carthage in the First Punic War. Revanchism prevailed in Carthage, symbolized by the pledge that Hannibal made to his father to \"never be a friend of Rome\"."]], "1_corpus": "wikipedia"}
125 | +
{"tstamp": 1722603080.1551, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "25717cf1cbec48b384c1fec5855b198f", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "what is the difference between cell line and cell strain?", "0_output": [["what is the difference between cell line and cell strain?", "Ell Stackexchange\n\nQ: \"line\" & \"range\" What is the difference in meaning about the words (\"line\" and \"range\") used as follows:\n\"a product line\" and \"a product range\".\nIf there is no difference what is the most natural way of expressing a set of products?\n\nA: \"Product line\" is generally used to refer to a set of related products (especially the products manufactured by a particular company). For example, if your company makes pet food, one line of products might be cat food, and the individual products would be a 5-pound bag, a 10-pound bag, and a 25-pound bag; your dog food would be a different product line.\nA \"range of products\" (not a \"product range\") is what a store would generally have to offer, meaning a variety of different brands, types, and sizes of various products. \nIf you were a store owner, you might say something like \"we sell a wide range of products, including Company X's entire product line.\"\nThere may be other ways to express a set of products, depending on your context.\n"]], "0_corpus": "stackexchange", "1_conv_id": "649fcdec1cdb4b52a73fee331229d7aa", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "what is the difference between cell line and cell strain?", "1_output": [["what is the difference between cell line and cell strain?", "Stackoverflow Stackexchange\n\nQ: Difference between % and %% in ipython magic commands What difference does it make to use %timeit and %%timeit in ipython? Because when I read the documentation using ?%timeit and ?%%timeit it was the same documentation. So, what difference does adding % as prefix make? \n\nA: In general, one percentage sign is referred to as line magic and applies just to code that follows it on that same line. Two percentage signs is referred to as cell magic and applies to everything that follows in that entire cell.\nAs nicely put in The Data Science Handbook:\n\nMagic commands come in two flavors: line magics, which are denoted by\na single % prefix and operate on a single line of input, and cell\nmagics, which are denoted by a double %% prefix and operate on\nmultiple lines of input.\n\nSome magic commands, like timeit, can work as line magic or cell magic:\nUsed as line magic:\n%timeit y = 2 if x < 3 else 4\n\nUsed as cell magic:\n%%timeit\nif x < 3:\n y=2\nelse:\n y=4\n\n"]], "1_corpus": "stackexchange"}
126 | +
{"tstamp": 1722603088.2807, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "5abb88dc0172449f95ccc69ac22a69f5", "0_model_name": "BM25", "0_prompt": "Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "0_output": [["Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "Title: Pyserini: An Easy-to-Use Python Toolkit to Support Replicable IR Research with Sparse and Dense Representations\n\nAbstract: Pyserini is an easy-to-use Python toolkit that supports replicable IR research by providing effective first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections. We aim to support, out of the box, the entire research lifecycle of efforts aimed at improving ranking with modern neural approaches. In particular, Pyserini supports sparse retrieval (e.g., BM25 scoring using bag-of-words representations), dense retrieval (e.g., nearest-neighbor search on transformer-encoded representations), as well as hybrid retrieval that integrates both approaches. This paper provides an overview of toolkit features and presents empirical results that illustrate its effectiveness on two popular ranking tasks. We also describe how our group has built a culture of replicability through shared norms and tools that enable rigorous automated testing."]], "0_corpus": "arxiv", "1_conv_id": "e2c25721b48f4aa995a3944800d91a10", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "1_output": [["Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "Title: Pyserini: An Easy-to-Use Python Toolkit to Support Replicable IR Research with Sparse and Dense Representations\n\nAbstract: Pyserini is an easy-to-use Python toolkit that supports replicable IR research by providing effective first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections. We aim to support, out of the box, the entire research lifecycle of efforts aimed at improving ranking with modern neural approaches. In particular, Pyserini supports sparse retrieval (e.g., BM25 scoring using bag-of-words representations), dense retrieval (e.g., nearest-neighbor search on transformer-encoded representations), as well as hybrid retrieval that integrates both approaches. This paper provides an overview of toolkit features and presents empirical results that illustrate its effectiveness on two popular ranking tasks. We also describe how our group has built a culture of replicability through shared norms and tools that enable rigorous automated testing."]], "1_corpus": "arxiv"}
127 | +
{"tstamp": 1722603099.7582, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "df75b88936a64102986c9e8ab355c203", "0_model_name": "text-embedding-004", "0_prompt": "A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "0_output": [["A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "Title: Code Llama: Open Foundation Models for Code\n\nAbstract: We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use."]], "0_corpus": "arxiv", "1_conv_id": "2877253f26e543c6af39298808ca2d7b", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "1_output": [["A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "Title: Physicists attempt to scale the ivory towers of finance\n\nAbstract: Physicists have recently begun doing research in finance, and even though this movement is less than five years old, interesting and useful contributions have already emerged. This article reviews these developments in four areas, including empirical statistical properties of prices, random-process models for price dynamics, agent-based modeling, and practical applications."]], "1_corpus": "arxiv"}
data/retrieval_individual-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -401,3 +401,19 @@
401 |
{"tstamp": 1722602904.5217, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722602904.3912, "finish": 1722602904.5217, "ip": "", "conv_id": "d69db2236f7048f1bdcb2f1fa82510f5", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "corpus": "arxiv"}
402 |
{"tstamp": 1722602916.6805, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722602914.1978, "finish": 1722602916.6805, "ip": "", "conv_id": "f3f9752c34f5402f8178b4cd5b8a31fa", "model_name": "BM25", "prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "corpus": "arxiv"}
403 |
{"tstamp": 1722602916.6805, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722602914.1978, "finish": 1722602916.6805, "ip": "", "conv_id": "87f840669a704ace9c11b13737d46ad0", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "corpus": "arxiv"}
404 | +
{"tstamp": 1722602929.6257, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722602929.431, "finish": 1722602929.6257, "ip": "", "conv_id": "ed5b98c14ac34cc5b904e9f66474f7ea", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "output": [["I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "Title: Simple and Superlattice Turing Patterns in Reaction-Diffusion Systems: Bifurcation, Bistability, and Parameter Collapse\n\nAbstract: This paper investigates the competition between both simple (e.g. stripes, hexagons) and ``superlattice'' (super squares, super hexagons) Turing patterns in two-component reaction-diffusion systems. ``Superlattice'' patterns are formed from eight or twelve Fourier modes, and feature structure at two different length scales. Using perturbation theory, we derive simple analytical expressions for the bifurcation equation coefficients on both rhombic and hexagonal lattices. These expressions show that, no matter how complicated the reaction kinectics, the nonlinear reaction terms reduce to just four effective terms within the bifurcation equation coefficients. Moreover, at the hexagonal degeneracy -- when the quadratic term in the hexagonal bifurcation equation disappears -- the number of effective system parameters drops to two, allowing a complete characterization of the possible bifurcation results at this degeneracy. The general results are then applied to specific model equations, to investigate the stability of different patterns within those models."]], "corpus": "arxiv"}
405 | +
{"tstamp": 1722602929.6257, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722602929.431, "finish": 1722602929.6257, "ip": "", "conv_id": "8695bc0922cc428ea1b6b4fdf9b11e4a", "model_name": "GritLM/GritLM-7B", "prompt": "I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "output": [["I'm looking for the research that tests chain-of-thought reasoning in multilingual settings using a new benchmark ", "Title: Language Models are Multilingual Chain-of-Thought Reasoners\n\nAbstract: We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at https://github.com/google-research/url-nlp."]], "corpus": "arxiv"}
{"tstamp": 1722602947.491, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722602945.2836, "finish": 1722602947.491, "ip": "", "conv_id": "4fda6bc1d6fd4287ab4ed56a82d5d721", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "what is difference between ff and latch?", "output": [["what is difference between ff and latch?", "Electronics Stackexchange\n\nQ: Difference between latch and flip-flop? What is the difference between a latch and a flip-flop?\nWhat I am thinking is that a latch is equivalent to a flip-flop since it is used to store bits and is also equivalent to a register which is also used to store data. But after reading some articles on the internet I found differences between latches and flip-flops based on edge triggered and level sensitive functionality?\nWhat does that mean? Is a flip-flop the same as a latch or not? \n\nA: A latch passes the input data thru directly in the open state, and freezes the output in the latched state. The latch responds to the level of the control signal.\nThere are various types of flip-flops, but basically these change state on the edge of the control signal, and in some cases the data input(s). A classic D flip-flip is most like a latch, except it only looks at the input on a particular edge of the clock and freezes the output all the remaining time.\n"]], "corpus": "stackexchange"}
{"tstamp": 1722602947.491, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722602945.2836, "finish": 1722602947.491, "ip": "", "conv_id": "ea25b34d60f44d8dbf34041d1c2ae60c", "model_name": "voyage-multilingual-2", "prompt": "what is difference between ff and latch?", "output": [["what is difference between ff and latch?", "Electronics Stackexchange\n\nQ: Difference between latch and flip-flop? What is the difference between a latch and a flip-flop?\nWhat I am thinking is that a latch is equivalent to a flip-flop since it is used to store bits and is also equivalent to a register which is also used to store data. But after reading some articles on the internet I found differences between latches and flip-flops based on edge triggered and level sensitive functionality?\nWhat does that mean? Is a flip-flop the same as a latch or not? \n\nA: A latch passes the input data thru directly in the open state, and freezes the output in the latched state. The latch responds to the level of the control signal.\nThere are various types of flip-flops, but basically these change state on the edge of the control signal, and in some cases the data input(s). A classic D flip-flip is most like a latch, except it only looks at the input on a particular edge of the clock and freezes the output all the remaining time.\n"]], "corpus": "stackexchange"}
{"tstamp": 1722602957.8357, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722602955.7097, "finish": 1722602957.8357, "ip": "", "conv_id": "617582837ef34430831d11932519bfda", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Tool that outperforms existing baselines in Wikidata entity and relation linking", "output": [["Tool that outperforms existing baselines in Wikidata entity and relation linking", "Title: Time dependence and (non)commutativity of symmetries of evolution equations\n\nAbstract: We present easily verifiable sufficient conditions of time-independence and commutativity for local and nonlocal symmetries for a large class of homogeneous (1+1)-dimensional evolution systems. In contrast with the majority of known results, the verification of our conditions does not require the existence of master symmetry or hereditary recursion operator for the system in question. We also give simple sufficient conditions for the existence of infinite sets of time-independent symmetries for homogeneous (1+1)-dimensional evolution systems within slightly modified master symmetry approach."]], "corpus": "arxiv"}
{"tstamp": 1722602957.8357, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722602955.7097, "finish": 1722602957.8357, "ip": "", "conv_id": "ee2f041a4bbe49869341f547fe5d7006", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Tool that outperforms existing baselines in Wikidata entity and relation linking", "output": [["Tool that outperforms existing baselines in Wikidata entity and relation linking", "Title: Falcon 2.0: An Entity and Relation Linking Tool over Wikidata\n\nAbstract: The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from the text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, still, there are limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, first joint entity, and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is public and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository. We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/."]], "corpus": "arxiv"}
{"tstamp": 1722602975.1281, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722602974.957, "finish": 1722602975.1281, "ip": "", "conv_id": "3c6848e69d6546b1ab845c55a5c5829a", "model_name": "BM25", "prompt": "what is the difference between a homegroup and network?", "output": [["what is the difference between a homegroup and network?", "Superuser Stackexchange\n\nQ: Is it secure to run Windows 7 HomeGroup over an unsecured wifi network? Just wondering what the security implications are of running Windows 7 HomeGroup over an unsecured (open) WiFi network.\nAre there known hacks on the HomeGroup password or is it possible to otherwise break in to HomeGroup files, resources, printers, and computers without knowing the password?\nCan someone \"sniff\" the files that are being shared?\nMy gut tells me that there's no encryption of HomeGroup traffic, but I don't know for sure.\n\nA: This is very unsecure and a very bad idea. There is no encryption of traffic passing between computers on a HomeGroup network and by passing it over an unencrypted network makes it very easy to listen in on that data. While even an encrypted wifi is not foolproof, it's far, far better than leaving it open and makes it much harder for hackers to find an in to your network. \n"]], "corpus": "stackexchange"}
{"tstamp": 1722602975.1281, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722602974.957, "finish": 1722602975.1281, "ip": "", "conv_id": "8041d3fd3b964738a0ef325790ffebe6", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "what is the difference between a homegroup and network?", "output": [["what is the difference between a homegroup and network?", "Superuser Stackexchange\n\nQ: Difference between Public networks and Home networks What is the difference exactly between a public network and a private network in windows, i.e., when switching network type from public to home in windows networking center, are there more ports open? Do firewall rules change and how? Does that allow a hacker in my network to perform more actions than before?\n\nA: The main difference is this:\n\nComputers on a home network can belong to a homegroup. Network discovery is turned on for home networks, which allows you to see other computers and devices on the network and allows other network users to see your computer.\n\nSee: Windows: Choosing a network location\nIn Windows Firewall, there may be different rules for Home and Public networks so you could possibly have more open ports.\n"]], "corpus": "stackexchange"}
{"tstamp": 1722603009.1407, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722602987.0979, "finish": 1722603009.1407, "ip": "", "conv_id": "50076083067c400a9e743c4716c4b03a", "model_name": "embed-english-v3.0", "prompt": "leader of carthage in the first punic war", "output": [["leader of carthage in the first punic war", "History of Tunisia\n\nMago (6th century) was King of Carthage, Punic MLK or malik (Greek basileus), not merely a SFT or Suffet, which then was only a minor official. Mago as MLK was head of state and war leader; being MLK was also a religious office. His family was considered to possess a sacred quality. Mago's office was somewhat similar to that of Pharaoh, but although kept in a family it was not hereditary, it was limited by legal consent; however, the council of elders and the popular assembly are late institutions. Carthage was founded by the King of Tyre who had a royal monopoly on this trading venture. Accordingly, royal authority was the traditional source of power the MLK of Carthage possessed. Later, as other Phoenician ship companies entered the trading region, and so associated with the city-state, the MLK of Carthage had to keep order among a rich variety of powerful merchants in their negotiations over risky commerce across the seas. The office of MLK began to be transformed, yet it was not until the aristocrats of Carthage became landowners that a council of elders was institutionalized.\nPunic Wars with Rome\nThe emergence of the Roman Republic and its developing foreign interests led to sustained rivalry with Carthage for dominion of the western Mediterranean. As early as 509 BC Carthage and Rome had entered into treaty status, but eventually their opposing positions led to disagreement, alienation, and conflict.\nThe First Punic War (264–241 BC) started in Sicily. It developed into a naval war in which the Romans learned how to fight at sea and prevailed. Carthage lost Sardinia and its western portion of Sicily. Following their defeat, the Mercenary revolt threatened the social order of Carthage, which they survived under their opposing leaders Hanno II the Great, and Hamilcar Barca, father of Hannibal."]], "corpus": "wikipedia"}
{"tstamp": 1722603009.1407, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722602987.0979, "finish": 1722603009.1407, "ip": "", "conv_id": "d1316680113045cbbd0c7b783255ffb1", "model_name": "voyage-multilingual-2", "prompt": "leader of carthage in the first punic war", "output": [["leader of carthage in the first punic war", "Hannibal\n\nHannibal (; ; 247 – between 183 and 181 BC) was a Carthaginian general and statesman who commanded the forces of Carthage in their battle against the Roman Republic during the Second Punic War.\nHannibal's father, Hamilcar Barca, was a leading Carthaginian general during the First Punic War. His younger brothers were Mago and Hasdrubal; his brother-in-law was Hasdrubal the Fair, who commanded other Carthaginian armies. Hannibal lived during a period of great tension in the Mediterranean Basin, triggered by the emergence of the Roman Republic as a great power with its defeat of Carthage in the First Punic War. Revanchism prevailed in Carthage, symbolized by the pledge that Hannibal made to his father to \"never be a friend of Rome\"."]], "corpus": "wikipedia"}
{"tstamp": 1722603064.8421, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722603062.6018, "finish": 1722603064.8421, "ip": "", "conv_id": "25717cf1cbec48b384c1fec5855b198f", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "what is the difference between cell line and cell strain?", "output": [["what is the difference between cell line and cell strain?", "Ell Stackexchange\n\nQ: \"line\" & \"range\" What is the difference in meaning about the words (\"line\" and \"range\") used as follows:\n\"a product line\" and \"a product range\".\nIf there is no difference what is the most natural way of expressing a set of products?\n\nA: \"Product line\" is generally used to refer to a set of related products (especially the products manufactured by a particular company). For example, if your company makes pet food, one line of products might be cat food, and the individual products would be a 5-pound bag, a 10-pound bag, and a 25-pound bag; your dog food would be a different product line.\nA \"range of products\" (not a \"product range\") is what a store would generally have to offer, meaning a variety of different brands, types, and sizes of various products. \nIf you were a store owner, you might say something like \"we sell a wide range of products, including Company X's entire product line.\"\nThere may be other ways to express a set of products, depending on your context.\n"]], "corpus": "stackexchange"}
{"tstamp": 1722603064.8421, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722603062.6018, "finish": 1722603064.8421, "ip": "", "conv_id": "649fcdec1cdb4b52a73fee331229d7aa", "model_name": "GritLM/GritLM-7B", "prompt": "what is the difference between cell line and cell strain?", "output": [["what is the difference between cell line and cell strain?", "Stackoverflow Stackexchange\n\nQ: Difference between % and %% in ipython magic commands What difference does it make to use %timeit and %%timeit in ipython? Because when I read the documentation using ?%timeit and ?%%timeit it was the same documentation. So, what difference does adding % as prefix make? \n\nA: In general, one percentage sign is referred to as line magic and applies just to code that follows it on that same line. Two percentage signs is referred to as cell magic and applies to everything that follows in that entire cell.\nAs nicely put in The Data Science Handbook:\n\nMagic commands come in two flavors: line magics, which are denoted by\na single % prefix and operate on a single line of input, and cell\nmagics, which are denoted by a double %% prefix and operate on\nmultiple lines of input.\n\nSome magic commands, like timeit, can work as line magic or cell magic:\nUsed as line magic:\n%timeit y = 2 if x < 3 else 4\n\nUsed as cell magic:\n%%timeit\nif x < 3:\n y=2\nelse:\n y=4\n\n"]], "corpus": "stackexchange"}
{"tstamp": 1722603085.7153, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722603085.5678, "finish": 1722603085.7153, "ip": "", "conv_id": "5abb88dc0172449f95ccc69ac22a69f5", "model_name": "BM25", "prompt": "Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "output": [["Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "Title: Pyserini: An Easy-to-Use Python Toolkit to Support Replicable IR Research with Sparse and Dense Representations\n\nAbstract: Pyserini is an easy-to-use Python toolkit that supports replicable IR research by providing effective first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections. We aim to support, out of the box, the entire research lifecycle of efforts aimed at improving ranking with modern neural approaches. In particular, Pyserini supports sparse retrieval (e.g., BM25 scoring using bag-of-words representations), dense retrieval (e.g., nearest-neighbor search on transformer-encoded representations), as well as hybrid retrieval that integrates both approaches. This paper provides an overview of toolkit features and presents empirical results that illustrate its effectiveness on two popular ranking tasks. We also describe how our group has built a culture of replicability through shared norms and tools that enable rigorous automated testing."]], "corpus": "arxiv"}
{"tstamp": 1722603085.7153, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722603085.5678, "finish": 1722603085.7153, "ip": "", "conv_id": "e2c25721b48f4aa995a3944800d91a10", "model_name": "GritLM/GritLM-7B", "prompt": "Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "output": [["Toolkit for BM25 and transformer-encoded neural retrieval in IR with pre-built indexes and evaluation scripts", "Title: Pyserini: An Easy-to-Use Python Toolkit to Support Replicable IR Research with Sparse and Dense Representations\n\nAbstract: Pyserini is an easy-to-use Python toolkit that supports replicable IR research by providing effective first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections. We aim to support, out of the box, the entire research lifecycle of efforts aimed at improving ranking with modern neural approaches. In particular, Pyserini supports sparse retrieval (e.g., BM25 scoring using bag-of-words representations), dense retrieval (e.g., nearest-neighbor search on transformer-encoded representations), as well as hybrid retrieval that integrates both approaches. This paper provides an overview of toolkit features and presents empirical results that illustrate its effectiveness on two popular ranking tasks. We also describe how our group has built a culture of replicability through shared norms and tools that enable rigorous automated testing."]], "corpus": "arxiv"}
{"tstamp": 1722603094.8141, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722603094.1932, "finish": 1722603094.8141, "ip": "", "conv_id": "df75b88936a64102986c9e8ab355c203", "model_name": "text-embedding-004", "prompt": "A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "output": [["A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "Title: Code Llama: Open Foundation Models for Code\n\nAbstract: We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use."]], "corpus": "arxiv"}
{"tstamp": 1722603094.8141, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722603094.1932, "finish": 1722603094.8141, "ip": "", "conv_id": "2877253f26e543c6af39298808ca2d7b", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "output": [["A paper introducing Code Llama models and showcasing their performance on coding benchmarks like HumanEval and MBPP.", "Title: Physicists attempt to scale the ivory towers of finance\n\nAbstract: Physicists have recently begun doing research in finance, and even though this movement is less than five years old, interesting and useful contributions have already emerged. This article reviews these developments in four areas, including empirical statistical properties of prices, random-process models for price dynamics, agent-based modeling, and practical applications."]], "corpus": "arxiv"}