Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Muennighoff committed (verified)
Commit 8b97b64 · 1 Parent(s): 2b97425

Scheduled Commit
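The three JSONL shards touched by this commit are plain JSON Lines logs, so they can be inspected directly. A minimal sketch, assuming the shards are checked out locally under data/ (field names are taken from the rows shown in the diffs below):

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Yield one dict per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

shard = Path("data/clustering_individual-23cee96f-6768-4f6e-9a62-131dedf90618.jsonl")
records = list(read_jsonl(shard))

# Each row logs one arena interaction: model, timestamps, prompt, and task settings.
for rec in records[-2:]:  # the two rows appended by this commit
    print(rec["model_name"], len(rec["prompt"]), "items ->", rec["ncluster"], "clusters")
```

The same shards can also be read with the Datasets library (`load_dataset("json", data_files=...)`) or with Dask, as listed above.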
data/clustering_individual-23cee96f-6768-4f6e-9a62-131dedf90618.jsonl CHANGED
@@ -8,3 +8,5 @@
  {"tstamp": 1732204528.2616, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1732204527.9322, "finish": 1732204528.2616, "ip": "", "conv_id": "1d70af55f8ee494b8c9f36a8d4624455", "model_name": "GritLM/GritLM-7B", "prompt": ["dome", "volcanic", "fold", "water filter", "camping stove", "sleeping bag", "backpack"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1732204586.3624, "task_type": "clustering", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1732204586.1085, "finish": 1732204586.3624, "ip": "", "conv_id": "713bf09423a4417a8720caf32aa794ec", "model_name": "embed-english-v3.0", "prompt": ["drought", "hurricane", "tornado", "fog", "Brachiosaurus", "Velociraptor", "Pteranodon", "Tyrannosaurus", "B", "O"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1732204586.3624, "task_type": "clustering", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1732204586.1085, "finish": 1732204586.3624, "ip": "", "conv_id": "a8d8b1a640da47dc9c09ad6051d61127", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": ["drought", "hurricane", "tornado", "fog", "Brachiosaurus", "Velociraptor", "Pteranodon", "Tyrannosaurus", "B", "O"], "ncluster": 3, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1732238191.6034, "task_type": "clustering", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1732238191.2591, "finish": 1732238191.6034, "ip": "", "conv_id": "0ff0ccb84b894574b425fc722a8543de", "model_name": "voyage-multilingual-2", "prompt": ["Gemini", "Capricorn", "Aquarius", "Virgo", "Cancer", "Scorpio", "Apple", "Huawei", "OnePlus", "Xiaomi", "fascism", "conservatism", "convex", "prismatic", "concave", "progressive"], "ncluster": 4, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1732238191.6034, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1732238191.2591, "finish": 1732238191.6034, "ip": "", "conv_id": "3ad3bd16adfb4cadb656f11618f4ccd7", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["Gemini", "Capricorn", "Aquarius", "Virgo", "Cancer", "Scorpio", "Apple", "Huawei", "OnePlus", "Xiaomi", "fascism", "conservatism", "convex", "prismatic", "concave", "progressive"], "ncluster": 4, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-23cee96f-6768-4f6e-9a62-131dedf90618.jsonl CHANGED
@@ -25,3 +25,5 @@
  {"tstamp": 1732229236.4019, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "4ea8c34aed2a42279cc3947c86c7dca6", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "2d2b2eac434f46fe8d041a72ad07f30f", "1_model_name": "text-embedding-004", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
  {"tstamp": 1732229280.7403, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c4fe433b9e924218ba5f0ce0e7a634c2", "0_model_name": "text-embedding-004", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "fd4a9e8e095e4736bacd093811311cd4", "1_model_name": "embed-english-v3.0", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
  {"tstamp": 1732237944.5002, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "4bc9b21ef7024d5f8971fe9f62139255", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "where are apt keys stored?", "0_output": [["where are apt keys stored?", "Askubuntu Stackexchange\n\nQ: Is there any way to find out which apt key belongs to which package? I have two public signing keys from the same developer in apt-key list. How to find out which is the key of the installed package?\n\nA: A package as such has no key. It are the software sources, including PPA's, that are signed.\nYou can determine from which repository a package was installed with the command:\napt-cache policy <package>\n\nIn turn, the command apt-key list can return the key of the PPA from which the package was installed.\n"]], "0_corpus": "stackexchange", "1_conv_id": "676b8743bb9d40ae9e4f6e6f493d981e", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "where are apt keys stored?", "1_output": [["where are apt keys stored?", "Askubuntu Stackexchange\n\nQ: Does apt have a directory where it stores all downloads? I was wondering if apt has a directory where it stores all downloaded deb files through it. Any insight?\n\nA: Yes, there is such directory. That's /var/cache/apt/archives/\nNote that there is a subtle difference between apt and apt-get in the fact that apt doesn't cache packages by default in 16.04. This can be amended by setting Binary::apt::APT::Keep-Downloaded-Packages to true in /etc/apt/apt.conf.d/01keep-debs file. As root user , do:\necho 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/01keep-debs\n\nSee also.\n"]], "1_corpus": "stackexchange"}
+ {"tstamp": 1732237997.1878, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "6fae525480fe4b50952babc0e44e5c1f", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "A comprehensive review summarizing the evolution from statistical to neural machine translation.", "0_output": [["A comprehensive review summarizing the evolution from statistical to neural machine translation.", "Title: Neural Machine Translation: A Review and Survey\n\nAbstract: The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we will trace back the origins of modern NMT architectures to word and sentence embeddings and earlier examples of the encoder-decoder network family. We will conclude with a survey of recent trends in the field."]], "0_corpus": "arxiv", "1_conv_id": "1deefe4bd4824a42812e1890c014d6d7", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "A comprehensive review summarizing the evolution from statistical to neural machine translation.", "1_output": [["A comprehensive review summarizing the evolution from statistical to neural machine translation.", "Title: Neural Machine Translation: A Review and Survey\n\nAbstract: The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we will trace back the origins of modern NMT architectures to word and sentence embeddings and earlier examples of the encoder-decoder network family. We will conclude with a survey of recent trends in the field."]], "1_corpus": "arxiv"}
+ {"tstamp": 1732238156.7988, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "038cf24d9a1149f1b2850ea14ce84bed", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "how much does it cost to charge a car battery at home?", "0_output": [["how much does it cost to charge a car battery at home?", "Electronics Stackexchange\n\nQ: Building a USB charger for camping I was looking into building a battery powered USB charger for camping. It occurred to me that car chargers are pre-built and run off 12 V. If I made a battery pack out of 8x 1.5 V batteries in series and wired them to the electronics in a car USB charger would that work as a simple cheap and effective solution?\nAlso can I calculate the charges from the pack by summing the mAh of the 8 batteries and dividing by the mAh of the unit to be charged or is the efficiency of the device non negligible?\n\nA: If you're car camping, I would recommend getting a solar panel to float charge your car battery.\nIf you were talking about alkaline C batteries, I would recommend 10 NiMH AA batteries instead. Lighter, about the same capacity & operates at a relatively steady 1.2V. Voltage range of 12-14V (NiMH) vs 10-12V (alkaline).\nNowadays you can also look into LiIon usb chargers that are lighter than NiMH. Most can be charged with a solar panel too!\n"]], "0_corpus": "stackexchange", "1_conv_id": "c93dc52395e940cf911fb1125ac96c42", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "how much does it cost to charge a car battery at home?", "1_output": [["how much does it cost to charge a car battery at home?", "Electronics Stackexchange\n\nQ: Building a USB charger for camping I was looking into building a battery powered USB charger for camping. It occurred to me that car chargers are pre-built and run off 12 V. If I made a battery pack out of 8x 1.5 V batteries in series and wired them to the electronics in a car USB charger would that work as a simple cheap and effective solution?\nAlso can I calculate the charges from the pack by summing the mAh of the 8 batteries and dividing by the mAh of the unit to be charged or is the efficiency of the device non negligible?\n\nA: If you're car camping, I would recommend getting a solar panel to float charge your car battery.\nIf you were talking about alkaline C batteries, I would recommend 10 NiMH AA batteries instead. Lighter, about the same capacity & operates at a relatively steady 1.2V. Voltage range of 12-14V (NiMH) vs 10-12V (alkaline).\nNowadays you can also look into LiIon usb chargers that are lighter than NiMH. Most can be charged with a solar panel too!\n"]], "1_corpus": "stackexchange"}
data/retrieval_individual-23cee96f-6768-4f6e-9a62-131dedf90618.jsonl CHANGED
@@ -88,3 +88,5 @@
  {"tstamp": 1732237902.6895, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1732237902.3837, "finish": 1732237902.6895, "ip": "", "conv_id": "676b8743bb9d40ae9e4f6e6f493d981e", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "where are apt keys stored?", "output": [["where are apt keys stored?", "Askubuntu Stackexchange\n\nQ: Does apt have a directory where it stores all downloads? I was wondering if apt has a directory where it stores all downloaded deb files through it. Any insight?\n\nA: Yes, there is such directory. That's /var/cache/apt/archives/\nNote that there is a subtle difference between apt and apt-get in the fact that apt doesn't cache packages by default in 16.04. This can be amended by setting Binary::apt::APT::Keep-Downloaded-Packages to true in /etc/apt/apt.conf.d/01keep-debs file. As root user , do:\necho 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/01keep-debs\n\nSee also.\n"]], "corpus": "stackexchange"}
  {"tstamp": 1732237982.1512, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1732237981.9069, "finish": 1732237982.1512, "ip": "", "conv_id": "6fae525480fe4b50952babc0e44e5c1f", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A comprehensive review summarizing the evolution from statistical to neural machine translation.", "output": [["A comprehensive review summarizing the evolution from statistical to neural machine translation.", "Title: Neural Machine Translation: A Review and Survey\n\nAbstract: The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we will trace back the origins of modern NMT architectures to word and sentence embeddings and earlier examples of the encoder-decoder network family. We will conclude with a survey of recent trends in the field."]], "corpus": "arxiv"}
  {"tstamp": 1732237982.1512, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1732237981.9069, "finish": 1732237982.1512, "ip": "", "conv_id": "1deefe4bd4824a42812e1890c014d6d7", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "A comprehensive review summarizing the evolution from statistical to neural machine translation.", "output": [["A comprehensive review summarizing the evolution from statistical to neural machine translation.", "Title: Neural Machine Translation: A Review and Survey\n\nAbstract: The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we will trace back the origins of modern NMT architectures to word and sentence embeddings and earlier examples of the encoder-decoder network family. We will conclude with a survey of recent trends in the field."]], "corpus": "arxiv"}
+ {"tstamp": 1732238135.9916, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1732238135.7132, "finish": 1732238135.9916, "ip": "", "conv_id": "038cf24d9a1149f1b2850ea14ce84bed", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "how much does it cost to charge a car battery at home?", "output": [["how much does it cost to charge a car battery at home?", "Electronics Stackexchange\n\nQ: Building a USB charger for camping I was looking into building a battery powered USB charger for camping. It occurred to me that car chargers are pre-built and run off 12 V. If I made a battery pack out of 8x 1.5 V batteries in series and wired them to the electronics in a car USB charger would that work as a simple cheap and effective solution?\nAlso can I calculate the charges from the pack by summing the mAh of the 8 batteries and dividing by the mAh of the unit to be charged or is the efficiency of the device non negligible?\n\nA: If you're car camping, I would recommend getting a solar panel to float charge your car battery.\nIf you were talking about alkaline C batteries, I would recommend 10 NiMH AA batteries instead. Lighter, about the same capacity & operates at a relatively steady 1.2V. Voltage range of 12-14V (NiMH) vs 10-12V (alkaline).\nNowadays you can also look into LiIon usb chargers that are lighter than NiMH. Most can be charged with a solar panel too!\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1732238135.9916, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1732238135.7132, "finish": 1732238135.9916, "ip": "", "conv_id": "c93dc52395e940cf911fb1125ac96c42", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "how much does it cost to charge a car battery at home?", "output": [["how much does it cost to charge a car battery at home?", "Electronics Stackexchange\n\nQ: Building a USB charger for camping I was looking into building a battery powered USB charger for camping. It occurred to me that car chargers are pre-built and run off 12 V. If I made a battery pack out of 8x 1.5 V batteries in series and wired them to the electronics in a car USB charger would that work as a simple cheap and effective solution?\nAlso can I calculate the charges from the pack by summing the mAh of the 8 batteries and dividing by the mAh of the unit to be charged or is the efficiency of the device non negligible?\n\nA: If you're car camping, I would recommend getting a solar panel to float charge your car battery.\nIf you were talking about alkaline C batteries, I would recommend 10 NiMH AA batteries instead. Lighter, about the same capacity & operates at a relatively steady 1.2V. Voltage range of 12-14V (NiMH) vs 10-12V (alkaline).\nNowadays you can also look into LiIon usb chargers that are lighter than NiMH. Most can be charged with a solar panel too!\n"]], "corpus": "stackexchange"}