Datasets: mteb /
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Muennighoff committed · Commit 11a3d80 · verified · 1 Parent(s): c502479

Scheduled Commit
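Each file touched by this commit is newline-delimited JSON, which the `datasets` library listed in the header can read directly. A minimal loading sketch, assuming the three shards named in the diff below have been downloaded locally under the same relative paths:

```python
# Minimal sketch: load the JSONL shards touched by this commit.
# Assumption: the files were downloaded locally under the paths shown in the diff.
from datasets import load_dataset

data_files = {
    "clustering": "data/clustering_individual-112c1ce1-fe57-41e6-8919-4f1859b89f91.jsonl",
    "retrieval_battle": "data/retrieval_battle-112c1ce1-fe57-41e6-8919-4f1859b89f91.jsonl",
    "retrieval_individual": "data/retrieval_individual-112c1ce1-fe57-41e6-8919-4f1859b89f91.jsonl",
}
ds = load_dataset("json", data_files=data_files)
print(ds)
print(ds["clustering"][0]["model_name"], ds["clustering"][0]["ncluster"])
```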
data/clustering_individual-112c1ce1-fe57-41e6-8919-4f1859b89f91.jsonl CHANGED
@@ -14,3 +14,5 @@
  {"tstamp": 1723791712.4243, "task_type": "clustering", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723791712.3778, "finish": 1723791712.4243, "ip": "", "conv_id": "e94235b30d454e46aa7c8fa7dccc0969", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": ["Which test was devised to determine whether robots can think?"], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723813892.1947, "task_type": "clustering", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723813891.8415, "finish": 1723813892.1947, "ip": "", "conv_id": "9ea24c7d8d954660b1411999e1328c60", "model_name": "GritLM/GritLM-7B", "prompt": ["Shanghai", "Beijing", "Shenzhen", "Hangzhou", "Seattle", "Boston", "New York", "San Francisco"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1723813892.1947, "task_type": "clustering", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1723813891.8415, "finish": 1723813892.1947, "ip": "", "conv_id": "48ed724bf4ce40e0aea8b6da154ffa65", "model_name": "text-embedding-004", "prompt": ["Shanghai", "Beijing", "Shenzhen", "Hangzhou", "Seattle", "Boston", "New York", "San Francisco"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723883299.9429, "task_type": "clustering", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1723883299.3404, "finish": 1723883299.9429, "ip": "", "conv_id": "a798b2a8763545b89177d4b736b9dab8", "model_name": "text-embedding-3-large", "prompt": ["If someone online buys something off of my Amazon wish list, do they get my full name and address?", "Package \"In Transit\" over a week. No scheduled delivery date, no locations. What's up?", "Can Amazon gift cards replace a debit card?", "Homesick GWS star Cameron McCarthy on road to recovery", "Accidently ordered 2 of an item, how do I only return 1? For free?", "Need help ASAP, someone ordering in my account", "So who's everyone tipping for Round 1?"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1723883299.9429, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723883299.3404, "finish": 1723883299.9429, "ip": "", "conv_id": "6eea0345676343d7a4e63d9f4d191b4c", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["If someone online buys something off of my Amazon wish list, do they get my full name and address?", "Package \"In Transit\" over a week. No scheduled delivery date, no locations. What's up?", "Can Amazon gift cards replace a debit card?", "Homesick GWS star Cameron McCarthy on road to recovery", "Accidently ordered 2 of an item, how do I only return 1? For free?", "Need help ASAP, someone ordering in my account", "So who's everyone tipping for Round 1?"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-112c1ce1-fe57-41e6-8919-4f1859b89f91.jsonl CHANGED
@@ -21,3 +21,4 @@
  {"tstamp": 1723852475.759, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "cb5ab199c90b45ddae648541ff3e6b3e", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "what is the difference between would like to and want?", "0_output": [["what is the difference between would like to and want?", "Ell Stackexchange\n\nQ: I want to vs I need to vs I would like What is the difference between these expressions :\n\n\n*\n\n*I want to do something\n\n*I need to do something\n\n*I would like to do something\n\n\nSpecially when you want to ask for help (In doing something).\nand is \"I'm wanting ...\" right?\n\nA: Wants are often distinguished from needs\nA need is something that is necessary for survival (such as food and shelter), whereas a want is simply something that a person would like to have. \n'I would like to . . .' states your preferred or desired action, and should usually be followed by the infinitive. \n"]], "0_corpus": "stackexchange", "1_conv_id": "1d2ddedb11d44ae7bebd35c459288176", "1_model_name": "BM25", "1_prompt": "what is the difference between would like to and want?", "1_output": [["what is the difference between would like to and want?", "English Stackexchange\n\nQ: \"would love/like to\" usage? What is the difference between the sentences \n\"I would love/like to play cricket\" and \" I want to play cricket.\" Is there any difference in the moods?\n\nA: Yes, the moods are different:\n\n\n*\n\n*want is in the indicative mood.\n\n*would love is (arguably) in the conditional mood, or perhaps the optative.\n\n"]], "1_corpus": "stackexchange"}
  {"tstamp": 1723856432.2395, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "7b90074539c444e4ba449e49d01273fc", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Progress in artificial intelligence\n\nSub-human Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)\nObject recognition\nVarious robotics tasks that may require advances in robot hardware as well as AI, including:\nStable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)\nHumanoid soccer\nSpeech recognition: \"nearly equal to human performance\" (2017)\nExplainability. Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.\nMany tests of fluid intelligence (2020)\nBongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)\nVisual Commonsense Reasoning (VCR) benchmark (as of 2020)\nStock market prediction: Financial data collection and processing using Machine Learning algorithms\nAngry Birds video game, as of 2020\nVarious tasks that are difficult to solve without contextual knowledge, including:\nTranslation\nWord-sense disambiguation\nProposed tests of artificial intelligence\nIn his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.\nThe Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior."]], "0_corpus": "wikipedia", "1_conv_id": "5b6e98dcf9f84194b130aa9f3cd41dea", "1_model_name": "text-embedding-3-large", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. 
Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "1_corpus": "wikipedia"}
  {"tstamp": 1723857828.4685, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "2206a8dc418f46a8a7d69e93c0c48974", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "977facfb89fd45eb80d54bbd188f99d0", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. 
Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "1_corpus": "wikipedia"}
+ {"tstamp": 1723883105.2367, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "01acfcb83f37454c88be307e88eb34b7", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "37fa0e0c6dd04f1a829e5a024beb4eab", "1_model_name": "voyage-multilingual-2", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
data/retrieval_individual-112c1ce1-fe57-41e6-8919-4f1859b89f91.jsonl CHANGED
@@ -79,3 +79,7 @@
  {"tstamp": 1723857764.4907, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723857762.2489, "finish": 1723857764.4907, "ip": "", "conv_id": "977facfb89fd45eb80d54bbd188f99d0", "model_name": "GritLM/GritLM-7B", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
  {"tstamp": 1723857868.5087, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1723857868.295, "finish": 1723857868.5087, "ip": "", "conv_id": "772e5ccf839d4932b506dee2f323c702", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Do the Machines in \"The Matrix\" Adhere to the Three Laws of Robotics? In the \"real world\" of the Matrix, do the machines follow Asimov's Three Rules? Were they even programmed with them?\nEDIT: I realize that killing people goes against the 1st law, but by keeping people in the Matrix (which the machines view as a symbiosis) rather than simply killing off these \"annoying\" creatures, I wonder if the killings can be seen as \"preserving the greater good\" or something along those lines; and thus justify violating the laws.\n\nA: \nA robot may not injure a human being or, through inaction, allow a\n human being to come to harm.\nA robot must obey the orders given to it by human beings, except where\n such orders would conflict with the First Law.\nA robot must protect its own existence as long as such protection does\n not conflict with the First or Second Laws.\n\nNo, no, and no.\n"]], "corpus": "stackexchange"}
  {"tstamp": 1723857868.5087, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723857868.295, "finish": 1723857868.5087, "ip": "", "conv_id": "b66562ac7ec34cc986c69d31fbca225a", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Why did the machines in The Matrix keep humans around?", "output": [["Why did the machines in The Matrix keep humans around?", "Scifi Stackexchange\n\nQ: Why use full brained humans as batteries? \nPossible Duplicate:\nIs the basic premise of humans as a power source in The Matrix reasonable?\nWhy Did the Machines Even Bother With a Matrix? \n\nThe machines in The Matrix series use humans as batteries. That's OK. But why did the machines not alter their livestock such that the matrix itself would be unnecessary? \nConsider the recently proposed headless chicken farm. Something similar for the human batteries would seem to be an ideal solution for logical machines. No longer would they need to spend resources maintaining a virtual reality for their batteries. The energy, physical hardware and programs running the matrix could then be put to who knows what other uses.\n\nA: The explanation for the use of human brains would seem to be the fact that our complex thoughts lead to higher level of brain activity, thus resulting in exponentially more synapses firing (i.e. more electrical activity to harvest).\nThat makes human brains far more efficient than the brains of animals, even with the overhead of maintaining the matrix. Plus, that movie would have really sucked... dumb cows and chickens revolt against the machines! ;)\n"]], "corpus": "stackexchange"}
+ {"tstamp": 1723883074.2327, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723883073.824, "finish": 1723883074.2327, "ip": "", "conv_id": "01acfcb83f37454c88be307e88eb34b7", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+ {"tstamp": 1723883074.2327, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1723883073.824, "finish": 1723883074.2327, "ip": "", "conv_id": "37fa0e0c6dd04f1a829e5a024beb4eab", "model_name": "voyage-multilingual-2", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+ {"tstamp": 1723883180.4686, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1723883178.1666, "finish": 1723883180.4686, "ip": "", "conv_id": "2e68b521e7564bd7990008a8f17fd247", "model_name": "voyage-multilingual-2", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "corpus": "wikipedia"}
+ {"tstamp": 1723883180.4686, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1723883178.1666, "finish": 1723883180.4686, "ip": "", "conv_id": "d103e4545d534917b90867923cff524d", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}