Muennighoff committed ec8cfe6 (verified) · 1 parent: 4d8c145

Scheduled Commit

data/retrieval_battle-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -1,2 +1,3 @@
  {"tstamp": 1722356200.3701, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "2719b38b30a44cb08b58c23cfaaac5fe", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Research improving code generation models using CommitPack and HumanEvalPack datasets.", "0_output": [["Research improving code generation models using CommitPack and HumanEvalPack datasets.", "Title: Self-Edit: Fault-Aware Code Editor for Code Generation\n\nAbstract: Large language models (LLMs) have demonstrated an impressive ability to generate codes on competitive programming tasks. However, with limited sample numbers, LLMs still suffer from poor accuracy. Inspired by the process of human programming, we propose a generate-and-edit approach named Self-Edit that utilizes execution results of the generated code from LLMs to improve the code quality on the competitive programming task. We execute the generated code on the example test case provided in the question and wrap execution results into a supplementary comment. Utilizing this comment as guidance, our fault-aware code editor is employed to correct errors in the generated code. We perform extensive evaluations across two competitive programming datasets with nine different LLMs. Compared to directly generating from LLMs, our approach can improve the average of pass@1 by 89\\% on APPS-dev, 31\\% on APPS-test, and 48\\% on HumanEval over nine popular code generation LLMs with parameter sizes ranging from 110M to 175B. Compared to other post-processing methods, our method demonstrates superior accuracy and efficiency."]], "0_corpus": "arxiv", "1_conv_id": "9f392c55ce0242e699bcfa03f3da8923", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Research improving code generation models using CommitPack and HumanEvalPack datasets.", "1_output": [["Research improving code generation models using CommitPack and HumanEvalPack datasets.", "Title: Enhancement of the superconducting transition temperature in La2-xSrxCuO4 bilayers: Role of pairing and phase stiffness\n\nAbstract: The superconducting transition temperature, Tc, of bilayers comprising underdoped La2-xSrxCuO4 films capped by a thin heavily overdoped metallic La1.65Sr0.35CuO4 layer, is found to increase with respect to Tc of the bare underdoped films. The highest Tc is achieved for x = 0.12, close to the 'anomalous' 1/8 doping level, and exceeds that of the optimally-doped bare film. Our data suggest that the enhanced superconductivity is confined to the interface between the layers. We attribute the effect to a combination of the high pairing scale in the underdoped layer with an enhanced phase stiffness induced by the overdoped film."]], "1_corpus": "arxiv"}
  {"tstamp": 1722356865.2174, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "b3ad6f75148142cca47ea52be709f00e", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "0_output": [["Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "Title: Diffusion in normal and critical transient chaos\n\nAbstract: In this paper we investigate deterministic diffusion in systems which are spatially extended in certain directions but are restricted in size and open in other directions, consequently particles can escape. We introduce besides the diffusion coefficient D on the chaotic repeller a coefficient ${\\hat D}$ which measures the broadening of the distribution of trajectories during the transient chaotic motion. Both coefficients are explicitly computed for one-dimensional models, and they are found to be different in most cases. We show furthermore that a jump develops in both of the coefficients for most of the initial distributions when we approach the critical borderline where the escape rate equals the Liapunov exponent of a periodic orbit."]], "0_corpus": "arxiv", "1_conv_id": "bf0d0a6ae2d04641ac0d1978086695e0", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "1_output": [["Research on enhancing Chinese word segmentation by leveraging Bidirectional Encoder Representations from Transformers to handle various annotation criteria across datasets.", "Title: BERT Meets Chinese Word Segmentation\n\nAbstract: Chinese word segmentation (CWS) is a fundamental task for Chinese language understanding. Recently, neural network-based models have attained superior performance in solving the in-domain CWS task. Last year, Bidirectional Encoder Representation from Transformers (BERT), a new language representation model, has been proposed as a backbone model for many natural language tasks and redefined the corresponding performance. The excellent performance of BERT motivates us to apply it to solve the CWS task. By conducting intensive experiments in the benchmark datasets from the second International Chinese Word Segmentation Bake-off, we obtain several keen observations. BERT can slightly improve the performance even when the datasets contain the issue of labeling inconsistency. When applying sufficiently learned features, Softmax, a simpler classifier, can attain the same performance as that of a more complicated classifier, e.g., Conditional Random Field (CRF). The performance of BERT usually increases as the model size increases. The features extracted by BERT can be also applied as good candidates for other neural network models."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722357180.6325, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "42394a8aba7e4beda1c9a6de11159583", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "How do people think about the current Iran president?", "0_output": [["How do people think about the current Iran president?", "President of Iran\n\nThe president of Iran () is the head of government of the Islamic Republic of Iran and the second highest-ranking official, after the Supreme Leader. The first election was held in 1980 and was won by Abulhassan Banisadr. Mohammad Mokhber currently serves as the acting President of Iran following the death of Ebrahim Raisi, the former president, on 19 May 2024. He is expected to be succeeded by Masoud Pezeshkian on July 30, who won the 2024 Iranian presidential election.\nHistory\nAfter the Iranian Revolution of 1979 and 1979 Iranian Islamic Republic referendum on March 29 and 30, the new government needed to craft a new constitution. Supreme Leader Ruhollah Khomeini, ordered an election for the Assembly of Experts, the body tasked with writing the constitution. The assembly presented the constitution on October 24, 1979, and Supreme Leader Ruhollah Khomeini and Prime Minister Mehdi Bazargan approved it."]], "0_corpus": "wikipedia", "1_conv_id": "52214b21d0284944897b648b75b8af91", "1_model_name": "voyage-multilingual-2", "1_prompt": "How do people think about the current Iran president?", "1_output": [["How do people think about the current Iran president?", "History of the Islamic Republic of Iran\n\n2009 election controversy Ahmadinejad's 2009 election victory was hotly disputed and marred by large protests that formed the \"greatest domestic challenge\" to the leadership of the Islamic Republic \"in 30 years\". Despite high turnout and large enthusiastic crowds for reformist opponent Mir-Hossein Mousavi, Ahmadinejad was officially declared to have won by a 2–1 margin against three opponents. Allegations of voting irregularities and protest by Mousavi his supporters were immediate and by 1 July 2009 1000 people had been arrested and 20 killed in street demonstrations. Supreme Leader Ali Khamenei and others Islamic officials blamed foreign powers for fomenting the protest. However, according to World Public Opinion (a United States poll), the protest does not mean Iran is in a \"pre-revolutionary\" situation as a WPO poll of Iranians taken in early September 2009 found high levels of satisfaction with the regime. 80% of the Iranians respondents said President Ahmadinejad was honest, 64% expressed a lot of confidence in him, and nine in ten said they were satisfied with Iran's system of government.\nPublic opinion\nAccording to the (U.S.) International Peace Institute's 2010-poll conducted in Persian by a representative sample of the Iranian people:\nIranians are divided on the government's performance.\nDissatisfied with the economy.\nWorry over sanctions and isolation.\nWant to focus on domestic affairs.\nFavor closer ties to the West.\nRising tensions sparked hostility toward the US, Europe, and U.N.\nFavor nuclear arms and do not want to back deals to halt enrichment.\nIndependent polls do not contradict official turnout of 2009 election, which gave around 60% of the vote to Ahmadinejad.\nRouhani Administration\nHassan Rouhani was elected as President of Iran on 12 June 2013 and took office on 3 August. He is known as a moderate left-leaner, supported by reformists in the election. 
He has open ideas in the area of economics and a high-level foreign policy, as he served as a diplomat before his election. He has moved quickly to engage in diplomatic negotiations with Western countries, seeking the lifting of crippling economic sanctions on oil exports in exchange for Iran's cooperation with UN treaties regarding the development of nuclear weapons. Economic conditions improved during Rouhani's first term and led to his reelection in 2017 with his reformist agenda."]], "1_corpus": "wikipedia"}
data/retrieval_individual-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl CHANGED
@@ -6,3 +6,5 @@
  {"tstamp": 1722356987.8364, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722356987.4831, "finish": 1722356987.8364, "ip": "", "conv_id": "8cc615e940ea46a9bc36a6f1db56f7cb", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "trump", "output": [["trump", "Trump\n\nTrump most commonly refers to:\nDonald Trump (born 1946), President of the United States from 2017 to 2021\nTrump (card games), any playing card given an ad-hoc high rank\nTrump may also refer to:\nArts and entertainment\nFilm\nTrump: The Kremlin Candidate?, a 2017 British television film\nTrump: What's the Deal?, an American documentary first screened in 1991\nGaming\nCourt piece or trumps, a trick-taking card game related to whist\nTop Trumps, a card game series\nTrump: The Game, a board game\nMajor Arcana, special cards in a cartomantic Tarot pack, similar to trumps in a normal Tarot pack"]], "corpus": "wikipedia"}
  {"tstamp": 1722357066.1502, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722357065.675, "finish": 1722357066.1502, "ip": "", "conv_id": "7c2e24197d4b477499b1f7763539f4a0", "model_name": "embed-english-v3.0", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
  {"tstamp": 1722357066.1502, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722357065.675, "finish": 1722357066.1502, "ip": "", "conv_id": "d0f4393f2f3b4313a37f5552e2b3c499", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+ {"tstamp": 1722357162.4058, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722357161.4757, "finish": 1722357162.4058, "ip": "", "conv_id": "42394a8aba7e4beda1c9a6de11159583", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "How do people think about the current Iran president?", "output": [["How do people think about the current Iran president?", "President of Iran\n\nThe president of Iran () is the head of government of the Islamic Republic of Iran and the second highest-ranking official, after the Supreme Leader. The first election was held in 1980 and was won by Abulhassan Banisadr. Mohammad Mokhber currently serves as the acting President of Iran following the death of Ebrahim Raisi, the former president, on 19 May 2024. He is expected to be succeeded by Masoud Pezeshkian on July 30, who won the 2024 Iranian presidential election.\nHistory\nAfter the Iranian Revolution of 1979 and 1979 Iranian Islamic Republic referendum on March 29 and 30, the new government needed to craft a new constitution. Supreme Leader Ruhollah Khomeini, ordered an election for the Assembly of Experts, the body tasked with writing the constitution. The assembly presented the constitution on October 24, 1979, and Supreme Leader Ruhollah Khomeini and Prime Minister Mehdi Bazargan approved it."]], "corpus": "wikipedia"}
+ {"tstamp": 1722357162.4058, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1722357161.4757, "finish": 1722357162.4058, "ip": "", "conv_id": "52214b21d0284944897b648b75b8af91", "model_name": "voyage-multilingual-2", "prompt": "How do people think about the current Iran president?", "output": [["How do people think about the current Iran president?", "History of the Islamic Republic of Iran\n\n2009 election controversy Ahmadinejad's 2009 election victory was hotly disputed and marred by large protests that formed the \"greatest domestic challenge\" to the leadership of the Islamic Republic \"in 30 years\". Despite high turnout and large enthusiastic crowds for reformist opponent Mir-Hossein Mousavi, Ahmadinejad was officially declared to have won by a 2–1 margin against three opponents. Allegations of voting irregularities and protest by Mousavi his supporters were immediate and by 1 July 2009 1000 people had been arrested and 20 killed in street demonstrations. Supreme Leader Ali Khamenei and others Islamic officials blamed foreign powers for fomenting the protest. However, according to World Public Opinion (a United States poll), the protest does not mean Iran is in a \"pre-revolutionary\" situation as a WPO poll of Iranians taken in early September 2009 found high levels of satisfaction with the regime. 80% of the Iranians respondents said President Ahmadinejad was honest, 64% expressed a lot of confidence in him, and nine in ten said they were satisfied with Iran's system of government.\nPublic opinion\nAccording to the (U.S.) International Peace Institute's 2010-poll conducted in Persian by a representative sample of the Iranian people:\nIranians are divided on the government's performance.\nDissatisfied with the economy.\nWorry over sanctions and isolation.\nWant to focus on domestic affairs.\nFavor closer ties to the West.\nRising tensions sparked hostility toward the US, Europe, and U.N.\nFavor nuclear arms and do not want to back deals to halt enrichment.\nIndependent polls do not contradict official turnout of 2009 election, which gave around 60% of the vote to Ahmadinejad.\nRouhani Administration\nHassan Rouhani was elected as President of Iran on 12 June 2013 and took office on 3 August. He is known as a moderate left-leaner, supported by reformists in the election. He has open ideas in the area of economics and a high-level foreign policy, as he served as a diplomat before his election. He has moved quickly to engage in diplomatic negotiations with Western countries, seeking the lifting of crippling economic sanctions on oil exports in exchange for Iran's cooperation with UN treaties regarding the development of nuclear weapons. Economic conditions improved during Rouhani's first term and led to his reelection in 2017 with his reformist agenda."]], "corpus": "wikipedia"}
data/retrieval_side_by_side-dd51f1d2-bf9d-4e42-9a72-5b30dd2bd338.jsonl ADDED
@@ -0,0 +1 @@
+ {"tstamp": 1722357077.601, "task_type": "retrieval", "type": "share", "models": ["### Model A: embed-english-v3.0", "### Model B: Salesforce/SFR-Embedding-2_R"], "ip": "", "0_conv_id": "7c2e24197d4b477499b1f7763539f4a0", "0_model_name": "embed-english-v3.0", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "d0f4393f2f3b4313a37f5552e2b3c499", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}