Datasets: mteb/
Modalities: Tabular, Text
Formats: json
Libraries: Datasets, Dask
Commit f018399 (verified) · 1 parent: 260c989
Committed by Muennighoff
Commit message: Scheduled Commit
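The shards changed below are plain JSONL, one record per line. A minimal loading sketch using the Datasets library listed above, assuming a local checkout of this repository (the full repository id is truncated in the page header, so the generic json builder is used instead of a repo id):

# Load one shard of this dataset with the Hugging Face Datasets library.
# Assumption: the file below exists locally, e.g. in a git clone of this repo.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="data/clustering_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl",
    split="train",
)
# Each row is one arena event, e.g. ("clustering", "rightvote").
print(ds[0]["task_type"], ds[0]["type"])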
data/clustering_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -1 +1,2 @@
  {"tstamp": 1722367036.7553, "task_type": "clustering", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "1e22b57c96ae43d8ba94cae6ac46e428", "0_model_name": "embed-english-v3.0", "0_prompt": ["A", "O", "caldera", "composite", "shield", "elephant", "giraffe", "penguin", "tiger", "lion", "dolphin", "square", "rectangle", "circle", "hexagon", "triangle", "octagon", "Ursa Major", "Cygnus", "Cassiopeia", "Leo", "Scorpius", "Taurus", "Orion"], "0_ncluster": 5, "0_output": "", "0_ndim": "2D (press for 3D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "1d7ff0ec14534a8a8e417bc5e41188f7", "1_model_name": "GritLM/GritLM-7B", "1_prompt": ["A", "O", "caldera", "composite", "shield", "elephant", "giraffe", "penguin", "tiger", "lion", "dolphin", "square", "rectangle", "circle", "hexagon", "triangle", "octagon", "Ursa Major", "Cygnus", "Cassiopeia", "Leo", "Scorpius", "Taurus", "Orion"], "1_ncluster": 5, "1_output": "", "1_ndim": "2D (press for 3D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
+ {"tstamp": 1722370311.0017, "task_type": "clustering", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "a98d588c2a494ea7857ef6e48f0f7bc1", "0_model_name": "text-embedding-004", "0_prompt": ["macchiato", "latte", "mocha", "cappuccino", "cold brew", "Norse", "Roman", "Egyptian", "Chinese", "Greek"], "0_ncluster": 2, "0_output": "", "0_ndim": "3D (press for 2D)", "0_dim_method": "PCA", "0_clustering_method": "KMeans", "1_conv_id": "97b4c19ddeba43febef5773f09bdb49c", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": ["macchiato", "latte", "mocha", "cappuccino", "cold brew", "Norse", "Roman", "Egyptian", "Chinese", "Greek"], "1_ncluster": 2, "1_output": "", "1_ndim": "3D (press for 2D)", "1_dim_method": "PCA", "1_clustering_method": "KMeans"}
data/clustering_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -40,3 +40,7 @@
 {"tstamp": 1722369158.5386, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722369158.4888, "finish": 1722369158.5386, "ip": "", "conv_id": "655d241a42174f3bad02c05df2f3e727", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["Randy Kehler (July 16, 1944 – July 21, 2024) was an American pacifist, tax resister, and social justice advocate. Kehler objected to America's involvement in the Vietnam War and refused to cooperate with the draft. He is also known for his decision, along with his wife Betsy Corner, to stop paying federal income taxes in protest of war and military spending, a decision that led to the Internal Revenue Service (IRS) seizing their house in 1989.\n\nKehler was involved in several anti-war organizations in the 1960s and 1970s, and in the early 1980s was a leader in the movement against nuclear weapons.[1]\n\nEarly life and education\nKehler was born on July 16, 1944, in Bronxville, New York, and was raised in Scarsdale.[1] He attended Phillips Exeter Academy and graduated from Harvard University in 1967 with a degree in government.[1] While at Harvard, Kehler became involved with the Harlem chapter of Congress of Racial Equality (CORE).[1] Kehler has credited Martin Luther King Jr.'s \"I Have a Dream\" during the March on Washington for Jobs and Freedom in 1963 with shaping his interest in radical politics.[1]\n\nOpposition to the Vietnam War\nIn 1969, during the Vietnam War, Kehler returned his draft card to the Selective Service System. He refused to seek exemption as a conscientious objector, because he felt that doing so would be a form of cooperation with the US government's actions in Vietnam. After being called for induction and refusing to submit, he was charged with a federal crime. Found guilty at trial, Kehler served twenty-two months of a two-year sentence.[1]\n\nA 2020 documentary film, The Boys Who Said No!, features footage of and an interview with Kehler as one of several Vietnam-era draft resisters discussing that form of anti-war activism.[2]\n\nDaniel Ellsberg's exposure to Kehler in August 1969 (as Kehler was preparing to submit to his sentence) at the 13th Triennial Meeting of the War Resisters International, held at Haverford College, was a pivotal event in Ellsberg's decision to copy and release the Pentagon Papers.[3]\n\nAnti-nuclear activism\nKehler became active in anti-nuclear proliferation and nuclear disarmament movements while leading a grassroots campaign in western Massachusetts to support the concept of a nuclear freeze. His efforts led to his meeting fellow activist Randy Forsberg, who was leading a similar effort at a national level.[4] From 1981 through 1984, Kehler served as Executive Director of the National Nuclear Weapons Freeze Campaign.[5]\n\nKehler advocated against the use of nuclear power and led campaigns for the closure of nuclear power plants, including Vermont Yankee in Vernon, Vermont.[6][7]\n\nResistance of federal income tax\nFrom 1977 onward, Kehler and his wife Betsy Corner refused to pay their federal income taxes in protest of war and military expenditures; they continued to pay their state and local taxes, and donated the money they owed in federal income taxes to charity.[8] This led to the seizure of their house in Colrain, Massachusetts by the IRS in 1989. The home was subsequently purchased by the federal government. Kehler and Corner, along with supporters from the local community, struggled for years with the government and with another couple who attempted to purchase and move in to the home. The events were documented in the 1997 documentary film An Act of Conscience.[9][10][11]\n\nKehler died at his home in Shelburne Falls, Massachusetts, on July 21, 2024, at the age of 80.[5]\n\n"], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
 {"tstamp": 1722369179.7486, "task_type": "clustering", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722369168.9576, "finish": 1722369179.7486, "ip": "", "conv_id": "a1c84e4acb3d4ef8a7c5eebe9e5d4e4e", "model_name": "text-embedding-004", "prompt": ["Randy Kehler (July 16, 1944 – July 21, 2024) was an American pacifist, tax resister, and social justice advocate. Kehler objected to America's involvement in the Vietnam War and refused to cooperate with the draft. He is also known for his decision, along with his wife Betsy Corner, to stop paying federal income taxes in protest of war and military spending, a decision that led to the Internal Revenue Service (IRS) seizing their house in 1989.\n\nKehler was involved in several anti-war organizations in the 1960s and 1970s, and in the early 1980s was a leader in the movement against nuclear weapons.[1]\n\nEarly life and education\nKehler was born on July 16, 1944, in Bronxville, New York, and was raised in Scarsdale.[1] He attended Phillips Exeter Academy and graduated from Harvard University in 1967 with a degree in government.[1] While at Harvard, Kehler became involved with the Harlem chapter of Congress of Racial Equality (CORE).[1] Kehler has credited Martin Luther King Jr.'s \"I Have a Dream\" during the March on Washington for Jobs and Freedom in 1963 with shaping his interest in radical politics.[1]\n\nOpposition to the Vietnam War\nIn 1969, during the Vietnam War, Kehler returned his draft card to the Selective Service System. He refused to seek exemption as a conscientious objector, because he felt that doing so would be a form of cooperation with the US government's actions in Vietnam. After being called for induction and refusing to submit, he was charged with a federal crime. Found guilty at trial, Kehler served twenty-two months of a two-year sentence.[1]\n\nA 2020 documentary film, The Boys Who Said No!, features footage of and an interview with Kehler as one of several Vietnam-era draft resisters discussing that form of anti-war activism.[2]\n\nDaniel Ellsberg's exposure to Kehler in August 1969 (as Kehler was preparing to submit to his sentence) at the 13th Triennial Meeting of the War Resisters International, held at Haverford College, was a pivotal event in Ellsberg's decision to copy and release the Pentagon Papers.[3]\n\nAnti-nuclear activism\nKehler became active in anti-nuclear proliferation and nuclear disarmament movements while leading a grassroots campaign in western Massachusetts to support the concept of a nuclear freeze. His efforts led to his meeting fellow activist Randy Forsberg, who was leading a similar effort at a national level.[4] From 1981 through 1984, Kehler served as Executive Director of the National Nuclear Weapons Freeze Campaign.[5]\n\nKehler advocated against the use of nuclear power and led campaigns for the closure of nuclear power plants, including Vermont Yankee in Vernon, Vermont.[6][7]\n\nResistance of federal income tax\nFrom 1977 onward, Kehler and his wife Betsy Corner refused to pay their federal income taxes in protest of war and military expenditures; they continued to pay their state and local taxes, and donated the money they owed in federal income taxes to charity.[8] This led to the seizure of their house in Colrain, Massachusetts by the IRS in 1989. The home was subsequently purchased by the federal government. Kehler and Corner, along with supporters from the local community, struggled for years with the government and with another couple who attempted to purchase and move in to the home. The events were documented in the 1997 documentary film An Act of Conscience.[9][10][11]\n\nKehler died at his home in Shelburne Falls, Massachusetts, on July 21, 2024, at the age of 80.[5]\n\n", "convex", "plane", "parabolic", "concave", "wool", "denim", "linen", "question mark", "colon", "comma", "exclamation point", "period", "hyphen", "jiu-jitsu", "muay thai", "kung fu", "Chinese", "French"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
 {"tstamp": 1722369179.7486, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722369168.9576, "finish": 1722369179.7486, "ip": "", "conv_id": "655d241a42174f3bad02c05df2f3e727", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["Randy Kehler (July 16, 1944 – July 21, 2024) was an American pacifist, tax resister, and social justice advocate. Kehler objected to America's involvement in the Vietnam War and refused to cooperate with the draft. He is also known for his decision, along with his wife Betsy Corner, to stop paying federal income taxes in protest of war and military spending, a decision that led to the Internal Revenue Service (IRS) seizing their house in 1989.\n\nKehler was involved in several anti-war organizations in the 1960s and 1970s, and in the early 1980s was a leader in the movement against nuclear weapons.[1]\n\nEarly life and education\nKehler was born on July 16, 1944, in Bronxville, New York, and was raised in Scarsdale.[1] He attended Phillips Exeter Academy and graduated from Harvard University in 1967 with a degree in government.[1] While at Harvard, Kehler became involved with the Harlem chapter of Congress of Racial Equality (CORE).[1] Kehler has credited Martin Luther King Jr.'s \"I Have a Dream\" during the March on Washington for Jobs and Freedom in 1963 with shaping his interest in radical politics.[1]\n\nOpposition to the Vietnam War\nIn 1969, during the Vietnam War, Kehler returned his draft card to the Selective Service System. He refused to seek exemption as a conscientious objector, because he felt that doing so would be a form of cooperation with the US government's actions in Vietnam. After being called for induction and refusing to submit, he was charged with a federal crime. Found guilty at trial, Kehler served twenty-two months of a two-year sentence.[1]\n\nA 2020 documentary film, The Boys Who Said No!, features footage of and an interview with Kehler as one of several Vietnam-era draft resisters discussing that form of anti-war activism.[2]\n\nDaniel Ellsberg's exposure to Kehler in August 1969 (as Kehler was preparing to submit to his sentence) at the 13th Triennial Meeting of the War Resisters International, held at Haverford College, was a pivotal event in Ellsberg's decision to copy and release the Pentagon Papers.[3]\n\nAnti-nuclear activism\nKehler became active in anti-nuclear proliferation and nuclear disarmament movements while leading a grassroots campaign in western Massachusetts to support the concept of a nuclear freeze. His efforts led to his meeting fellow activist Randy Forsberg, who was leading a similar effort at a national level.[4] From 1981 through 1984, Kehler served as Executive Director of the National Nuclear Weapons Freeze Campaign.[5]\n\nKehler advocated against the use of nuclear power and led campaigns for the closure of nuclear power plants, including Vermont Yankee in Vernon, Vermont.[6][7]\n\nResistance of federal income tax\nFrom 1977 onward, Kehler and his wife Betsy Corner refused to pay their federal income taxes in protest of war and military expenditures; they continued to pay their state and local taxes, and donated the money they owed in federal income taxes to charity.[8] This led to the seizure of their house in Colrain, Massachusetts by the IRS in 1989. The home was subsequently purchased by the federal government. Kehler and Corner, along with supporters from the local community, struggled for years with the government and with another couple who attempted to purchase and move in to the home. The events were documented in the 1997 documentary film An Act of Conscience.[9][10][11]\n\nKehler died at his home in Shelburne Falls, Massachusetts, on July 21, 2024, at the age of 80.[5]\n\n", "convex", "plane", "parabolic", "concave", "wool", "denim", "linen", "question mark", "colon", "comma", "exclamation point", "period", "hyphen", "jiu-jitsu", "muay thai", "kung fu", "Chinese", "French"], "ncluster": 5, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1722370161.0728, "task_type": "clustering", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722370160.2925, "finish": 1722370161.0728, "ip": "", "conv_id": "a98d588c2a494ea7857ef6e48f0f7bc1", "model_name": "text-embedding-004", "prompt": ["macchiato", "latte", "mocha", "cappuccino", "cold brew", "Norse", "Roman", "Egyptian", "Chinese", "Greek"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1722370161.0728, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722370160.2925, "finish": 1722370161.0728, "ip": "", "conv_id": "97b4c19ddeba43febef5773f09bdb49c", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["macchiato", "latte", "mocha", "cappuccino", "cold brew", "Norse", "Roman", "Egyptian", "Chinese", "Greek"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1722370399.6837, "task_type": "clustering", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722370399.4228, "finish": 1722370399.6837, "ip": "", "conv_id": "404a5c72f1044a75a6df8ec3f1f787da", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": ["haiku", "free verse", "sonnet", "limerick", "epic", "ballad", "ode", "oolong", "chamomile"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1722370399.6837, "task_type": "clustering", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722370399.4228, "finish": 1722370399.6837, "ip": "", "conv_id": "bfda17517c764bb5b572d44a6c558bfb", "model_name": "embed-english-v3.0", "prompt": ["haiku", "free verse", "sonnet", "limerick", "epic", "ballad", "ode", "oolong", "chamomile"], "ncluster": 2, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
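The battle and individual shards link up through conversation ids: the battle row added to data/clustering_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl above carries 0_conv_id a98d588c2a494ea7857ef6e48f0f7bc1 and 1_conv_id 97b4c19ddeba43febef5773f09bdb49c, matching the conv_id fields of the two individual rows added here. A minimal sketch of joining the two files on that key, assuming a local checkout of this repository:

# Join battle votes to the per-model conversation records via conv_id.
import json

individual_path = "data/clustering_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl"
battle_path = "data/clustering_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl"

# Index individual records by their conversation id.
by_conv = {}
with open(individual_path) as f:
    for line in f:
        rec = json.loads(line)
        by_conv[rec["conv_id"]] = rec

# For each battle, look up both sides and report the verdict.
with open(battle_path) as f:
    for line in f:
        battle = json.loads(line)
        left = by_conv.get(battle["0_conv_id"])
        right = by_conv.get(battle["1_conv_id"])
        if left and right:
            print(battle["type"], left["model_name"], "vs", right["model_name"])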
data/retrieval_battle-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -23,3 +23,4 @@
  {"tstamp": 1722368932.7183, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "ea5e400b4db942b7adbea39fd7263a77", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Why does it do this ? if - __name__ == '__main__' \nDuplicate of:\nWhat does if __name__== \"__main__\" do? \n\nConsider this code:\nif __name__ == '__main__':\n import pdb\n pdb.run(\"interact()\\n\")\n\nWhat does the following line mean?\nif(__name__=='__main__')\n\nI fainted.\n\nA: That is a check to see if you are directly running the script or if it is included in a library.\nWhen you run a python script like this:\npython myScript.py\n\nIt sends a parameter, telling you to run the programs first method, which is widely called \"main\", so when __name__ is __main__ you know that the program was executed from a command line or double clicked.\n"]], "0_corpus": "stackexchange", "1_conv_id": "ae5db96d81e84d8b8daa0d3142568b97", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: Reading h264 stream in Python I am trying to read a video stream from a Parrot Bebop 2 drone. \nThe video stream is written as \"socket\" to a file as H264 stream.\n$ ffmpeg -i [STREAM]\nInput #0, h264, from 'stream_h264':\n Duration: N/A, bitrate: N/A\n Stream #0:0: Video: h264 (Constrained Baseline), 1 reference frame, yuv420p(progressive, left), 1280x720, 23.98 fps, 23.98 tbr, 1200k tbn, 47.95 tbc\n\nReading the video stream in MPlayer is not a problem using the parameters below. Playing it using VLC or ffmpeg should also not be too hard. For MPlayer the following works:\nmplayer -fs -demuxer h264es -benchmark stream_h264\n\nThis plays the stream in high-res. However my goal is to perform image processing on the frames using Python (mostly OpenCV). Therefore, I would like to read the frames into NumPy arrays. I have already considered using cv2.VideoCapture but this does not seem to work for my stream. Other (somewhat easy) to use options I am not aware of, therefore my question is whether someone recommend me how to read the video frames in Python? \nAll recommendations are more than welcome!\n"]], "1_corpus": "stackexchange"}
  {"tstamp": 1722369409.8742, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "2fd990202605484187eda3a456338321", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "what is inputs?", "0_output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "0_corpus": "wikipedia", "1_conv_id": "65e3bf41b47045d3b59a334b4eed1be8", "1_model_name": "embed-english-v3.0", "1_prompt": "what is inputs?", "1_output": [["what is inputs?", "Input/output\n\nIn computing, input/output (I/O, i/o, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, such as another computer system, peripherals, or a human operator. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to \"perform I/O\" is to perform an input or output operation.\nare the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. Any interaction with the system by an interactor is an input and the reaction the system responds is called the output."]], "1_corpus": "wikipedia"}
 {"tstamp": 1722369669.9495, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "8e512b9a9fd44aa18dd3e5eb0ee834f7", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "I am looking for recent work on generating text with personalities and style", "0_output": [["I am looking for recent work on generating text with personalities and style", "Title: Cloning Ideology and Style using Deep Learning\n\nAbstract: Text generation tasks have gotten the attention of researchers in the last few years because of their applications on a large scale.In the past, many researchers focused on task-based text generations.Our research focuses on text generation based on the ideology and style of a specific author, and text generation on a topic that was not written by the same author in the past.Our trained model requires an input prompt containing initial few words of text to produce a few paragraphs of text based on the ideology and style of the author on which the model is trained.Our methodology to accomplish this task is based on Bi-LSTM.The Bi-LSTM model is used to make predictions at the character level, during the training corpus of a specific author is used along with the ground truth corpus.A pre-trained model is used to identify the sentences of ground truth having contradiction with the author's corpus to make our language model inclined.During training, we have achieved a perplexity score of 2.23 at the character level. The experiments show a perplexity score of around 3 over the test dataset."]], "0_corpus": "arxiv", "1_conv_id": "c97244ec7bbd46318ccc6f76171f91fd", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "I am looking for recent work on generating text with personalities and style", "1_output": [["I am looking for recent work on generating text with personalities and style", "Title: Text-to-Image Synthesis for Any Artistic Styles: Advancements in Personalized Artistic Image Generation via Subdivision and Dual Binding\n\nAbstract: Recent advancements in text-to-image models, such as Stable Diffusion, have demonstrated their ability to synthesize visual images through natural language prompts. One approach of personalizing text-to-image models, exemplified by DreamBooth, fine-tunes the pre-trained model by binding unique text identifiers with a few images of a specific subject. Although existing fine-tuning methods have demonstrated competence in rendering images according to the styles of famous painters, it is still challenging to learn to produce images encapsulating distinct art styles due to abstract and broad visual perceptions of stylistic attributes such as lines, shapes, textures, and colors. In this paper, we introduce a new method, Single-StyleForge, for personalization. It fine-tunes pre-trained text-to-image diffusion models to generate diverse images in specified styles from text prompts. By using around 15-20 images of the target style, the approach establishes a foundational binding of a unique token identifier with a broad range of the target style. It also utilizes auxiliary images to strengthen this binding, resulting in offering specific guidance on representing elements such as persons in a target style-consistent manner. In addition, we present ways to improve the quality of style and text-image alignment through a method called Multi-StyleForge, which inherits the strategy used in StyleForge and learns tokens in multiple. Experimental evaluation conducted on six distinct artistic styles demonstrates substantial improvements in both the quality of generated images and the perceptual fidelity metrics, such as FID, KID, and CLIP scores."]], "1_corpus": "arxiv"}
+ {"tstamp": 1722370295.4157, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "cd70dab8da344dee861341fe526fc33f", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "0_output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "0_corpus": "arxiv", "1_conv_id": "44c3418ddda44ffcbe5214c14c1a99f6", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "1_output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "1_corpus": "arxiv"}
data/retrieval_individual-8d38bb89-1e1a-471a-8b9e-35c1f784690e.jsonl CHANGED
@@ -102,3 +102,5 @@
  {"tstamp": 1722369688.8663, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722369688.4367, "finish": 1722369688.8663, "ip": "", "conv_id": "e15bbba2976d4d3e8c193c49e5b6170e", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Donut", "output": [["Donut", "Doughnut\n\nA doughnut or donut () is a type of pastry made from leavened fried dough. It is popular in many countries and is prepared in various forms as a sweet snack that can be homemade or purchased in bakeries, supermarkets, food stalls, and franchised specialty vendors. Doughnut is the traditional spelling, while donut is the simplified version; the terms are used interchangeably.\nDoughnuts are usually deep fried from a flour dough, but other types of batters can also be used. Various toppings and flavors are used for different types, such as sugar, chocolate or maple glazing. Doughnuts may also include water, leavening, eggs, milk, sugar, oil, shortening, and natural or artificial flavors."]], "corpus": "wikipedia"}
  {"tstamp": 1722369712.2347, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722369711.5356, "finish": 1722369712.2347, "ip": "", "conv_id": "5419d1689a664f7aaa8419ac2c4cbdc6", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "What are the landmark work on generating image captions with sentiments?", "output": [["What are the landmark work on generating image captions with sentiments?", "Title: SentiCap: Generating Image Descriptions with Sentiments\n\nAbstract: The recent progress on image recognition and language modeling is making automatic description of image content a reality. However, stylized, non-factual aspects of the written description are missing from the current systems. One such style is descriptions with emotions, which is commonplace in everyday communication, and influences decision-making and interpersonal relationships. We design a system to describe an image with emotions, and present a model that automatically generates captions with positive or negative sentiments. We propose a novel switching recurrent neural network with word-level regularization, which is able to produce emotional image captions using only 2000+ training sentences containing sentiments. We evaluate the captions with different automatic and crowd-sourcing metrics. Our model compares favourably in common quality metrics for image captioning. In 84.6% of cases the generated positive captions were judged as being at least as descriptive as the factual captions. Of these positive captions 88% were confirmed by the crowd-sourced workers as having the appropriate sentiment."]], "corpus": "arxiv"}
  {"tstamp": 1722369712.2347, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722369711.5356, "finish": 1722369712.2347, "ip": "", "conv_id": "057320225ef24795adb273a944b2cc91", "model_name": "text-embedding-004", "prompt": "What are the landmark work on generating image captions with sentiments?", "output": [["What are the landmark work on generating image captions with sentiments?", "Title: Image Captioning at Will: A Versatile Scheme for Effectively Injecting Sentiments into Image Descriptions\n\nAbstract: Automatic image captioning has recently approached human-level performance due to the latest advances in computer vision and natural language understanding. However, most of the current models can only generate plain factual descriptions about the content of a given image. However, for human beings, image caption writing is quite flexible and diverse, where additional language dimensions, such as emotion, humor and language styles, are often incorporated to produce diverse, emotional, or appealing captions. In particular, we are interested in generating sentiment-conveying image descriptions, which has received little attention. The main challenge is how to effectively inject sentiments into the generated captions without altering the semantic matching between the visual content and the generated descriptions. In this work, we propose two different models, which employ different schemes for injecting sentiments into image captions. Compared with the few existing approaches, the proposed models are much simpler and yet more effective. The experimental results show that our model outperform the state-of-the-art models in generating sentimental (i.e., sentiment-bearing) image captions. In addition, we can also easily manipulate the model by assigning different sentiments to the testing image to generate captions with the corresponding sentiments."]], "corpus": "arxiv"}
+ {"tstamp": 1722370282.9393, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722370282.5878, "finish": 1722370282.9393, "ip": "", "conv_id": "cd70dab8da344dee861341fe526fc33f", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "corpus": "arxiv"}
106
+ {"tstamp": 1722370282.9393, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722370282.5878, "finish": 1722370282.9393, "ip": "", "conv_id": "44c3418ddda44ffcbe5214c14c1a99f6", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "output": [["Find a paper on a large multimodal model achieving SoTA in VQA while being used for robotics tasks", "Title: ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models\n\nAbstract: The integration of Multimodal Large Language Models (MLLMs) with robotic systems has significantly enhanced the ability of robots to interpret and act upon natural language instructions. Despite these advancements, conventional MLLMs are typically trained on generic image-text pairs, lacking essential robotics knowledge such as affordances and physical knowledge, which hampers their efficacy in manipulation tasks. To bridge this gap, we introduce ManipVQA, a novel framework designed to endow MLLMs with Manipulation-centric knowledge through a Visual Question-Answering format. This approach not only encompasses tool detection and affordance recognition but also extends to a comprehensive understanding of physical concepts. Our approach starts with collecting a varied set of images displaying interactive objects, which presents a broad range of challenges in tool object detection, affordance, and physical concept predictions. To seamlessly integrate this robotic-specific knowledge with the inherent vision-reasoning capabilities of MLLMs, we adopt a unified VQA format and devise a fine-tuning strategy that preserves the original vision-reasoning abilities while incorporating the new robotic insights. Empirical evaluations conducted in robotic simulators and across various vision task benchmarks demonstrate the robust performance of ManipVQA. Code and dataset will be made publicly available at https://github.com/SiyuanHuang95/ManipVQA."]], "corpus": "arxiv"}