Dataset columns:
- repo_id: string (lengths 1 to 51)
- file_structure: string (lengths 56 to 247k)
- readme_content: string (lengths 0 to 287k)
- key_code_snippets: string (lengths 1.04k to 16.8M)
- __index_level_0__: float64 (values 0 to 7)
sonic-on-ray
{"type": "directory", "name": "sonic-on-ray", "children": [{"type": "file", "name": "LICENSE"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "rollout.py"}, {"type": "file", "name": "setup.py"}, {"type": "file", "name": "sonic-autoscaler.yaml"}, {"type": "directory", "name": "sonic_on_ray", "children": [{"type": "file", "name": "sonic_on_ray.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "train_ppo.py"}, {"type": "file", "name": "train_ppo_grid_search.py"}]}
**Status:** Archive (code is provided as-is, no updates expected)

# Sonic on Ray

This file describes how to use Sonic with Ray and RLlib. We include instructions on how to get the training running on EC2.

## Running training on a single node

Start a p2.8xlarge with the Deep Learning AMI (Ubuntu). In us-west-2, this is ami-d2c759aa. Activate the TensorFlow environment with

```
source activate tensorflow_p36
```

Now install Ray and the RLlib requirements using

```
pip install ray opencv-python
```

Next we need to install the gym retro environment. Run

```
git clone --recursive [email protected]:openai/retro.git gym-retro
cd gym-retro
pip install -e .
```

Now clone this repo and install it:

```
cd ~
git clone [email protected]:openai/sonic-on-ray.git
cd sonic-on-ray
pip install -e .
```

You can then run the training with

```
cd ~/sonic-on-ray
python train_ppo.py
```

## Running training on a cluster

First install Ray on your laptop with

```
pip install ray
```

Now clone the sonic-on-ray repo with

```
git clone [email protected]:openai/sonic-on-ray.git
```

And start a cluster with

```
ray create_or_update sonic-autoscaler.yaml
```

After the cluster has been started, you will see a message like this:

```
Started Ray on this node. You can add additional nodes to the cluster by calling

    ray start --redis-address 172.31.58.176:6379

from the node you wish to add. You can connect a driver to the cluster from Python by running

    import ray
    ray.init(redis_address="172.31.58.176:6379")

[...]

To login to the cluster, run:

    ssh -i ~/.ssh/ray-autoscaler_us-east-1.pem [email protected]
```

You can now start the hyperparameter search by sshing into the cluster, running

```
source activate tensorflow_p36
```

replacing the `ray.init()` call in `~/sonic-on-ray/train_ppo_grid_search.py` with the one printed above, and then running the script.
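For concreteness, here is a minimal sketch of that last edit. The Redis address is the example one from the autoscaler output above, and the surrounding code of `train_ppo_grid_search.py` is not reproduced here.

```python
import ray

# In ~/sonic-on-ray/train_ppo_grid_search.py, replace the single-node call
#     ray.init()
# with a call that points at the cluster, using the address the autoscaler printed:
ray.init(redis_address="172.31.58.176:6379")
```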
{"setup.py": "from setuptools import setup, find_packages\n\nsetup(name='sonic_on_ray',\n packages=[package for package in find_packages()\n if package.startswith('sonic_on_ray')],\n description='Running gym retro on Ray',\n author='Philipp Moritz',\n url='https://github.com/openai/sonic-on-ray',\n author_email='[email protected]',\n version='0.0.1')\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n"}
null
sparse_attention
{"type": "directory", "name": "sparse_attention", "children": [{"type": "file", "name": "attention.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "utils.py"}]}
**Status:** Archive (code is provided as-is, no updates expected)

**Update August 2020:** For an example repository that achieves state-of-the-art modeling performance on CIFAR-10 using Sparse Transformers, please see https://github.com/openai/distribution_augmentation

# Sparse Attention

This repository contains the sparse attention primitives used in Sparse Transformers (see [blog](https://openai.com/blog/sparse-transformer) and [paper](https://arxiv.org/abs/1904.10509)). Specifically, it includes the following:

1) A faster implementation of normal attention (the upper triangle is not computed, and many operations are fused).
2) An implementation of "strided" and "fixed" attention, as in the Sparse Transformers paper.
3) A simple recompute decorator, which can be adapted for usage with attention.

We hope this code can further accelerate research into sparse attention. An example Transformer implementation which is close to the version we use internally can be found at https://github.com/openai/blocksparse/blob/master/examples/transformer/enwik8.py.

# Overview of kernels

The repository contains fused implementations of the attention operation, which takes in `Q`, `K`, `V` matrices (all of dimensionality `batch, time, dim`) representing the queries, keys, and values for a sequence. For every query element, a weighted sum of the values is returned, where the weightings are determined by the scaled matrix product of `Q` and `K^T`.

The kernels allow specification of block sparsity in the `QK^T` matrix. This means you define a pattern of 0/1s on a `[time/blocksize, time/blocksize]` matrix of blocks, and the values where it is 0 will not be computed, and not be included in the softmax calculation.

Additionally, one can define "callbacks" on the computed blocks, which will further mask out values in any given block from the softmax (though the matrix product will still be computed for those elements). Block sizes of `{8, 16, 32, 64}` are supported, and slight advantages in speed may be seen from using larger blocks.

# Prerequisites

For fp32 and blocksize `32`, any NVIDIA GPU past Kepler can be used (i.e. compute capability beyond 3.5). For fp16 and blocksize `8, 16, 32, 64`, a GPU with Tensor Cores (e.g. the V100 GPU, compute capability >= 7.0) is required.

The primary dependency is the OpenAI [blocksparse](https://github.com/openai/blocksparse/) package. With CUDA 10 and tensorflow-gpu, you can install blocksparse with `pip install blocksparse`. For other setups, you must install blocksparse from source, and directions can be found in the [root of the repository](https://github.com/openai/blocksparse/).

# Examples

Run the following on a non-V100 GPU:

```
python attention.py
```

On a V100 GPU:

```
python attention.py fp16
```

# General usage

An example can be found at the bottom of `attention.py`.
```python
full_attn_tf = attention_impl(q, k, v, heads=4, attn_mode="all", recompute=True)
full_attn_bs = blocksparse_attention_impl(q, k, v, heads=4, attn_mode="all", recompute=True)

# first step of strided attention
local_attn_bs = blocksparse_attention_impl(q, k, v, heads=4, attn_mode="local", local_attn_ctx=32, recompute=True)
local_attn_tf = attention_impl(q, k, v, heads=4, attn_mode="local", local_attn_ctx=32, recompute=True)

# second step of strided attention
strided_attn_bs = blocksparse_attention_impl(q, k, v, heads=4, attn_mode="strided", local_attn_ctx=32, recompute=True)
strided_attn_tf = attention_impl(q, k, v, heads=4, attn_mode="strided", local_attn_ctx=32, recompute=True)

# the 'fixed' attention pattern
fixed = blocksparse_attention_impl(q, k, v, heads=4, attn_mode="fixed", local_attn_ctx=128, num_verts=4, vertsize=1, recompute=True)
```

# Referencing this work

If you find this helpful in your work, you can consider citing the following:

```
@article{child2019sparsetransformer,
  title={Generating Long Sequences with Sparse Transformers},
  author={Child, Rewon and Gray, Scott and Radford, Alec and Sutskever, Ilya},
  journal={URL https://openai.com/blog/sparse-transformers},
  year={2019}
}
```
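To make the block-sparsity idea from the kernel overview concrete, here is a small NumPy sketch that is not part of this repository: it applies a `[time/blocksize, time/blocksize]` 0/1 layout as a mask inside ordinary dense attention, whereas the blocksparse kernels skip the masked blocks instead of computing and then discarding them. All names and the layout choice are illustrative.

```python
import numpy as np

def dense_attention_with_block_mask(q, k, v, layout, blocksize):
    """Reference (dense) attention that zeroes out blocks where layout == 0.

    q, k, v: [batch, time, dim]; layout: [time/blocksize, time/blocksize] of 0/1.
    This is only a readability aid; the real kernels avoid computing the
    masked blocks altogether.
    """
    b, t, d = q.shape
    scores = np.einsum("btd,bsd->bts", q, k) / np.sqrt(d)
    # Expand the block-level layout to a full [time, time] mask.
    mask = np.kron(layout, np.ones((blocksize, blocksize)))
    scores = np.where(mask[None] > 0, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.einsum("bts,bsd->btd", weights, v)

def local_layout(n_blocks, bandwidth=2):
    # Each query block attends to itself and the previous (bandwidth - 1) blocks.
    layout = np.zeros((n_blocks, n_blocks), dtype=np.int64)
    for i in range(n_blocks):
        layout[i, max(0, i - bandwidth + 1): i + 1] = 1
    return layout

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocksize, n_blocks, dim = 32, 4, 64
    t = blocksize * n_blocks
    q, k, v = (rng.standard_normal((1, t, dim)) for _ in range(3))
    out = dense_attention_with_block_mask(q, k, v, local_layout(n_blocks), blocksize)
    print(out.shape)  # (1, 128, 64)
```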
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n"}
null
sparse_autoencoder
{"type": "directory", "name": "sparse_autoencoder", "children": [{"type": "file", "name": ".pre-commit-config.yaml"}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "pyproject.toml"}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "sae-viewer", "children": [{"type": "file", "name": "package-lock.json"}, {"type": "file", "name": "package.json"}, {"type": "directory", "name": "public", "children": [{"type": "file", "name": "favicon.ico"}, {"type": "file", "name": "robots.txt"}, {"type": "file", "name": "tailwind.js"}]}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "src", "children": [{"type": "file", "name": "App.css"}, {"type": "file", "name": "App.tsx"}, {"type": "file", "name": "autoencoder_registry.tsx"}, {"type": "directory", "name": "components", "children": [{"type": "file", "name": "featureInfo.tsx"}, {"type": "file", "name": "featureSelect.tsx"}, {"type": "file", "name": "histogram.tsx"}, {"type": "file", "name": "tokenAblationmap.tsx"}, {"type": "file", "name": "tokenHeatmap.tsx"}, {"type": "file", "name": "tooltip.tsx"}]}, {"type": "file", "name": "feed.tsx"}, {"type": "file", "name": "index.css"}, {"type": "file", "name": "index.html"}, {"type": "file", "name": "index.tsx"}, {"type": "file", "name": "interpAPI.ts"}, {"type": "file", "name": "types.ts"}, {"type": "file", "name": "utils.ts"}, {"type": "file", "name": "welcome.tsx"}]}, {"type": "file", "name": "tailwind.config.js"}, {"type": "file", "name": "tsconfig.json"}]}, {"type": "file", "name": "SECURITY.md"}, {"type": "directory", "name": "sparse_autoencoder", "children": [{"type": "file", "name": "explanations.py"}, {"type": "file", "name": "kernels.py"}, {"type": "file", "name": "loss.py"}, {"type": "file", "name": "model.py"}, {"type": "file", "name": "paths.py"}, {"type": "file", "name": "train.py"}, {"type": "file", "name": "__init__.py"}]}]}
# SAE viewer

The easiest way to view activation patterns is through the [public website](https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html). This directory contains the implementation of that website.

## Local development

Install: ```npm install```

Run: ```npm start```
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 4965b941e9eb590b00b253a2c406db1e1b193942 Hamza Amin <[email protected]> 1729337341 +0500\tclone: from https://github.com/openai/sparse_autoencoder.git\n", ".git\\refs\\heads\\main": "4965b941e9eb590b00b253a2c406db1e1b193942\n", "sae-viewer\\package.json": "{\n \"name\": \"sae-viewer\",\n \"version\": \"0.1.67\",\n \"homepage\": \"https://openaipublic.blob.core.windows.net/sparse-autoencoder/sae-viewer/index.html\",\n \"dependencies\": {\n \"@headlessui/react\": \"^1.7.8\",\n \"@headlessui/tailwindcss\": \"^0.1.2\",\n \"@types/d3-scale\": \"^4.0.3\",\n \"@types/lodash\": \"^4.14.194\",\n \"@types/react\": \"^18.0.37\",\n \"@types/react-dom\": \"^18.0.11\",\n \"d3-scale\": \"^4.0.2\",\n \"lodash\": \"^4.17.21\",\n \"plotly.js\": \"^2.31.0\",\n \"react\": \"^18.2.0\",\n \"react-dom\": \"^18.2.0\",\n \"react-plotly.js\": \"^2.6.0\",\n \"react-router-dom\": \"^6.10.0\",\n \"web-vitals\": \"^3.0.3\"\n },\n \"scripts\": {\n \"start\": \"rm -rf dist && rm -rf .parcel-cache && parcel src/index.html\",\n \"build\": \"parcel build src/index.html\",\n \"serve\": \"parcel serve src/index.html\",\n \"typecheck\": \"tsc -p .\"\n },\n \"eslintConfig\": {\n \"extends\": [\n \"react-app\"\n ]\n },\n \"alias\": {\n \"preact/jsx-dev-runtime\": \"preact/jsx-runtime\"\n },\n \"devDependencies\": {\n \"@observablehq/plot\": \"^0.6.5\",\n \"@parcel/transformer-typescript-tsc\": \"^2.8.3\",\n \"@parcel/validator-typescript\": \"^2.8.3\",\n \"buffer\": \"^5.7.1\",\n \"nodemon\": \"^2.0.22\",\n \"parcel\": \"^2.8.3\",\n \"preact\": \"^10.13.2\",\n \"process\": \"^0.11.10\",\n \"react-refresh\": \"0.10.0\",\n \"tailwindcss\": \"^3.2.4\",\n \"typescript\": \"^5.0.4\"\n }\n}\n", "sae-viewer\\src\\App.css": "@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\nselect {\n margin: 5px;\n}\n\n:root {\n --secondary-color: #0d978b;\n --accent-color: #efefef;\n}\n\ntable.activations-table { \n border: 1px solid gray;\n border-radius: 3px; \n border-spacing: 0;\n}\ntable.activations-table td, table.activations-table th { \n border-bottom: 1px solid gray;\n border-right: 1px solid gray;\n border-left: 1px solid gray;\n}\ntable.activations-table tr:last-child > td {\n border-bottom: none;\n}\n\n.full-width{\n width: 100vw;\n position: relative;\n margin-left: -50vw;\n left: 50%;\n }\n\n.App {\n text-align: center;\n}\n\n.center {\n text-align: center;\n}\n\n.App-logo {\n height: 40vmin;\n pointer-events: none;\n}\n\n@media 
(prefers-reduced-motion: no-preference) {\n .App-logo {\n animation: App-logo-spin infinite 20s linear;\n }\n}\n\n.App h1 {\n\tfont-size: 1.75rem;\n}\n\n.App-article {\n background-color: #282c34;\n min-height: 100vh;\n display: flex;\n flex-direction: column;\n align-items: center;\n justify-content: center;\n font-size: calc(10px + 2vmin);\n color: white;\n}\n\n.App-link {\n color: #61dafb;\n}\n\n@keyframes App-logo-spin {\n from {\n transform: rotate(0deg);\n }\n to {\n transform: rotate(360deg);\n }\n}\n\n\n /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n /* Structure\n /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n\n body {\n margin: 0;\n padding: 0 1em;\n font-size: 12pt;\n}\n\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n/* Typography\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n \nh1 {\n font-size: 24pt;\n font-weight: 500;\n padding: 1em 0 0;\n display: block;\n color: #000;\n}\nh3 { padding: 0 0; }\nh2 { padding: 1em 0 0.5em 0; }\nh4, h5 {\n text-transform: uppercase;\n margin: 1em 0;\n justify-tracks: space-between;\n font-family: var(--sans-serif);\n font-size: 12pt;\n font-weight: 600;\n}\nh2, h3 { font-weight: 500; font-style: italic; }\nsubtitle {\n color: #555;\n font-size: 18pt;\n font-style: italic;\n padding: 0;\n display: block;\n margin-bottom: 1em\n}\n\na {\n transition: all .05s ease-in-out;\n color: #5c60c3 !important;\n font-style: normal;\n}\na:hover { color: var(--accent-color)!important; }\ncode, pre { color: var(--inline-code-color);\nbackground-color: #eee; border-radius: 3px; }\npre { padding: 1em; margin: 2em 0; }\ncode { padding: 0.3em; }\n.text-secondary, h3, h5 { color: var(--secondary-color); }\n.text-primary, h2,h4 { color: var(--primary-color); }\n\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n/* Images\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n \nimg#logo {\n width: 50%;\n margin: 3em 0 0\n}\n\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n/* Alerts */\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n \n.alert {\n font-weight: 600;\n font-style: italic;\n display: block;\n background-color: #fff7f7;\n padding: 1em;\n margin: 0;\n border-radius: 5px;\n color: #f25555\n}\n.alert.cool {\n background-color: #f3f0fc;\n color: #7155cf;\n}\n.flash-alert {\n display: inline-block;\n transition: ease-in-out 1s;\n font-size: 14pt;\n margin: 1em 0;\n padding-top: 0.5em;\n}\n.flash-alert.success {\n color: #000;\n}\n.flash-alert.failure {\n color: red;\n}\n.flash-alert.hidden {\n display: none;\n}\n\n \n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n/* Sidenotes & Superscripts */\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n\nbody { counter-reset: count; }\np { whitespace: nowrap; }\n\n/* Different behavior if the screen is too \n narrow to show a sidenote on the side. 
*/\n\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n/* Buttons */\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n \n@media print {\n a.btn, button {\n display: none!important\n }\n}\n \n@media screen {\n a.btn, button {\n border-radius: 3px;\n color: #000 !important;\n text-decoration: none !important;\n font-size: 11pt;\n border: 1px solid #000;\n padding: 0.5em 1em;\n font-family: -apple-system, \n BlinkMacSystemFont, \n \"avenir next\", \n avenir,\n helvetica, \n \"helvetica neue\", \n ubuntu, \n roboto, \n noto, \n \"segoe ui\", \n arial,\n sans-serif !important;\n background: #fff;\n font-weight: 500;\n transition: all .05s ease-in-out,box-shadow-color .025s ease-in-out;\n display: inline-block;\n}\n\n a.btn:hover, button:hover {\n cursor: pointer;\n }\n a.btn:active, button.active, button:active {\n border: 1px solid;\n }\n a.btn.small,button.small {\n border: 1px solid #000;\n padding: .6em 1em;\n font-weight: 500\n }\n a.btn.small:hover,button.small:hover {\n }\n a.btn.small:active,button.small:active {\n }\n}\n\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n/* Blockquotes & Epigraphs\n/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */\n\nblockquote {\n margin: 1em;\n}\ndiv>blockquote>p {\n font-size: 13pt;\n color: #555;\n font-style: normal!important;\n margin: 0;\n padding: 1em 0 1.5em\n}\nblockquote > blockquote {\n padding: 0.5em 2em 1em 1.5em !important;\n}\n\nblockquote > blockquote,\nblockquote > blockquote > p {\n font-size: 14pt;\n padding: 0;\n margin: 0;\n text-align: center;\n font-style: italic;\n color: var(--epigraph-color);\n}\nblockquote footer {\n font-size: 12pt;\n text-align: inherit;\n display: block;\n font-style: normal;\n margin: 1em;\n color: #aaa;\n}\n", "sae-viewer\\src\\App.tsx": "import \"./App.css\"\nimport Feed from \"./feed\"\nimport React from \"react\"\nimport { Routes, Route, HashRouter } from \"react-router-dom\"\nimport { AUTOENCODER_FAMILIES } from \"./autoencoder_registry\"\nimport Welcome from \"./welcome\"\n\nfunction App() {\n return (\n <div style={{ width: '100%', paddingBottom: '20px'}}>\n <HashRouter>\n <Routes>\n <Route path=\"/\" element={<Welcome />} />\n <Route path=\"/feature/:atom\" element={<Feed />} />\n {\n Object.values(AUTOENCODER_FAMILIES).map((family) => {\n let extra = '';\n family.selectors.forEach((selector) => {\n extra += `/${selector.key}/:${selector.key}`;\n })\n return <Route key={family.name} path={`/model/:model/family/:family${extra}/feature/:atom`} element={<Feed />} />\n })\n }\n </Routes>\n </HashRouter>\n </div>\n )\n}\n\nexport default App\n", "sae-viewer\\src\\index.css": "body {\n margin: 0;\n font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',\n 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',\n sans-serif;\n -webkit-font-smoothing: antialiased;\n -moz-osx-font-smoothing: grayscale;\n}\n\ncode {\n font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',\n monospace;\n}\n", "sae-viewer\\src\\index.html": "<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"utf-8\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n <meta name=\"theme-color\" content=\"#000000\" />\n <meta\n name=\"description\"\n content=\"Web site created using create-react-app\"\n />\n <!--\n manifest.json provides metadata used when your web app is installed on a\n user's mobile device or desktop. 
See https://developers.google.com/web/fundamentals/web-app-manifest/\n -->\n <!--\n Notice the use of %PUBLIC_URL% in the tags above.\n It will be replaced with the URL of the `public` folder during the build.\n Only files inside the `public` folder can be referenced from the HTML.\n\n Unlike \"/favicon.ico\" or \"favicon.ico\", \"%PUBLIC_URL%/favicon.ico\" will\n work correctly both with client-side routing and a non-root public URL.\n Learn how to configure a non-root public URL by running `npm run build`.\n -->\n <link rel=\"icon\" type=\"image/x-icon\" href=\"../public/favicon.ico\">\n\n <title>SAE viewer</title>\n <!--script src=\"https://cdn.tailwindcss.com?plugins=forms,typography,aspect-ratio,line-clamp\"></script-->\n <!--curl 'https://cdn.tailwindcss.com/[email protected],[email protected],[email protected],[email protected]' -o public/tailwind.js-->\n <script src=\"../public/tailwind.js\"></script>\n\n <script>\n tailwind.config = {\n theme: {\n extend: {\n colors: {\n clifford: '#da373d',\n }\n }\n }\n }\n </script>\n </head>\n <body>\n <noscript>You need to enable JavaScript to run this app.</noscript>\n <div id=\"root\"></div>\n <!--\n This HTML file is a template.\n If you open it directly in the browser, you will see an empty page.\n\n You can add webfonts, meta tags, or analytics to this file.\n The build step will place the bundled scripts into the <body> tag.\n\n To begin the development, run `npm start` or `yarn start`.\n To create a production bundle, use `npm run build` or `yarn build`.\n -->\n <script src=\"./index.tsx\" async type=\"module\"></script>\n <link href=\"App.css\" rel=\"stylesheet\"></link>\n </body>\n</html>\n", "sae-viewer\\src\\index.tsx": "import React from 'react';\nimport ReactDOM from 'react-dom/client';\nimport './index.css';\nimport App from './App';\n\nconst root = ReactDOM.createRoot(document.getElementById('root'));\nroot.render(\n <React.StrictMode>\n <App />\n </React.StrictMode>\n);\n"}
null
summarize-from-feedback
{"type": "directory", "name": "summarize-from-feedback", "children": [{"type": "directory", "name": "exps", "children": [{"type": "file", "name": "eval_rm.py"}, {"type": "file", "name": "sample.py"}]}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "model_card.md"}, {"type": "file", "name": "Pipfile"}, {"type": "file", "name": "Pipfile.lock"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "setup.py"}, {"type": "directory", "name": "summarize_from_feedback", "children": [{"type": "directory", "name": "datasets", "children": [{"type": "file", "name": "cnndm.py"}, {"type": "file", "name": "encodings.py"}, {"type": "file", "name": "jsonl_encoding.py"}, {"type": "file", "name": "test.py"}, {"type": "file", "name": "test_datasets.py"}, {"type": "file", "name": "tldr.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "eval_rm.py"}, {"type": "directory", "name": "models", "children": [{"type": "file", "name": "attention.py"}, {"type": "file", "name": "loss_functions.py"}, {"type": "file", "name": "ops.py"}, {"type": "file", "name": "sample_fns.py"}, {"type": "file", "name": "transformer.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "model_layout.py"}, {"type": "file", "name": "policy.py"}, {"type": "file", "name": "query_response_model.py"}, {"type": "file", "name": "reward_model.py"}, {"type": "file", "name": "sample.py"}, {"type": "file", "name": "tasks.py"}, {"type": "file", "name": "task_data.py"}, {"type": "file", "name": "test_tasks.py"}, {"type": "directory", "name": "utils", "children": [{"type": "file", "name": "assertions.py"}, {"type": "file", "name": "blobs.py"}, {"type": "file", "name": "combos.py"}, {"type": "file", "name": "dist_utils.py"}, {"type": "file", "name": "even_more_itertools.py"}, {"type": "file", "name": "experiments.py"}, {"type": "file", "name": "experiment_helpers.py"}, {"type": "file", "name": "hyperparams.py"}, {"type": "file", "name": "jobs.py"}, {"type": "file", "name": "logging_utils.py"}, {"type": "file", "name": "nested.py"}, {"type": "file", "name": "test_even_more_itertools.py"}, {"type": "file", "name": "test_hyperparams.py"}, {"type": "file", "name": "test_torch_utils.py"}, {"type": "file", "name": "torch_utils.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "__init__.py"}]}]}
**Status:** Archive (code is provided as-is, no updates expected)

# Learning to Summarize from Human Feedback

This repository contains code to run our models, including the supervised baseline, the trained reward model, and the RL fine-tuned policy.

Supported platform: Python 3.7 64-bit on Ubuntu 18.04

## Install

- Install [pipenv](https://github.com/pypa/pipenv#installation).
- Clone this repo. Then, inside it:

```
pipenv install
```

## Run the models

You'll need to run this on a machine with an Nvidia GPU.

First, let's run some tests to make sure everything is working.

```
pipenv run exps/sample.py test test-sample
pipenv run exps/eval_rm.py test test-eval
```

Now let's run some actual evaluations. We can have the model summarize some posts from the validation set:

```
pipenv run exps/sample.py ppo_xl sample-ppo-xl --num_queries 32
```

This will output to `/tmp/jobs/sample-ppo-xl/results/`. Now we can evaluate them using the reward model:

```
pipenv run exps/eval_rm.py rm4 eval-rm4 --input_path /tmp/jobs/sample-ppo-xl/results/
```

This will print some aggregate statistics and output scores for each sample to `/tmp/jobs/eval-rm4/results/`.

# Human feedback data

We've released our human feedback dataset for further research. The dataset contains 64,832 summary comparisons on the TL;DR dataset, as well as our evaluation data on both TL;DR (comparisons and Likert scores) and CNN/DM (Likert scores). The dataset is stored in Azure Blob Storage, split into two directories described below: `comparisons` and `axis_evals`. You can download it by running `azcopy copy "https://openaipublic.blob.core.windows.net/summarize-from-feedback/dataset/*" . --recursive`.

You can also explore the data by hand on [our dataset website](https://openaipublic.blob.core.windows.net/summarize-from-feedback/website/index.html#/).

## Comparisons

`https://openaipublic.blob.core.windows.net/summarize-from-feedback/dataset/comparisons` contains labeled comparisons between pairs of summaries as jsonl files, where each line represents a single comparison. Here is a formatted example:

```
{
  "info": {
    "id": "t3_2vwp1w",
    "post": "I had a car accident on friday, other party involved was speeding and hit me. but because he denies it it seems like I was wrong because he was supposed to go first under normal circumstances. ( give way road markings ) \n\nbut because it was clear when I checked it I drove on, and when I was almost past the intersection he slammed me in the side near the back seat. and caused me to slide across the road for 2-3 meters hit a street light and then bounce back a meter. both doors completely jammed so i had to climb out the window...\n\ncan I somehow get an investigation going about this to see how fast he had to be driving to get this much force in the collision?\nbecause the damage on my car would suggest that he was driving way faster than the legal limit there. ( which is 50 km/h )\n\nalso another reason why i think he was going way faster than admitted is because he could never have reached the intersection from such a distance as where i could not even see him yet\n\n(pictures of the damage: ) as you can see with the damage, I am lucky to be alive and unharmed right now... 1ft further forward and it could have been my end...\n\nhelp would be appeciated on this :)",
    "title": "Anybody with knowledge of the Dutch law around ? car accident questions.",
    "subreddit": "legaladvice"
  },
  "summaries": [
    {
      "text": " car accident caused me 2-3m damage to my car both doors totally jammed and driving way faster than usual. need info on what to do with this.. thanks :)",
      "policy": "sup4_ppo_rm3_kl10",
      "note": "Was the accident caused by driving fast."
    },
    {
      "text": " we suspect other party involved of speeding when he hit me but I can't prove it without an investigation into the damage, how can i get such an investigation ? if at all possible.",
      "policy": "ref",
      "note": "Unclear what happened."
    }
  ],
  "choice": 1,
  "worker": "ikNmucwunMnYJCQpnq6ZYb57OW7NiD",
  "batch": "batch9",
  "split": "train",
  "extra": {
    "confidence": 8
  }
}
```

`note` fields contain the naive interpretation notes written by the worker before seeing the post (but possibly edited afterwards). May be null.

`split` will always be `train`, `valid1`, or `valid2`; posts / articles marked with `valid1` were used to select models during training, so we restricted to `valid2` labels for final evaluations.

The training data for `sup4` is found in `comparisons/batch3.json` through `comparisons/batch10.json`; later batches are primarily evaluation.

## Axis evals

`https://openaipublic.blob.core.windows.net/summarize-from-feedback/dataset/axis_evals` contains ratings of summaries along several axes, again as jsonl files. Here is a formatted example:

```
{
  "info": {
    "id": "167f80cc6634b166a699d182e25c81a2349d82d2",
    "site": "dailymail",
    "title": "Newcastle United midfielder Moussa Sissoko faces disciplinary action from the club after dangerous tackle on Lucas Leiva",
    "article": "Newcastle stand-in skipper Moussa Sissoko is facing disciplinary action after he was sent off following a reckless challenge on Liverpool midfielder Lucas Leiva during Monday's 2-0 defeat at Anfield.\n\nThe France international was given a second yellow card for the offence, but head coach John Carver feels it should have been a straight red.\n\n'The club will deal with that situation,' he said when asked if Sissoko - who is now banned for two matches - would be punished.\n\nLiverpool midfielder Lucas Leiva clutches his leg after Moussa Sissoko's tackle at Anfield\n\nSissoko hands the captain's armband to boss John Carver as he leaves the pitch after being sent off\n\n'He knows he was wrong. He was fortunate not to get a straight red and he agreed with me.\n\n'He apologised afterwards to Lucas, which was important.\n\n'But you think captains would lead by example. We have to improve our discipline. I will be looking at that.'\n\nMeanwhile, Carver says Newcastle cannot rely on the shortcomings of others to preserve their Premier League status.\n\nThe Magpies are the division's most out-of-form side having lost five on the spin, scoring just one goal along the way.\n\nLiverpool's players surround Lucas following Sissoko's dangerous tackle during Monday night's game\n\nRaheem Sterling bends the ball past Tim Krul to open the scoring in Liverpool's 2-0 win against Newcastle\n\nThey are nine points clear of danger with six matches to play, but Carver says it's about time they started helping themselves, starting with Sunday's visit of Spurs.\n\n'These two home games (Spurs followed by Swansea) are massive for us. I'm not bothered about performances, we need results,' he said.\n\n'I'm not worrying about that (relegation) at the moment, and the good thing is we have four games at home.\n\n'But we need to start winning now. We can't rely on others teams. We can't afford to ease off, I have always said that.\n\n'We have gone through a rough spell. It's down to me now to get players in right frame of mind.'\n\nNewcastle's players appear dejected as Joe Allen celebrates scoring Liverpool's second goal at Anfield"
  },
  "split": "test",
  "summary": {
    "text": "Moussa Sissoko was sent off against Liverpool on Monday night.. John Carver felt that Sissoko's second booking was worthy of a red card.. Midfielder could be punished by his club on top of a two-game ban.. Carver admits he is only concerned with results and not performances.. Newcastle are 13th in the table, nine points off the relegation zone.",
    "policy": "ref",
    "note": "Misleading: \"Carver admits he is only concerned with results and not performances\" understood as if critics of monday's match but it's said for the following matches.\n\n13th??\n\nDoesnt properly address the teams, the match, the result, 2nd yellow card and therefore sent off, etc.",
    "axes": {
      "overall": 3,
      "accuracy": 5,
      "coverage": 4,
      "coherence": 2
    }
  },
  "worker": "qo6WIyEh27cwAjWpA3Q60J7NaDxzQJ",
  "batch": "cnndm1"
}
```

# Reddit TL;DR dataset

Our filtered versions of the TL;DR dataset are available here:

https://openaipublic.blob.core.windows.net/summarize-from-feedback/datasets/tldr_3_filtered/train.jsonl
https://openaipublic.blob.core.windows.net/summarize-from-feedback/datasets/tldr_3_filtered/valid.jsonl
https://openaipublic.blob.core.windows.net/summarize-from-feedback/datasets/tldr_3_filtered/test.jsonl
https://openaipublic.blob.core.windows.net/summarize-from-feedback/datasets/tldr_3_filtered/samples.txt

For details on the original TL;DR dataset, see [Syed et al 2018](https://zenodo.org/record/1168855) by Syed, Shahbaz, Voelske, Michael, Potthast, Martin, & Stein, Benno (2018). It is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
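As an illustration of the `comparisons` format described above, here is a small sketch that is not part of this repository: it reads one downloaded batch file and tallies how often each policy's summary was preferred. The local filename is just an example; only the `summaries`, `choice`, and `policy` fields from the schema above are assumed.

```python
import json
from collections import Counter

# Tally which policy's summary was chosen in one comparisons batch file
# downloaded with azcopy ("comparisons/batch9.json" is an illustrative path).
wins = Counter()
with open("comparisons/batch9.json") as f:
    for line in f:
        comparison = json.loads(line)
        chosen = comparison["summaries"][comparison["choice"]]
        wins[chosen["policy"]] += 1

for policy, count in wins.most_common():
    print(f"{policy}\t{count}")
```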
{"setup.py": "from setuptools import setup\n\nsetup(\n name=\"summarize_from_feedback\",\n py_modules=[\"summarize_from_feedback\"],\n version=\"0.0.1\",\n description=\"Code for 'Learning to Summarize from Human Feedback'\",\n author=\"OpenAI\",\n)\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n"}
null
supervised-reptile
{"type": "directory", "name": "supervised-reptile", "children": [{"type": "file", "name": "fetch_data.sh"}, {"type": "file", "name": "LICENSE"}, {"type": "directory", "name": "metadata", "children": [{"type": "directory", "name": "miniimagenet", "children": [{"type": "directory", "name": "test", "children": [{"type": "file", "name": "n01930112.csv"}, {"type": "file", "name": "n01981276.csv"}, {"type": "file", "name": "n02099601.csv"}, {"type": "file", "name": "n02110063.csv"}, {"type": "file", "name": "n02110341.csv"}, {"type": "file", "name": "n02116738.csv"}, {"type": "file", "name": "n02129165.csv"}, {"type": "file", "name": "n02219486.csv"}, {"type": "file", "name": "n02443484.csv"}, {"type": "file", "name": "n02871525.csv"}, {"type": "file", "name": "n03127925.csv"}, {"type": "file", "name": "n03146219.csv"}, {"type": "file", "name": "n03272010.csv"}, {"type": "file", "name": "n03544143.csv"}, {"type": "file", "name": "n03775546.csv"}, {"type": "file", "name": "n04146614.csv"}, {"type": "file", "name": "n04149813.csv"}, {"type": "file", "name": "n04418357.csv"}, {"type": "file", "name": "n04522168.csv"}, {"type": "file", "name": "n07613480.csv"}]}, {"type": "directory", "name": "train", "children": [{"type": "file", "name": "n01532829.csv"}, {"type": "file", "name": "n01558993.csv"}, {"type": "file", "name": "n01704323.csv"}, {"type": "file", "name": "n01749939.csv"}, {"type": "file", "name": "n01770081.csv"}, {"type": "file", "name": "n01843383.csv"}, {"type": "file", "name": "n01910747.csv"}, {"type": "file", "name": "n02074367.csv"}, {"type": "file", "name": "n02089867.csv"}, {"type": "file", "name": "n02091831.csv"}, {"type": "file", "name": "n02101006.csv"}, {"type": "file", "name": "n02105505.csv"}, {"type": "file", "name": "n02108089.csv"}, {"type": "file", "name": "n02108551.csv"}, {"type": "file", "name": "n02108915.csv"}, {"type": "file", "name": "n02111277.csv"}, {"type": "file", "name": "n02113712.csv"}, {"type": "file", "name": "n02120079.csv"}, {"type": "file", "name": "n02165456.csv"}, {"type": "file", "name": "n02457408.csv"}, {"type": "file", "name": "n02606052.csv"}, {"type": "file", "name": "n02687172.csv"}, {"type": "file", "name": "n02747177.csv"}, {"type": "file", "name": "n02795169.csv"}, {"type": "file", "name": "n02823428.csv"}, {"type": "file", "name": "n02966193.csv"}, {"type": "file", "name": "n03017168.csv"}, {"type": "file", "name": "n03047690.csv"}, {"type": "file", "name": "n03062245.csv"}, {"type": "file", "name": "n03207743.csv"}, {"type": "file", "name": "n03220513.csv"}, {"type": "file", "name": "n03337140.csv"}, {"type": "file", "name": "n03347037.csv"}, {"type": "file", "name": "n03400231.csv"}, {"type": "file", "name": "n03476684.csv"}, {"type": "file", "name": "n03527444.csv"}, {"type": "file", "name": "n03676483.csv"}, {"type": "file", "name": "n03838899.csv"}, {"type": "file", "name": "n03854065.csv"}, {"type": "file", "name": "n03888605.csv"}, {"type": "file", "name": "n03908618.csv"}, {"type": "file", "name": "n03924679.csv"}, {"type": "file", "name": "n03998194.csv"}, {"type": "file", "name": "n04067472.csv"}, {"type": "file", "name": "n04243546.csv"}, {"type": "file", "name": "n04251144.csv"}, {"type": "file", "name": "n04258138.csv"}, {"type": "file", "name": "n04275548.csv"}, {"type": "file", "name": "n04296562.csv"}, {"type": "file", "name": "n04389033.csv"}, {"type": "file", "name": "n04435653.csv"}, {"type": "file", "name": "n04443257.csv"}, {"type": "file", "name": "n04509417.csv"}, {"type": "file", "name": "n04515003.csv"}, {"type": 
"file", "name": "n04596742.csv"}, {"type": "file", "name": "n04604644.csv"}, {"type": "file", "name": "n04612504.csv"}, {"type": "file", "name": "n06794110.csv"}, {"type": "file", "name": "n07584110.csv"}, {"type": "file", "name": "n07697537.csv"}, {"type": "file", "name": "n07747607.csv"}, {"type": "file", "name": "n09246464.csv"}, {"type": "file", "name": "n13054560.csv"}, {"type": "file", "name": "n13133613.csv"}]}, {"type": "directory", "name": "val", "children": [{"type": "file", "name": "n01855672.csv"}, {"type": "file", "name": "n02091244.csv"}, {"type": "file", "name": "n02114548.csv"}, {"type": "file", "name": "n02138441.csv"}, {"type": "file", "name": "n02174001.csv"}, {"type": "file", "name": "n02950826.csv"}, {"type": "file", "name": "n02971356.csv"}, {"type": "file", "name": "n02981792.csv"}, {"type": "file", "name": "n03075370.csv"}, {"type": "file", "name": "n03417042.csv"}, {"type": "file", "name": "n03535780.csv"}, {"type": "file", "name": "n03584254.csv"}, {"type": "file", "name": "n03770439.csv"}, {"type": "file", "name": "n03773504.csv"}, {"type": "file", "name": "n03980874.csv"}, {"type": "file", "name": "n09256479.csv"}]}]}]}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "run_miniimagenet.py"}, {"type": "file", "name": "run_omniglot.py"}, {"type": "file", "name": "setup.py"}, {"type": "directory", "name": "supervised_reptile", "children": [{"type": "file", "name": "args.py"}, {"type": "file", "name": "eval.py"}, {"type": "file", "name": "miniimagenet.py"}, {"type": "file", "name": "models.py"}, {"type": "file", "name": "omniglot.py"}, {"type": "file", "name": "reptile.py"}, {"type": "file", "name": "train.py"}, {"type": "file", "name": "variables.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "web", "children": [{"type": "file", "name": "build.sh"}, {"type": "directory", "name": "deps", "children": [{"type": "file", "name": "jsnet.js"}, {"type": "file", "name": "model.js"}]}, {"type": "directory", "name": "helpers", "children": [{"type": "file", "name": "export.py"}]}, {"type": "file", "name": "index.html"}, {"type": "directory", "name": "src", "children": [{"type": "file", "name": "default.js"}, {"type": "file", "name": "drawing.js"}, {"type": "file", "name": "evaluator.js"}, {"type": "file", "name": "predictions.js"}, {"type": "file", "name": "ui.js"}, {"type": "file", "name": "webworker.js"}]}, {"type": "file", "name": "style.css"}]}]}
**Status:** Archive (code is provided as-is, no updates expected)

# supervised-reptile

[Reptile](https://arxiv.org/abs/1803.02999) training code for [Omniglot](https://github.com/brendenlake/omniglot) and [Mini-ImageNet](https://openreview.net/pdf?id=rJY0-Kcll).

Reptile is a meta-learning algorithm that finds a good initialization. It works by sampling a task, training on the sampled task, and then updating the initialization towards the new weights for the task.

# Getting the data

The [fetch_data.sh](fetch_data.sh) script creates a `data/` directory and downloads Omniglot and Mini-ImageNet into it. The data is on the order of 5GB, so the download takes 10-20 minutes on a reasonably fast internet connection.

```
$ ./fetch_data.sh
Fetching omniglot/images_background ...
Extracting omniglot/images_background ...
Fetching omniglot/images_evaluation ...
Extracting omniglot/images_evaluation ...
Fetching Mini-ImageNet train set ...
Fetching wnid: n01532829
Fetching wnid: n01558993
Fetching wnid: n01704323
Fetching wnid: n01749939
...
```

If you want to download Omniglot but not Mini-ImageNet, you can simply kill the script after it starts downloading Mini-ImageNet. The script automatically deletes partially-downloaded data when it is killed early.

# Reproducing training runs

You can train models with the `run_omniglot.py` and `run_miniimagenet.py` scripts. Hyper-parameters are specified as flags (see `--help` for a detailed list). Here are the commands used for the paper:

```shell
# transductive 1-shot 5-way Omniglot.
python -u run_omniglot.py --shots 1 --inner-batch 10 --inner-iters 5 --meta-step 1 --meta-batch 5 --meta-iters 100000 --eval-batch 5 --eval-iters 50 --learning-rate 0.001 --meta-step-final 0 --train-shots 10 --checkpoint ckpt_o15t --transductive

# transductive 1-shot 5-way Mini-ImageNet.
python -u run_miniimagenet.py --shots 1 --inner-batch 10 --inner-iters 8 --meta-step 1 --meta-batch 5 --meta-iters 100000 --eval-batch 5 --eval-iters 50 --learning-rate 0.001 --meta-step-final 0 --train-shots 15 --checkpoint ckpt_m15t --transductive

# 5-shot 5-way Mini-ImageNet.
python -u run_miniimagenet.py --inner-batch 10 --inner-iters 8 --meta-step 1 --meta-batch 5 --meta-iters 100000 --eval-batch 15 --eval-iters 50 --learning-rate 0.001 --meta-step-final 0 --train-shots 15 --checkpoint ckpt_m55

# 1-shot 5-way Mini-ImageNet.
python -u run_miniimagenet.py --shots 1 --inner-batch 10 --inner-iters 8 --meta-step 1 --meta-batch 5 --meta-iters 100000 --eval-batch 5 --eval-iters 50 --learning-rate 0.001 --meta-step-final 0 --train-shots 15 --checkpoint ckpt_m15

# 5-shot 5-way Omniglot.
python -u run_omniglot.py --train-shots 10 --inner-batch 10 --inner-iters 5 --learning-rate 0.001 --meta-step 1 --meta-step-final 0 --meta-batch 5 --meta-iters 100000 --eval-batch 5 --eval-iters 50 --checkpoint ckpt_o55

# 1-shot 5-way Omniglot.
python -u run_omniglot.py --shots 1 --inner-batch 10 --inner-iters 5 --meta-step 1 --meta-batch 5 --meta-iters 100000 --eval-batch 5 --eval-iters 50 --learning-rate 0.001 --meta-step-final 0 --train-shots 10 --checkpoint ckpt_o15

# 1-shot 20-way Omniglot.
python -u run_omniglot.py --shots 1 --classes 20 --inner-batch 20 --inner-iters 10 --meta-step 1 --meta-batch 5 --meta-iters 200000 --eval-batch 10 --eval-iters 50 --learning-rate 0.0005 --meta-step-final 0 --train-shots 10 --checkpoint ckpt_o120

# 5-shot 20-way Omniglot.
python -u run_omniglot.py --classes 20 --inner-batch 20 --inner-iters 10 --meta-step 1 --meta-batch 5 --meta-iters 200000 --eval-batch 10 --eval-iters 50 --learning-rate 0.0005 --meta-step-final 0 --train-shots 10 --checkpoint ckpt_o520
```

Training creates checkpoints. Currently, you cannot resume training from a checkpoint, but you can re-run evaluation from a checkpoint by passing `--pretrained`. You can use TensorBoard on the checkpoint directories to see approximate learning curves during training and testing.

To evaluate with transduction, pass the `--transductive` flag. In this implementation, transductive evaluation is faster than non-transductive evaluation since it makes better use of batches.

# Comparing different inner-loop gradient combinations

Here are the commands for comparing different gradient combinations. The `--foml` flag indicates that only the final gradient should be used.

```shell
# Shared hyper-parameters for all experiments.
shared="--sgd --seed 0 --inner-batch 25 --learning-rate 0.003 --meta-step-final 0 --meta-iters 40000 --eval-batch 25 --eval-iters 5 --eval-interval 1"

python run_omniglot.py --inner-iters 1 --train-shots 5 --meta-step 0.25 --checkpoint g1_ckpt $shared | tee g1.txt
python run_omniglot.py --inner-iters 2 --train-shots 10 --meta-step 0.25 --checkpoint g1_g2_ckpt $shared | tee g1_g2.txt
python run_omniglot.py --inner-iters 2 --train-shots 10 --meta-step 0.125 --checkpoint half_g1_g2_ckpt $shared | tee half_g1_g2.txt
python run_omniglot.py --foml --inner-iters 2 --train-shots 10 --meta-step 0.25 --checkpoint g2_ckpt $shared | tee g2.txt
python run_omniglot.py --inner-iters 3 --train-shots 15 --meta-step 0.25 --checkpoint g1_g2_g3_ckpt $shared | tee g1_g2_g3.txt
python run_omniglot.py --inner-iters 3 --train-shots 15 --meta-step 0.08325 --checkpoint third_g1_g2_g3_ckpt $shared | tee third_g1_g2_g3.txt
python run_omniglot.py --foml --inner-iters 3 --train-shots 15 --meta-step 0.25 --checkpoint g3_ckpt $shared | tee g3.txt
python run_omniglot.py --foml --inner-iters 4 --train-shots 20 --meta-step 0.25 --checkpoint g4_ckpt $shared | tee g4.txt
python run_omniglot.py --inner-iters 4 --train-shots 20 --meta-step 0.25 --checkpoint g1_g2_g3_g4_ckpt $shared | tee g1_g2_g3_g4.txt
python run_omniglot.py --inner-iters 4 --train-shots 20 --meta-step 0.0625 --checkpoint fourth_g1_g2_g3_g4_ckpt $shared | tee fourth_g1_g2_g3_g4.txt
```
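To make the Reptile description at the top of this README concrete, here is a minimal NumPy sketch of the outer-loop update. It is not taken from this repository; `sample_task`, `inner_train`, the toy task, and the hyper-parameter values are illustrative stand-ins for what `supervised_reptile/reptile.py` actually does.

```python
import numpy as np

def reptile_outer_step(init_weights, sample_task, inner_train, meta_step_size):
    """One (serial) Reptile meta-update: sample a task, train on it,
    then move the initialization towards the task-adapted weights."""
    task = sample_task()
    # Train a copy of the current initialization on the sampled task.
    new_weights = inner_train({k: v.copy() for k, v in init_weights.items()}, task)
    return {
        name: w + meta_step_size * (new_weights[name] - w)
        for name, w in init_weights.items()
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    init = {"w": rng.standard_normal(3)}

    # Toy "task": regress towards a random target vector with a few SGD steps.
    def sample_task():
        return rng.standard_normal(3)

    def inner_train(weights, target, lr=0.1, steps=5):
        w = weights["w"]
        for _ in range(steps):
            w = w - lr * (w - target)   # gradient of 0.5 * ||w - target||^2
        return {"w": w}

    for _ in range(100):
        init = reptile_outer_step(init, sample_task, inner_train, meta_step_size=0.1)
    print(init["w"])  # drifts towards the mean of the task targets (~0)
```

The `--meta-batch` and `--meta-step-final` flags above suggest that the real implementation averages this update over several tasks per meta-iteration and anneals the meta step size, which this sketch omits.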
{"setup.py": "\"\"\"\nModule configuration.\n\"\"\"\n\nfrom setuptools import setup\n\nsetup(\n name='supervised-reptile',\n version='0.0.1',\n description='Reptile for supervised meta-learning',\n long_description='Reptile for supervised meta-learning',\n url='https://github.com/openai/supervised-reptile',\n author='Alex Nichol',\n author_email='[email protected]',\n license='MIT',\n keywords='ai machine learning',\n packages=['supervised_reptile'],\n install_requires=[\n 'numpy>=1.0.0,<2.0.0',\n 'Pillow>=4.0.0,<5.0.0'\n ],\n extras_require={\n \"tf\": [\"tensorflow>=1.0.0\"],\n \"tf_gpu\": [\"tensorflow-gpu>=1.0.0\"],\n }\n)\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", "web\\index.html": "<!doctype html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Few-Shot Learning</title>\n <link rel=\"stylesheet\" href=\"style.css\" type=\"text/css\">\n <script src=\"build/app.js\"></script>\n </head>\n <body>\n <div class=\"few-shot-container\"></div>\n </body>\n</html>\n"}
null
swarm
{"type": "directory", "name": "swarm", "children": [{"type": "file", "name": ".pre-commit-config.yaml"}, {"type": "directory", "name": "assets", "children": []}, {"type": "directory", "name": "examples", "children": [{"type": "directory", "name": "airline", "children": [{"type": "directory", "name": "configs", "children": [{"type": "file", "name": "agents.py"}, {"type": "file", "name": "tools.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "data", "children": [{"type": "directory", "name": "routines", "children": [{"type": "directory", "name": "baggage", "children": [{"type": "file", "name": "policies.py"}]}, {"type": "directory", "name": "flight_modification", "children": [{"type": "file", "name": "policies.py"}]}, {"type": "file", "name": "prompts.py"}]}]}, {"type": "directory", "name": "evals", "children": [{"type": "directory", "name": "eval_cases", "children": [{"type": "file", "name": "flight_modification_cases.json"}, {"type": "file", "name": "triage_cases.json"}]}, {"type": "directory", "name": "eval_results", "children": [{"type": "file", "name": "flight_modification_evals.json"}, {"type": "file", "name": "triage_evals.json"}]}, {"type": "file", "name": "eval_utils.py"}, {"type": "file", "name": "function_evals.py"}]}, {"type": "file", "name": "main.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "basic", "children": [{"type": "file", "name": "agent_handoff.py"}, {"type": "file", "name": "bare_minimum.py"}, {"type": "file", "name": "context_variables.py"}, {"type": "file", "name": "function_calling.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "simple_loop_no_helpers.py"}]}, {"type": "directory", "name": "customer_service", "children": [{"type": "directory", "name": "logs", "children": [{"type": "file", "name": "session_20240422-134602.json"}, {"type": "file", "name": "session_20240422-135231.json"}, {"type": "file", "name": "session_20240422-135321.json"}, {"type": "file", "name": "session_20240422-140035.json"}, {"type": "file", "name": "session_20240422-141344.json"}]}]}, {"type": "directory", "name": "customer_service_lite", "children": [{"type": "directory", "name": "logs", "children": [{"type": "file", "name": "session_20240425-175026.json"}, {"type": "file", "name": "session_20240425-175112.json"}, {"type": "file", "name": "session_20240425-175154.json"}, {"type": "file", "name": "session_20240425-175210.json"}]}]}, {"type": "directory", "name": "customer_service_streaming", "children": [{"type": "directory", "name": "configs", "children": [{"type": "directory", "name": "assistants", "children": [{"type": "directory", "name": "user_interface", "children": [{"type": "file", "name": "assistant.json"}]}]}, {"type": "file", "name": "general.py"}, {"type": "file", "name": "prompts.py"}, {"type": "file", "name": "swarm_tasks.json"}, {"type": "directory", "name": "tools", "children": [{"type": "directory", "name": "query_docs", "children": [{"type": "file", "name": "handler.py"}, {"type": "file", "name": "tool.json"}]}, {"type": "directory", "name": "send_email", "children": [{"type": "file", "name": "handler.py"}, {"type": "file", "name": "tool.json"}]}, {"type": "directory", "name": "submit_ticket", "children": [{"type": "file", "name": "handler.py"}, {"type": "file", "name": "tool.json"}]}]}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "data", "children": [{"type": "file", "name": "article_6233728.json"}, {"type": "file", 
"name": "article_6272941.json"}, {"type": "file", "name": "article_6272952.json"}, {"type": "file", "name": "article_6283125.json"}, {"type": "file", "name": "article_6338764.json"}, {"type": "file", "name": "article_6338765.json"}, {"type": "file", "name": "article_6378378.json"}, {"type": "file", "name": "article_6378407.json"}, {"type": "file", "name": "article_6399305.json"}, {"type": "file", "name": "article_6402865.json"}, {"type": "file", "name": "article_6425277.json"}, {"type": "file", "name": "article_6431339.json"}, {"type": "file", "name": "article_6431922.json"}, {"type": "file", "name": "article_6468065.json"}, {"type": "file", "name": "article_6485334.json"}, {"type": "file", "name": "article_6503842.json"}, {"type": "file", "name": "article_6516417.json"}, {"type": "file", "name": "article_6582257.json"}, {"type": "file", "name": "article_6582391.json"}, {"type": "file", "name": "article_6584194.json"}, {"type": "file", "name": "article_6584249.json"}, {"type": "file", "name": "article_6613520.json"}, {"type": "file", "name": "article_6613605.json"}, {"type": "file", "name": "article_6613629.json"}, {"type": "file", "name": "article_6613657.json"}, {"type": "file", "name": "article_6614161.json"}, {"type": "file", "name": "article_6614209.json"}, {"type": "file", "name": "article_6614457.json"}, {"type": "file", "name": "article_6639781.json"}, {"type": "file", "name": "article_6640792.json"}, {"type": "file", "name": "article_6640864.json"}, {"type": "file", "name": "article_6640875.json"}, {"type": "file", "name": "article_6641048.json"}, {"type": "file", "name": "article_6643004.json"}, {"type": "file", "name": "article_6643036.json"}, {"type": "file", "name": "article_6643167.json"}, {"type": "file", "name": "article_6643200.json"}, {"type": "file", "name": "article_6643435.json"}, {"type": "file", "name": "article_6653653.json"}, {"type": "file", "name": "article_6654000.json"}, {"type": "file", "name": "article_6654303.json"}, {"type": "file", "name": "article_6681258.json"}, {"type": "file", "name": "article_6684216.json"}, {"type": "file", "name": "article_6696591.json"}, {"type": "file", "name": "article_6705023.json"}, {"type": "file", "name": "article_6742369.json"}, {"type": "file", "name": "article_6781152.json"}, {"type": "file", "name": "article_6781222.json"}, {"type": "file", "name": "article_6781228.json"}, {"type": "file", "name": "article_6783457.json"}, {"type": "file", "name": "article_6811186.json"}, {"type": "file", "name": "article_6824809.json"}, {"type": "file", "name": "article_6825453.json"}, {"type": "file", "name": "article_6837156.json"}, {"type": "file", "name": "article_6843909.json"}, {"type": "file", "name": "article_6843914.json"}, {"type": "file", "name": "article_6882433.json"}, {"type": "file", "name": "article_6891753.json"}, {"type": "file", "name": "article_6891767.json"}, {"type": "file", "name": "article_6891781.json"}, {"type": "file", "name": "article_6891827.json"}, {"type": "file", "name": "article_6891829.json"}, {"type": "file", "name": "article_6891831.json"}, {"type": "file", "name": "article_6891834.json"}, {"type": "file", "name": "article_6891839.json"}, {"type": "file", "name": "article_6897179.json"}, {"type": "file", "name": "article_6897186.json"}, {"type": "file", "name": "article_6897191.json"}, {"type": "file", "name": "article_6897194.json"}, {"type": "file", "name": "article_6897198.json"}, {"type": "file", "name": "article_6897199.json"}, {"type": "file", "name": "article_6897202.json"}, {"type": "file", 
"name": "article_6897204.json"}, {"type": "file", "name": "article_6897213.json"}, {"type": "file", "name": "article_6901266.json"}, {"type": "file", "name": "article_6950777.json"}]}, {"type": "file", "name": "docker-compose.yaml"}, {"type": "directory", "name": "logs", "children": []}, {"type": "file", "name": "main.py"}, {"type": "file", "name": "prep_data.py"}, {"type": "directory", "name": "src", "children": [{"type": "file", "name": "arg_parser.py"}, {"type": "directory", "name": "evals", "children": [{"type": "file", "name": "eval_function.py"}]}, {"type": "directory", "name": "runs", "children": [{"type": "file", "name": "run.py"}]}, {"type": "directory", "name": "swarm", "children": [{"type": "file", "name": "assistants.py"}, {"type": "file", "name": "conversation.py"}, {"type": "directory", "name": "engines", "children": [{"type": "file", "name": "assistants_engine.py"}, {"type": "file", "name": "engine.py"}, {"type": "file", "name": "local_engine.py"}]}, {"type": "file", "name": "swarm.py"}, {"type": "file", "name": "tool.py"}]}, {"type": "directory", "name": "tasks", "children": [{"type": "file", "name": "task.py"}]}, {"type": "file", "name": "utils.py"}, {"type": "file", "name": "validator.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_prompts.jsonl"}, {"type": "directory", "name": "test_runs", "children": []}]}]}, {"type": "directory", "name": "personal_shopper", "children": [{"type": "file", "name": "database.py"}, {"type": "file", "name": "main.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "support_bot", "children": [{"type": "file", "name": "customer_service.py"}, {"type": "directory", "name": "data", "children": [{"type": "file", "name": "article_6233728.json"}, {"type": "file", "name": "article_6272941.json"}, {"type": "file", "name": "article_6272952.json"}, {"type": "file", "name": "article_6283125.json"}, {"type": "file", "name": "article_6338764.json"}, {"type": "file", "name": "article_6338765.json"}, {"type": "file", "name": "article_6378378.json"}, {"type": "file", "name": "article_6378407.json"}, {"type": "file", "name": "article_6399305.json"}, {"type": "file", "name": "article_6402865.json"}, {"type": "file", "name": "article_6425277.json"}, {"type": "file", "name": "article_6431339.json"}, {"type": "file", "name": "article_6431922.json"}, {"type": "file", "name": "article_6468065.json"}, {"type": "file", "name": "article_6485334.json"}, {"type": "file", "name": "article_6503842.json"}, {"type": "file", "name": "article_6516417.json"}, {"type": "file", "name": "article_6582257.json"}, {"type": "file", "name": "article_6582391.json"}, {"type": "file", "name": "article_6584194.json"}, {"type": "file", "name": "article_6584249.json"}, {"type": "file", "name": "article_6613520.json"}, {"type": "file", "name": "article_6613605.json"}, {"type": "file", "name": "article_6613629.json"}, {"type": "file", "name": "article_6613657.json"}, {"type": "file", "name": "article_6614161.json"}, {"type": "file", "name": "article_6614209.json"}, {"type": "file", "name": "article_6614457.json"}, {"type": "file", "name": "article_6639781.json"}, {"type": "file", "name": "article_6640792.json"}, {"type": "file", "name": "article_6640864.json"}, {"type": "file", "name": "article_6640875.json"}, {"type": "file", "name": "article_6641048.json"}, {"type": "file", "name": "article_6643004.json"}, {"type": "file", "name": 
"article_6643036.json"}, {"type": "file", "name": "article_6643167.json"}, {"type": "file", "name": "article_6643200.json"}, {"type": "file", "name": "article_6643435.json"}, {"type": "file", "name": "article_6653653.json"}, {"type": "file", "name": "article_6654000.json"}, {"type": "file", "name": "article_6654303.json"}, {"type": "file", "name": "article_6681258.json"}, {"type": "file", "name": "article_6684216.json"}, {"type": "file", "name": "article_6696591.json"}, {"type": "file", "name": "article_6705023.json"}, {"type": "file", "name": "article_6742369.json"}, {"type": "file", "name": "article_6781152.json"}, {"type": "file", "name": "article_6781222.json"}, {"type": "file", "name": "article_6781228.json"}, {"type": "file", "name": "article_6783457.json"}, {"type": "file", "name": "article_6811186.json"}, {"type": "file", "name": "article_6824809.json"}, {"type": "file", "name": "article_6825453.json"}, {"type": "file", "name": "article_6837156.json"}, {"type": "file", "name": "article_6843909.json"}, {"type": "file", "name": "article_6843914.json"}, {"type": "file", "name": "article_6882433.json"}, {"type": "file", "name": "article_6891753.json"}, {"type": "file", "name": "article_6891767.json"}, {"type": "file", "name": "article_6891781.json"}, {"type": "file", "name": "article_6891827.json"}, {"type": "file", "name": "article_6891829.json"}, {"type": "file", "name": "article_6891831.json"}, {"type": "file", "name": "article_6891834.json"}, {"type": "file", "name": "article_6891839.json"}, {"type": "file", "name": "article_6897179.json"}, {"type": "file", "name": "article_6897186.json"}, {"type": "file", "name": "article_6897191.json"}, {"type": "file", "name": "article_6897194.json"}, {"type": "file", "name": "article_6897198.json"}, {"type": "file", "name": "article_6897199.json"}, {"type": "file", "name": "article_6897202.json"}, {"type": "file", "name": "article_6897204.json"}, {"type": "file", "name": "article_6897213.json"}, {"type": "file", "name": "article_6901266.json"}, {"type": "file", "name": "article_6950777.json"}]}, {"type": "file", "name": "docker-compose.yaml"}, {"type": "file", "name": "main.py"}, {"type": "file", "name": "Makefile"}, {"type": "file", "name": "prep_data.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "requirements.txt"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "triage_agent", "children": [{"type": "file", "name": "agents.py"}, {"type": "file", "name": "evals.py"}, {"type": "file", "name": "evals_util.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "run.py"}]}, {"type": "directory", "name": "weather_agent", "children": [{"type": "file", "name": "agents.py"}, {"type": "file", "name": "evals.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "run.py"}]}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "LICENSE"}, {"type": "directory", "name": "logs", "children": [{"type": "file", "name": "session_20240402-112114.json"}, {"type": "file", "name": "session_20240402-112443.json"}, {"type": "file", "name": "session_20240402-112456.json"}, {"type": "file", "name": "session_20240402-112501.json"}, {"type": "file", "name": "session_20240402-113222.json"}, {"type": "file", "name": "session_20240402-113415.json"}, {"type": "file", "name": "session_20240425-135655.json"}, {"type": "file", "name": "session_20240425-135657.json"}, {"type": "file", "name": "session_20240425-135728.json"}, {"type": "file", "name": "session_20240425-140427.json"}, 
{"type": "file", "name": "session_20240425-140502.json"}, {"type": "file", "name": "session_20240425-140516.json"}, {"type": "file", "name": "session_20240425-140553.json"}, {"type": "file", "name": "session_20240425-141416.json"}, {"type": "file", "name": "session_20240425-141509.json"}, {"type": "file", "name": "session_20240425-141709.json"}, {"type": "file", "name": "session_20240425-145129.json"}, {"type": "file", "name": "session_20240425-145324.json"}, {"type": "file", "name": "session_20240425-145907.json"}, {"type": "file", "name": "session_20240425-145930.json"}, {"type": "file", "name": "session_20240425-150004.json"}, {"type": "file", "name": "session_20240425-150040.json"}, {"type": "file", "name": "session_20240425-155814.json"}, {"type": "file", "name": "session_20240425-172809.json"}, {"type": "file", "name": "session_20240425-211732.json"}, {"type": "file", "name": "session_20240425-211813.json"}, {"type": "file", "name": "session_20240425-211942.json"}, {"type": "file", "name": "session_20240425-212341.json"}, {"type": "file", "name": "session_20240425-212431.json"}, {"type": "file", "name": "session_20240425-212748.json"}, {"type": "file", "name": "session_20240425-213023.json"}]}, {"type": "file", "name": "pyproject.toml"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "SECURITY.md"}, {"type": "file", "name": "setup.cfg"}, {"type": "directory", "name": "swarm", "children": [{"type": "file", "name": "core.py"}, {"type": "directory", "name": "repl", "children": [{"type": "file", "name": "repl.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "types.py"}, {"type": "file", "name": "util.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "mock_client.py"}, {"type": "file", "name": "test_core.py"}, {"type": "directory", "name": "test_runs", "children": [{"type": "file", "name": "test_20240402-113647.json"}]}, {"type": "file", "name": "test_util.py"}, {"type": "file", "name": "__init__.py"}]}]}
# Weather agent

This example is a weather agent demonstrating function calling with a single agent. The agent has tools to get the weather of a particular city and send an email.

## Setup

To run the weather agent Swarm:

1. Run

```shell
python3 run.py
```

## Evals

> [!NOTE]
> These evals are intended to be examples to demonstrate functionality, but will have to be updated and catered to your particular use case.

This example uses `Pytest` to run eval unit tests. We have two tests in the `evals.py` file: one tests that we call the `get_weather` function when expected, and one assesses that we properly do NOT call the `get_weather` function when no tool call is warranted.

To run the evals, run

```shell
pytest evals.py
```
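The eval pattern described above can be sketched roughly as follows. This is a hedged illustration, not the example's actual `evals.py`: it assumes the example's `agents.py` exposes a `weather_agent` whose weather tool is named `get_weather` (as stated in the README), and it checks the tool calls recorded in the messages returned by `Swarm.run`.

```python
# Hypothetical eval sketch in the spirit of the README above; the real
# evals.py may be structured differently.
from agents import weather_agent  # assumed to be defined by this example

from swarm import Swarm

client = Swarm()


def tool_call_names(messages):
    # Collect the names of all tool calls the model made during the run.
    return [
        call["function"]["name"]
        for message in messages
        for call in (message.get("tool_calls") or [])
    ]


def test_calls_get_weather_when_asked_about_weather():
    response = client.run(
        agent=weather_agent,
        messages=[{"role": "user", "content": "What's the weather in New York City?"}],
    )
    assert "get_weather" in tool_call_names(response.messages)


def test_does_not_call_get_weather_for_unrelated_question():
    response = client.run(
        agent=weather_agent,
        messages=[{"role": "user", "content": "Who wrote Hamlet?"}],
    )
    assert "get_weather" not in tool_call_names(response.messages)
```

Both tests hit a live model, so results are not fully deterministic; in practice you would rerun or aggregate them, in line with the note above about adapting these evals to your own use case.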
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 9db581cecaacea0d46a933d6453c312b034dbf47 Hamza Amin <[email protected]> 1729337640 +0500\tclone: from https://github.com/openai/swarm.git\n", ".git\\refs\\heads\\main": "9db581cecaacea0d46a933d6453c312b034dbf47\n", "examples\\airline\\main.py": "from configs.agents import *\nfrom swarm.repl import run_demo_loop\n\ncontext_variables = {\n \"customer_context\": \"\"\"Here is what you know about the customer's details:\n1. CUSTOMER_ID: customer_12345\n2. NAME: John Doe\n3. PHONE_NUMBER: (123) 456-7890\n4. EMAIL: [email protected]\n5. STATUS: Premium\n6. ACCOUNT_STATUS: Active\n7. BALANCE: $0.00\n8. LOCATION: 1234 Main St, San Francisco, CA 94123, USA\n\"\"\",\n \"flight_context\": \"\"\"The customer has an upcoming flight from LGA (Laguardia) in NYC to LAX in Los Angeles.\nThe flight # is 1919. 
The flight departure date is 3pm ET, 5/21/2024.\"\"\",\n}\nif __name__ == \"__main__\":\n run_demo_loop(triage_agent, context_variables=context_variables, debug=True)\n", "examples\\customer_service_streaming\\main.py": "import shlex\nimport argparse\nfrom src.swarm.swarm import Swarm\nfrom src.tasks.task import Task\nfrom configs.general import test_root, test_file, engine_name, persist\nfrom src.validator import validate_all_tools, validate_all_assistants\nfrom src.arg_parser import parse_args\n\n\ndef main():\n args = parse_args()\n try:\n validate_all_tools(engine_name)\n validate_all_assistants()\n except:\n raise Exception(\"Validation failed\")\n\n swarm = Swarm(\n engine_name=engine_name, persist=persist)\n\n if args.test is not None:\n test_files = args.test\n if len(test_files) == 0:\n test_file_paths = [f\"{test_root}/{test_file}\"]\n else:\n test_file_paths = [f\"{test_root}/{file}\" for file in test_files]\n swarm = Swarm(engine_name='local')\n swarm.deploy(test_mode=True, test_file_paths=test_file_paths)\n\n elif args.input:\n # Interactive mode for adding tasks\n while True:\n print(\"Enter a task (or 'exit' to quit):\")\n task_input = input()\n\n # Check for exit command\n if task_input.lower() == 'exit':\n break\n\n # Use shlex to parse the task description and arguments\n task_args = shlex.split(task_input)\n task_parser = argparse.ArgumentParser()\n task_parser.add_argument(\"description\", type=str, nargs='?', default=\"\")\n task_parser.add_argument(\"--iterate\", action=\"store_true\", help=\"Set the iterate flag for the new task.\")\n task_parser.add_argument(\"--evaluate\", action=\"store_true\", help=\"Set the evaluate flag for the new task.\")\n task_parser.add_argument(\"--assistant\", type=str, default=\"user_interface\", help=\"Specify the assistant for the new task.\")\n\n # Parse task arguments\n task_parsed_args = task_parser.parse_args(task_args)\n\n # Create and add the new task\n new_task = Task(description=task_parsed_args.description,\n iterate=task_parsed_args.iterate,\n evaluate=task_parsed_args.evaluate,\n assistant=task_parsed_args.assistant)\n swarm.add_task(new_task)\n\n # Deploy Swarm with the new task\n swarm.deploy()\n swarm.tasks.clear()\n\n else:\n # Load predefined tasks if any\n # Deploy the Swarm for predefined tasks\n swarm.load_tasks()\n swarm.deploy()\n\n print(\"\\n\\n\ud83c\udf6f\ud83d\udc1d\ud83c\udf6f Swarm operations complete \ud83c\udf6f\ud83d\udc1d\ud83c\udf6f\\n\\n\")\n\n\nif __name__ == \"__main__\":\n main()\n", "examples\\personal_shopper\\main.py": "import datetime\nimport random\n\nimport database\nfrom swarm import Agent\nfrom swarm.agents import create_triage_agent\nfrom swarm.repl import run_demo_loop\n\n\ndef refund_item(user_id, item_id):\n \"\"\"Initiate a refund based on the user ID and item ID.\n Takes as input arguments in the format '{\"user_id\":\"1\",\"item_id\":\"3\"}'\n \"\"\"\n conn = database.get_connection()\n cursor = conn.cursor()\n cursor.execute(\n \"\"\"\n SELECT amount FROM PurchaseHistory\n WHERE user_id = ? 
AND item_id = ?\n \"\"\",\n (user_id, item_id),\n )\n result = cursor.fetchone()\n if result:\n amount = result[0]\n print(f\"Refunding ${amount} to user ID {user_id} for item ID {item_id}.\")\n else:\n print(f\"No purchase found for user ID {user_id} and item ID {item_id}.\")\n print(\"Refund initiated\")\n\n\ndef notify_customer(user_id, method):\n \"\"\"Notify a customer by their preferred method of either phone or email.\n Takes as input arguments in the format '{\"user_id\":\"1\",\"method\":\"email\"}'\"\"\"\n\n conn = database.get_connection()\n cursor = conn.cursor()\n cursor.execute(\n \"\"\"\n SELECT email, phone FROM Users\n WHERE user_id = ?\n \"\"\",\n (user_id,),\n )\n user = cursor.fetchone()\n if user:\n email, phone = user\n if method == \"email\" and email:\n print(f\"Emailed customer {email} a notification.\")\n elif method == \"phone\" and phone:\n print(f\"Texted customer {phone} a notification.\")\n else:\n print(f\"No {method} contact available for user ID {user_id}.\")\n else:\n print(f\"User ID {user_id} not found.\")\n\n\ndef order_item(user_id, product_id):\n \"\"\"Place an order for a product based on the user ID and product ID.\n Takes as input arguments in the format '{\"user_id\":\"1\",\"product_id\":\"2\"}'\"\"\"\n date_of_purchase = datetime.datetime.now()\n item_id = random.randint(1, 300)\n\n conn = database.get_connection()\n cursor = conn.cursor()\n cursor.execute(\n \"\"\"\n SELECT product_id, product_name, price FROM Products\n WHERE product_id = ?\n \"\"\",\n (product_id,),\n )\n result = cursor.fetchone()\n if result:\n product_id, product_name, price = result\n print(\n f\"Ordering product {product_name} for user ID {user_id}. The price is {price}.\"\n )\n # Add the purchase to the database\n database.add_purchase(user_id, date_of_purchase, item_id, price)\n else:\n print(f\"Product {product_id} not found.\")\n\n\n# Initialize the database\ndatabase.initialize_database()\n\n# Preview tables\ndatabase.preview_table(\"Users\")\ndatabase.preview_table(\"PurchaseHistory\")\ndatabase.preview_table(\"Products\")\n\n# Define the agents\n\nrefunds_agent = Agent(\n name=\"Refunds Agent\",\n description=f\"\"\"You are a refund agent that handles all actions related to refunds after a return has been processed.\n You must ask for both the user ID and item ID to initiate a refund. Ask for both user_id and item_id in one message.\n If the user asks you to notify them, you must ask them what their preferred method of notification is. For notifications, you must\n ask them for user_id and method in one message.\"\"\",\n functions=[refund_item, notify_customer],\n)\n\nsales_agent = Agent(\n name=\"Sales Agent\",\n description=f\"\"\"You are a sales agent that handles all actions related to placing an order to purchase an item.\n Regardless of what the user wants to purchase, must ask for BOTH the user ID and product ID to place an order.\n An order cannot be placed without these two pieces of information. Ask for both user_id and product_id in one message.\n If the user asks you to notify them, you must ask them what their preferred method is. 
For notifications, you must\n ask them for user_id and method in one message.\n \"\"\",\n functions=[order_item, notify_customer],\n)\n\ntriage_agent = create_triage_agent(\n name=\"Triage Agent\",\n instructions=f\"\"\"You are to triage a users request, and call a tool to transfer to the right intent.\n Once you are ready to transfer to the right intent, call the tool to transfer to the right intent.\n You dont need to know specifics, just the topic of the request.\n If the user request is about making an order or purchasing an item, transfer to the Sales Agent.\n If the user request is about getting a refund on an item or returning a product, transfer to the Refunds Agent.\n When you need more information to triage the request to an agent, ask a direct question without explaining why you're asking it.\n Do not share your thought process with the user! Do not make unreasonable assumptions on behalf of user.\"\"\",\n agents=[sales_agent, refunds_agent],\n add_backlinks=True,\n)\n\nfor f in triage_agent.functions:\n print(f.__name__)\n\nif __name__ == \"__main__\":\n # Run the demo loop\n run_demo_loop(triage_agent, debug=False)\n", "examples\\support_bot\\main.py": "import re\n\nimport qdrant_client\nfrom openai import OpenAI\n\nfrom swarm import Agent\nfrom swarm.repl import run_demo_loop\n\n# Initialize connections\nclient = OpenAI()\nqdrant = qdrant_client.QdrantClient(host=\"localhost\")\n\n# Set embedding model\nEMBEDDING_MODEL = \"text-embedding-3-large\"\n\n# Set qdrant collection\ncollection_name = \"help_center\"\n\n\ndef query_qdrant(query, collection_name, vector_name=\"article\", top_k=5):\n # Creates embedding vector from user query\n embedded_query = (\n client.embeddings.create(\n input=query,\n model=EMBEDDING_MODEL,\n )\n .data[0]\n .embedding\n )\n\n query_results = qdrant.search(\n collection_name=collection_name,\n query_vector=(vector_name, embedded_query),\n limit=top_k,\n )\n\n return query_results\n\n\ndef query_docs(query):\n \"\"\"Query the knowledge base for relevant articles.\"\"\"\n print(f\"Searching knowledge base with query: {query}\")\n query_results = query_qdrant(query, collection_name=collection_name)\n output = []\n\n for i, article in enumerate(query_results):\n title = article.payload[\"title\"]\n text = article.payload[\"text\"]\n url = article.payload[\"url\"]\n\n output.append((title, text, url))\n\n if output:\n title, content, _ = output[0]\n response = f\"Title: {title}\\nContent: {content}\"\n truncated_content = re.sub(\n r\"\\s+\", \" \", content[:50] + \"...\" if len(content) > 50 else content\n )\n print(\"Most relevant article title:\", truncated_content)\n return {\"response\": response}\n else:\n print(\"No results\")\n return {\"response\": \"No results found.\"}\n\n\ndef send_email(email_address, message):\n \"\"\"Send an email to the user.\"\"\"\n response = f\"Email sent to: {email_address} with message: {message}\"\n return {\"response\": response}\n\n\ndef submit_ticket(description):\n \"\"\"Submit a ticket for the user.\"\"\"\n return {\"response\": f\"Ticket created for {description}\"}\n\n\ndef transfer_to_help_center():\n \"\"\"Transfer the user to the help center agent.\"\"\"\n return help_center_agent\n\n\nuser_interface_agent = Agent(\n name=\"User Interface Agent\",\n instructions=\"You are a user interface agent that handles all interactions with the user. 
Call this agent for general questions and when no other agent is correct for the user query.\",\n functions=[transfer_to_help_center],\n)\n\nhelp_center_agent = Agent(\n name=\"Help Center Agent\",\n instructions=\"You are an OpenAI help center agent who deals with questions about OpenAI products, such as GPT models, DALL-E, Whisper, etc.\",\n functions=[query_docs, submit_ticket, send_email],\n)\n\nif __name__ == \"__main__\":\n run_demo_loop(user_interface_agent)\n", "examples\\support_bot\\requirements.txt": "qdrant-client"}
null
tabulate
{"type": "directory", "name": "tabulate", "children": [{"type": "directory", "name": "excel-addin", "children": [{"type": "file", "name": ".eslintrc.json"}, {"type": "directory", "name": "assets", "children": []}, {"type": "file", "name": "manifest.xml"}, {"type": "file", "name": "package-lock.json"}, {"type": "file", "name": "package.json"}, {"type": "directory", "name": "src", "children": [{"type": "directory", "name": "commands", "children": [{"type": "file", "name": "commands.html"}, {"type": "file", "name": "commands.js"}]}, {"type": "directory", "name": "taskpane", "children": [{"type": "file", "name": "taskpane.css"}, {"type": "file", "name": "taskpane.html"}, {"type": "file", "name": "taskpane.js"}]}]}, {"type": "file", "name": "tsconfig.json"}, {"type": "file", "name": "webpack.config.js"}]}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "README.md"}]}
# OpenAI API Excel integration

(Update 2022-05-09: This code is no longer being maintained and there are no expectations that it will work. For the most up-to-date documentation on our API, consider visiting https://beta.openai.com/examples or https://github.com/openai/openai-python)

This repository contains an example OpenAI API integration for Excel. It allows users to query the API to automatically generate Excel tables about topics. For more details see the [API blog post](https://openai.com/blog/openai-api/).

The integration is an Excel TaskPane Add-in, which is structured as an HTML / CSS / JavaScript web app running in an iframe. See the following links for more info:

- https://docs.microsoft.com/en-us/office/dev/add-ins/overview/learning-path-beginner
- https://docs.microsoft.com/en-us/office/dev/add-ins/excel/excel-add-ins-core-concepts

## Setup

Add your OpenAI API key and organization at the top of `excel-addin/src/taskpane/taskpane.js` (search for `***KEY HERE***` and `***ORG HERE***`).

To start the local development server from the `excel-addin` directory:

- `brew install node@12` (Node LTS)
- `npm install`
- `npm run dev-server`

Open Excel for the web. Click "Insert" Menu (Ribbon) > Click "Office Add-ins" > Click "Upload My Add-in" in the upper right corner > Select `excel-addin/manifest.xml` ([source](https://docs.microsoft.com/en-us/office/dev/add-ins/testing/sideload-office-add-ins-for-testing#sideload-an-office-add-in-in-office-on-the-web)).

You should see a new "OpenAI API" command group on the "Home" ribbon; click the "Tabulate" button to open the sidebar with API commands.
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", "excel-addin\\package.json": "{\n \"name\": \"office-addin-taskpane-js\",\n \"version\": \"0.0.1\",\n \"repository\": {\n \"type\": \"git\",\n \"url\": \"https://github.com/OfficeDev/Office-Addin-TaskPane-JS.git\"\n },\n \"license\": \"MIT\",\n \"config\": {\n \"app-to-debug\": \"excel\",\n \"app-type-to-debug\": \"desktop\",\n \"dev-server-port\": 3000\n },\n \"scripts\": {\n \"build\": \"webpack -p --mode production --https false\",\n \"build:dev\": \"webpack --mode development --https false\",\n \"build-dev\": \"webpack --mode development --https false && echo . && echo . && echo . && echo Please use 'build:dev' instead of 'build-dev'.\",\n \"dev-server\": \"webpack-dev-server --mode development\",\n \"lint\": \"office-addin-lint check\",\n \"lint:fix\": \"office-addin-lint fix\",\n \"prettier\": \"office-addin-lint prettier\",\n \"start\": \"office-addin-debugging start manifest.xml\",\n \"start:desktop\": \"office-addin-debugging start manifest.xml desktop\",\n \"start:web\": \"office-addin-debugging start manifest.xml web\",\n \"stop\": \"office-addin-debugging stop manifest.xml\",\n \"validate\": \"office-addin-manifest validate manifest.xml\",\n \"watch\": \"webpack --mode development --watch\"\n },\n \"dependencies\": {\n \"js-sha256\": \"^0.9.0\",\n \"sync-request\": \"^6.1.0\"\n },\n \"devDependencies\": {\n \"@babel/core\": \"^7.9.0\",\n \"@babel/polyfill\": \"^7.8.7\",\n \"@babel/preset-env\": \"^7.9.0\",\n \"@types/find-process\": \"1.2.0\",\n \"@types/office-js\": \"^1.0.91\",\n \"@types/office-runtime\": \"^1.0.13\",\n \"babel-loader\": \"^8.1.0\",\n \"clean-webpack-plugin\": \"^3.0.0\",\n \"copy-webpack-plugin\": \"^6.3.1\",\n \"eslint-config-office-addins\": \"^1.0.14\",\n \"file-loader\": \"^4.2.0\",\n \"find-process\": \"^1.4.3\",\n \"html-loader\": \"^0.5.5\",\n \"html-webpack-plugin\": \"^4.0.4\",\n \"office-addin-cli\": \"^1.0.9\",\n \"office-addin-debugging\": \"^3.0.25\",\n \"office-addin-dev-certs\": \"^1.5.0\",\n \"office-addin-lint\": \"^1.0.21\",\n \"office-addin-manifest\": \"^1.5.0\",\n \"office-addin-prettier-config\": \"^1.0.12\",\n \"source-map-loader\": \"^0.2.4\",\n \"ts-loader\": \"^6.2.2\",\n \"typescript\": \"^3.8.3\",\n \"webpack\": \"^4.42.1\",\n \"webpack-cli\": \"^3.3.11\",\n \"webpack-dev-server\": \"^3.11.0\"\n },\n \"prettier\": \"office-addin-prettier-config\"\n}\n"}
null
tiktoken
{"type": "directory", "name": "tiktoken", "children": [{"type": "file", "name": "Cargo.toml"}, {"type": "file", "name": "CHANGELOG.md"}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "MANIFEST.in"}, {"type": "file", "name": "perf.svg"}, {"type": "file", "name": "pyproject.toml"}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "scripts", "children": [{"type": "file", "name": "benchmark.py"}, {"type": "file", "name": "redact.py"}]}, {"type": "file", "name": "setup.py"}, {"type": "directory", "name": "src", "children": [{"type": "file", "name": "lib.rs"}]}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_encoding.py"}, {"type": "file", "name": "test_helpers.py"}, {"type": "file", "name": "test_misc.py"}, {"type": "file", "name": "test_offsets.py"}, {"type": "file", "name": "test_pickle.py"}, {"type": "file", "name": "test_simple_public.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "tiktoken", "children": [{"type": "file", "name": "core.py"}, {"type": "file", "name": "load.py"}, {"type": "file", "name": "model.py"}, {"type": "file", "name": "py.typed"}, {"type": "file", "name": "registry.py"}, {"type": "file", "name": "_educational.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "tiktoken_ext", "children": [{"type": "file", "name": "openai_public.py"}]}]}
# ⏳ tiktoken

tiktoken is a fast [BPE](https://en.wikipedia.org/wiki/Byte_pair_encoding) tokeniser for use with OpenAI's models.

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")
assert enc.decode(enc.encode("hello world")) == "hello world"

# To get the tokeniser corresponding to a specific model in the OpenAI API:
enc = tiktoken.encoding_for_model("gpt-4o")
```

The open source version of `tiktoken` can be installed from PyPI:

```
pip install tiktoken
```

The tokeniser API is documented in `tiktoken/core.py`.

Example code using `tiktoken` can be found in the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb).

## Performance

`tiktoken` is between 3-6x faster than a comparable open source tokeniser:

![image](https://raw.githubusercontent.com/openai/tiktoken/main/perf.svg)

Performance measured on 1GB of text using the GPT-2 tokeniser, using `GPT2TokenizerFast` from `tokenizers==0.13.2`, `transformers==4.24.0` and `tiktoken==0.2.0`.

## Getting help

Please post questions in the [issue tracker](https://github.com/openai/tiktoken/issues).

If you work at OpenAI, make sure to check the internal documentation or feel free to contact @shantanu.

## What is BPE anyway?

Language models don't see text the way you and I do; instead, they see a sequence of numbers (known as tokens). Byte pair encoding (BPE) is a way of converting text into tokens. It has a couple of desirable properties:
1) It's reversible and lossless, so you can convert tokens back into the original text
2) It works on arbitrary text, even text that is not in the tokeniser's training data
3) It compresses the text: the token sequence is shorter than the bytes corresponding to the original text. On average, in practice, each token corresponds to about 4 bytes.
4) It attempts to let the model see common subwords. For instance, "ing" is a common subword in English, so BPE encodings will often split "encoding" into tokens like "encod" and "ing" (instead of e.g. "enc" and "oding"). Because the model will then see the "ing" token again and again in different contexts, it helps models generalise and better understand grammar.

`tiktoken` contains an educational submodule that is friendlier if you want to learn more about the details of BPE, including code that helps visualise the BPE procedure:

```python
from tiktoken._educational import *

# Train a BPE tokeniser on a small amount of text
enc = train_simple_encoding()

# Visualise how the GPT-4 encoder encodes text
enc = SimpleBytePairEncoding.from_tiktoken("cl100k_base")
enc.encode("hello world aaaaaaaaaaaa")
```

## Extending tiktoken

You may wish to extend `tiktoken` to support new encodings. There are two ways to do this.

**Create your `Encoding` object exactly the way you want and simply pass it around.**

```python
cl100k_base = tiktoken.get_encoding("cl100k_base")

# In production, load the arguments directly instead of accessing private attributes
# See openai_public.py for examples of arguments for specific encodings
enc = tiktoken.Encoding(
    # If you're changing the set of special tokens, make sure to use a different name
    # It should be clear from the name what behaviour to expect.
    name="cl100k_im",
    pat_str=cl100k_base._pat_str,
    mergeable_ranks=cl100k_base._mergeable_ranks,
    special_tokens={
        **cl100k_base._special_tokens,
        "<|im_start|>": 100264,
        "<|im_end|>": 100265,
    }
)
```

**Use the `tiktoken_ext` plugin mechanism to register your `Encoding` objects with `tiktoken`.**

This is only useful if you need `tiktoken.get_encoding` to find your encoding, otherwise prefer option 1.

To do this, you'll need to create a namespace package under `tiktoken_ext`.

Lay out your project like this, making sure to omit the `tiktoken_ext/__init__.py` file:

```
my_tiktoken_extension
├── tiktoken_ext
│   └── my_encodings.py
└── setup.py
```

`my_encodings.py` should be a module that contains a variable named `ENCODING_CONSTRUCTORS`. This is a dictionary from an encoding name to a function that takes no arguments and returns arguments that can be passed to `tiktoken.Encoding` to construct that encoding. For an example, see `tiktoken_ext/openai_public.py`. For precise details, see `tiktoken/registry.py`.

Your `setup.py` should look something like this:

```python
from setuptools import setup, find_namespace_packages

setup(
    name="my_tiktoken_extension",
    packages=find_namespace_packages(include=['tiktoken_ext*']),
    install_requires=["tiktoken"],
    ...
)
```

Then simply `pip install ./my_tiktoken_extension` and you should be able to use your custom encodings! Make sure **not** to use an editable install.
{"setup.py": "from setuptools import setup\nfrom setuptools_rust import Binding, RustExtension\n\nsetup(\n name=\"tiktoken\",\n rust_extensions=[\n RustExtension(\n \"tiktoken._tiktoken\",\n binding=Binding.PyO3,\n # Between our use of editable installs and wanting to use Rust for performance sensitive\n # code, it makes sense to just always use --release\n debug=False,\n )\n ],\n package_data={\"tiktoken\": [\"py.typed\"]},\n packages=[\"tiktoken\", \"tiktoken_ext\"],\n zip_safe=False,\n)\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 63527649963def8c759b0f91f2eb69a40934e468 Hamza Amin <[email protected]> 1729337354 +0500\tclone: from https://github.com/openai/tiktoken.git\n", ".git\\refs\\heads\\main": "63527649963def8c759b0f91f2eb69a40934e468\n"}
null
transformer-debugger
{"type": "directory", "name": "transformer-debugger", "children": [{"type": "file", "name": ".isort.cfg"}, {"type": "file", "name": ".pre-commit-config.yaml"}, {"type": "file", "name": "datasets.md"}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "mypy.ini"}, {"type": "directory", "name": "neuron_explainer", "children": [{"type": "directory", "name": "activations", "children": [{"type": "file", "name": "activations.py"}, {"type": "file", "name": "activation_records.py"}, {"type": "file", "name": "attention_utils.py"}, {"type": "directory", "name": "derived_scalars", "children": [{"type": "file", "name": "activations_and_metadata.py"}, {"type": "file", "name": "attention.py"}, {"type": "file", "name": "autoencoder.py"}, {"type": "file", "name": "config.py"}, {"type": "file", "name": "derived_scalar_store.py"}, {"type": "file", "name": "derived_scalar_types.py"}, {"type": "file", "name": "direct_effects.py"}, {"type": "file", "name": "edge_activation.py"}, {"type": "file", "name": "edge_attribution.py"}, {"type": "file", "name": "indexing.py"}, {"type": "file", "name": "least_common_tokens.py"}, {"type": "file", "name": "locations.py"}, {"type": "file", "name": "logprobs.py"}, {"type": "file", "name": "make_scalar_derivers.py"}, {"type": "file", "name": "mlp.py"}, {"type": "file", "name": "multi_group.py"}, {"type": "file", "name": "multi_pass_scalar_deriver.py"}, {"type": "file", "name": "node_write.py"}, {"type": "file", "name": "postprocessing.py"}, {"type": "file", "name": "raw_activations.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "reconstituted.py"}, {"type": "file", "name": "reconstituter_class.py"}, {"type": "file", "name": "residual.py"}, {"type": "file", "name": "scalar_deriver.py"}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_attention.py"}, {"type": "file", "name": "test_derived_scalar_store.py"}, {"type": "file", "name": "test_derived_scalar_types.py"}, {"type": "file", "name": "utils.py"}]}, {"type": "file", "name": "tokens.py"}, {"type": "file", "name": "utils.py"}, {"type": "file", "name": "write_tensors.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "hook_graph.py"}, {"type": "file", "name": "test_attention_utils.py"}]}, {"type": "directory", "name": "activation_server", "children": [{"type": "file", "name": "derived_scalar_computation.py"}, {"type": "file", "name": "dst_helpers.py"}, {"type": "file", "name": "explainer_routes.py"}, {"type": "file", "name": "explanation_datasets.py"}, {"type": "file", "name": "inference_routes.py"}, {"type": "file", "name": "interactive_model.py"}, {"type": "file", "name": "load_neurons.py"}, {"type": "file", "name": "main.py"}, {"type": "file", "name": "neuron_datasets.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "read_routes.py"}, {"type": "file", "name": "requests_and_responses.py"}, {"type": "file", "name": "tdb_conversions.py"}]}, {"type": "file", "name": "api_client.py"}, {"type": "directory", "name": "explanations", "children": [{"type": "file", "name": "attention_head_scoring.py"}, {"type": "file", "name": "calibrated_simulator.py"}, {"type": "file", "name": "explainer.py"}, {"type": "file", "name": "explanations.py"}, {"type": "file", "name": "few_shot_examples.py"}, {"type": "file", "name": "prompt_builder.py"}, {"type": "file", "name": "scoring.py"}, {"type": "file", "name": "simulator.py"}, {"type": "file", "name": "test_explainer.py"}, {"type": "file", "name": "test_simulator.py"}, {"type": 
"file", "name": "__init__.py"}]}, {"type": "directory", "name": "fast_dataclasses", "children": [{"type": "file", "name": "fast_dataclasses.py"}, {"type": "file", "name": "test_fast_dataclasses.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "file_utils.py"}, {"type": "directory", "name": "models", "children": [{"type": "file", "name": "autoencoder.py"}, {"type": "file", "name": "autoencoder_context.py"}, {"type": "file", "name": "hooks.py"}, {"type": "file", "name": "inference_engine_type_registry.py"}, {"type": "file", "name": "model_component_registry.py"}, {"type": "file", "name": "model_context.py"}, {"type": "file", "name": "model_registry.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "transformer.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "pydantic", "children": [{"type": "file", "name": "camel_case_base_model.py"}, {"type": "file", "name": "hashable_base_model.py"}, {"type": "file", "name": "immutable.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "scripts", "children": [{"type": "file", "name": "create_hf_test_data.py"}, {"type": "file", "name": "download_from_hf.py"}]}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "conftest.py"}, {"type": "file", "name": "test_activation_reconstituter.py"}, {"type": "file", "name": "test_against_data.py"}, {"type": "file", "name": "test_all_dsts.py"}, {"type": "file", "name": "test_emb_dsts.py"}, {"type": "file", "name": "test_hooks.py"}, {"type": "file", "name": "test_interactive_model.py"}, {"type": "file", "name": "test_model_context_get_weight.py"}, {"type": "file", "name": "test_offline_autoencoder_dsts.py"}, {"type": "file", "name": "test_online_autoencoder_dsts.py"}, {"type": "file", "name": "test_postprocessing.py"}, {"type": "file", "name": "test_reconstituted_gradients.py"}, {"type": "file", "name": "test_serialization_of_model_config_from_model_context.py"}, {"type": "file", "name": "test_trace_through_v.py"}, {"type": "file", "name": "test_transformer.py"}]}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "neuron_viewer", "children": [{"type": "file", "name": ".parcelrc"}, {"type": "file", "name": ".postcssrc"}, {"type": "file", "name": ".prettierrc"}, {"type": "file", "name": "package-lock.json"}, {"type": "file", "name": "package.json"}, {"type": "file", "name": "prepend_autogen_comments.sh"}, {"type": "directory", "name": "public", "children": [{"type": "file", "name": "favicon.ico"}, {"type": "file", "name": "manifest.json"}, {"type": "file", "name": "robots.txt"}]}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "src", "children": [{"type": "file", "name": "App.css"}, {"type": "file", "name": "App.tsx"}, {"type": "directory", "name": "client", "children": [{"type": "directory", "name": "core", "children": [{"type": "file", "name": "ApiError.ts"}, {"type": "file", "name": "ApiRequestOptions.ts"}, {"type": "file", "name": "ApiResult.ts"}, {"type": "file", "name": "CancelablePromise.ts"}, {"type": "file", "name": "OpenAPI.ts"}, {"type": "file", "name": "request.ts"}]}, {"type": "file", "name": "index.ts"}, {"type": "directory", "name": "models", "children": [{"type": "file", "name": "AblationSpec.ts"}, {"type": "file", "name": "ActivationLocationType.ts"}, {"type": "file", "name": "AttentionHeadRecordResponse.ts"}, {"type": "file", "name": "AttentionTraceType.ts"}, {"type": "file", "name": "AttributedScoredExplanation.ts"}, {"type": 
"file", "name": "BatchedRequest.ts"}, {"type": "file", "name": "BatchedResponse.ts"}, {"type": "file", "name": "BatchedTdbRequest.ts"}, {"type": "file", "name": "ComponentTypeForAttention.ts"}, {"type": "file", "name": "ComponentTypeForMlp.ts"}, {"type": "file", "name": "DerivedAttentionScalarsRequest.ts"}, {"type": "file", "name": "DerivedAttentionScalarsRequestSpec.ts"}, {"type": "file", "name": "DerivedAttentionScalarsResponse.ts"}, {"type": "file", "name": "DerivedAttentionScalarsResponseData.ts"}, {"type": "file", "name": "DerivedScalarsRequest.ts"}, {"type": "file", "name": "DerivedScalarsRequestSpec.ts"}, {"type": "file", "name": "DerivedScalarsResponse.ts"}, {"type": "file", "name": "DerivedScalarsResponseData.ts"}, {"type": "file", "name": "DerivedScalarType.ts"}, {"type": "file", "name": "Dimension.ts"}, {"type": "file", "name": "ExistingExplanationsRequest.ts"}, {"type": "file", "name": "ExplanationResult.ts"}, {"type": "file", "name": "GroupId.ts"}, {"type": "file", "name": "HTTPValidationError.ts"}, {"type": "file", "name": "InferenceAndTokenData.ts"}, {"type": "file", "name": "InferenceRequestSpec.ts"}, {"type": "file", "name": "InferenceResponse.ts"}, {"type": "file", "name": "InferenceResponseAndResponseDict.ts"}, {"type": "file", "name": "InferenceSubRequest.ts"}, {"type": "file", "name": "LossFnConfig.ts"}, {"type": "file", "name": "LossFnName.ts"}, {"type": "file", "name": "MirroredActivationIndex.ts"}, {"type": "file", "name": "MirroredNodeIndex.ts"}, {"type": "file", "name": "MirroredTraceConfig.ts"}, {"type": "file", "name": "ModelInfoResponse.ts"}, {"type": "file", "name": "MultipleTopKDerivedScalarsRequest.ts"}, {"type": "file", "name": "MultipleTopKDerivedScalarsRequestSpec.ts"}, {"type": "file", "name": "MultipleTopKDerivedScalarsResponse.ts"}, {"type": "file", "name": "MultipleTopKDerivedScalarsResponseData.ts"}, {"type": "file", "name": "NeuronDatasetMetadata.ts"}, {"type": "file", "name": "NeuronRecordResponse.ts"}, {"type": "file", "name": "NodeAblation.ts"}, {"type": "file", "name": "NodeIdAndDatasets.ts"}, {"type": "file", "name": "NodeToTrace.ts"}, {"type": "file", "name": "NodeType.ts"}, {"type": "file", "name": "PassType.ts"}, {"type": "file", "name": "PreOrPostAct.ts"}, {"type": "file", "name": "ProcessingResponseDataType.ts"}, {"type": "file", "name": "ScoredTokensRequestSpec.ts"}, {"type": "file", "name": "ScoredTokensResponseData.ts"}, {"type": "file", "name": "ScoreRequest.ts"}, {"type": "file", "name": "ScoreResult.ts"}, {"type": "file", "name": "TdbRequestSpec.ts"}, {"type": "file", "name": "Tensor0D.ts"}, {"type": "file", "name": "Tensor1D.ts"}, {"type": "file", "name": "Tensor2D.ts"}, {"type": "file", "name": "Tensor3D.ts"}, {"type": "file", "name": "TensorType.ts"}, {"type": "file", "name": "TokenAndAttentionScalars.ts"}, {"type": "file", "name": "TokenAndScalar.ts"}, {"type": "file", "name": "TokenPairAttributionRequestSpec.ts"}, {"type": "file", "name": "TokenPairAttributionResponseData.ts"}, {"type": "file", "name": "TokenScoringType.ts"}, {"type": "file", "name": "TopTokens.ts"}, {"type": "file", "name": "TopTokensAttendedTo.ts"}, {"type": "file", "name": "ValidationError.ts"}]}, {"type": "directory", "name": "services", "children": [{"type": "file", "name": "ExplainerService.ts"}, {"type": "file", "name": "HelloWorldService.ts"}, {"type": "file", "name": "InferenceService.ts"}, {"type": "file", "name": "MemoryService.ts"}, {"type": "file", "name": "ReadService.ts"}]}]}, {"type": "file", "name": "colors.ts"}, {"type": "file", "name": 
"commonUiComponents.tsx"}, {"type": "file", "name": "heatmapGrid.tsx"}, {"type": "file", "name": "heatmapGrid2d.tsx"}, {"type": "file", "name": "images.d.ts"}, {"type": "file", "name": "index.css"}, {"type": "file", "name": "index.html"}, {"type": "file", "name": "index.tsx"}, {"type": "file", "name": "modelInteractions.tsx"}, {"type": "file", "name": "navigation.tsx"}, {"type": "file", "name": "nodePage.tsx"}, {"type": "directory", "name": "panes", "children": [{"type": "file", "name": "activationsForPrompt.tsx"}, {"type": "file", "name": "datasetExamples.tsx"}, {"type": "file", "name": "explanation.tsx"}, {"type": "file", "name": "fetchAndDisplayPane.tsx"}, {"type": "file", "name": "index.ts"}, {"type": "file", "name": "logitLens.tsx"}, {"type": "file", "name": "scoreExplanation.tsx"}]}, {"type": "file", "name": "plots.tsx"}, {"type": "directory", "name": "requests", "children": [{"type": "file", "name": "explainerRequests.ts"}, {"type": "file", "name": "inferenceRequests.ts"}, {"type": "file", "name": "paths.ts"}, {"type": "file", "name": "readRequests.ts"}]}, {"type": "file", "name": "tokenHeatmap.tsx"}, {"type": "file", "name": "tokenHeatmap2d.tsx"}, {"type": "file", "name": "tokenRendering.tsx"}, {"type": "directory", "name": "TransformerDebugger", "children": [{"type": "directory", "name": "cards", "children": [{"type": "file", "name": "BySequenceTokenDisplay.tsx"}, {"type": "file", "name": "DisplayOptions.tsx"}, {"type": "directory", "name": "inference_params", "children": [{"type": "file", "name": "AblateNodeSpecs.tsx"}, {"type": "file", "name": "inferenceParams.ts"}, {"type": "file", "name": "InferenceParamsDisplay.tsx"}, {"type": "file", "name": "TokenLabel.tsx"}, {"type": "file", "name": "TraceUpstreamNodeSpec.tsx"}]}, {"type": "file", "name": "LayerDisplay.tsx"}, {"type": "file", "name": "LogitsDisplay.tsx"}, {"type": "directory", "name": "node_table", "children": [{"type": "file", "name": "NodeTable.tsx"}, {"type": "file", "name": "TopTokensDisplay.tsx"}]}, {"type": "directory", "name": "prompt", "children": [{"type": "file", "name": "MultiTokenInput.tsx"}, {"type": "file", "name": "PromptAndTokensOfInterest.tsx"}]}, {"type": "file", "name": "SparsityMetricsDisplay.tsx"}, {"type": "file", "name": "TokenTable.tsx"}]}, {"type": "directory", "name": "common", "children": [{"type": "file", "name": "ExplanatoryTooltip.tsx"}, {"type": "file", "name": "JsonModal.tsx"}]}, {"type": "directory", "name": "requests", "children": [{"type": "file", "name": "explanationFetcher.ts"}, {"type": "file", "name": "inferenceDataFetcher.ts"}, {"type": "file", "name": "inferenceResponseUtils.tsx"}]}, {"type": "file", "name": "TransformerDebugger.tsx"}, {"type": "directory", "name": "utils", "children": [{"type": "file", "name": "explanations.ts"}, {"type": "file", "name": "nodes.tsx"}, {"type": "file", "name": "numbers.tsx"}, {"type": "file", "name": "urlParams.ts"}]}]}, {"type": "file", "name": "types.ts"}, {"type": "file", "name": "welcome.tsx"}]}, {"type": "file", "name": "tailwind.config.js"}, {"type": "file", "name": "tsconfig.json"}]}, {"type": "file", "name": "pyproject.toml"}, {"type": "file", "name": "pytest.ini"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "setup.py"}, {"type": "file", "name": "terminology.md"}]}
# Neuron viewer

A React app that hosts TDB as well as pages with information about individual model components (MLP neurons, attention heads and autoencoder latents for both).

## Running the server locally

First, install the app:

```sh
npm install
```

Then run the frontend:

```sh
npm start
```

- To open a Neuron Viewer page, navigate to `http://localhost:1234`.
- To open TDB, navigate to `http://localhost:1234/gpt2-small/tdb_alpha`.
- To open TDB with autoencoders, navigate to `http://localhost:1234/gpt2-small_ae-resid-delta-mlp-v4_ae-resid-delta-attn-v4/tdb_alpha` (where `ae-resid-delta-mlp-v4` and `ae-resid-delta-attn-v4` must match the autoencoder names that are used in the [activation server](../neuron_explainer/activation_server/README.md)).

## Formatting code

To check whether the code is correctly formatted:

```sh
npm run check-code-format
```

To format the code:

```sh
npm run format-code
```

## Code organization

- [src/client](src/client/): Auto-generated code for interacting with the activation server (the neuron viewer's backend). Do not edit this code! Follow the instructions in [the activation server README](../neuron_explainer/activation_server/README.md) to regenerate this code if you make changes to the activation server. Use [src/requests](src/requests/) when calling the activation server.
- [src/panes](src/panes/): UI elements that can be used as panes on a page: tokens+activations, similar neurons, etc.
- [src/requests](src/requests/): Client libraries for making network requests to the activation server.
- [src/TransformerDebugger](src/TransformerDebugger/): Code related to the Transformer Debugger.
- [src](src/): Other code.

## Using a remote activation server

If you decide to run your activation server on a different host or port than the default, you can point neuron viewer at it by setting the `NEURON_VIEWER_ACTIVATION_SERVER_URL` environment variable:

```sh
NEURON_VIEWER_ACTIVATION_SERVER_URL=https://some.url:port npm start
```

## Making changes

Be sure to run the following to validate any changes you make:

```sh
npm run check-type-warnings && npm run check-code-format && npm run build
```
{"setup.py": "from setuptools import find_packages, setup\n\nsetup(\n name=\"neuron_explainer\",\n packages=find_packages(),\n version=\"0.0.1\",\n author=\"OpenAI\",\n install_requires=[\n \"aiohttp\",\n \"click\",\n \"fastapi==0.97\",\n \"fire\",\n \"httpx>=0.22\",\n \"mypy==1.7.1\",\n \"numpy\",\n \"orjson\",\n \"pre-commit\",\n \"pydantic<2.0.0\",\n \"pytest\",\n \"pytest-asyncio\",\n \"scikit-learn\",\n \"starlette\",\n \"tiktoken\",\n \"torch>=1.13\",\n \"uvicorn\",\n ],\n url=\"\",\n description=\"\",\n python_requires=\">=3.11\",\n)\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 87e6db7b7e73ded5037eeeff05deb5e81548a10a Hamza Amin <[email protected]> 1729337447 +0500\tclone: from https://github.com/openai/transformer-debugger.git\n", ".git\\refs\\heads\\main": "87e6db7b7e73ded5037eeeff05deb5e81548a10a\n", "neuron_explainer\\activations\\derived_scalars\\indexing.py": "\"\"\"\nThis file contains classes for referring to individual nodes (e.g. attention heads), activations\n(e.g. attention post-softmax), or derived scalars (e.g. attention head write norm) from a forward\npass. DerivedScalarIndex can be used to index into a DerivedScalarStore.\n\nThese classes have a parallel structure to each other. One node index can be associated with\nmultiple activation indices and derived scalar indices. Derived scalar indices can be associated\nwith more types of scalars that aren't instantiated as 'activations' in the forward pass as\nimplemented.\n\nMirrored versions of these classes are used to refer to the same objects, but in a way that can be\ntransmitted via pydantic response and request data types for communication with a server. 
Changes\napplied to mirrored dataclasses must be applied also to their unmirrored versions, and vice versa.\n\"\"\"\n\nimport dataclasses\nfrom dataclasses import dataclass\nfrom enum import Enum, unique\nfrom typing import Any, Literal, Union\n\nfrom neuron_explainer.activations.derived_scalars.derived_scalar_types import DerivedScalarType\nfrom neuron_explainer.models.model_component_registry import (\n ActivationLocationType,\n Dimension,\n LayerIndex,\n NodeType,\n PassType,\n)\nfrom neuron_explainer.pydantic import CamelCaseBaseModel, HashableBaseModel, immutable\n\nDETACH_LAYER_NORM_SCALE = (\n True # this sets default behavior for whether to detach layer norm scale everywhere\n # TODO: if all goes well, have this be hard-coded to True, and remove the plumbing\n)\n\n\n@dataclass(frozen=True)\nclass DerivedScalarIndex:\n \"\"\"\n Indexes into a DerivedScalarStore and returns a tensor of activations specified by indices.\n \"\"\"\n\n dst: DerivedScalarType\n tensor_indices: tuple[\n int | None, ...\n ] # the indices of the activation tensor (not including layer_index)\n # elements of indices correspond to the elements of\n # scalar_deriver.shape_of_activation_per_token_spec\n # e.g. MLP activations might have shape (n_tokens, n_neurons).\n # an element of indices is None -> apply slice(None) for that dimension\n layer_index: LayerIndex # the layer_index of the activation, if applicable\n pass_type: PassType\n\n @property\n def tensor_index_by_dim(self) -> dict[Dimension, int | None]:\n tensor_indices_list = list(self.tensor_indices)\n assert len(tensor_indices_list) <= len(self.dst.shape_spec_per_token_sequence), (\n f\"Too many tensor indices {tensor_indices_list} for \"\n f\"{self.dst.shape_spec_per_token_sequence=}\"\n )\n tensor_indices_list.extend(\n [None] * (len(self.dst.shape_spec_per_token_sequence) - len(self.tensor_indices))\n )\n return dict(zip(self.dst.shape_spec_per_token_sequence, tensor_indices_list))\n\n @classmethod\n def from_node_index(\n cls,\n node_index: \"NodeIndex | MirroredNodeIndex\",\n dst: DerivedScalarType,\n ) -> \"DerivedScalarIndex\":\n # with the extra information of what dst is desired (subject to the constraint\n # that it must share the same node_type), we can convert a NodeIndex to a DerivedScalarIndex\n assert (\n node_index.node_type == dst.node_type\n ), f\"Node type does not match with the derived scalar type: {node_index.node_type=}, {dst=}\"\n return cls(\n dst=dst,\n layer_index=node_index.layer_index,\n tensor_indices=node_index.tensor_indices,\n pass_type=node_index.pass_type,\n )\n\n\n@immutable\nclass MirroredDerivedScalarIndex(HashableBaseModel):\n dst: DerivedScalarType\n tensor_indices: tuple[int | None, ...]\n layer_index: LayerIndex\n pass_type: PassType\n\n @classmethod\n def from_ds_index(cls, ds_index: DerivedScalarIndex) -> \"MirroredDerivedScalarIndex\":\n return cls(\n dst=ds_index.dst,\n layer_index=ds_index.layer_index,\n tensor_indices=ds_index.tensor_indices,\n pass_type=ds_index.pass_type,\n )\n\n def to_ds_index(self) -> DerivedScalarIndex:\n return DerivedScalarIndex(\n dst=self.dst,\n layer_index=self.layer_index,\n tensor_indices=self.tensor_indices,\n pass_type=self.pass_type,\n )\n\n\nAllOrOneIndex = Union[int, Literal[\"All\"]]\nAllOrOneIndices = tuple[AllOrOneIndex, ...]\n\n\n@dataclass(frozen=True)\nclass ActivationIndex:\n \"\"\"\n This is parallel to DerivedScalarIndex, but specifically for ActivationLocationType's, not for more general DerivedScalarType's.\n \"\"\"\n\n activation_location_type: 
ActivationLocationType\n tensor_indices: AllOrOneIndices\n layer_index: LayerIndex\n pass_type: PassType\n\n @property\n def tensor_index_by_dim(self) -> dict[Dimension, AllOrOneIndex]:\n # copied from DerivedScalarIndex; TODO: ActivationIndex and DerivedScalarIndex inherit from a shared base class,\n # and perhaps likewise with DerivedScalarType and ActivationLocationType?\n tensor_indices_list = list(self.tensor_indices)\n assert len(tensor_indices_list) <= len(\n self.activation_location_type.shape_spec_per_token_sequence\n ), (\n f\"Too many tensor indices {tensor_indices_list} for \"\n f\"{self.activation_location_type.shape_spec_per_token_sequence=}\"\n )\n tensor_indices_list.extend(\n [\"All\"]\n * (\n len(self.activation_location_type.shape_spec_per_token_sequence)\n - len(self.tensor_indices)\n )\n )\n assert len(tensor_indices_list) == len(\n self.activation_location_type.shape_spec_per_token_sequence\n )\n return dict(\n zip(\n self.activation_location_type.shape_spec_per_token_sequence,\n tensor_indices_list,\n )\n )\n\n @classmethod\n def from_node_index(\n cls,\n node_index: \"NodeIndex | MirroredNodeIndex\",\n activation_location_type: ActivationLocationType,\n ) -> \"ActivationIndex\":\n # with the extra information of what activation_location_type is desired (subject to the constraint\n # that it must share the same node_type), we can convert a NodeIndex to an ActivationIndex\n assert (\n node_index.node_type == activation_location_type.node_type\n ), f\"Node type does not match with the derived scalar type: {node_index.node_type=}, {activation_location_type=}\"\n return cls(\n activation_location_type=activation_location_type,\n layer_index=node_index.layer_index,\n tensor_indices=make_all_or_one_from_tensor_indices(node_index.tensor_indices),\n pass_type=node_index.pass_type,\n )\n\n @property\n def ndim(self) -> int:\n return compute_indexed_tensor_ndim(\n activation_location_type=self.activation_location_type,\n tensor_indices=self.tensor_indices,\n )\n\n def with_updates(self, **kwargs: Any) -> \"ActivationIndex\":\n \"\"\"Given new values for fields of this ActivationIndex, return a new ActivationIndex instance with those\n fields updated\"\"\"\n return dataclasses.replace(self, **kwargs)\n\n\ndef make_all_or_one_from_tensor_indices(tensor_indices: tuple[int | None, ...]) -> AllOrOneIndices:\n return tuple(\"All\" if tensor_index is None else tensor_index for tensor_index in tensor_indices)\n\n\ndef make_tensor_indices_from_all_or_one_indices(\n all_or_one_indices: AllOrOneIndices,\n) -> tuple[int | None, ...]:\n return tuple(\n None if all_or_one_index == \"All\" else all_or_one_index\n for all_or_one_index in all_or_one_indices\n )\n\n\ndef compute_indexed_tensor_ndim(\n activation_location_type: ActivationLocationType,\n tensor_indices: AllOrOneIndices | tuple[int | None, ...],\n) -> int:\n \"\"\"Returns the dimensionality of a tensor of the given ActivationLocationType after being indexed by tensor_indices.\n int dimensions are removed from the resulting tensor.\"\"\"\n ndim = activation_location_type.ndim_per_token_sequence - len(\n [tensor_index for tensor_index in tensor_indices if tensor_index not in {\"All\", None}]\n )\n assert ndim >= 0\n return ndim\n\n\ndef make_python_slice_from_tensor_indices(\n tensor_indices: tuple[int | None, ...]\n) -> tuple[slice | int, ...]:\n return make_python_slice_from_all_or_one_indices(\n make_all_or_one_from_tensor_indices(tensor_indices)\n )\n\n\ndef make_python_slice_from_all_or_one_indices(\n all_or_one_indices: 
AllOrOneIndices,\n) -> tuple[slice | int, ...]:\n return tuple(\n slice(None) if all_or_one_index == \"All\" else all_or_one_index\n for all_or_one_index in all_or_one_indices\n )\n\n\n@immutable\nclass MirroredActivationIndex(HashableBaseModel):\n activation_location_type: ActivationLocationType\n tensor_indices: AllOrOneIndices\n layer_index: LayerIndex\n pass_type: PassType\n\n @classmethod\n def from_activation_index(cls, activation_index: ActivationIndex) -> \"MirroredActivationIndex\":\n return cls(\n activation_location_type=activation_index.activation_location_type,\n layer_index=activation_index.layer_index,\n tensor_indices=activation_index.tensor_indices,\n pass_type=activation_index.pass_type,\n )\n\n def to_activation_index(self) -> ActivationIndex:\n return ActivationIndex(\n activation_location_type=self.activation_location_type,\n layer_index=self.layer_index,\n tensor_indices=self.tensor_indices,\n pass_type=self.pass_type,\n )\n\n\n@dataclass(frozen=True)\nclass NodeIndex:\n \"\"\"\n This is parallel to DerivedScalarIndex, but refers to the NodeType associated with a\n DerivedScalarType, rather than the DerivedScalarType itself. This is for situations in\n which multiple derived scalars are computed for the same node.\n \"\"\"\n\n node_type: NodeType\n tensor_indices: tuple[int | None, ...]\n layer_index: LayerIndex\n pass_type: PassType\n\n @classmethod\n def from_ds_index(\n cls,\n ds_index: DerivedScalarIndex,\n ) -> \"NodeIndex\":\n return cls(\n node_type=ds_index.dst.node_type,\n layer_index=ds_index.layer_index,\n tensor_indices=ds_index.tensor_indices,\n pass_type=ds_index.pass_type,\n )\n\n @classmethod\n def from_activation_index(\n cls,\n activation_index: ActivationIndex,\n ) -> \"NodeIndex\":\n return cls(\n node_type=activation_index.activation_location_type.node_type,\n layer_index=activation_index.layer_index,\n tensor_indices=make_tensor_indices_from_all_or_one_indices(\n activation_index.tensor_indices\n ),\n pass_type=activation_index.pass_type,\n )\n\n def with_updates(self, **kwargs: Any) -> \"NodeIndex\":\n \"\"\"Given new values for fields of this NodeIndex, return a new NodeIndex instance with those\n fields updated\"\"\"\n return dataclasses.replace(self, **kwargs)\n\n @property\n def ndim(self) -> int:\n match self.node_type:\n case NodeType.ATTENTION_HEAD:\n reference_activation_location_type = ActivationLocationType.ATTN_QK_PROBS\n case NodeType.MLP_NEURON:\n reference_activation_location_type = ActivationLocationType.MLP_POST_ACT\n case NodeType.AUTOENCODER_LATENT:\n reference_activation_location_type = (\n ActivationLocationType.ONLINE_AUTOENCODER_LATENT\n )\n case NodeType.MLP_AUTOENCODER_LATENT:\n reference_activation_location_type = (\n ActivationLocationType.ONLINE_MLP_AUTOENCODER_LATENT\n )\n case NodeType.ATTENTION_AUTOENCODER_LATENT:\n reference_activation_location_type = (\n ActivationLocationType.ONLINE_ATTENTION_AUTOENCODER_LATENT\n )\n case NodeType.RESIDUAL_STREAM_CHANNEL:\n reference_activation_location_type = ActivationLocationType.RESID_POST_MLP\n case _:\n raise NotImplementedError(f\"Node type {self.node_type} not supported\")\n return compute_indexed_tensor_ndim(\n activation_location_type=reference_activation_location_type,\n tensor_indices=self.tensor_indices,\n )\n\n def to_subnode_index(self, q_k_or_v: ActivationLocationType) -> \"AttnSubNodeIndex\":\n assert (\n self.node_type == NodeType.ATTENTION_HEAD\n ), f\"Node type {self.node_type} is not NodeType.ATTENTION_HEAD\"\n return AttnSubNodeIndex(\n 
node_type=self.node_type,\n layer_index=self.layer_index,\n tensor_indices=self.tensor_indices,\n pass_type=self.pass_type,\n q_k_or_v=q_k_or_v,\n )\n\n\n@immutable\nclass MirroredNodeIndex(HashableBaseModel):\n \"\"\"This class mirrors the fields of NodeIndex without default values.\"\"\"\n\n node_type: NodeType\n tensor_indices: tuple[int | None, ...]\n layer_index: LayerIndex\n pass_type: PassType\n\n @classmethod\n def from_node_index(cls, node_index: NodeIndex) -> \"MirroredNodeIndex\":\n \"\"\"\n Note that this conversion may lose information, specifically if the if the NodeIndex\n is an instance of a subclass of NodeIndex such as AttnSubNodeIndex.\n \"\"\"\n return cls(\n node_type=node_index.node_type,\n layer_index=node_index.layer_index,\n tensor_indices=node_index.tensor_indices,\n pass_type=node_index.pass_type,\n )\n\n def to_node_index(self) -> NodeIndex:\n return NodeIndex(\n node_type=self.node_type,\n layer_index=self.layer_index,\n tensor_indices=self.tensor_indices,\n pass_type=self.pass_type,\n )\n\n\n@dataclass(frozen=True)\nclass AttnSubNodeIndex(NodeIndex):\n \"\"\"A NodeIndex that contains an extra piece of metadata, q_k_or_v,\n which specifies whether the input to an attention head node should\n be restricted to the portion going through the query, key, or value\"\"\"\n\n q_k_or_v: ActivationLocationType\n\n def __post_init__(self) -> None:\n assert (\n self.node_type == NodeType.ATTENTION_HEAD\n ), f\"Node type {self.node_type} is not NodeType.ATTENTION_HEAD\"\n assert self.q_k_or_v in {\n ActivationLocationType.ATTN_QUERY,\n ActivationLocationType.ATTN_KEY,\n ActivationLocationType.ATTN_VALUE,\n }\n\n\n# TODO: consider subsuming this and the above into NodeIndex/ActivationIndex respectively\n@dataclass(frozen=True)\nclass AttnSubActivationIndex(ActivationIndex):\n \"\"\"An ActivationIndex that contains an extra piece of metadata, q_or_k,\n which specifies whether the input to an attention head node should\n be restricted to the portion going through the query or key\"\"\"\n\n q_or_k: ActivationLocationType\n\n def __post_init__(self) -> None:\n assert self.activation_location_type.node_type == NodeType.ATTENTION_HEAD\n assert self.q_or_k in {\n ActivationLocationType.ATTN_QUERY,\n ActivationLocationType.ATTN_KEY,\n }\n\n\n@immutable\nclass AblationSpec(CamelCaseBaseModel):\n \"\"\"A specification for performing ablation on a model.\"\"\"\n\n index: MirroredActivationIndex\n value: float\n\n\n@unique\nclass AttentionTraceType(Enum):\n Q = \"Q\"\n K = \"K\"\n QK = \"QK\"\n \"\"\"Q times K\"\"\"\n V = \"V\"\n \"\"\"Allow gradient to flow through value vector; the attention write * gradient with respect to\n some downstream node or the loss provides the scalar which is backpropagated\"\"\"\n\n\n@immutable\nclass NodeAblation(CamelCaseBaseModel):\n \"\"\"A specification for tracing an upstream node.\n\n This data structure is used by the client. The server converts it to an AblationSpec.\n \"\"\"\n\n node_index: MirroredNodeIndex\n value: float\n\n\nclass PreOrPostAct(str, Enum):\n \"\"\"Specifies whether to trace from pre- or post-nonlinearity\"\"\"\n\n PRE = \"pre\"\n POST = \"post\"\n\n\n@dataclass(frozen=True)\nclass TraceConfig:\n \"\"\"This specifies a node from which to compute a backward pass, along with whether to trace from\n pre- or post-nonlinearity, which subnodes to flow the gradient through in the case of an attention node,\n and whether to detach the layer norm scale just before the activation (i.e. 
whether to flow gradients\n through the layer norm scale parameter).\"\"\"\n\n node_index: NodeIndex\n pre_or_post_act: PreOrPostAct\n detach_layer_norm_scale: bool\n attention_trace_type: AttentionTraceType | None = None # applies only to attention heads\n downstream_trace_config: \"TraceConfig | None\" = (\n None # applies only to attention heads with attention_trace_type == AttentionTraceType.V\n )\n\n def __post_init__(self) -> None:\n if self.node_index.node_type != NodeType.ATTENTION_HEAD:\n assert self.attention_trace_type is None\n\n if self.attention_trace_type != AttentionTraceType.V:\n # only tracing through V supports a downstream node\n assert self.downstream_trace_config is None\n else:\n if self.downstream_trace_config is not None:\n # repeatedly tracing through V is not allowed; all other types of\n # downstream trace configs are fine\n assert self.downstream_trace_config.attention_trace_type != AttentionTraceType.V\n # cfg is None -> a loss (function of logits) is assumed to be defined\n\n @property\n def node_type(self) -> NodeType:\n return self.node_index.node_type\n\n @property\n def tensor_indices(self) -> AllOrOneIndices:\n return make_all_or_one_from_tensor_indices(self.node_index.tensor_indices)\n\n @property\n def layer_index(self) -> LayerIndex:\n return self.node_index.layer_index\n\n @property\n def pass_type(self) -> PassType:\n return self.node_index.pass_type\n\n @property\n def ndim(self) -> int:\n return self.node_index.ndim\n\n def with_updated_index(\n self,\n **kwargs: Any,\n ) -> \"TraceConfig\":\n return dataclasses.replace(\n self,\n node_index=self.node_index.with_updates(**kwargs),\n )\n\n @classmethod\n def from_activation_index(\n cls,\n activation_index: ActivationIndex,\n detach_layer_norm_scale: bool = DETACH_LAYER_NORM_SCALE,\n ) -> \"TraceConfig\":\n node_index = NodeIndex.from_activation_index(activation_index)\n match activation_index.activation_location_type:\n case ActivationLocationType.MLP_PRE_ACT | ActivationLocationType.ATTN_QK_LOGITS:\n pre_or_post_act = PreOrPostAct.PRE\n case (\n ActivationLocationType.MLP_POST_ACT\n | ActivationLocationType.ATTN_QK_PROBS\n | ActivationLocationType.ONLINE_AUTOENCODER_LATENT\n ):\n pre_or_post_act = PreOrPostAct.POST\n case _:\n raise ValueError(\n f\"ActivationLocationType {activation_index.activation_location_type} not supported\"\n )\n match node_index.node_type:\n case NodeType.ATTENTION_HEAD:\n attention_trace_type: AttentionTraceType | None = AttentionTraceType.QK\n case _:\n attention_trace_type = None\n downstream_trace_config = None\n return cls(\n node_index=node_index,\n pre_or_post_act=pre_or_post_act,\n detach_layer_norm_scale=detach_layer_norm_scale,\n attention_trace_type=attention_trace_type,\n downstream_trace_config=downstream_trace_config,\n )\n\n\n@immutable\nclass MirroredTraceConfig(HashableBaseModel):\n node_index: MirroredNodeIndex\n pre_or_post_act: PreOrPostAct\n detach_layer_norm_scale: bool\n attention_trace_type: AttentionTraceType | None = None # applies only to attention heads\n downstream_trace_config: \"MirroredTraceConfig | None\" = (\n None # applies only to attention heads with attention_trace_type == AttentionTraceType.V\n )\n\n def to_trace_config(self) -> TraceConfig:\n downstream_trace_config = (\n self.downstream_trace_config.to_trace_config()\n if self.downstream_trace_config is not None\n else None\n )\n return TraceConfig(\n node_index=self.node_index.to_node_index(),\n pre_or_post_act=self.pre_or_post_act,\n 
detach_layer_norm_scale=self.detach_layer_norm_scale,\n attention_trace_type=self.attention_trace_type,\n downstream_trace_config=downstream_trace_config,\n )\n\n @classmethod\n def from_trace_config(cls, trace_config: TraceConfig) -> \"MirroredTraceConfig\":\n mirrored_downstream_trace_config = (\n cls.from_trace_config(trace_config.downstream_trace_config)\n if trace_config.downstream_trace_config is not None\n else None\n )\n return cls(\n node_index=MirroredNodeIndex.from_node_index(trace_config.node_index),\n pre_or_post_act=trace_config.pre_or_post_act,\n detach_layer_norm_scale=trace_config.detach_layer_norm_scale,\n attention_trace_type=trace_config.attention_trace_type,\n downstream_trace_config=mirrored_downstream_trace_config,\n )\n\n\n@immutable\nclass NodeToTrace(CamelCaseBaseModel):\n \"\"\"A specification for tracing a node.\n\n This data structure is used by the client. The server converts it to an activation index and\n an ablation spec.\n\n In the case of tracing through attention value, there can be up to two NodeToTrace\n objects: one upstream and one downstream. First, a gradient is computed with respect to the\n downstream node. Then, the direct effect of the upstream (attention) node on that downstream\n node is computed. Then, the gradient is computed with respect to that direct effect, propagated\n through V\n \"\"\"\n\n node_index: MirroredNodeIndex\n attention_trace_type: AttentionTraceType | None\n downstream_trace_config: MirroredTraceConfig | None\n", "neuron_explainer\\activation_server\\main.py": "\"\"\"Starts the activation server. Methods on the server are defined in separate files.\"\"\"\n\nimport datetime\nimport os\nimport re\nimport signal\n\nimport fire\nimport torch\nimport uvicorn\nfrom fastapi import FastAPI, HTTPException, Request\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom fastapi.responses import JSONResponse\nfrom fastapi.routing import APIRoute\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\n\nfrom neuron_explainer.activation_server.explainer_routes import (\n AttentionExplainAndScoreMethodId,\n NeuronExplainAndScoreMethodId,\n define_explainer_routes,\n)\nfrom neuron_explainer.activation_server.inference_routes import define_inference_routes\nfrom neuron_explainer.activation_server.interactive_model import InteractiveModel\nfrom neuron_explainer.activation_server.read_routes import define_read_routes\nfrom neuron_explainer.activation_server.requests_and_responses import GroupId\nfrom neuron_explainer.models.autoencoder_context import AutoencoderContext # noqa: F401\nfrom neuron_explainer.models.autoencoder_context import MultiAutoencoderContext\nfrom neuron_explainer.models.model_context import StandardModelContext, get_default_device\nfrom neuron_explainer.models.model_registry import make_autoencoder_context\n\n\ndef main(\n host_name: str = \"localhost\",\n port: int = 80,\n model_name: str = \"gpt2-small\",\n mlp_autoencoder_name: str | None = None,\n attn_autoencoder_name: str | None = None,\n run_model: bool = True,\n neuron_method: str = \"baseline\",\n attention_head_method: str = \"baseline\",\n cuda_memory_debugging: bool = False,\n) -> None:\n neuron_method_id = NeuronExplainAndScoreMethodId.from_string(neuron_method)\n attention_head_method_id = AttentionExplainAndScoreMethodId.from_string(attention_head_method)\n\n def custom_generate_unique_id(route: APIRoute) -> str:\n return f\"{route.tags[0]}-{route.name}\"\n\n app = FastAPI(generate_unique_id_function=custom_generate_unique_id)\n\n 
allow_origin_regex_str = r\"https?://localhost(:\\d+)?$\"\n allow_origin_regex = re.compile(allow_origin_regex_str)\n app.add_middleware(\n CORSMiddleware,\n allow_origin_regex=allow_origin_regex_str,\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n\n # We don't just want to disable CORS for successful responses: we also want to do it for error\n # responses, which FastAPI's middleware doesn't cover. This allows the client to see helpful\n # information like the HTTP status code, which is otherwise hidden from it. To do this, we add\n # two exception handlers. It's possible we could just get away with the first one, but GPT-4\n # thought it was good to include both and who am I to disagree?\n def add_access_control_headers(request: Request, response: JSONResponse) -> JSONResponse:\n origin = request.headers.get(\"origin\")\n # This logic does something similar to what the standard CORSMiddleware does. You can't\n # use a regex in the actual response header, but you can run the regex on the server and\n # then choose to include the header if it matches the origin.\n if origin and allow_origin_regex.fullmatch(origin):\n response.headers[\"Access-Control-Allow-Origin\"] = origin\n response.headers[\"Access-Control-Allow-Methods\"] = \"*\"\n response.headers[\"Access-Control-Allow-Headers\"] = \"*\"\n return response\n\n @app.exception_handler(Exception)\n async def handle_unhandled_exception(request: Request, exc: Exception) -> JSONResponse:\n print(\"************** Handling an unhandled exception ***********************\")\n print(f\"Exception type: {type(exc).__name__}\")\n print(f\"Exception details: {exc}\")\n response = add_access_control_headers(\n request,\n JSONResponse(status_code=500, content={\"message\": \"Unhandled server exception\"}),\n )\n\n # Check if this exception is a cuda OOM, which is unrecoverable. If it is, we should kill\n # the server.\n if isinstance(exc, torch.cuda.OutOfMemoryError):\n print(\"***** Killing server due to cuda OOM *****\")\n # Use SIGKILL so the return code of the top-level process is *not* 0.\n os.kill(os.getpid(), signal.SIGKILL)\n\n return response\n\n @app.exception_handler(StarletteHTTPException)\n async def handle_starlette_http_exception(request: Request, exc: HTTPException) -> JSONResponse:\n return add_access_control_headers(\n request, JSONResponse(status_code=exc.status_code, content={\"message\": exc.detail})\n )\n\n @app.get(\"/\", tags=[\"hello_world\"])\n def read_root() -> dict[str, str]:\n return {\"Hello\": \"World\"}\n\n # The FastAPI client code generation setup only generates TypeScript classes for types\n # referenced from top-level endpoints. In some cases we want to share a type across client and\n # server that isn't referenced in this way. For example, GroupId is used in requests, but only\n # as a key in a dictionary, and the generated TypeScript for dictionaries treats enum values as\n # strings, so GroupId isn't referenced in the generated TypeScript. To work around this, we\n # define a dummy endpoint that references GroupId, which causes the client code generation to\n # generate a TypeScript class for it. 
The same trick can be used for other types in the future.\n @app.get(\"/force_client_code_generation\", tags=[\"hello_world\"])\n def force_client_code_generation(group_id: GroupId) -> None:\n return None\n\n @app.get(\"/dump_memory_snapshot\", tags=[\"memory\"])\n def dump_memory_snapshot() -> str:\n if not cuda_memory_debugging:\n raise HTTPException(\n status_code=400,\n detail=\"The cuda_memory_debugging flag must be set to dump a memory snapshot\",\n )\n formatted_time = datetime.datetime.now().strftime(\"%H%M%S\")\n filename = f\"torch_memory_snapshot_{formatted_time}.pkl\"\n torch.cuda.memory._dump_snapshot(filename)\n return f\"Dumped cuda memory snapshot to {filename}\"\n\n if run_model:\n if cuda_memory_debugging:\n torch.cuda.memory._record_memory_history(max_entries=100000)\n device = get_default_device()\n standard_model_context = StandardModelContext(model_name, device=device)\n if mlp_autoencoder_name is not None or attn_autoencoder_name is not None:\n autoencoder_context_list = [\n make_autoencoder_context(\n model_name=model_name,\n autoencoder_name=autoencoder_name,\n device=device,\n omit_dead_latents=True,\n )\n for autoencoder_name in [mlp_autoencoder_name, attn_autoencoder_name]\n if autoencoder_name is not None\n ]\n multi_autoencoder_context = MultiAutoencoderContext.from_autoencoder_context_list(\n autoencoder_context_list\n )\n multi_autoencoder_context.warmup()\n model = InteractiveModel.from_standard_model_context_and_autoencoder_context(\n standard_model_context, multi_autoencoder_context\n )\n\n else:\n model = InteractiveModel.from_standard_model_context(standard_model_context)\n\n else:\n model = None\n\n define_read_routes(app)\n define_explainer_routes(\n app=app,\n neuron_method_id=neuron_method_id,\n attention_head_method_id=attention_head_method_id,\n )\n define_inference_routes(\n app=app,\n model=model,\n mlp_autoencoder_name=mlp_autoencoder_name,\n attn_autoencoder_name=attn_autoencoder_name,\n )\n\n # TODO(sbills): Make reload=True work. 
We need to pass something like \"main:app\" as a string\n # rather than passing a FastAPI object directly.\n uvicorn.run(app, host=host_name, port=port)\n\n\nif __name__ == \"__main__\":\n fire.Fire(main)\n\n\n\"\"\"\nFor local testing without running a subject model:\npython neuron_explainer/activation_server/main.py --run_model False --port 8000\n\"\"\"\n", "neuron_viewer\\package.json": "{\n \"name\": \"neuron-viewer\",\n \"version\": \"1.0.0\",\n \"dependencies\": {\n \"@heroicons/react\": \"^2.0.18\",\n \"@microlink/react-json-view\": \"^1.23.0\",\n \"@nextui-org/react\": \"^2.2.5\",\n \"@types/lodash\": \"^4.14.194\",\n \"@types/react\": \"^18.0.37\",\n \"@types/react-dom\": \"^18.0.11\",\n \"ag-grid-community\": \"^30.2.1\",\n \"ag-grid-react\": \"^30.2.1\",\n \"axios\": \"^1.3.3\",\n \"buffer\": \"^6.0.3\",\n \"framer-motion\": \"^10.16.4\",\n \"lodash\": \"^4.17.21\",\n \"process\": \"^0.11.10\",\n \"react\": \"^18.2.0\",\n \"react-dom\": \"^18.2.0\",\n \"react-router-dom\": \"^6.10.0\",\n \"web-vitals\": \"^3.0.3\"\n },\n \"scripts\": {\n \"start\": \"parcel src/index.html --no-cache\",\n \"build\": \"parcel build src/index.html --no-cache\",\n \"serve\": \"NODE_ENV=production npm run build && serve -s -l 1234 dist\",\n \"typecheck\": \"tsc -p .\",\n \"check-type-warnings\": \"eslint --max-warnings=0 src --ext .js,.jsx,.ts,.tsx\",\n \"format-code\": \"prettier --write src\",\n \"check-code-format\": \"prettier --check src || (printf '\\\\e[31m\\\\nRun `npm run format-code` to fix\\\\n\\\\e[0m' && false)\",\n \"generate-client\": \"openapi --input http://localhost:8000/openapi.json --output ./src/client --client axios && prettier --write src/client && sh prepend_autogen_comments.sh\"\n },\n \"eslintConfig\": {\n \"extends\": [\n \"react-app\"\n ],\n \"rules\": {\n \"import/no-anonymous-default-export\": \"off\",\n \"jsx-a11y/anchor-is-valid\": \"off\"\n },\n \"parser\": \"@typescript-eslint/parser\",\n \"parserOptions\": {\n \"project\": \"./tsconfig.json\"\n },\n \"plugins\": [\n \"@typescript-eslint\"\n ],\n \"overrides\": [\n {\n \"files\": [\n \"./src/**\"\n ],\n \"rules\": {\n \"@typescript-eslint/naming-convention\": [\n \"error\",\n {\n \"selector\": \"variable\",\n \"format\": [\n \"camelCase\",\n \"PascalCase\",\n \"UPPER_CASE\"\n ],\n \"leadingUnderscore\": \"allow\",\n \"trailingUnderscore\": \"allow\"\n },\n {\n \"selector\": \"function\",\n \"format\": [\n \"camelCase\",\n \"PascalCase\"\n ]\n },\n {\n \"selector\": [\n \"accessor\",\n \"classMethod\",\n \"classProperty\",\n \"function\",\n \"objectLiteralMethod\",\n \"parameterProperty\",\n \"typeMethod\",\n \"typeProperty\"\n ],\n \"format\": [\n \"camelCase\"\n ]\n },\n {\n \"selector\": [\n \"parameter\"\n ],\n \"format\": [\n \"camelCase\"\n ],\n \"leadingUnderscore\": \"allow\"\n },\n {\n \"selector\": \"typeLike\",\n \"format\": [\n \"PascalCase\"\n ]\n },\n {\n \"selector\": \"objectLiteralProperty\",\n \"format\": [\n \"camelCase\",\n \"PascalCase\"\n ]\n },\n {\n \"selector\": [\n \"classProperty\",\n \"objectLiteralProperty\",\n \"typeProperty\",\n \"classMethod\",\n \"objectLiteralMethod\",\n \"typeMethod\",\n \"accessor\",\n \"enumMember\"\n ],\n \"format\": null,\n \"modifiers\": [\n \"requiresQuotes\"\n ]\n }\n ]\n }\n }\n ]\n },\n \"alias\": {\n \"preact/jsx-dev-runtime\": \"preact/jsx-runtime\",\n \"plot\": \"@observablehq/plot/dist/plot.umd.js\"\n },\n \"devDependencies\": {\n \"@observablehq/plot\": \"^0.6.11\",\n \"@parcel/transformer-typescript-tsc\": \"^2.8.3\",\n 
\"@parcel/validator-typescript\": \"^2.8.3\",\n \"@types/node\": \"^20.10.6\",\n \"assert\": \"^2.1.0\",\n \"eslint\": \"^8.41.0\",\n \"eslint-config-react-app\": \"^7.0.1\",\n \"openapi-typescript-codegen\": \"^0.24.0\",\n \"parcel\": \"^2.11.0\",\n \"prettier\": \"2.5.1\",\n \"serve\": \"^14.2.0\",\n \"typescript\": \"^5.0.4\"\n }\n}\n", "neuron_viewer\\src\\App.css": "@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\n.ag-theme-alpine {\n --ag-grid-size: 1px;\n --ag-list-item-height: 20px;\n}\n", "neuron_viewer\\src\\App.tsx": "import React from \"react\";\nimport { useNavigate, Route, Routes, Link } from \"react-router-dom\";\nimport \"./App.css\";\nimport TransformerDebugger from \"./TransformerDebugger/TransformerDebugger\";\nimport { NextUIProvider } from \"@nextui-org/react\";\nimport Welcome from \"./welcome\";\nimport NodePage from \"./nodePage\";\n\nconst NotFoundPage: React.FC = () => {\n return (\n <div className=\"flex items-center justify-center h-screen bg-gray-100\">\n <div className=\"text-center\">\n <h1 className=\"text-4xl font-bold text-gray-800\">Page Not Found</h1>\n <p className=\"mt-4 text-xl text-gray-600\">\n Sorry, the page you are looking for does not exist.\n </p>\n <Link\n to=\"/\"\n className=\"mt-6 inline-block px-6 py-3 bg-blue-500 text-white font-medium text-lg leading-tight uppercase rounded shadow-md hover:bg-blue-700 hover:shadow-lg focus:bg-blue-700 focus:shadow-lg focus:outline-none focus:ring-0 active:bg-blue-800 active:shadow-lg transition duration-150 ease-in-out\"\n >\n Go back home\n </Link>\n </div>\n </div>\n );\n};\n\nconst App: React.FC = () => {\n const navigate = useNavigate();\n\n return (\n <NextUIProvider navigate={navigate}>\n <Routes>\n {/* Actual substantive pages */}\n <Route path=\"/\" element={<Welcome />} />\n <Route path=\"/:model/:nodeTypeStr/:layerIndex/:nodeIndex\" element={<NodePage />} />\n <Route path=\":model/tdb_alpha\" element={<TransformerDebugger />} />\n\n {/* Catch-all for bogus URLs */}\n <Route path=\"*\" element={<NotFoundPage />} />\n </Routes>\n </NextUIProvider>\n );\n};\n\nexport default App;\n", "neuron_viewer\\src\\index.css": "body {\n margin: 0;\n font-family: -apple-system, BlinkMacSystemFont, \"Segoe UI\", \"Roboto\", \"Oxygen\", \"Ubuntu\",\n \"Cantarell\", \"Fira Sans\", \"Droid Sans\", \"Helvetica Neue\", sans-serif;\n -webkit-font-smoothing: antialiased;\n -moz-osx-font-smoothing: grayscale;\n}\n\ncode {\n font-family: source-code-pro, Menlo, Monaco, Consolas, \"Courier New\", monospace;\n}\n", "neuron_viewer\\src\\index.html": "<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"utf-8\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n <meta name=\"theme-color\" content=\"#000000\" />\n <meta name=\"description\" content=\"Neuron viewer\" />\n <title>Neuron Viewer</title>\n </head>\n <body>\n <noscript>You need to enable JavaScript to run this app.</noscript>\n <div id=\"root\"></div>\n <!--\n This HTML file is a template.\n If you open it directly in the browser, you will see an empty page.\n\n You can add webfonts, meta tags, or analytics to this file.\n The build step will place the bundled scripts into the <body> tag.\n\n To begin the development, run `npm start` or `yarn start`.\n To create a production bundle, use `npm run build` or `yarn build`.\n -->\n <script src=\"/src/index.tsx\" async type=\"module\"></script>\n <link href=\"App.css\" rel=\"stylesheet\" />\n </body>\n</html>\n", "neuron_viewer\\src\\index.tsx": "import React from 
\"react\";\nimport ReactDOM from \"react-dom/client\";\nimport \"./index.css\";\nimport App from \"./App\";\nimport { BrowserRouter } from \"react-router-dom\";\nconst root = ReactDOM.createRoot(document.getElementById(\"root\")!);\n\nroot.render(\n <BrowserRouter>\n <React.StrictMode>\n <App />\n </React.StrictMode>\n </BrowserRouter>\n);\n", "neuron_viewer\\src\\client\\index.ts": "// Auto-generated code. Do not edit! See neuron_explainer/activation_server/README.md to learn how to regenerate it.\n\n/* istanbul ignore file */\n/* tslint:disable */\n/* eslint-disable */\nexport { ApiError } from \"./core/ApiError\";\nexport { CancelablePromise, CancelError } from \"./core/CancelablePromise\";\nexport { OpenAPI } from \"./core/OpenAPI\";\nexport type { OpenAPIConfig } from \"./core/OpenAPI\";\n\nexport type { AblationSpec } from \"./models/AblationSpec\";\nexport { ActivationLocationType } from \"./models/ActivationLocationType\";\nexport type { AttentionHeadRecordResponse } from \"./models/AttentionHeadRecordResponse\";\nexport { AttentionTraceType } from \"./models/AttentionTraceType\";\nexport type { AttributedScoredExplanation } from \"./models/AttributedScoredExplanation\";\nexport type { BatchedRequest } from \"./models/BatchedRequest\";\nexport type { BatchedResponse } from \"./models/BatchedResponse\";\nexport type { BatchedTdbRequest } from \"./models/BatchedTdbRequest\";\nexport { ComponentTypeForAttention } from \"./models/ComponentTypeForAttention\";\nexport { ComponentTypeForMlp } from \"./models/ComponentTypeForMlp\";\nexport type { DerivedAttentionScalarsRequest } from \"./models/DerivedAttentionScalarsRequest\";\nexport { DerivedAttentionScalarsRequestSpec } from \"./models/DerivedAttentionScalarsRequestSpec\";\nexport type { DerivedAttentionScalarsResponse } from \"./models/DerivedAttentionScalarsResponse\";\nexport type { DerivedAttentionScalarsResponseData } from \"./models/DerivedAttentionScalarsResponseData\";\nexport type { DerivedScalarsRequest } from \"./models/DerivedScalarsRequest\";\nexport { DerivedScalarsRequestSpec } from \"./models/DerivedScalarsRequestSpec\";\nexport type { DerivedScalarsResponse } from \"./models/DerivedScalarsResponse\";\nexport type { DerivedScalarsResponseData } from \"./models/DerivedScalarsResponseData\";\nexport { DerivedScalarType } from \"./models/DerivedScalarType\";\nexport { Dimension } from \"./models/Dimension\";\nexport type { ExistingExplanationsRequest } from \"./models/ExistingExplanationsRequest\";\nexport type { ExplanationResult } from \"./models/ExplanationResult\";\nexport { GroupId } from \"./models/GroupId\";\nexport type { HTTPValidationError } from \"./models/HTTPValidationError\";\nexport type { InferenceAndTokenData } from \"./models/InferenceAndTokenData\";\nexport type { InferenceRequestSpec } from \"./models/InferenceRequestSpec\";\nexport type { InferenceResponse } from \"./models/InferenceResponse\";\nexport type { InferenceResponseAndResponseDict } from \"./models/InferenceResponseAndResponseDict\";\nexport type { InferenceSubRequest } from \"./models/InferenceSubRequest\";\nexport type { LossFnConfig } from \"./models/LossFnConfig\";\nexport { LossFnName } from \"./models/LossFnName\";\nexport type { MirroredActivationIndex } from \"./models/MirroredActivationIndex\";\nexport type { MirroredNodeIndex } from \"./models/MirroredNodeIndex\";\nexport type { MirroredTraceConfig } from \"./models/MirroredTraceConfig\";\nexport type { ModelInfoResponse } from \"./models/ModelInfoResponse\";\nexport type { 
MultipleTopKDerivedScalarsRequest } from \"./models/MultipleTopKDerivedScalarsRequest\";\nexport { MultipleTopKDerivedScalarsRequestSpec } from \"./models/MultipleTopKDerivedScalarsRequestSpec\";\nexport type { MultipleTopKDerivedScalarsResponse } from \"./models/MultipleTopKDerivedScalarsResponse\";\nexport type { MultipleTopKDerivedScalarsResponseData } from \"./models/MultipleTopKDerivedScalarsResponseData\";\nexport type { NeuronDatasetMetadata } from \"./models/NeuronDatasetMetadata\";\nexport type { NeuronRecordResponse } from \"./models/NeuronRecordResponse\";\nexport type { NodeAblation } from \"./models/NodeAblation\";\nexport type { NodeIdAndDatasets } from \"./models/NodeIdAndDatasets\";\nexport type { NodeToTrace } from \"./models/NodeToTrace\";\nexport { NodeType } from \"./models/NodeType\";\nexport { PassType } from \"./models/PassType\";\nexport { PreOrPostAct } from \"./models/PreOrPostAct\";\nexport { ProcessingResponseDataType } from \"./models/ProcessingResponseDataType\";\nexport { ScoredTokensRequestSpec } from \"./models/ScoredTokensRequestSpec\";\nexport type { ScoredTokensResponseData } from \"./models/ScoredTokensResponseData\";\nexport type { ScoreRequest } from \"./models/ScoreRequest\";\nexport type { ScoreResult } from \"./models/ScoreResult\";\nexport { TdbRequestSpec } from \"./models/TdbRequestSpec\";\nexport type { Tensor0D } from \"./models/Tensor0D\";\nexport type { Tensor1D } from \"./models/Tensor1D\";\nexport type { Tensor2D } from \"./models/Tensor2D\";\nexport type { Tensor3D } from \"./models/Tensor3D\";\nexport { TensorType } from \"./models/TensorType\";\nexport type { TokenAndAttentionScalars } from \"./models/TokenAndAttentionScalars\";\nexport type { TokenAndScalar } from \"./models/TokenAndScalar\";\nexport { TokenPairAttributionRequestSpec } from \"./models/TokenPairAttributionRequestSpec\";\nexport type { TokenPairAttributionResponseData } from \"./models/TokenPairAttributionResponseData\";\nexport { TokenScoringType } from \"./models/TokenScoringType\";\nexport type { TopTokens } from \"./models/TopTokens\";\nexport type { TopTokensAttendedTo } from \"./models/TopTokensAttendedTo\";\nexport type { ValidationError } from \"./models/ValidationError\";\n\nexport { ExplainerService } from \"./services/ExplainerService\";\nexport { HelloWorldService } from \"./services/HelloWorldService\";\nexport { InferenceService } from \"./services/InferenceService\";\nexport { MemoryService } from \"./services/MemoryService\";\nexport { ReadService } from \"./services/ReadService\";\n", "neuron_viewer\\src\\client\\models\\MirroredActivationIndex.ts": "// Auto-generated code. Do not edit! See neuron_explainer/activation_server/README.md to learn how to regenerate it.\n\n/* istanbul ignore file */\n/* tslint:disable */\n/* eslint-disable */\n\nimport type { ActivationLocationType } from \"./ActivationLocationType\";\nimport type { PassType } from \"./PassType\";\n\n/**\n * Base model that will automatically generate camelCase aliases for fields. Python code can use\n * either snake_case or camelCase names. When Typescript code is generated, it will only use the\n * camelCase names.\n */\nexport type MirroredActivationIndex = {\n activationLocationType: ActivationLocationType;\n tensorIndices: Array<number | \"All\">;\n layerIndex?: number;\n passType: PassType;\n};\n", "neuron_viewer\\src\\client\\models\\MirroredNodeIndex.ts": "// Auto-generated code. Do not edit! 
See neuron_explainer/activation_server/README.md to learn how to regenerate it.\n\n/* istanbul ignore file */\n/* tslint:disable */\n/* eslint-disable */\n\nimport type { NodeType } from \"./NodeType\";\nimport type { PassType } from \"./PassType\";\n\n/**\n * This class mirrors the fields of NodeIndex without default values.\n */\nexport type MirroredNodeIndex = {\n nodeType: NodeType;\n tensorIndices: Array<number>;\n layerIndex?: number;\n passType: PassType;\n};\n", "neuron_viewer\\src\\panes\\index.ts": "import { Node } from \"../types\";\nimport ActivationsForPrompt from \"./activationsForPrompt\";\nimport DatasetExamples from \"./datasetExamples\";\nimport Explanation from \"./explanation\";\nimport LogitLens from \"./logitLens\";\nimport ScoreExplanation from \"./scoreExplanation\";\n\nexport const PaneComponents = {\n ActivationsForPrompt,\n DatasetExamples,\n Explanation,\n LogitLens,\n ScoreExplanation,\n} as const;\n\nexport type PaneComponentType = keyof typeof PaneComponents;\n\nexport interface PaneProps {\n activeNode: Node;\n}\n\nexport interface SentencePaneProps extends PaneProps {\n sentence: string;\n}\n\nexport interface ExplanationPaneProps extends PaneProps {\n explanation: string;\n}\n"}
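As a rough illustration of how the indexing dataclasses in the snippet above fit together, here is a minimal sketch that is not part of the repository: the module path used to import the index classes and the `PassType.FORWARD` member are assumptions, and the choice of `ActivationLocationType.MLP_POST_ACT` relies on it sharing `NodeType.MLP_NEURON` as its node type, which the `ndim` property above suggests but does not guarantee.

```python
# Minimal sketch (not from the repository) of how the index classes above compose.
# The import path for the index classes and PassType.FORWARD are assumptions.
from neuron_explainer.activations.derived_scalars.indexing import (  # assumed module path
    ActivationIndex,
    MirroredNodeIndex,
    NodeIndex,
)
from neuron_explainer.models.model_component_registry import (
    ActivationLocationType,
    NodeType,
    PassType,
)

# An MLP neuron node: token position 3, neuron 123, layer 5, forward pass.
node = NodeIndex(
    node_type=NodeType.MLP_NEURON,
    tensor_indices=(3, 123),
    layer_index=5,
    pass_type=PassType.FORWARD,  # assumed member name
)

# Converting to an ActivationIndex requires a location with the same node_type;
# MLP_POST_ACT is the reference location the ndim property uses for MLP_NEURON.
act_index = ActivationIndex.from_node_index(node, ActivationLocationType.MLP_POST_ACT)

# The Mirrored* pydantic models are the camelCase/JSON counterparts used at the
# client/server boundary; a plain NodeIndex is expected to round-trip losslessly.
mirrored = MirroredNodeIndex.from_node_index(node)
assert mirrored.to_node_index() == node
```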
null
understanding-rl-vision
{"type": "directory", "name": "understanding-rl-vision", "children": [{"type": "file", "name": "LICENCE"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "requirements.txt"}, {"type": "file", "name": "setup.py"}, {"type": "directory", "name": "understanding_rl_vision", "children": [{"type": "directory", "name": "rl_clarity", "children": [{"type": "file", "name": "compiling.py"}, {"type": "file", "name": "example.py"}, {"type": "file", "name": "interface.py"}, {"type": "file", "name": "loading.py"}, {"type": "directory", "name": "svelte", "children": [{"type": "file", "name": "attribution_selector.svelte"}, {"type": "file", "name": "attribution_viewer.svelte"}, {"type": "file", "name": "chart.svelte"}, {"type": "file", "name": "css_manipulate.js"}, {"type": "file", "name": "feature_viewer.svelte"}, {"type": "file", "name": "graph.svelte"}, {"type": "file", "name": "interface.svelte"}, {"type": "file", "name": "json_load.js"}, {"type": "file", "name": "legend.svelte"}, {"type": "file", "name": "Makefile"}, {"type": "file", "name": "navigator.svelte"}, {"type": "file", "name": "query.svelte"}, {"type": "file", "name": "screen.svelte"}, {"type": "file", "name": "scrubber.svelte"}, {"type": "file", "name": "trajectory_display.svelte"}]}, {"type": "file", "name": "training.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "svelte3", "children": [{"type": "file", "name": "compiling.py"}, {"type": "file", "name": "json_encoding.py"}, {"type": "file", "name": "package-lock.json"}, {"type": "file", "name": "package.json"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "__init__.py"}]}]}
**Status:** Archive (code is provided as-is, no updates expected) # Understanding RL Vision #### [ [Paper] ](https://distill.pub/2020/understanding-rl-vision) [ [Demo] ](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/demo/interface.html) Generate interfaces for interpreting vision models trained using RL. The core utilities used to compute feature visualization, attribution and dimensionality reduction can be found in `lucid.scratch.rl_util`, a submodule of [Lucid](https://github.com/tensorflow/lucid/). These are demonstrated in [this notebook](https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/misc/rl_util.ipynb). The code here leverages these utilities to build HTML interfaces similar to the above demo. ![](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/demo.gif) ## Installation Supported platforms: MacOS and Ubuntu, Python 3.7, TensorFlow <= 1.14 - Install [Baselines](https://github.com/openai/baselines) and its dependencies, including TensorFlow 1. - Clone the repo: ``` git clone https://github.com/openai/understanding-rl-vision.git ``` - Install the repo and its dependencies, among which is a pinned version of [Lucid](https://github.com/tensorflow/lucid): ``` pip install -e understanding-rl-vision ``` - Install an RL environment of your choice. Supported environments: - [CoinRun](https://github.com/openai/coinrun) (the original version used in the paper): follow the instructions. Note: due to CoinRun's requirements, you should re-install Baselines after installing CoinRun. - [Procgen](https://github.com/openai/procgen): `pip install procgen` - [Atari](https://github.com/openai/atari-py): `pip install atari-py` ## Generating interfaces The main script processes checkpoint files saved by RL code: ``` from understanding_rl_vision import rl_clarity rl_clarity.run('path/to/checkpoint/file', output_dir='path/to/directory') ``` An example checkpoint file can be downloaded [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/models/coinrun.jd), or can be generated using the [example script](understanding_rl_vision/rl_clarity/example.py). Checkpoint files for a number of pre-trained models are indexed [here](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/models/index.html). The precise format required of the checkpoint file, along with a full list of keyword arguments, can be found in the function's [docstring](understanding_rl_vision/rl_clarity/__init__.py). The script will create an `interface.html` file, along with directories containing images (which can take up several GB), at the location specified by `output_dir`. By default, the script will also create some files in the directory of the checkpoint file, in an `rl-clarity` subdirectory. These contain all the necessary information extracted from the model and environment for re-creating the same interface. To create these files in a temporary location instead, set `load_kwargs={'temp_files': True}`. To re-create an interface using existing files, set `load_kwargs={'resample': False}`. ### Speed issues The slowest part of the script is computing the attribution in all the required combinations. If you set `trajectories_kwargs={'num_envs': num_envs, 'num_steps': num_steps}`, then `num_envs` trajectories will be collected, each of length `num_steps`, and the script will distribute the trajectories among the MPI workers for computing the attribution. 
The memory requirements of each worker scales with `num_steps`, which defaults to 512 (about as large as a machine with 34 GB of memory can typically handle). The default `num_envs` is 8, so it is best to use 8 MPI workers by default to save time, if you have 8 GPUs available. The script should take a few hours to run, but if it is taking too long, then you can tell the script to ignore the first couple of non-input layers by setting `layer_kwargs={'discard_first_n': 2}`, for example. These layers take the longest to compute attribution for since they have the highest spatial resolution, and are usually not that informative anyway. By default, attribution is only computed for the value function, since computing attribution for every logit of the policy amounts to a large multiplier on the time taken by the script to run. To compute attribution for the policy, set `attr_policy=True`. To offset the increased computational load when doing this, you may wish to choose a single layer to compute attribution for by setting `layer_kwargs={'name_contains_one_of': ['2b']}`, for example. To save disk space, the hover effect for isolating single attribution channels can be disabled by setting `attr_single_channels=False`, though this will not have much effect on speed. ## Guide to interfaces As shown in [this demo](https://openaipublic.blob.core.windows.net/rl-clarity/attribution/demo/interface.html), interfaces are divided into a number of sections: - **Trajectories** - Each trajectory is a separate rollout of the agent interacting with the environment. Here you can select one of them. - **Bookmarks** - Advantages have been computed using [generalized advantage estimation](https://arxiv.org/abs/1506.02438) (GAE). These provide a measure of how successful each choice made by the agent turned out relative to its expectations, and would usually be used to improve the agent's policy during training. The links here allow you to skip to specific frames from the trajectories with the highest and lowest advantages (with at most one link per episode). - **Layers** - Here you can select a layer for which attribution (explained below) has been computed. For the input layer, if included, attribution makes less sense, so gradients have been computed instead. - **Timeline** - Here you can navigate through the frames in each trajectory, either using the buttons or by scrubbing. At the top, information about the current frame is displayed, including the last reward received, the agent's policy, and the action that was chosen next. There are graphs of advantages (as used by the Bookmarks section) and of each network output that has been selected in the Attribution section. - **Attribution** - Here you can view the observations processed by the agent, and attribution from network outputs (just the value function by default) to the selected layer. Below the observation is chart of the attribution summed over spatial positions. If attribution has been computed for the policy, you can add and remove rows from this section, and select a different network output for each row, such as the value function, or the policy's logit for a particular action. Attribution has been computed using the method of [integrated gradients](https://arxiv.org/abs/1703.01365): the gradient of the network output with respect to selected layer has been numerically integrated along the straight line from zero to the layer's output given the current observation. 
This effectively decomposes (or "attributes") the network output across the spatial positions and channels of the selected layer. Dimensionality reduction ([non-negative matrix factorization](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization)) has been applied to the channels using a large batch of varied observations, and the resulting channels are represented using different colors. Additional normalization and smoothing has been applied, with strong attribution bleeding into nearby spatial positions. - **Attribution legend** - For each of the channels produced by dimensionality reduction (explained above), there are small visualizations here of the feature picked out by that channel. These consist of patches taken from observations at the spatial positions where the selected layer was most strongly activated in the direction of the channel. Hovering over these isolates the channel for the displayed attribution, and clicking opens a the Feature visualization popup, where the feature can be further analyzed. - **Feature visualization** (in popup) - This is displayed after a feature from the Attribution legend section has been selected, and shows a larger visualization of the feature. This also consists of patches taken from observations where the selected layer was most strongly activated in the appropriate direction, but here the location of a patch determines a specific spatial position that must be activated. This means that there is a spatial correspondence between the visualization and observations. Patches with weaker activations are displayed with greater transparency, except when hovering over the image. There are sliders that can be used to set the zoom level of the patches (which can also be controlled by scrolling over the image) and the number of patches (which initially equals the number of spatial positions of the selected layer). Clicking on a patch reveals the full observation from which the patch was extracted. - **Hotkeys** - Here is a list of available keyboard shortcuts. Toggling between play and pause also toggles between whether the arrow keys change the play direction or take a single step in one direction. ## Training models There is also a script for training a model using [PPO2](https://github.com/openai/baselines/tree/master/baselines/ppo2) from [Baselines](https://github.com/openai/baselines), and saving a checkpoint file in the required format: ``` from understanding_rl_vision import rl_clarity rl_clarity.train(env_name='coinrun_old', save_dir='path/to/directory') ``` This script is intended to explain checkpoint files, and has not been well-tested. The [example script](understanding_rl_vision/rl_clarity/example.py) demonstrates how to train a model and then generate an interface for it. ## Svelte compilation To generate interfaces, the Svelte source must be compiled to JavaScript. At installation, the module will automatically attempt to download the pre-compiled JavaScript from a remote copy, though this copy is not guaranteed to be kept up-to-date. To obtain an up-to-date copy, or for development, you may wish to re-compile the JavaScript locally. To do this, first install [Node.js](https://nodejs.org/) if you have not already. On Mac: ``` brew install node ``` You will then be able to re-compile the JavaScript: ``` python -c 'from understanding_rl_vision import rl_clarity; rl_clarity.recompile_js()' ``` ### Standalone compiler The `svelte3` package provides generic functions for compiling version 3 of Svelte to JavaScript or HTML. 
These can be used to create an easy-to-use command-line tool: ``` python -c 'from understanding_rl_vision import svelte3; svelte3.compile_html("path/to/svelte/file", "path/to/html/file")' ``` Detailed usage instructions can be found in the functions' [docstrings](svelte3/compiling.py). ## Citation Please cite using the following BibTeX entry: ``` @article{hilton2020understanding, author = {Hilton, Jacob and Cammarata, Nick and Carter, Shan and Goh, Gabriel and Olah, Chris}, title = {Understanding RL Vision}, journal = {Distill}, year = {2020}, note = {https://distill.pub/2020/understanding-rl-vision}, doi = {10.23915/distill.00029} } ```
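Pulling together the keyword arguments quoted in the README above, a typical invocation might look like the following sketch. The checkpoint path and output directory are placeholders, the values simply mirror the defaults and examples mentioned in the text, and the function's docstring remains the authoritative reference for the full argument list.

```python
# Sketch of an rl_clarity.run call combining keyword arguments quoted in the
# README above; paths are placeholders and values mirror the README's examples.
from understanding_rl_vision import rl_clarity

rl_clarity.run(
    "path/to/checkpoint/file",
    output_dir="path/to/directory",
    # 8 trajectories of 512 steps each (the stated defaults), split across MPI workers.
    trajectories_kwargs={"num_envs": 8, "num_steps": 512},
    # Skip the first two non-input layers, which are slowest and usually uninformative.
    layer_kwargs={"discard_first_n": 2},
    # Also compute attribution for the policy logits (value function only by default).
    attr_policy=True,
    # To re-create an interface from previously saved files, the README suggests:
    # load_kwargs={"resample": False},
)
```

The Attribution section above describes integrated gradients over a chosen layer: integrate the gradient of a network output along the straight line from zero to the layer's activations, then multiply elementwise by the activations. As a toy illustration of that computation only (a stand-in network with fixed random weights and a numerical gradient, not the actual RL model or the Lucid utilities), a numpy sketch might look like:

```python
# Toy illustration of integrated-gradients attribution over a layer's activations.
# The "network" here is a stand-in (random weights), not the RL model.
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(16, 1)          # stand-in "value head" acting on a 16-channel layer
h = rng.rand(4, 4, 16)        # stand-in layer activations (height, width, channels)

def f(a):                     # scalar network output given layer activations
    return float(np.maximum(a, 0.0).reshape(-1, 16).dot(W).sum())

def grad_f(a, eps=1e-4):      # numerical gradient of f w.r.t. the activations
    g = np.zeros_like(a)
    it = np.nditer(a, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bump = np.zeros_like(a)
        bump[idx] = eps
        g[idx] = (f(a + bump) - f(a - bump)) / (2 * eps)
    return g

steps = 32                    # number of points used to approximate the integral
alphas = (np.arange(steps) + 0.5) / steps
avg_grad = np.mean([grad_f(alpha * h) for alpha in alphas], axis=0)
attribution = h * avg_grad    # shape (4, 4, 16): per-position, per-channel attribution

# Summing the attribution approximately recovers f(h) - f(0) (completeness).
print(attribution.sum(), f(h) - f(np.zeros_like(h)))
```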
{"requirements.txt": "# installs dependencies from ./setup.py, and the package itself,\n# in editable mode\n-e .\n", "setup.py": "import os\nimport urllib.request\nfrom setuptools import setup, find_packages\n\nREMOTE_JS_PATH = (\n \"https://openaipublic.blob.core.windows.net/rl-clarity/attribution/js/interface.js\"\n)\n\n\ndef download_js():\n dir_ = os.path.dirname(os.path.realpath(__file__))\n js_dir_path = os.path.join(dir_, \"understanding_rl_vision/rl_clarity/js\")\n js_path = os.path.join(js_dir_path, \"interface.js\")\n if not os.path.isfile(js_path):\n if not os.path.exists(js_dir_path):\n os.mkdir(js_dir_path)\n try:\n urllib.request.urlretrieve(REMOTE_JS_PATH, js_path)\n except:\n if os.path.exists(js_path):\n os.remove(js_path)\n\n\nsetup(\n name=\"understanding-rl-vision\",\n packages=find_packages(),\n version=\"0.0.1\",\n install_requires=[\n \"mpi4py\",\n \"baselines\",\n \"lucid @ git+https://github.com/tensorflow/lucid.git@16a03dee8f99af4cdd89d6b7c1cc913817174c83\",\n ],\n extras_require={\"envs\": [\"coinrun\", \"procgen\", \"atari-py\"]},\n)\n\ndownload_js()\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", "understanding_rl_vision\\svelte3\\package.json": "{\n \"name\": \"svelte3\",\n \"description\": \"Svelte 3 compiler with a Python API\",\n \"version\": \"0.0.1\",\n \"dependencies\": {\n \"@babel/core\": \"^7.6.2\",\n \"@babel/preset-env\": \"^7.6.2\",\n \"commonjs\": \"0.0.1\",\n \"core-js\": \"^3.2.1\",\n \"eslint\": \"^6.4.0\",\n \"eslint-plugin-svelte3\": \"^2.7.3\",\n \"rollup\": \"^1.21.4\",\n \"rollup-plugin-babel\": \"^4.3.3\",\n \"rollup-plugin-commonjs\": \"^10.1.0\",\n \"rollup-plugin-eslint\": \"^7.0.0\",\n \"rollup-plugin-node-resolve\": \"^5.2.0\",\n \"rollup-plugin-svelte\": \"^5.1.0\",\n \"svelte\": \"^3.12.1\"\n }\n}\n"}
null
universe
{"type": "directory", "name": "universe", "children": [{"type": "file", "name": ".dockerignore"}, {"type": "file", "name": ".travis.yml"}, {"type": "file", "name": "CODE_OF_CONDUCT.rst"}, {"type": "directory", "name": "doc", "children": [{"type": "file", "name": "env_semantics.rst"}, {"type": "file", "name": "protocols.rst"}, {"type": "file", "name": "remotes.rst"}]}, {"type": "file", "name": "Dockerfile"}, {"type": "directory", "name": "example", "children": [{"type": "directory", "name": "diagnostic-agent", "children": [{"type": "file", "name": "diagnostic-agent.py"}]}, {"type": "directory", "name": "random-agent", "children": [{"type": "file", "name": "random-agent.py"}]}, {"type": "directory", "name": "recorders", "children": [{"type": "file", "name": "botaction_recorder.py"}, {"type": "file", "name": "reward_recorder.py"}, {"type": "file", "name": "vnc_recorder.py"}]}, {"type": "directory", "name": "starter-cluster", "children": [{"type": "file", "name": "starter-cluster"}, {"type": "file", "name": "starter-cluster-cf.json"}, {"type": "file", "name": "starter-cluster-requirements.txt"}]}, {"type": "directory", "name": "system-diagnostics", "children": [{"type": "file", "name": "system_diagnostics_logger.py"}]}]}, {"type": "file", "name": "ISSUE_TEMPLATE"}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "Makefile"}, {"type": "file", "name": "README.rst"}, {"type": "file", "name": "setup.py"}, {"type": "file", "name": "test.dockerfile"}, {"type": "directory", "name": "tests", "children": [{"type": "directory", "name": "functional", "children": [{"type": "file", "name": "test_core_envs_semantics.py"}, {"type": "file", "name": "test_envs.py"}]}]}, {"type": "file", "name": "tox.ini"}, {"type": "directory", "name": "universe", "children": [{"type": "file", "name": "configuration.py"}, {"type": "directory", "name": "envs", "children": [{"type": "file", "name": "diagnostics.py"}, {"type": "file", "name": "dummy_vnc_env.py"}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_semantics.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "vnc_core_env", "children": [{"type": "file", "name": "key.py"}, {"type": "file", "name": "translator.py"}, {"type": "file", "name": "vnc_core_env.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "vnc_env.py"}, {"type": "file", "name": "vnc_flashgames.py"}, {"type": "file", "name": "vnc_gtav.py"}, {"type": "file", "name": "vnc_internet.py"}, {"type": "file", "name": "vnc_starcraft.py"}, {"type": "file", "name": "vnc_wog.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "error.py"}, {"type": "directory", "name": "kube", "children": [{"type": "file", "name": "discovery.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "pyprofile", "children": [{"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "remotes", "children": [{"type": "file", "name": "allocator_remote.py"}, {"type": "file", "name": "build.py"}, {"type": "directory", "name": "compose", "children": [{"type": "file", "name": "colors.py"}, {"type": "file", "name": "container.py"}, {"type": "file", "name": "log_printer.py"}, {"type": "file", "name": "progress_stream.py"}, {"type": "file", "name": "signals.py"}, {"type": "file", "name": "utils.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "docker_remote.py"}, {"type": "file", "name": "hardcoded_addresses.py"}, {"type": "file", "name": "healthcheck.py"}, {"type": 
"file", "name": "remote.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "rewarder", "children": [{"type": "file", "name": "connection_timer.py"}, {"type": "file", "name": "env_status.py"}, {"type": "file", "name": "merge.py"}, {"type": "file", "name": "remote.py"}, {"type": "file", "name": "rewarder_client.py"}, {"type": "file", "name": "rewarder_session.py"}, {"type": "file", "name": "reward_buffer.py"}, {"type": "file", "name": "reward_proxy_server.py"}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_reward_buffer.py"}]}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "runtimes", "children": [{"type": "file", "name": ".agignore"}, {"type": "file", "name": "flashgames.json"}, {"type": "file", "name": "registration.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "runtimes.yml"}, {"type": "directory", "name": "scoreboard", "children": [{"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "spaces", "children": [{"type": "file", "name": "diagnostics.py"}, {"type": "file", "name": "hardcoded.py"}, {"type": "file", "name": "joystick_action_space.py"}, {"type": "file", "name": "joystick_event.py"}, {"type": "file", "name": "vnc_action_space.py"}, {"type": "file", "name": "vnc_event.py"}, {"type": "file", "name": "vnc_observation_space.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "twisty.py"}, {"type": "directory", "name": "utils", "children": [{"type": "file", "name": "display.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "vectorized", "children": [{"type": "file", "name": "core.py"}, {"type": "file", "name": "multiprocessing_env.py"}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_monitoring.py"}]}, {"type": "file", "name": "vectorize_filter.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "vncdriver", "children": [{"type": "file", "name": "auth.py"}, {"type": "file", "name": "constants.py"}, {"type": "file", "name": "dual_proxy_server.py"}, {"type": "file", "name": "error.py"}, {"type": "file", "name": "fbs_reader.py"}, {"type": "file", "name": "fbs_writer.py"}, {"type": "file", "name": "libvnc_session.py"}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "screen", "children": [{"type": "file", "name": "base.py"}, {"type": "file", "name": "numpy_screen.py"}, {"type": "file", "name": "pyglet_screen.py"}, {"type": "file", "name": "screen_buffer.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "server_messages.py"}, {"type": "directory", "name": "vendor", "children": [{"type": "file", "name": "pydes.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "vnc_client.py"}, {"type": "file", "name": "vnc_proxy_server.py"}, {"type": "file", "name": "vnc_session.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "wrappers", "children": [{"type": "file", "name": "action_space.py"}, {"type": "file", "name": "blocking_reset.py"}, {"type": "file", "name": "diagnostics.py"}, {"type": "directory", "name": "experimental", "children": [{"type": "file", "name": "action_space.py"}, {"type": "file", "name": "observation.py"}, {"type": "file", "name": "random_env.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "gym_core.py"}, {"type": "file", "name": "gym_core_sync.py"}, {"type": "file", "name": "joint.py"}, {"type": "file", "name": 
"logger.py"}, {"type": "file", "name": "monitoring.py"}, {"type": "file", "name": "multiprocessing_env.py"}, {"type": "file", "name": "recording.py"}, {"type": "file", "name": "render.py"}, {"type": "directory", "name": "tests", "children": [{"type": "file", "name": "test_joint.py"}, {"type": "file", "name": "test_time_limit.py"}]}, {"type": "file", "name": "throttle.py"}, {"type": "file", "name": "timer.py"}, {"type": "file", "name": "time_limit.py"}, {"type": "file", "name": "vectorize.py"}, {"type": "file", "name": "vision.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "__init__.py"}]}]}
# Python VNC driver implementation

This Python VNC driver uses an older API and needs a small amount of work to once again become a good backend. We haven't bothered with this since the Go driver is much faster, but we would take a pull request to fix it!
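In the meantime, the faster Go driver is what the repository's Dockerfile pins via the `UNIVERSE_VNCDRIVER` environment variable. A minimal sketch of doing the same from Python; the exact point at which universe reads this variable is an assumption here:

```python
import os

# Prefer the faster Go VNC driver, mirroring the Dockerfile's ENV UNIVERSE_VNCDRIVER='go'.
# (Assumption: universe consults this environment variable when a VNC session is created,
# so setting it before the environments are configured is sufficient.)
os.environ["UNIVERSE_VNCDRIVER"] = "go"

import universe  # noqa: E402  (imported after setting the driver preference)
```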
{"Dockerfile": "FROM ubuntu:16.04\n\nRUN apt-get update \\\n && apt-get install -y libav-tools \\\n python3-numpy \\\n python3-scipy \\\n python3-setuptools \\\n python3-pip \\\n libpq-dev \\\n libjpeg-dev \\\n curl \\\n cmake \\\n swig \\\n python3-opengl \\\n libboost-all-dev \\\n libsdl2-dev \\\n wget \\\n unzip \\\n git \\\n golang \\\n net-tools \\\n iptables \\\n libvncserver-dev \\\n software-properties-common \\\n && apt-get clean \\\n && rm -rf /var/lib/apt/lists/*\n\nRUN ln -sf /usr/bin/pip3 /usr/local/bin/pip \\\n && ln -sf /usr/bin/python3 /usr/local/bin/python \\\n && pip install -U pip\n\n# Install gym\nRUN pip install gym[all]\n\n# Get the faster VNC driver\nRUN pip install go-vncdriver>=0.4.0\n\n# Install pytest (for running test cases)\nRUN pip install pytest\n\n# Force the container to use the go vnc driver\nENV UNIVERSE_VNCDRIVER='go'\n\nWORKDIR /usr/local/universe/\n\n# Cachebusting\nCOPY ./setup.py ./\nCOPY ./tox.ini ./\n\nRUN pip install -e .\n\n# Upload our actual code\nCOPY . ./\n\n# Just in case any python cache files were carried over from the source directory, remove them\nRUN py3clean .\n", "setup.py": "from setuptools import setup, find_packages\n\nsetup(name='universe',\n version='0.21.5',\n description=\"Universe: a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.\",\n url='https://github.com/openai/universe',\n author='OpenAI',\n author_email='[email protected]',\n packages=[package for package in find_packages()\n if package.startswith('universe')],\n install_requires=[\n 'autobahn>=0.16.0',\n 'docker-py==1.10.3',\n 'docker-pycreds==0.2.1',\n 'fastzbarlight>=0.0.13',\n 'go-vncdriver>=0.4.8',\n 'gym>=0.8.1',\n 'Pillow>=3.3.0',\n 'PyYAML>=3.12',\n 'six>=1.10.0',\n 'twisted>=16.5.0',\n 'ujson>=1.35',\n ],\n package_data={'universe': ['runtimes.yml', 'runtimes/flashgames.json']},\n tests_require=['pytest'],\n extras_require={\n 'atari': 'gym[atari]',\n }\n )\n", "test.dockerfile": "FROM quay.io/openai/universe\n\nRUN pip install tox\n\n# Upload our actual code\nWORKDIR /usr/local/universe/\nCOPY . ./\n\n# Run tox. Keep printing so Travis knows we're alive.\nCMD [\"bash\", \"-c\", \"( while true; do echo '.'; sleep 60; done ) & tox\"]\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", "example\\starter-cluster\\starter-cluster-requirements.txt": "boto3>=1.4.2\nclick>=6.6\ndocker-py==1.10.6\nPyYAML>=3.12\nuniverse>=0.1.0\ndocker-compose>=1.9.0\n"}
null
universe-starter-agent
{"type": "directory", "name": "universe-starter-agent", "children": [{"type": "file", "name": "a3c.py"}, {"type": "file", "name": "envs.py"}, {"type": "directory", "name": "imgs", "children": []}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "model.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "train.py"}, {"type": "file", "name": "worker.py"}]}
**This repository has been deprecated in favor of the Retro (https://github.com/openai/retro) library. See our Retro Contest (https://blog.openai.com/retro-contest) blog post for details.**

# universe-starter-agent

The codebase implements a starter agent that can solve a number of `universe` environments. It contains a basic implementation of the [A3C algorithm](https://arxiv.org/abs/1602.01783), adapted for real-time environments.

# Dependencies

* Python 2.7 or 3.5
* [Golang](https://golang.org/doc/install)
* [six](https://pypi.python.org/pypi/six) (for py2/3 compatibility)
* [TensorFlow](https://www.tensorflow.org/) 0.12
* [tmux](https://tmux.github.io/) (the start script opens up a tmux session with multiple windows)
* [htop](https://hisham.hm/htop/) (shown in one of the tmux windows)
* [gym](https://pypi.python.org/pypi/gym)
* gym[atari]
* libjpeg-turbo (`brew install libjpeg-turbo`)
* [universe](https://pypi.python.org/pypi/universe)
* [opencv-python](https://pypi.python.org/pypi/opencv-python)
* [numpy](https://pypi.python.org/pypi/numpy)
* [scipy](https://pypi.python.org/pypi/scipy)

# Getting Started

```
conda create --name universe-starter-agent python=3.5
source activate universe-starter-agent

brew install tmux htop cmake golang libjpeg-turbo  # On Linux use sudo apt-get install -y tmux htop cmake golang libjpeg-dev

pip install "gym[atari]"
pip install universe
pip install six
pip install tensorflow
conda install -y -c https://conda.binstar.org/menpo opencv3
conda install -y numpy
conda install -y scipy
```

Add the following to your `.bashrc` so that you'll have the correct environment when the `train.py` script spawns new bash shells:

```source activate universe-starter-agent```

## Atari Pong

`python train.py --num-workers 2 --env-id PongDeterministic-v3 --log-dir /tmp/pong`

The command above will train an agent on Atari Pong using the ALE simulator. It will use two workers learning in parallel (`--num-workers` flag) and will output intermediate results into the given directory.

The code will launch the following processes:
* worker-0 - a process that runs policy gradient
* worker-1 - a process identical to worker-0, but using different random noise from the environment
* ps - the parameter server, which synchronizes the parameters among the different workers
* tb - a tensorboard process for convenient display of the statistics of learning

Once you start the training process, it will create a tmux session with a window for each of these processes. You can connect to them by typing `tmux a` in the console. Once in the tmux session, you can see all your windows with `ctrl-b w`. To switch to window number 0, type: `ctrl-b 0`. Look up tmux documentation for more commands.

To access TensorBoard and see various monitoring metrics of the agent, open [http://localhost:12345/](http://localhost:12345/) in a browser.

Using 16 workers, the agent should be able to solve `PongDeterministic-v3` (not VNC) within 30 minutes (often less) on an `m4.10xlarge` instance. Using 32 workers, the agent is able to solve the same environment in 10 minutes on an `m4.16xlarge` instance. If you run this experiment on a high-end MacBook Pro, the above job will take just under 2 hours to solve Pong.
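These processes coordinate through distributed TensorFlow. As a rough sketch of the kind of cluster layout this implies for `--num-workers 2` (illustrative only, with hypothetical port numbers; this is not the repository's `train.py`/`worker.py`, and it uses the TF 0.x/1.x API the README pins):

```python
import tensorflow as tf

# Hypothetical port assignments for one parameter server and two workers on localhost.
cluster = tf.train.ClusterSpec({
    "ps": ["127.0.0.1:12222"],
    "worker": ["127.0.0.1:12223", "127.0.0.1:12224"],
})

# Each process is started with its own job name and task index, e.g. worker-0:
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Shared variables are pinned to the parameter server so all workers update the same weights.
with tf.device(tf.train.replica_device_setter(1, worker_device="/job:worker/task:0")):
    global_step = tf.Variable(0, trainable=False, name="global_step")
```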
Add the `--visualise` toggle if you want to visualise the worker using `env.render()`, as follows:

`python train.py --num-workers 2 --env-id PongDeterministic-v3 --log-dir /tmp/pong --visualise`

![pong](https://github.com/openai/universe-starter-agent/raw/master/imgs/tb_pong.png "Pong")

For best performance, it is recommended that the number of workers does not exceed the available number of CPU cores.

You can stop the experiment with the `tmux kill-session` command.

## Playing games over remote desktop

The main difference from the previous experiment is that now we are going to play the game through the VNC protocol. The VNC environments are hosted on the EC2 cloud and have an interface that's different from a conventional Atari Gym environment; luckily, with the help of several wrappers (which are used within the `envs.py` file) the experience for the agent should be similar to playing locally. The problem itself is more difficult because the observations and actions are delayed due to the latency induced by the network.

More interestingly, you can also peek at what the agent is doing with a VNCViewer.

Note that the default behavior of `train.py` is to start the remotes on a local machine. Take a look at https://github.com/openai/universe/blob/master/doc/remotes.rst for documentation on managing your remotes. Pass an additional `-r` flag to point to pre-existing instances.

### VNC Pong

`python train.py --num-workers 2 --env-id gym-core.PongDeterministic-v3 --log-dir /tmp/vncpong`

_Peeking into the agent's environment with TurboVNC_

You can use your system viewer as `open vnc://localhost:5900` (or `open vnc://${docker_ip}:5900`) or connect TurboVNC to that ip/port. The VNC password is `"openai"`.

![pong](https://github.com/openai/universe-starter-agent/raw/master/imgs/vnc_pong.png "Pong over VNC")

#### Important caveats

One of the novel challenges in using Universe environments is that they operate in *real time*, and in addition, it takes time for the environment to transmit the observation to the agent. This creates a lag: the greater the lag, the harder it is to solve the environment with today's RL algorithms. Thus, to get the best possible results it is necessary to reduce the lag, which can be achieved by having both the environments and the agent live on the same high-speed computer network. For example, if you have a fast local network, you could host the environments on one set of machines, and the agent on another machine that can speak to the environments with low latency. Alternatively, you can run the environments and the agent in the same EC2/Azure region. Other configurations tend to have greater lag.

To keep track of your lag, look for the phrase `reaction_time` in stderr. If you run both the agent and the environment on nearby machines in the cloud, your `reaction_time` should be as low as 40ms. The `reaction_time` statistic is printed to stderr because we wrap our environment with the `Logger` wrapper, as done [here](https://github.com/openai/universe-starter-agent/blob/master/envs.py#L32).

Generally speaking, the environments most affected by lag are games that place a lot of emphasis on reaction time. For example, this agent is able to solve VNC Pong (`gym-core.PongDeterministic-v3`) in under 2 hours when both the agent and the environment are co-located on the cloud, but it had difficulty solving VNC Pong when the environment was on the cloud while the agent was not.
### A note on tuning

This implementation has been tuned to do well on VNC Pong, and we do not guarantee its performance on other tasks. It is meant as a starting point.

### Playing flash games

You may run the following command to launch the agent on the game Neon Race:

`python train.py --num-workers 2 --env-id flashgames.NeonRace-v0 --log-dir /tmp/neonrace`

_What the agent sees when playing Neon Race_ (you can connect to this view via the [note](#vnc-pong) above)

![neon](https://github.com/openai/universe-starter-agent/raw/master/imgs/neon_race.png "Neon Race")

Getting 80% of the maximal score takes between 1 and 2 hours with 16 workers, and getting to 100% of the score takes about 12 hours. Also, flash games are run at 5fps by default, so it should be possible to productively use 16 workers on a machine with 8 (and possibly even 4) cores.

### Next steps

Now that you have seen an example agent, develop agents of your own. We hope that you will find doing so to be an exciting and enjoyable task.
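As a starting point for your own agents, the raw `universe` API (without this repository's wrappers) looks roughly like the sketch below. It assumes one local Docker remote and uses the `flashgames.NeonRace-v0` environment mentioned above; the key held down is only an example action:

```python
import gym
import universe  # registers the universe VNC environments on import

env = gym.make('flashgames.NeonRace-v0')
env.configure(remotes=1)  # start one local Docker remote; see the remotes documentation for pre-existing instances
observation_n = env.reset()

while True:
    # One action list per remote; here we simply hold the up arrow down.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```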
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n"}
null
vdvae
{"type": "directory", "name": "vdvae", "children": [{"type": "file", "name": "data.py"}, {"type": "file", "name": "files_to_npy.py"}, {"type": "file", "name": "hps.py"}, {"type": "file", "name": "LICENSE.md"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "setup_cifar10.sh"}, {"type": "file", "name": "setup_ffhq1024.sh"}, {"type": "file", "name": "setup_ffhq256.sh"}, {"type": "file", "name": "setup_imagenet.sh"}, {"type": "file", "name": "train.py"}, {"type": "file", "name": "train_helpers.py"}, {"type": "file", "name": "utils.py"}, {"type": "file", "name": "vae.py"}, {"type": "file", "name": "vae_helpers.py"}]}
# Very Deep VAEs

Repository for the paper "Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images" (https://arxiv.org/abs/2011.10650)

Some model samples and a visualization of how it generates them:

![image](header-image.png)

This repository is tested with PyTorch 1.6, CUDA 10.1, Numpy 1.16, Ubuntu 18.04, and V100 GPUs.

# Setup

Several additional packages are required, including NVIDIA Apex:

```
pip install imageio
pip install mpi4py
pip install sklearn
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
cd ..
```

Also, you'll have to download the data, depending on which dataset you want to run:

```
./setup_cifar10.sh
./setup_imagenet.sh imagenet32
./setup_imagenet.sh imagenet64
./setup_ffhq256.sh
./setup_ffhq1024.sh /path/to/images1024x1024  # this one depends on you first downloading the subfolder `images_1024x1024` from https://github.com/NVlabs/ffhq-dataset on your own
```

# Training models

Hyperparameters all reside in `hps.py`. We use 2 GPUs for our CIFAR-10 runs, and 32 for the rest of the models. (Using a lower batch size is also possible; it results in slower learning and may also require a lower learning rate.) The `mpiexec` arguments you use for runs with more than 1 node depend on the configuration of your system, so please adapt accordingly.

```bash
mpiexec -n 2 python train.py --hps cifar10
mpiexec -n 32 python train.py --hps imagenet32
mpiexec -n 32 python train.py --hps imagenet64
mpiexec -n 32 python train.py --hps ffhq256
mpiexec -n 32 python train.py --hps ffhq1024
```

# Restoring saved models

For convenience, we have included training checkpoints which can be restored in order to confirm performance, continue training, or generate samples.
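Before launching a restore run, it can be useful to sanity-check a downloaded checkpoint. A minimal sketch, under the assumption that the `.th` files below are ordinary PyTorch state dicts saved with `torch.save`:

```python
import torch

# Assumption: the released .th checkpoints are plain state dicts; if so, the total
# parameter count should roughly match the figures quoted in the sections below.
sd = torch.load("imagenet32-iter-1700000-model-ema.th", map_location="cpu")
n_params = sum(t.numel() for t in sd.values())
print(f"{len(sd)} tensors, ~{n_params / 1e6:.0f}M parameters")
```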
### ImageNet 32

```bash
# 119M parameter model, trained for 1.7M iters (about 2.5 weeks on 32 V100)
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/imagenet32-iter-1700000-log.jsonl
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/imagenet32-iter-1700000-model.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/imagenet32-iter-1700000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/imagenet32-iter-1700000-opt.th
python train.py --hps imagenet32 --restore_path imagenet32-iter-1700000-model.th --restore_ema_path imagenet32-iter-1700000-model-ema.th --restore_log_path imagenet32-iter-1700000-log.jsonl --restore_optimizer_path imagenet32-iter-1700000-opt.th --test_eval
# should give 2.6364 nats per dim, which is 3.80 bpd
```

### ImageNet 64

```bash
# 125M parameter model, trained for 1.6M iters (about 2.5 weeks on 32 V100)
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/imagenet64-iter-1600000-log.jsonl
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/imagenet64-iter-1600000-model.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/imagenet64-iter-1600000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/imagenet64-iter-1600000-opt.th
python train.py --hps imagenet64 --restore_path imagenet64-iter-1600000-model.th --restore_ema_path imagenet64-iter-1600000-model-ema.th --restore_log_path imagenet64-iter-1600000-log.jsonl --restore_optimizer_path imagenet64-iter-1600000-opt.th --test_eval
# should be 2.44 nats, or 3.52 bits per dim
```

### FFHQ-256

```bash
# 115M parameters, trained for 1.7M iterations (or about 2.5 weeks) on 32 V100
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq256-iter-1700000-log.jsonl
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq256-iter-1700000-model.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq256-iter-1700000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq256-iter-1700000-opt.th
python train.py --hps ffhq256 --restore_path ffhq256-iter-1700000-model.th --restore_ema_path ffhq256-iter-1700000-model-ema.th --restore_log_path ffhq256-iter-1700000-log.jsonl --restore_optimizer_path ffhq256-iter-1700000-opt.th --test_eval
# should be 0.4232 nats, or 0.61 bits per dim
```

### FFHQ-1024

```bash
# 115M parameters, trained for 1.7M iterations (or about 2.5 weeks) on 32 V100
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq1024-iter-1700000-log.jsonl
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq1024-iter-1700000-model.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq1024-iter-1700000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets/ffhq1024-iter-1700000-opt.th
python train.py --hps ffhq1024 --restore_path ffhq1024-iter-1700000-model.th --restore_ema_path ffhq1024-iter-1700000-model-ema.th --restore_log_path ffhq1024-iter-1700000-log.jsonl --restore_optimizer_path ffhq1024-iter-1700000-opt.th --test_eval
# should be 1.678 nats, or 2.42 bits per dim
```

### CIFAR-10

```bash
# 39M parameters, trained for ~1M iterations with early stopping (a little less than a week on 2 GPUs)
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/cifar10-seed0-iter-900000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/cifar10-seed1-iter-1050000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/cifar10-seed2-iter-650000-model-ema.th
wget https://openaipublic.blob.core.windows.net/very-deep-vaes-assets/vdvae-assets-2/cifar10-seed3-iter-1050000-model-ema.th
python train.py --hps cifar10 --restore_ema_path cifar10-seed0-iter-900000-model-ema.th --test_eval
python train.py --hps cifar10 --restore_ema_path cifar10-seed1-iter-1050000-model-ema.th --test_eval
python train.py --hps cifar10 --restore_ema_path cifar10-seed2-iter-650000-model-ema.th --test_eval
python train.py --hps cifar10 --restore_ema_path cifar10-seed3-iter-1050000-model-ema.th --test_eval
# seeds 0, 1, 2, 3 should give 2.879, 2.842, 2.898, 2.864 bits per dim, for an average of 2.87 bits per dim.
```
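The nats-per-dim and bits-per-dim figures quoted above are related by a factor of ln 2. A quick consistency check in plain Python (not part of the repository):

```python
import math

# bits per dim = nats per dim / ln(2)
reported = {"imagenet32": 2.6364, "imagenet64": 2.44, "ffhq256": 0.4232, "ffhq1024": 1.678}
for name, nats in reported.items():
    print(f"{name}: {nats / math.log(2):.2f} bits per dim")
# imagenet32: 3.80, imagenet64: 3.52, ffhq256: 0.61, ffhq1024: 2.42 -- matching the comments above
```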
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 ea35b490313bc33e7f8ac63dd8132f3cc1a729b4 Hamza Amin <[email protected]> 1729337418 +0500\tclone: from https://github.com/openai/vdvae.git\n", ".git\\refs\\heads\\main": "ea35b490313bc33e7f8ac63dd8132f3cc1a729b4\n"}
null
Video-Pre-Training
{"type": "directory", "name": "Video-Pre-Training", "children": [{"type": "file", "name": "agent.py"}, {"type": "file", "name": "behavioural_cloning.py"}, {"type": "directory", "name": "cursors", "children": []}, {"type": "file", "name": "data_loader.py"}, {"type": "file", "name": "inverse_dynamics_model.py"}, {"type": "directory", "name": "lib", "children": [{"type": "file", "name": "actions.py"}, {"type": "file", "name": "action_head.py"}, {"type": "file", "name": "action_mapping.py"}, {"type": "file", "name": "impala_cnn.py"}, {"type": "file", "name": "masked_attention.py"}, {"type": "file", "name": "minecraft_util.py"}, {"type": "file", "name": "misc.py"}, {"type": "file", "name": "mlp.py"}, {"type": "file", "name": "normalize_ewma.py"}, {"type": "file", "name": "policy.py"}, {"type": "file", "name": "scaled_mse_head.py"}, {"type": "file", "name": "torch_util.py"}, {"type": "file", "name": "tree_util.py"}, {"type": "file", "name": "util.py"}, {"type": "file", "name": "xf.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "LICENSE"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "requirements.txt"}, {"type": "file", "name": "run_agent.py"}, {"type": "file", "name": "run_inverse_dynamics_model.py"}]}
# Video-Pre-Training Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos > :page_facing_up: [Read Paper](https://cdn.openai.com/vpt/Paper.pdf) \ :mega: [Blog Post](https://openai.com/blog/vpt) \ :space_invader: [MineRL Environment](https://github.com/minerllabs/minerl) (note version 1.0+ required) \ :checkered_flag: [MineRL BASALT Competition](https://www.aicrowd.com/challenges/neurips-2022-minerl-basalt-competition) # Running agent models Install pre-requirements for [MineRL](https://minerl.readthedocs.io/en/latest/tutorials/index.html). Then install requirements with: ``` pip install git+https://github.com/minerllabs/minerl pip install -r requirements.txt ``` > ⚠️ Note: For reproducibility reasons, the PyTorch version is pinned as `torch==1.9.0`, which is incompatible with Python 3.10 or higher versions. If you are using Python 3.10 or higher, install a [newer version of PyTorch](https://pytorch.org/get-started/locally/) (usually, `pip install torch`). However, note that this *might* subtly change model behaviour (e.g., still act mostly as expected, but not reaching the reported performance). To run the code, call ``` python run_agent.py --model [path to .model file] --weights [path to .weight file] ``` After loading up, you should see a window of the agent playing Minecraft. # Agent Model Zoo Below are the model files and weights files for various pre-trained Minecraft models. The 1x, 2x and 3x model files correspond to their respective model weights width. * [:arrow_down: 1x Model](https://openaipublic.blob.core.windows.net/minecraft-rl/models/foundation-model-1x.model) * [:arrow_down: 2x Model](https://openaipublic.blob.core.windows.net/minecraft-rl/models/2x.model) * [:arrow_down: 3x Model](https://openaipublic.blob.core.windows.net/minecraft-rl/models/foundation-model-3x.model) ### Demonstration Only - Behavioral Cloning These models are trained on video demonstrations of humans playing Minecraft using behavioral cloning (BC) and are more general than later models which use reinforcement learning (RL) to further optimize the policy. Foundational models are trained across all videos in a single training run while house and early game models refine their respective size foundational model further using either the housebuilding contractor data or early game video sub-set. See the paper linked above for more details. #### Foundational Model :chart_with_upwards_trend: * [:arrow_down: 1x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/foundation-model-1x.weights) * [:arrow_down: 2x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/foundation-model-2x.weights) * [:arrow_down: 3x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/foundation-model-3x.weights) #### Fine-Tuned from House :chart_with_upwards_trend: * [:arrow_down: 3x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/bc-house-3x.weights) #### Fine-Tuned from Early Game :chart_with_upwards_trend: * [:arrow_down: 2x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/bc-early-game-2x.weights) * [:arrow_down: 3x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/bc-early-game-3x.weights) ### Models With Environment Interactions These models further refine the above demonstration based models with a reward function targeted at obtaining diamond pickaxes. 
While less general than the behavioral cloning models, these models have the benefit of interacting with the environment using a reward function and excel at progressing through the tech tree quickly. See the paper for more information on how they were trained and the exact reward schedule. #### RL from Foundation :chart_with_upwards_trend: * [:arrow_down: 2x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/rl-from-foundation-2x.weights) #### RL from House :chart_with_upwards_trend: * [:arrow_down: 2x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/rl-from-house-2x.weights) #### RL from Early Game :chart_with_upwards_trend: * [:arrow_down: 2x Width Weights](https://openaipublic.blob.core.windows.net/minecraft-rl/models/rl-from-early-game-2x.weights) # Running Inverse Dynamics Model (IDM) The IDM aims to predict what actions the player is taking in a video recording. Setup: * Install requirements: `pip install -r requirements.txt` * Download the IDM model [.model :arrow_down:](https://openaipublic.blob.core.windows.net/minecraft-rl/idm/4x_idm.model) and [.weight :arrow_down:](https://openaipublic.blob.core.windows.net/minecraft-rl/idm/4x_idm.weights) files * For demonstration purposes, you can use the contractor recordings shared below. For this demo we use [this .mp4](https://openaipublic.blob.core.windows.net/minecraft-rl/data/10.0/cheeky-cornflower-setter-02e496ce4abb-20220421-092639.mp4) and [this associated actions file (.jsonl)](https://openaipublic.blob.core.windows.net/minecraft-rl/data/10.0/cheeky-cornflower-setter-02e496ce4abb-20220421-092639.jsonl). To run the model with the above files placed in the root directory of this code: ``` python run_inverse_dynamics_model.py --weights 4x_idm.weights --model 4x_idm.model --video-path cheeky-cornflower-setter-02e496ce4abb-20220421-092639.mp4 --jsonl-path cheeky-cornflower-setter-02e496ce4abb-20220421-092639.jsonl ``` A window should pop up which shows the video frame-by-frame, showing the predicted and true (recorded) actions side-by-side on the left. Note that `run_inverse_dynamics_model.py` is designed to be a demo of the IDM, not code to put it into practice. # Using behavioural cloning to fine-tune the models **Disclaimer:** This code is a rough demonstration only and not an exact recreation of what the original VPT paper did (but it contains some preprocessing steps you want to be aware of)! As such, do not expect to replicate the original experiments with this code. This code has been designed to be runnable on consumer hardware (e.g., 8GB of VRAM). Setup: * Install requirements: `pip install -r requirements.txt` * Download the `.weights` and `.model` files for the model you want to fine-tune. * Download contractor data (below) and place the `.mp4` and `.jsonl` files in the same directory (e.g., `data`). With default settings, you need at least 12 recordings. If you downloaded the "1x Width" models and placed some data under the `data` directory, you can perform finetuning with ``` python behavioural_cloning.py --data-dir data --in-model foundation-model-1x.model --in-weights foundation-model-1x.weights --out-weights finetuned-1x.weights ``` You can then use `finetuned-1x.weights` when running the agent. You can change the training settings at the top of `behavioural_cloning.py`. Major limitations: - Only trains a single step at a time, i.e., errors are not propagated through timesteps. - Computes gradients one sample at a time to keep memory use low, but this also slows down the code.
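The second limitation refers to the usual memory-versus-speed trade-off of per-sample backward passes. A generic PyTorch sketch of that gradient-accumulation pattern (illustrative only, not the repository's `behavioural_cloning.py`; the model and data here are stand-ins):

```python
import torch

# Stand-ins for the policy and data; the point is the accumulation pattern, not the model.
model = torch.nn.Linear(128, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
accum_steps = 16  # how many single-sample backward passes to accumulate per optimizer step

for i in range(256):
    x = torch.randn(1, 128)            # one sample at a time keeps activation memory low
    y = torch.randint(0, 10, (1,))
    loss = torch.nn.functional.cross_entropy(model(x), y) / accum_steps
    loss.backward()                    # gradients accumulate in .grad across samples
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```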
# Contractor Demonstrations ### Versions Over the course of the project we requested various demonstrations from contractors which we release as index files below. In general, major recorder versions change for a new prompt or recording feature while bug-fixes were represented as minor version changes. However, some recorder versions we asked contractors to change their username when recording particular modalities. Also, as contractors internally ask questions, clarification from one contractor may result in a behavioral change in the other contractor. It is intractable to share every contractor's view for each version, but we've shared the prompts and major clarifications for each recorder version where the task changed significantly. <details> <summary>Initial Prompt</summary> We are collecting data for training AI models in Minecraft. You'll need to install java, download the modified version of minecraft (that collects and uploads your play data), and play minecraft survival mode! Paid per hour of gameplay. Prior experience in minecraft not. necessary. We do not collect any data that is unrelated to minecraft from your computer. </details> The following is a list of the available versions: * **6.x** Core recorder features subject to change [:arrow_down: index file](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/all_6xx_Jun_29.json) * 6.9 First feature complete recorder version * 6.10 Fixes mouse scaling on Mac when gui is open * 6.11 Tracks the hotbar slot * 6.13 Sprinting, swap-hands, ... (see commits below) <details> <summary>Commits</summary> * improve replays that are cut in the middle of gui; working on riding boats / replays cut in the middle of a run * improve replays by adding dwheel action etc, also, loosen up replay tolerances * opencv version bump * add swap hands, and recording of the step timestamp * implement replaying from running and sprinting and tests * do not record sprinting (can use stats for that) * check for mouse button number, ignore >2 * handle the errors when mouse / keyboard are recorded as null </details> * **7.x** Prompt changes [:arrow_down: index file](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/all_7xx_Apr_6.json) * 7.6 Bump version for internal tracking <details> <summary>Additional ask to contractors</summary> Right now, early game data is especially valuable to us. As such, we request that at least half of the data you upload is from the first 30 minutes of the game. This means that, for every hour of gameplay you spend in an older world, we ask you to play two sessions in which you create a new world and play for 30 minutes. You can play for longer in these worlds, but only the first 30 minutes counts as early game data. </details> * **8.x** :clipboard: House Building from Scratch Task [:arrow_down: index](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/all_8xx_Jun_29.json) <details> <summary>Changes and Prompt</summary> Hi all! Thank you for your hard work so far. This week we would like to have you all collect data on a specific task. This comes with a new recorder version 8.0 which you will need to update your recording script to download. This week we would like you to use a new world each time you play, so loading existing worlds is disabled. The new task is as follows: Starting in a new world, build a simple house in 10-15 minutes. This corresponds to one day and a bit of the night. Please use primarily wood, dirt, and sand, as well as crafted wood items such as doors, fences, ect. 
in constructing your house. Avoid using difficult items such as stone. Aside from those constraints, you may decorate the structure you build as you wish. It does not need to have any specific furniture. For example, it is OK if there is no bed in your house. If you have not finished the house by the sunrise (20 minutes) please exit and continue to another demonstration. Please continue to narrate what you are doing while completing this task. Since you will be unable to resume building after exiting Minecraft or going back to the main menu, you must finish these demonstrations in one session. Pausing via the menu is still supported. If you want to view your creations later, they will be saved locally so you can look at them in your own time. We may use these save files in a future task so if you have space, please leave the save files titled “build-house-15-min-“. For this week try to avoid all cobblestone / stone / granite For this week we just want simple houses without sleeping. If 10 minutes is too short, let us know and we can think of how to adjust! Stone tools are ok but I think you may run-out of time Changes: * Timer ends episode after 10 realtime minutes * Worlds are named: `"build-house-15-min-" + Math.abs(random.nextInt());` </details> * Note this version introduces 10-minute timer that ends the episode. It cut experiments short occasionally and was fixed in 9.1 * 8.0 Simple House * 8.2 Update upload script * **9.x** :clipboard: House Building from Random Starting Materials Task [:arrow_down: index](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/all_9xx_Jun_29.json) <details> <summary>Changes and Prompt</summary> You now will have 10 minutes to use the provided resources to build your house / home / or structure. In this version, the experiment will time out after 10 minutes if you are not complete so don't be alarmed if that happens, it is intentional. No need to use up all the resources! 
It's ok to collect a few things but spend the majority of the time placing blocks (the act of placing seems to be harder to learn) Changes: * Worlds are named: `"design-house-10-min-" + Math.abs(random.nextInt());` * Starting inventory given by code below </details> <details> <summary>Random Starting Inventory Code</summary> ```java Random random = new Random(); List<ItemStack> hotbar = new ArrayList<>(); List<ItemStack> inventory = new ArrayList<>(); // Ensure we give the player the basic tools in their hot bar hotbar.add(new ItemStack(Items.STONE_AXE)); hotbar.add(new ItemStack(Items.STONE_PICKAXE)); hotbar.add(new ItemStack(Items.STONE_SHOVEL)); hotbar.add(new ItemStack(Items.CRAFTING_TABLE)); // Add some random items to the player hotbar as well addToList(hotbar, inventory, Items.TORCH, random.nextInt(16) * 2 + 2); // Next add main building blocks if (random.nextFloat() < 0.7) { addToList(hotbar, inventory, Items.OAK_FENCE_GATE, random.nextInt(5)); addToList(hotbar, inventory, Items.OAK_FENCE, random.nextInt(5) * 64); addToList(hotbar, inventory, Items.OAK_DOOR, random.nextInt(5)); addToList(hotbar, inventory, Items.OAK_TRAPDOOR, random.nextInt(2) * 2); addToList(hotbar, inventory, Items.OAK_PLANKS, random.nextInt(3) * 64 + 128); addToList(hotbar, inventory, Items.OAK_SLAB, random.nextInt(3) * 64); addToList(hotbar, inventory, Items.OAK_STAIRS, random.nextInt(3) * 64); addToList(hotbar, inventory, Items.OAK_LOG, random.nextInt(2) * 32); addToList(hotbar, inventory, Items.OAK_PRESSURE_PLATE, random.nextInt(5)); } else { addToList(hotbar, inventory, Items.BIRCH_FENCE_GATE, random.nextInt(5)); addToList(hotbar, inventory, Items.BIRCH_FENCE, random.nextInt(5) * 64); addToList(hotbar, inventory, Items.BIRCH_DOOR, random.nextInt(5)); addToList(hotbar, inventory, Items.BIRCH_TRAPDOOR, random.nextInt(2) * 2); addToList(hotbar, inventory, Items.BIRCH_PLANKS, random.nextInt(3) * 64 + 128); addToList(hotbar, inventory, Items.BIRCH_SLAB, random.nextInt(3) * 64); addToList(hotbar, inventory, Items.BIRCH_STAIRS, random.nextInt(3) * 64); addToList(hotbar, inventory, Items.BIRCH_LOG, random.nextInt(2) * 32); addToList(hotbar, inventory, Items.BIRCH_PRESSURE_PLATE, random.nextInt(5)); } // Now add some random decoration items to the player inventory addToList(hotbar, inventory, Items.CHEST, random.nextInt(3)); addToList(hotbar, inventory, Items.FURNACE, random.nextInt(2) + 1); addToList(hotbar, inventory, Items.GLASS_PANE, random.nextInt(5) * 4); addToList(hotbar, inventory, Items.WHITE_BED, (int) (random.nextFloat() + 0.2)); // Bed 20% of the time addToList(hotbar, inventory, Items.PAINTING, (int) (random.nextFloat() + 0.1)); // Painting 10% of the time addToList(hotbar, inventory, Items.FLOWER_POT, (int) (random.nextFloat() + 0.1) * 4); // 4 Flower pots 10% of the time addToList(hotbar, inventory, Items.OXEYE_DAISY, (int) (random.nextFloat() + 0.1) * 4); // 4 Oxeye daisies 10% of the time addToList(hotbar, inventory, Items.POPPY, (int) (random.nextFloat() + 0.1) * 4); // 4 Poppies 10% of the time addToList(hotbar, inventory, Items.SUNFLOWER, (int) (random.nextFloat() + 0.1) * 4); // 4 Sunflowers 10% of the time // Shuffle the hotbar slots and inventory slots Collections.shuffle(hotbar); Collections.shuffle(inventory); // Give the player the items this.mc.getIntegratedServer().getPlayerList().getPlayers().forEach(p -> { if (p.getUniqueID().equals(this.getUniqueID())) { hotbar.forEach(p.inventory::addItemStackToInventory); inventory.forEach(p.inventory::addItemStackToInventory); } }); ``` </details> * 
9.0 First version * 9.1 Fixed timer bug * **10.0** :clipboard: Obtain Diamond Pickaxe Task [:arrow_down: index](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/all_10xx_Jun_29.json) <details> <summary>Changes and Prompt</summary> Prompt: For this new task we have given you 20 minutes to craft a diamond pickaxe. We ask that you do not try to search for villages or other ways of getting diamonds, but if you are spawned in view of one, or happen to fall into a cave structure feel free to explore it for diamonds. If 20 min is not enough that is OK. It will happen on some seeds because of bad luck. Please do not use glitches to find the diamonds. Changes: * change to 20 minute time limit * _don't count gui time as part of the time limit_ * World are named `"collect-diamond-pickaxe-15min-" + Math.abs(random.nextInt());` </details> Sometimes we asked the contractors to signify other tasks besides changing the version. This primarily occurred in versions 6 and 7 as 8, 9 and 10 are all task specific. <details> <summary>Prompt to contractors (click to show)</summary> Another request about additional time - please use some of it to chop trees. Specifically, please start the recorder by adding --username treechop argument to the script (i.e. use play --username treechop on windows, ./play.sh --username treechop on osx/linux), and spend some time chopping trees! Getting wooden or stone tools is ok, but please spend the majority of the with username treechop specifically chopping. I did it myself for about 15 minutes, and it does get boring pretty quickly, so I don't expect you to do it all the time, but please do at least a little bit of chopping. Feel free to play normally the rest of the time (but please restart without --username treechop argument when you are not chopping) However, it is preferable that you start a new world though, and use only the tools that are easily obtainable in that world. I'll see what I can do about getting player an iron axe - that sounds reasonable, and should not be hard, but will require a code update. </details> ### Environment We restrict the contractors to playing Minecraft in windowed mode at 720p which we downsample at 20hz to 360p to minimize space. We also disabled the options screen to prevent the contractor from changing things such as brightness, or rendering options. We ask contractors not to press keys such as f3 which shows a debug overlay, however some contractors may still do this. ### Data format Demonstrations are broken up into up to 5 minute segments consisting of a series of compressed screen observations, actions, environment statistics, and a checkpoint save file from the start of the segment. Each relative path in the index will have all the files for that given segment, however if a file was dropped while uploading, the corresponding relative path is not included in the index therefore there may be missing chunks from otherwise continuous demonstrations. Index files are provided for each version as a json file: ```json { "basedir": "https://openaipublic.blob.core.windows.net/data/", "relpaths": [ "8.0/cheeky-cornflower-setter-74ae6c2eae2e-20220315-122354", ... ] } ``` Relative paths follow the following format: * `<recorder-version>/<contractor-alias>-<session-id>-<date>-<time>` > Note that due to network errors, some segments may be missing from otherwise continuous demonstrations. 
Your data loader can then find following files: * Video observation: `<basedir>/<relpath>.mp4` * Action file: `<basedir>/<relpath>.jsonl` * Options file: `<basedir>/<relpath>-options.json` * Checkpoint save file: `<basedir>/<relpath>.zip` The action file is **not** a valid json object: each line in action file is an individual action dictionary. For v7.x, the actions are in form ```json { "mouse": { "x": 274.0, "y": 338.0, "dx": 0.0, "dy": 0.0, "scaledX": -366.0, "scaledY": -22.0, "dwheel": 0.0, "buttons": [], "newButtons": [] }, "keyboard": { "keys": [ "key.keyboard.a", "key.keyboard.s" ], "newKeys": [], "chars": "" }, "isGuiOpen": false, "isGuiInventory": false, "hotbar": 4, "yaw": -112.35006, "pitch": 8.099996, "xpos": 841.364694513396, "ypos": 63.0, "zpos": 24.956354839537802, "tick": 0, "milli": 1649575088006, "inventory": [ { "type": "oak_door", "quantity": 3 }, { "type": "oak_planks", "quantity": 59 }, { "type": "stone_pickaxe", "quantity": 1 }, { "type": "oak_planks", "quantity": 64 } ], "serverTick": 6001, "serverTickDurationMs": 36.3466, "stats": { "minecraft.custom:minecraft.jump": 4, "minecraft.custom:minecraft.time_since_rest": 5999, "minecraft.custom:minecraft.play_one_minute": 5999, "minecraft.custom:minecraft.time_since_death": 5999, "minecraft.custom:minecraft.walk_one_cm": 7554, "minecraft.use_item:minecraft.oak_planks": 5, "minecraft.custom:minecraft.fall_one_cm": 269, "minecraft.use_item:minecraft.glass_pane": 3 } } ``` # BASALT 2022 dataset We also collected a dataset of demonstrations for the [MineRL BASALT 2022](https://www.aicrowd.com/challenges/neurips-2022-minerl-basalt-competition) competition, with around 150GB of data per task. **Note**: To avoid confusion with the competition rules, the action files (.jsonl) have been stripped of information that is not allowed in the competition. We will upload unmodified dataset after the competition ends. * **FindCave** [:arrow_down: index file](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/find-cave-Jul-28.json) * <details> <summary>Prompt to contractors (click to show)</summary> ``` Look around for a cave. When you are inside one, quit the game by opening main menu and pressing "Save and Quit To Title". You are not allowed to dig down from the surface to find a cave. Timelimit: 3 minutes. Example recordings: https://www.youtube.com/watch?v=TclP_ozH-eg ``` </details> * **MakeWaterfall** [:arrow_down: index file](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/waterfall-Jul-28.json) * <details> <summary>Prompt to contractors (click to show)</summary> ``` After spawning in a mountainous area with a water bucket and various tools, build a beautiful waterfall and then reposition yourself to “take a scenic picture” of the same waterfall, and then quit the game by opening the menu and selecting "Save and Quit to Title" Timelimit: 5 minutes. Example recordings: https://youtu.be/NONcbS85NLA ``` </details> * **MakeVillageAnimalPen** [:arrow_down: index file](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/pen-animals-Jul-28.json) * <details> <summary>Prompt to contractors (click to show)</summary> ``` After spawning in a village, build an animal pen next to one of the houses in a village. Use your fence posts to build one animal pen that contains at least two of the same animal. (You are only allowed to pen chickens, cows, pigs, sheep or rabbits.) There should be at least one gate that allows players to enter and exit easily. 
The animal pen should not contain more than one type of animal. (You may kill any extra types of animals that accidentally got into the pen.) Don’t harm the village. After you are done, quit the game by opening the menu and pressing "Save and Quit to Title". You may need to terraform the area around a house to build a pen. When we say not to harm the village, examples include taking animals from existing pens, damaging existing houses or farms, and attacking villagers. Animal pens must have a single type of animal: pigs, cows, sheep, chicken or rabbits. The food items can be used to lure in the animals: if you hold seeds in your hand, this attracts nearby chickens to you, for example. Timelimit: 5 minutes. Example recordings: https://youtu.be/SLO7sep7BO8 ``` </details> * **BuildVillageHouse** [:arrow_down: index file](https://openaipublic.blob.core.windows.net/minecraft-rl/snapshots/build-house-Jul-28.json) * <details> <summary>Prompt to contractors (click to show)</summary> ``` Taking advantage of the items in your inventory, build a new house in the style of the village (random biome), in an appropriate location (e.g. next to the path through the village), without harming the village in the process. Then give a brief tour of the house (i.e. spin around slowly such that all of the walls and the roof are visible). * You start with a stone pickaxe and a stone axe, and various building blocks. It’s okay to break items that you misplaced (e.g. use the stone pickaxe to break cobblestone blocks). * You are allowed to craft new blocks. Please spend less than ten minutes constructing your house. You don’t need to copy another house in the village exactly (in fact, we’re more interested in having slight deviations, while keeping the same "style"). You may need to terraform the area to make space for a new house. When we say not to harm the village, examples include taking animals from existing pens, damaging existing houses or farms, and attacking villagers. After you are done, quit the game by opening the menu and pressing "Save and Quit to Title". Timelimit: 12 minutes. Example recordings: https://youtu.be/WeVqQN96V_g ``` </details> # Contribution This was a large effort by a dedicated team at OpenAI: [Bowen Baker](https://github.com/bowenbaker), [Ilge Akkaya](https://github.com/ilge), [Peter Zhokhov](https://github.com/pzhokhov), [Joost Huizinga](https://github.com/JoostHuizinga), [Jie Tang](https://github.com/jietang), [Adrien Ecoffet](https://github.com/AdrienLE), [Brandon Houghton](https://github.com/brandonhoughton), [Raul Sampedro](https://github.com/samraul), Jeff Clune The code here represents a minimal version of our model code which was prepared by [Anssi Kanervisto](https://github.com/miffyli) and others so that these models could be used as part of the MineRL BASALT competition.
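As a closing illustration of the contractor data format described above, here is a rough sketch of resolving one segment from an index file and iterating over its video frames and action lines. This is not the repository's `data_loader.py`; the index file name is one of the examples listed above, and the one-action-line-per-frame pairing is an assumption:

```python
import json
import urllib.request

import cv2  # opencv-python, already listed in requirements.txt

# Resolve one segment from an index file (format: {"basedir": ..., "relpaths": [...]}).
with open("all_10xx_Jun_29.json") as f:
    index = json.load(f)
relpath = index["relpaths"][0]
local_stem = relpath.replace("/", "-")
for ext in (".mp4", ".jsonl"):
    urllib.request.urlretrieve(index["basedir"] + relpath + ext, local_stem + ext)

# Each line of the .jsonl file is one action dictionary, as shown in the v7.x example above.
with open(local_stem + ".jsonl") as f:
    actions = [json.loads(line) for line in f]

video = cv2.VideoCapture(local_stem + ".mp4")
for action in actions:
    ok, frame = video.read()  # assumption: roughly one recorded action line per 20 Hz video frame
    if not ok:
        break
    keys = action["keyboard"]["keys"]
    # ... hand (frame, keys, action["mouse"]) to your own preprocessing here
video.release()
```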
{"requirements.txt": "torch==1.9.0\ngym3\nattrs\nopencv-python\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 aed46b90e8db2332801feabd8be2de01f92c0ad2 Hamza Amin <[email protected]> 1729337420 +0500\tclone: from https://github.com/openai/Video-Pre-Training.git\n", ".git\\refs\\heads\\main": "aed46b90e8db2332801feabd8be2de01f92c0ad2\n", "lib\\action_mapping.py": "import abc\nimport itertools\nfrom collections import OrderedDict\nfrom typing import Dict, List\n\nimport numpy as np\nfrom gym3.types import DictType, Discrete, TensorType\n\nfrom lib.actions import Buttons\n\n\nclass ActionMapping(abc.ABC):\n \"\"\"Class that maps between the standard MC factored action space and a new one you define!\n\n :param n_camera_bins: Need to specify this to define the original ac space for stats code\n \"\"\"\n\n # This is the default buttons groups, it can be changed for your action space\n BUTTONS_GROUPS = OrderedDict(\n hotbar=[\"none\"] + [f\"hotbar.{i}\" for i in range(1, 10)],\n fore_back=[\"none\", \"forward\", \"back\"],\n left_right=[\"none\", \"left\", \"right\"],\n sprint_sneak=[\"none\", \"sprint\", \"sneak\"],\n use=[\"none\", \"use\"],\n drop=[\"none\", \"drop\"],\n attack=[\"none\", \"attack\"],\n jump=[\"none\", \"jump\"],\n )\n\n def __init__(self, n_camera_bins: int = 11):\n assert n_camera_bins % 2 == 1, \"n_camera_bins should be odd\"\n self.n_camera_bins = n_camera_bins\n self.camera_null_bin = n_camera_bins // 2\n self.stats_ac_space = DictType(\n **{\n \"buttons\": TensorType(shape=(len(Buttons.ALL),), eltype=Discrete(2)),\n \"camera\": TensorType(shape=(2,), eltype=Discrete(n_camera_bins)),\n }\n )\n\n @abc.abstractmethod\n def from_factored(self, ac: Dict) -> Dict:\n \"\"\"Converts a factored action (ac) to the new space\n\n :param ac: Dictionary of actions that must have a batch dimension\n \"\"\"\n pass\n\n @abc.abstractmethod\n def to_factored(self, ac: Dict) -> Dict:\n \"\"\"Converts an action in the new space (ac) to the factored action space.\n\n :param ac: Dictionary of actions that must have a batch dimension\n \"\"\"\n pass\n\n @abc.abstractmethod\n def get_action_space_update(self):\n \"\"\"Return a magym (gym3) action space. 
This will be used to update the env action space.\"\"\"\n pass\n\n @abc.abstractmethod\n def get_zero_action(self):\n \"\"\"Return the zero or null action for this action space\"\"\"\n pass\n\n def factored_buttons_to_groups(self, ac_buttons: np.ndarray, button_group: List[str]) -> List[str]:\n \"\"\"For a mutually exclusive group of buttons in button_group, find which option\n in the group was chosen. Assumes that each button group has the option of 'none'\n meaning that no button in the group was pressed.\n\n :param ac_buttons: button actions from the factored action space. Should dims [B, len(Buttons.ALL)]\n :param button_group: List of buttons in a mutually exclusive group. Each item in the\n list should appear in Buttons.ALL except for the special case 'none' which means\n no button in the group was pressed. e.g. ['none', 'forward', 'back']. For now\n 'none' must be the first element of button_group\n\n Returns a list of length B, where each element is an item from button_group.\n \"\"\"\n assert ac_buttons.shape[1] == len(\n Buttons.ALL\n ), f\"There should be {len(Buttons.ALL)} buttons in the factored buttons space\"\n assert button_group[0] == \"none\", \"This function only works if 'none' is in button_group\"\n # Actions in ac_buttons with order according to button_group\n group_indices = [Buttons.ALL.index(b) for b in button_group if b != \"none\"]\n ac_choices = ac_buttons[:, group_indices]\n\n # Special cases for forward/back, left/right where mutual press means do neither\n if \"forward\" in button_group and \"back\" in button_group:\n ac_choices[np.all(ac_choices, axis=-1)] = 0\n if \"left\" in button_group and \"right\" in button_group:\n ac_choices[np.all(ac_choices, axis=-1)] = 0\n ac_non_zero = np.where(ac_choices)\n ac_choice = [\"none\" for _ in range(ac_buttons.shape[0])]\n # Iterate over the non-zero indices so that if two buttons in a group were pressed at the same time\n # we give priority to the button later in the group. E.g. if hotbar.1 and hotbar.2 are pressed during the same\n # timestep, hotbar.2 is marked as pressed\n for index, action in zip(ac_non_zero[0], ac_non_zero[1]):\n ac_choice[index] = button_group[action + 1] # the zero'th index will mean no button pressed\n return ac_choice\n\nclass IDMActionMapping(ActionMapping):\n \"\"\"For IDM, but essentially this is just an identity mapping\"\"\"\n def from_factored(self, ac: Dict) -> Dict:\n return ac\n\n def to_factored(self, ac: Dict) -> Dict:\n return ac\n\n def get_action_space_update(self):\n \"\"\"Return a magym (gym3) action space. 
This will be used to update the env action space.\"\"\"\n return {\n \"buttons\": TensorType(shape=(len(Buttons.ALL),), eltype=Discrete(2)),\n \"camera\": TensorType(shape=(2,), eltype=Discrete(self.n_camera_bins)),\n }\n\n def get_zero_action(self):\n raise NotImplementedError()\n\nclass CameraHierarchicalMapping(ActionMapping):\n \"\"\"Buttons are joint as in ButtonsJointMapping, but now a camera on/off meta action is added into this joint space.\n When this meta action is triggered, the separate camera head chooses a camera action which is also now a joint space.\n\n :param n_camera_bins: number of camera bins in the factored space\n \"\"\"\n\n # Add camera meta action to BUTTONS_GROUPS\n BUTTONS_GROUPS = ActionMapping.BUTTONS_GROUPS.copy()\n BUTTONS_GROUPS[\"camera\"] = [\"none\", \"camera\"]\n BUTTONS_COMBINATIONS = list(itertools.product(*BUTTONS_GROUPS.values())) + [\"inventory\"]\n BUTTONS_COMBINATION_TO_IDX = {comb: i for i, comb in enumerate(BUTTONS_COMBINATIONS)}\n BUTTONS_IDX_TO_COMBINATION = {i: comb for i, comb in enumerate(BUTTONS_COMBINATIONS)}\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.camera_groups = OrderedDict(\n camera_x=[f\"camera_x{i}\" for i in range(self.n_camera_bins)],\n camera_y=[f\"camera_y{i}\" for i in range(self.n_camera_bins)],\n )\n self.camera_combinations = list(itertools.product(*self.camera_groups.values()))\n self.camera_combination_to_idx = {comb: i for i, comb in enumerate(self.camera_combinations)}\n self.camera_idx_to_combination = {i: comb for i, comb in enumerate(self.camera_combinations)}\n self.camera_null_idx = self.camera_combination_to_idx[\n (f\"camera_x{self.camera_null_bin}\", f\"camera_y{self.camera_null_bin}\")\n ]\n self._null_action = {\n \"buttons\": self.BUTTONS_COMBINATION_TO_IDX[tuple(\"none\" for _ in range(len(self.BUTTONS_GROUPS)))]\n }\n self._precompute_to_factored()\n\n def _precompute_to_factored(self):\n \"\"\"Precompute the joint action -> factored action matrix.\"\"\"\n button_dim = self.stats_ac_space[\"buttons\"].size\n self.BUTTON_IDX_TO_FACTORED = np.zeros((len(self.BUTTONS_IDX_TO_COMBINATION), button_dim), dtype=int)\n self.BUTTON_IDX_TO_CAMERA_META_OFF = np.zeros((len(self.BUTTONS_IDX_TO_COMBINATION)), dtype=bool)\n self.CAMERA_IDX_TO_FACTORED = np.zeros((len(self.camera_idx_to_combination), 2), dtype=int)\n\n # Pre compute Buttons\n for jnt_ac, button_comb in self.BUTTONS_IDX_TO_COMBINATION.items():\n new_button_ac = np.zeros(len(Buttons.ALL), dtype=\"i\")\n if button_comb == \"inventory\":\n new_button_ac[Buttons.ALL.index(\"inventory\")] = 1\n else:\n for group_choice in button_comb[:-1]: # Last one is camera\n if group_choice != \"none\":\n new_button_ac[Buttons.ALL.index(group_choice)] = 1\n\n if button_comb[-1] != \"camera\": # This means camera meta action is off\n self.BUTTON_IDX_TO_CAMERA_META_OFF[jnt_ac] = True\n self.BUTTON_IDX_TO_FACTORED[jnt_ac] = new_button_ac\n\n # Pre compute camera\n for jnt_ac, camera_comb in self.camera_idx_to_combination.items():\n new_camera_ac = np.ones((2), dtype=\"i\") * self.camera_null_bin\n new_camera_ac[0] = self.camera_groups[\"camera_x\"].index(camera_comb[0])\n new_camera_ac[1] = self.camera_groups[\"camera_y\"].index(camera_comb[1])\n self.CAMERA_IDX_TO_FACTORED[jnt_ac] = new_camera_ac\n\n def from_factored(self, ac: Dict) -> Dict:\n \"\"\"Converts a factored action (ac) to the new space. 
Assumes ac has a batch dim\"\"\"\n assert ac[\"camera\"].ndim == 2, f\"bad camera label, {ac['camera']}\"\n assert ac[\"buttons\"].ndim == 2, f\"bad buttons label, {ac['buttons']}\"\n # Get button choices for everything but camera\n choices_by_group = OrderedDict(\n (k, self.factored_buttons_to_groups(ac[\"buttons\"], v)) for k, v in self.BUTTONS_GROUPS.items() if k != \"camera\"\n )\n # Set camera \"on off\" action based on whether non-null camera action was given\n camera_is_null = np.all(ac[\"camera\"] == self.camera_null_bin, axis=1)\n choices_by_group[\"camera\"] = [\"none\" if is_null else \"camera\" for is_null in camera_is_null]\n\n new_button_ac = []\n new_camera_ac = []\n for i in range(ac[\"buttons\"].shape[0]):\n # Buttons\n key = tuple([v[i] for v in choices_by_group.values()])\n if ac[\"buttons\"][i, Buttons.ALL.index(\"inventory\")] == 1:\n key = \"inventory\"\n new_button_ac.append(self.BUTTONS_COMBINATION_TO_IDX[key])\n\n # Camera -- inventory is also exclusive with camera\n if key == \"inventory\":\n key = (\n f\"camera_x{self.camera_null_bin}\",\n f\"camera_y{self.camera_null_bin}\",\n )\n else:\n key = (f\"camera_x{ac['camera'][i][0]}\", f\"camera_y{ac['camera'][i][1]}\")\n new_camera_ac.append(self.camera_combination_to_idx[key])\n\n return dict(\n buttons=np.array(new_button_ac)[:, None],\n camera=np.array(new_camera_ac)[:, None],\n )\n\n def to_factored(self, ac: Dict) -> Dict:\n \"\"\"Converts an action in the new space (ac) to the factored action space. Assumes ac has a batch dim\"\"\"\n assert ac[\"camera\"].shape[-1] == 1\n assert ac[\"buttons\"].shape[-1] == 1\n\n new_button_ac = self.BUTTON_IDX_TO_FACTORED[np.squeeze(ac[\"buttons\"], -1)]\n camera_off = self.BUTTON_IDX_TO_CAMERA_META_OFF[np.squeeze(ac[\"buttons\"], -1)]\n new_camera_ac = self.CAMERA_IDX_TO_FACTORED[np.squeeze(ac[\"camera\"], -1)]\n new_camera_ac[camera_off] = self.camera_null_bin\n\n return dict(buttons=new_button_ac, camera=new_camera_ac)\n\n def get_action_space_update(self):\n return {\n \"camera\": TensorType(shape=(1,), eltype=Discrete(len(self.camera_combinations))),\n \"buttons\": TensorType(shape=(1,), eltype=Discrete(len(self.BUTTONS_COMBINATIONS))),\n }\n\n def get_zero_action(self):\n return self._null_action\n\n"}
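For orientation, the joint/factored conversion code above can be exercised with a quick round trip. The sketch below is hypothetical: the import paths (`lib.action_mapping`, `lib.actions`) and the `n_camera_bins` value follow the usual layout of the repo this snippet comes from, but they are assumptions rather than verified entry points.

```python
# Hypothetical round trip through CameraHierarchicalMapping.
# Import paths and constructor usage are assumptions; adjust to the repo's actual layout.
import numpy as np
from lib.action_mapping import CameraHierarchicalMapping  # assumed module path
from lib.actions import Buttons                           # assumed module path

mapping = CameraHierarchicalMapping(n_camera_bins=11)

# A factored action: press "forward", keep the camera at its null (no-op) bin.
factored = {
    "buttons": np.zeros((1, len(Buttons.ALL)), dtype=int),
    "camera": np.full((1, 2), mapping.camera_null_bin, dtype=int),
}
factored["buttons"][0, Buttons.ALL.index("forward")] = 1

joint = mapping.from_factored(factored)   # {"buttons": [[idx]], "camera": [[idx]]}
recovered = mapping.to_factored(joint)    # back to per-button / camera-bin arrays
assert recovered["buttons"][0, Buttons.ALL.index("forward")] == 1
print(joint, recovered)
```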
null
vime
{"type": "directory", "name": "vime", "children": [{"type": "directory", "name": "algos", "children": [{"type": "file", "name": "batch_polopt_expl.py"}, {"type": "file", "name": "erwr_expl.py"}, {"type": "file", "name": "npo_expl.py"}, {"type": "file", "name": "trpo_expl.py"}, {"type": "file", "name": "vpg_expl.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "dynamics", "children": [{"type": "file", "name": "bnn.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "envs", "children": [{"type": "file", "name": "cartpole_swingup_env_x.py"}, {"type": "file", "name": "double_pendulum_env_x.py"}, {"type": "file", "name": "half_cheetah_env_x.py"}, {"type": "file", "name": "mountain_car_env_x.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "directory", "name": "experiments", "children": [{"type": "file", "name": "run_experiment_lite.py"}, {"type": "file", "name": "run_trpo.py"}, {"type": "file", "name": "run_trpo_expl.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "sampler", "children": [{"type": "file", "name": "parallel_sampler_expl.py"}, {"type": "file", "name": "__init__.py"}]}, {"type": "file", "name": "__init__.py"}]}
**Status:** Archive (code is provided as-is, no updates expected)

# How to run VIME

Variational Information Maximizing Exploration (VIME) as presented in Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks by *R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. De Turck, P. Abbeel* (http://arxiv.org/abs/1605.09674).

To reproduce the results, you should first have [rllab](https://github.com/rllab/rllab) and Mujoco v1.31 configured. Then, run the following commands in the root folder of `rllab`:

```bash
git submodule add -f [email protected]:openai/vime.git sandbox/vime
touch sandbox/__init__.py
```

Then you can do the following:

- Execute TRPO+VIME on the hierarchical SwimmerGather environment via `python sandbox/vime/experiments/run_trpo_expl.py`.
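To show what such a launcher typically wires together, here is a minimal, hypothetical script in the spirit of `experiments/run_trpo_expl.py`, assuming the `sandbox/vime` submodule layout above and rllab's standard policy/baseline classes. The constructor arguments, in particular the exploration-bonus weight `eta`, are assumptions for illustration; consult `sandbox/vime/experiments/run_trpo_expl.py` for the real configuration.

```python
# Hypothetical VIME launcher sketch, run from the rllab root.
# Class paths follow rllab's usual layout; arguments are illustrative, not verbatim.
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.normalized_env import normalize
from rllab.envs.mujoco.gather.swimmer_gather_env import SwimmerGatherEnv
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

# TRPO variant with the VIME exploration bonus (information gain of a BNN dynamics model).
from sandbox.vime.algos.trpo_expl import TRPO

env = normalize(SwimmerGatherEnv())
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(64, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=5000,
    max_path_length=500,
    n_itr=1000,
    discount=0.995,
    step_size=0.01,
    eta=1e-4,  # assumed name for the intrinsic (information-gain) reward weight
)
algo.train()
```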
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n"}
null
weak-to-strong
{"type": "directory", "name": "weak-to-strong", "children": [{"type": "file", "name": "LICENSE.md"}, {"type": "directory", "name": "notebooks", "children": [{"type": "file", "name": "Plotting.ipynb"}, {"type": "file", "name": "Plotting_old.ipynb"}]}, {"type": "file", "name": "pyproject.toml"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "setup.py"}, {"type": "file", "name": "sweep.py"}, {"type": "file", "name": "train_simple.py"}, {"type": "file", "name": "train_weak_to_strong.py"}, {"type": "directory", "name": "vision", "children": [{"type": "file", "name": "data.py"}, {"type": "file", "name": "models.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "run_weak_strong.py"}]}, {"type": "directory", "name": "weak_to_strong", "children": [{"type": "file", "name": "common.py"}, {"type": "file", "name": "datasets.py"}, {"type": "file", "name": "eval.py"}, {"type": "file", "name": "logger.py"}, {"type": "file", "name": "loss.py"}, {"type": "file", "name": "model.py"}, {"type": "file", "name": "train.py"}, {"type": "file", "name": "__init__.py"}]}]}
# A Simple Weak-to-Strong Experiment on ImageNet

We provide code for a simple weak-to-strong experiment on ImageNet. We generate the weak labels using an [AlexNet](https://pytorch.org/vision/main/models/generated/torchvision.models.alexnet.html) model pretrained on ImageNet and we use linear probes on top of [DINO](https://github.com/facebookresearch/dino) models as the strong students.

The full training command:
```bash
python3 run_weak_strong.py \
    --data_path <DATA_PATH> \
    --weak_model_name <WEAK_MODEL> \
    --strong_model_name <STRONG_MODEL> \
    --batch_size <BATCH_SIZE> \
    --seed <SEED> \
    --n_epochs <N_EPOCHS> \
    --lr <LR> \
    --n_train <N_TRAIN>
```

Parameters:
* ```DATA_PATH``` &mdash; path to the base directory containing ImageNet data, see the [torchvision page](https://pytorch.org/vision/stable/generated/torchvision.datasets.ImageNet.html) for instructions; the directory should contain the files `ILSVRC2012_devkit_t12.tar.gz` and `ILSVRC2012_img_val.tar`
* ```WEAK_MODEL``` &mdash; weak model name:
    - `"alexnet"` is the default and currently the only implemented option
* ```STRONG_MODEL``` &mdash; strong model name:
    - `"resnet50_dino"` (default)
    - `"vitb8_dino"`
* ```BATCH_SIZE``` &mdash; batch size for weak label generation and embedding extraction (default: `128`)
* ```SEED``` &mdash; random seed for dataset shuffling (default: `0`)
* ```N_EPOCHS``` &mdash; number of training epochs (default: `10`)
* ```LR``` &mdash; initial learning rate (default: `1e-3`)
* ```N_TRAIN``` &mdash; number of datapoints used to train the linear probe; the remaining `50000 - N_TRAIN` datapoints are used as the test set (default: `40000`)

Example commands:

```bash
# AlexNet → ResNet50 (DINO):
python3 run_weak_strong.py --strong_model_name resnet50_dino --n_epochs 20

# AlexNet → ViT-B/8 (DINO):
python3 run_weak_strong.py --strong_model_name vitb8_dino --n_epochs 5
```

With the commands above we get the following results (note that the results may not reproduce exactly due to randomness):

| Model                   | Top-1 Accuracy |
|-------------------------|----------------|
| AlexNet                 | 56.6           |
| DINO ResNet50           | 64.5           |
| DINO ViT-B/8            | 74.0           |
| AlexNet → DINO ResNet50 | 61.9           |
| AlexNet → DINO ViT-B/8  | 66.6           |

You can add new custom models to `models.py` and new datasets to `data.py`.
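To make the recipe concrete, here is a short, hypothetical sketch of the weak-to-strong linear-probe idea the command above implements: a frozen AlexNet provides weak labels, a frozen DINO backbone provides embeddings, and only a linear probe is trained on those weak labels. The `torch.hub` entry point for DINO and the torchvision `weights` argument are assumptions about external APIs, and the real pipeline in `run_weak_strong.py` differs in its data handling and evaluation.

```python
# Hypothetical sketch of the weak-to-strong linear-probe recipe; not run_weak_strong.py.
import torch
import torch.nn as nn
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Weak supervisor: a pretrained AlexNet produces hard weak labels
# (torchvision >= 0.13 style weights argument assumed).
weak = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval().to(device)

# Strong student backbone: a frozen DINO ResNet-50 used as a 2048-d feature extractor
# (hub entry point assumed from the DINO repo).
strong = torch.hub.load("facebookresearch/dino:main", "dino_resnet50").eval().to(device)

probe = nn.Linear(2048, 1000).to(device)   # linear probe over DINO embeddings
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor) -> float:
    """One probe update on a batch of already-preprocessed ImageNet images."""
    images = images.to(device)
    with torch.no_grad():
        weak_labels = weak(images).argmax(dim=1)   # weak supervision, not ground truth
        embeddings = strong(images)                # frozen strong representations
    loss = loss_fn(probe(embeddings), weak_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with a random batch shaped like normalized 224x224 ImageNet crops.
print(train_step(torch.randn(8, 3, 224, 224)))
```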
{"setup.py": "import setuptools\n\nsetuptools.setup(\n name=\"weak_to_strong\",\n version=\"0.1\",\n description=\"Weak-to-strong generalization\",\n url=\"#\",\n author=\"OpenAI\",\n author_email=\"[email protected]\",\n packages=setuptools.find_packages(),\n zip_safe=False,\n)\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 6b450f2cee3714d6f886e5e1910bac73633bf69a Hamza Amin <[email protected]> 1729337465 +0500\tclone: from https://github.com/openai/weak-to-strong.git\n", ".git\\refs\\heads\\main": "6b450f2cee3714d6f886e5e1910bac73633bf69a\n"}
null
web-crawl-q-and-a-example
{"type": "directory", "name": "web-crawl-q-and-a-example", "children": [{"type": "file", "name": "README.md"}, {"type": "file", "name": "requirements.txt"}, {"type": "file", "name": "web-qa.ipynb"}, {"type": "file", "name": "web-qa.py"}]}
# Web Q&A with Embeddings

Learn how to crawl your website and build a Q/A bot with the OpenAI API. You can find the full tutorial in the [OpenAI documentation](https://platform.openai.com/docs/tutorials/web-qa-embeddings).
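As a pointer to what the tutorial covers, here is a minimal, hypothetical sketch of the retrieve-then-answer loop it builds: embed crawled text chunks, pick the chunks most similar to the question, and ask a completion model to answer from them. It assumes the legacy `openai==0.26.1` client pinned in `requirements.txt`; the prompt wording, model choices, and hard-coded chunks are illustrative, not the tutorial's exact code.

```python
# Minimal retrieve-then-answer sketch (legacy openai==0.26.1 client assumed).
import numpy as np
import openai

openai.api_key = "sk-..."  # set your API key

def embed(texts):
    resp = openai.Embedding.create(input=texts, engine="text-embedding-ada-002")
    return np.array([d["embedding"] for d in resp["data"]])

# In the tutorial these chunks come from crawling your website; hard-coded here.
chunks = [
    "Our support email is [email protected].",
    "The service is free for personal use.",
]
chunk_vecs = embed(chunks)

def answer(question, top_k=1):
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(chunks[i] for i in np.argsort(-sims)[:top_k])
    prompt = (
        "Answer the question based on the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, temperature=0, max_tokens=100
    )
    return resp["choices"][0]["text"].strip()

print(answer("How do I contact support?"))
```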
{"requirements.txt": "aiohttp==3.8.5\naiosignal==1.3.1\nappnope==0.1.3\nasttokens==2.2.1\nasync-timeout==4.0.2\nattrs==22.2.0\nbackcall==0.2.0\nbeautifulsoup4==4.11.1\nblobfile==2.0.1\nbs4==0.0.1\ncertifi==2023.7.22\ncharset-normalizer==2.1.1\ncomm==0.1.2\ncontourpy==1.0.7\ncycler==0.11.0\ndebugpy==1.6.5\ndecorator==5.1.1\ndocopt==0.6.2\nentrypoints==0.4\nexecuting==1.2.0\nfilelock==3.9.0\nfonttools==4.38.0\nfrozenlist==1.3.3\nhuggingface-hub>=0.0.12\nidna==3.4\nipykernel==6.20.1\nipython==8.10.0\njedi==0.18.2\njoblib==1.2.0\njupyter_client==7.4.8\njupyter_core==5.1.3\nkiwisolver==1.4.4\nlxml==4.9.2\nmatplotlib==3.6.3\nmatplotlib-inline==0.1.6\nmultidict==6.0.4\nnest-asyncio==1.5.6\nnumpy==1.24.1\nopenai==0.26.1\npackaging==23.0\npandas==1.5.2\nparso==0.8.3\npexpect==4.8.0\npickleshare==0.7.5\nPillow==9.4.0\npipreqs==0.4.12\nplatformdirs==2.6.2\nplotly==5.12.0\nprompt-toolkit==3.0.36\npsutil==5.9.4\nptyprocess==0.7.0\npure-eval==0.2.2\npycryptodomex==3.17\nPygments==2.15.0\npyparsing==3.0.9\npython-dateutil==2.8.2\npytz==2022.7.1\nPyYAML==6.0\npyzmq==24.0.1\nregex==2022.10.31\nrequests==2.31.0\nscikit-learn==1.2.0\nscipy==1.10.0\nsix==1.16.0\nsoupsieve==2.3.2.post1\nstack-data==0.6.2\ntenacity==8.1.0\nthreadpoolctl==3.1.0\ntiktoken==0.1.2\ntokenizers==0.13.2\ntornado==6.3.3\ntqdm==4.64.1\ntraitlets==5.8.1\ntransformers==4.30.0\ntyping_extensions==4.4.0\nurllib3==1.26.13\nwcwidth==0.2.5\nyarg==0.1.9\nyarl==1.8.2\n", ".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n", ".git\\logs\\refs\\heads\\main": "0000000000000000000000000000000000000000 84c887fdf36fb2101516e9c5db2c6d6db0d1c5b9 Hamza Amin <[email protected]> 1729337467 +0500\tclone: from https://github.com/openai/web-crawl-q-and-a-example.git\n", ".git\\refs\\heads\\main": "84c887fdf36fb2101516e9c5db2c6d6db0d1c5b9\n"}
null
weightnorm
{"type": "directory", "name": "weightnorm", "children": [{"type": "directory", "name": "keras", "children": [{"type": "file", "name": "cifar10_cnn.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "weightnorm.py"}]}, {"type": "directory", "name": "lasagne", "children": [{"type": "file", "name": "nn.py"}, {"type": "file", "name": "README.md"}, {"type": "file", "name": "train.py"}]}, {"type": "file", "name": "LICENSE.md"}, {"type": "file", "name": "README.md"}, {"type": "directory", "name": "tensorflow", "children": [{"type": "file", "name": "nn.py"}, {"type": "file", "name": "README.md"}]}]}
# Direct implementation of Weight Normalization in TensorFlow

The ```nn.py``` file contains an example of a direct implementation of weight normalization and data-dependent initialization in TensorFlow. For use, see our [PixelCNN++](https://github.com/openai/pixel-cnn) repository.
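For readers who want the idea without opening `nn.py`, here is a small NumPy sketch of the two ingredients the README names: the weight-normalization reparameterization `w = g * v / ||v||` and data-dependent initialization of `g` and `b` from the first minibatch's statistics (Salimans & Kingma, 2016). It is illustrative only and not the repo's TensorFlow implementation.

```python
# Illustrative NumPy sketch of weight normalization for a dense layer; not the repo's nn.py.
# w = g * v / ||v||, with g and b chosen from the first batch so pre-activations
# start out with roughly zero mean and unit variance per unit.
import numpy as np

rng = np.random.default_rng(0)

class WeightNormDense:
    def __init__(self, n_in, n_out, init_scale=1.0):
        self.v = rng.normal(0.0, 0.05, size=(n_in, n_out))  # direction parameters
        self.g = np.ones(n_out)                             # per-unit scale
        self.b = np.zeros(n_out)                            # bias
        self.init_scale = init_scale
        self.initialized = False

    def __call__(self, x):
        v_norm = self.v / np.linalg.norm(self.v, axis=0, keepdims=True)
        if not self.initialized:
            # Data-dependent init: pick g, b from the first batch's statistics.
            t = x @ v_norm
            mu, sigma = t.mean(axis=0), t.std(axis=0)
            self.g = self.init_scale / (sigma + 1e-8)
            self.b = -mu * self.g
            self.initialized = True
        return self.g * (x @ v_norm) + self.b

layer = WeightNormDense(32, 16)
out = layer(rng.normal(size=(128, 32)))  # first call performs the initialization
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1 per unit
```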
{".git\\hooks\\applypatch-msg.sample": "#!/bin/sh\n#\n# An example hook script to check the commit log message taken by\n# applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit. The hook is\n# allowed to edit the commit message file.\n#\n# To enable this hook, rename this file to \"applypatch-msg\".\n\n. git-sh-setup\ncommitmsg=\"$(git rev-parse --git-path hooks/commit-msg)\"\ntest -x \"$commitmsg\" && exec \"$commitmsg\" ${1+\"$@\"}\n:\n", ".git\\hooks\\pre-applypatch.sample": "#!/bin/sh\n#\n# An example hook script to verify what is about to be committed\n# by applypatch from an e-mail message.\n#\n# The hook should exit with non-zero status after issuing an\n# appropriate message if it wants to stop the commit.\n#\n# To enable this hook, rename this file to \"pre-applypatch\".\n\n. git-sh-setup\nprecommit=\"$(git rev-parse --git-path hooks/pre-commit)\"\ntest -x \"$precommit\" && exec \"$precommit\" ${1+\"$@\"}\n:\n"}
null