|
[2023-02-25 09:54:34,751][00973] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2023-02-25 09:54:34,754][00973] Rollout worker 0 uses device cpu |
|
[2023-02-25 09:54:34,756][00973] Rollout worker 1 uses device cpu |
|
[2023-02-25 09:54:34,760][00973] Rollout worker 2 uses device cpu |
|
[2023-02-25 09:54:34,765][00973] Rollout worker 3 uses device cpu |
|
[2023-02-25 09:54:34,766][00973] Rollout worker 4 uses device cpu |
|
[2023-02-25 09:54:34,769][00973] Rollout worker 5 uses device cpu |
|
[2023-02-25 09:54:34,771][00973] Rollout worker 6 uses device cpu |
|
[2023-02-25 09:54:34,772][00973] Rollout worker 7 uses device cpu |
|
[2023-02-25 09:54:34,967][00973] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 09:54:34,969][00973] InferenceWorker_p0-w0: min num requests: 2 |
|
[2023-02-25 09:54:35,007][00973] Starting all processes... |
|
[2023-02-25 09:54:35,013][00973] Starting process learner_proc0 |
|
[2023-02-25 09:54:35,070][00973] Starting all processes... |
|
[2023-02-25 09:54:35,081][00973] Starting process inference_proc0-0 |
|
[2023-02-25 09:54:35,082][00973] Starting process rollout_proc0 |
|
[2023-02-25 09:54:35,085][00973] Starting process rollout_proc1 |
|
[2023-02-25 09:54:35,085][00973] Starting process rollout_proc2 |
|
[2023-02-25 09:54:35,085][00973] Starting process rollout_proc3 |
|
[2023-02-25 09:54:35,085][00973] Starting process rollout_proc4 |
|
[2023-02-25 09:54:35,085][00973] Starting process rollout_proc5 |
|
[2023-02-25 09:54:35,086][00973] Starting process rollout_proc6 |
|
[2023-02-25 09:54:35,086][00973] Starting process rollout_proc7 |
|
[2023-02-25 09:54:47,653][11765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 09:54:47,660][11765] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2023-02-25 09:54:47,908][11782] Worker 2 uses CPU cores [0] |
|
[2023-02-25 09:54:47,997][11780] Worker 0 uses CPU cores [0] |
|
[2023-02-25 09:54:48,027][11779] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 09:54:48,028][11779] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
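
The learner and inference processes above each restrict themselves to GPU 0 by exporting CUDA_VISIBLE_DEVICES before any CUDA call. A minimal sketch of that pattern (illustrative only, not Sample Factory's actual code; `set_gpus_for_process` is a hypothetical helper):

```python
import os

def set_gpus_for_process(gpu_ids):
    """Restrict this process to the given GPU indices.

    Must run before the first CUDA call, because PyTorch caches the device
    list on initialization. Inside the process the visible GPUs are then
    re-indexed from 0, which is why the log says
    "Using GPUs [0] ... (actually maps to GPUs [0])".
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(g) for g in gpu_ids)

set_gpus_for_process([0])  # e.g. the learner process in this run
```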
|
[2023-02-25 09:54:48,056][11781] Worker 1 uses CPU cores [1] |
|
[2023-02-25 09:54:48,098][11784] Worker 3 uses CPU cores [1] |
|
[2023-02-25 09:54:48,458][11783] Worker 4 uses CPU cores [0] |
|
[2023-02-25 09:54:48,482][11786] Worker 7 uses CPU cores [1] |
|
[2023-02-25 09:54:48,511][11787] Worker 6 uses CPU cores [0] |
|
[2023-02-25 09:54:48,539][11785] Worker 5 uses CPU cores [1] |
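
Each rollout worker also reports which CPU cores it runs on; the eight workers alternate between cores 0 and 1 here, consistent with a 2-vCPU Colab machine. Pinning a process to cores is typically done with OS-level affinity; a hedged sketch using only the standard library (Linux-specific):

```python
import os

def pin_to_cores(cores):
    """Pin the calling process to the given CPU core indices (Linux only)."""
    os.sched_setaffinity(0, set(cores))  # pid 0 means "the current process"

pin_to_cores([0])               # e.g. what "Worker 2 uses CPU cores [0]" implies
print(os.sched_getaffinity(0))  # -> {0}
```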
|
[2023-02-25 09:54:48,861][11765] Num visible devices: 1 |
|
[2023-02-25 09:54:48,861][11779] Num visible devices: 1 |
|
[2023-02-25 09:54:48,877][11765] Starting seed is not provided |
|
[2023-02-25 09:54:48,877][11765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 09:54:48,878][11765] Initializing actor-critic model on device cuda:0 |
|
[2023-02-25 09:54:48,878][11765] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 09:54:48,886][11765] RunningMeanStd input shape: (1,) |
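
The two RunningMeanStd shapes above correspond to the observation normalizer (a (3, 72, 128) image) and the returns normalizer (a scalar). A sketch of the standard running mean/std update such normalizers use, based on the well-known parallel-variance formula (an illustration, not the library's exact class):

```python
import numpy as np

class RunningMeanStd:
    """Running mean/variance for observation or returns normalization."""

    def __init__(self, shape):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = 1e-4  # avoids division by zero on the first batch

    def update(self, batch):
        # Merge batch statistics into the running statistics (Chan et al.).
        batch_mean, batch_var, n = batch.mean(axis=0), batch.var(axis=0), batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean = self.mean + delta * n / total
        m_a = self.var * self.count
        m_b = batch_var * n
        self.var = (m_a + m_b + delta ** 2 * self.count * n / total) / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)
```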
|
[2023-02-25 09:54:48,905][11765] ConvEncoder: input_channels=3 |
|
[2023-02-25 09:54:49,383][11765] Conv encoder output size: 512 |
|
[2023-02-25 09:54:49,384][11765] Policy head output size: 512 |
|
[2023-02-25 09:54:49,456][11765] Created Actor Critic model with architecture: |
|
[2023-02-25 09:54:49,456][11765] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
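
The dump above fully determines the network: a three-layer conv encoder on (3, 72, 128) observations feeding a 512-unit linear layer, a GRU(512, 512) core, an identity decoder, a scalar value head, and a 5-way action head. A hedged PyTorch re-creation (the kernel sizes and strides are assumptions, since the dump only names the layer types):

```python
import torch
from torch import nn

class ActorCriticSketch(nn.Module):
    """Approximate re-creation of the logged model; conv hyperparams assumed."""

    def __init__(self, obs_shape=(3, 72, 128), num_actions=5, hidden=512):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(obs_shape[0], 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer the flattened conv output size
            n_flat = self.conv_head(torch.zeros(1, *obs_shape)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)                 # "Conv encoder output size: 512"
        self.critic_linear = nn.Linear(hidden, 1)          # value head
        self.distribution_linear = nn.Linear(hidden, num_actions)  # 5 Doom actions

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # seq_len of 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state
```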
|
[2023-02-25 09:54:54,960][00973] Heartbeat connected on Batcher_0 |
|
[2023-02-25 09:54:54,968][00973] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2023-02-25 09:54:54,978][00973] Heartbeat connected on RolloutWorker_w0 |
|
[2023-02-25 09:54:54,982][00973] Heartbeat connected on RolloutWorker_w1 |
|
[2023-02-25 09:54:54,986][00973] Heartbeat connected on RolloutWorker_w2 |
|
[2023-02-25 09:54:54,993][00973] Heartbeat connected on RolloutWorker_w3 |
|
[2023-02-25 09:54:54,995][00973] Heartbeat connected on RolloutWorker_w4 |
|
[2023-02-25 09:54:54,998][00973] Heartbeat connected on RolloutWorker_w5 |
|
[2023-02-25 09:54:55,002][00973] Heartbeat connected on RolloutWorker_w6 |
|
[2023-02-25 09:54:55,010][00973] Heartbeat connected on RolloutWorker_w7 |
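
Each component registers a heartbeat with the runner process (00973) so stalled subprocesses can be detected. A hedged sketch of such a watchdog; only the component names come from the log, the mechanism itself is an assumption:

```python
import time

class HeartbeatMonitor:
    """Track the last time each component reported in; flag stale ones."""

    def __init__(self, timeout_s=60.0):
        self.last_beat = {}
        self.timeout_s = timeout_s

    def beat(self, component):
        first = component not in self.last_beat
        self.last_beat[component] = time.time()
        if first:
            print(f"Heartbeat connected on {component}")

    def stale(self):
        now = time.time()
        return [c for c, t in self.last_beat.items() if now - t > self.timeout_s]
```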
|
[2023-02-25 09:54:57,371][11765] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2023-02-25 09:54:57,372][11765] No checkpoints found |
|
[2023-02-25 09:54:57,372][11765] Did not load from checkpoint, starting from scratch! |
|
[2023-02-25 09:54:57,373][11765] Initialized policy 0 weights for model version 0 |
|
[2023-02-25 09:54:57,376][11765] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 09:54:57,383][11765] LearnerWorker_p0 finished initialization! |
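
Before training starts, the learner looks for an existing checkpoint and otherwise initializes fresh weights, as logged above ("No checkpoints found ... starting from scratch!"). A minimal sketch of that resume logic; the path and file naming follow this log, while the state-dict keys are assumptions:

```python
import glob
import os
import torch

def load_latest_checkpoint(model, checkpoint_dir):
    """Resume from the newest checkpoint_*.pth if one exists, else start fresh."""
    paths = sorted(glob.glob(os.path.join(checkpoint_dir, "checkpoint_*.pth")))
    if not paths:
        print("No checkpoints found")  # model keeps its freshly initialized weights
        return 0
    state = torch.load(paths[-1], map_location="cpu")
    model.load_state_dict(state["model"])
    return state.get("policy_version", 0)
```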
|
[2023-02-25 09:54:57,384][00973] Heartbeat connected on LearnerWorker_p0 |
|
[2023-02-25 09:54:57,476][11779] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 09:54:57,478][11779] RunningMeanStd input shape: (1,) |
|
[2023-02-25 09:54:57,495][11779] ConvEncoder: input_channels=3 |
|
[2023-02-25 09:54:57,603][11779] Conv encoder output size: 512 |
|
[2023-02-25 09:54:57,603][11779] Policy head output size: 512 |
|
[2023-02-25 09:54:59,340][00973] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 09:55:00,514][00973] Inference worker 0-0 is ready! |
|
[2023-02-25 09:55:00,516][00973] All inference workers are ready! Signal rollout workers to start! |
|
[2023-02-25 09:55:00,699][11786] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,713][11785] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,719][11784] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,723][11781] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,725][11780] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,764][11783] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,804][11782] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 09:55:00,821][11787] Doom resolution: 160x120, resize resolution: (128, 72) |
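
Each rollout worker renders Doom at 160x120 and resizes frames to 128x72 before they reach the encoder, which is where the (3, 72, 128) input shape above comes from. A sketch of that preprocessing step, assuming OpenCV for the resize:

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize a 120x160x3 Doom frame to the network's 72x128 input.

    cv2.resize takes (width, height), so the target is (128, 72); the
    channels-first transpose matches the logged input shape (3, 72, 128).
    """
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)
    return resized.transpose(2, 0, 1)  # HWC -> CHW

frame = np.zeros((120, 160, 3), dtype=np.uint8)
assert preprocess(frame).shape == (3, 72, 128)
```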
|
[2023-02-25 09:55:02,156][11784] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:02,157][11786] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:02,159][11785] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:02,161][11780] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:02,158][11782] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:03,830][11786] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:03,835][11785] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:03,842][11784] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:03,867][11781] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:03,942][11787] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:03,947][11783] Decorrelating experience for 0 frames... |
|
[2023-02-25 09:55:03,961][11782] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:04,340][00973] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 09:55:04,800][11781] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:04,970][11786] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:05,077][11787] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:05,085][11783] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:05,132][11780] Decorrelating experience for 32 frames... |
|
[2023-02-25 09:55:06,019][11787] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:06,022][11783] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:06,217][11781] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:06,430][11786] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:06,496][11785] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:06,559][11784] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:06,778][11780] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:07,168][11783] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:07,419][11785] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:07,546][11787] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:07,732][11781] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:08,082][11784] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:08,300][11782] Decorrelating experience for 64 frames... |
|
[2023-02-25 09:55:08,690][11780] Decorrelating experience for 96 frames... |
|
[2023-02-25 09:55:08,851][11782] Decorrelating experience for 96 frames... |
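
The "Decorrelating experience" phase above staggers the eight workers so their episodes do not run in lockstep: each worker burns throwaway frames with random actions, reporting progress in 32-frame chunks up to 96 frames, before real collection begins. A hedged sketch of the idea (not Sample Factory's implementation; classic Gym step API assumed):

```python
def decorrelate(env, total_frames=96, chunk=32):
    """Step the env with random actions before collection starts.

    Workers launch at slightly different times and each burns up to
    `total_frames` frames, so their episode phases drift apart. The
    prints roughly mirror the cadence of the log messages above.
    """
    env.reset()
    for chunk_start in range(0, total_frames, chunk):
        print(f"Decorrelating experience for {chunk_start} frames...")
        for _ in range(chunk):
            _, _, done, _ = env.step(env.action_space.sample())
            if done:
                env.reset()
```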
|
[2023-02-25 09:55:09,340][00973] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 09:55:13,352][11765] Signal inference workers to stop experience collection... |
|
[2023-02-25 09:55:13,366][11779] InferenceWorker_p0-w0: stopping experience collection |
|
[2023-02-25 09:55:14,340][00973] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 121.9. Samples: 1828. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 09:55:14,345][00973] Avg episode reward: [(0, '1.870')] |
|
[2023-02-25 09:55:16,208][11765] Signal inference workers to resume experience collection... |
|
[2023-02-25 09:55:16,208][11779] InferenceWorker_p0-w0: resuming experience collection |
|
[2023-02-25 09:55:19,340][00973] Fps is (10 sec: 1228.8, 60 sec: 614.4, 300 sec: 614.4). Total num frames: 12288. Throughput: 0: 163.8. Samples: 3276. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) |
|
[2023-02-25 09:55:19,342][00973] Avg episode reward: [(0, '2.788')] |
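
The recurring "Fps is (10 sec: ..., 60 sec: ..., 300 sec: ...)" lines report throughput over three trailing windows; a window shows nan until it contains at least two samples, which explains the first few reports above. A sketch of how such windowed rates can be computed from (timestamp, total_frames) samples (an illustration, not the library's code):

```python
import math
import time
from collections import deque

class FpsTracker:
    """Track frames-per-second over several trailing time windows."""

    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque(maxlen=1000)  # (timestamp, total_frames) pairs

    def record(self, total_frames):
        self.samples.append((time.time(), total_frames))

    def fps(self):
        rates = {}
        now, frames_now = self.samples[-1]
        for w in self.windows:
            # oldest sample still inside the window (deque iterates oldest first)
            old = next(((t, f) for t, f in self.samples if now - t <= w), None)
            if old is None or old == self.samples[-1] or now <= old[0]:
                rates[w] = math.nan  # not enough history for this window yet
            else:
                t0, f0 = old
                rates[w] = (frames_now - f0) / (now - t0)
        return rates
```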
|
[2023-02-25 09:55:24,340][00973] Fps is (10 sec: 2867.2, 60 sec: 1146.9, 300 sec: 1146.9). Total num frames: 28672. Throughput: 0: 206.4. Samples: 5160. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-02-25 09:55:24,343][00973] Avg episode reward: [(0, '3.525')] |
|
[2023-02-25 09:55:26,894][11779] Updated weights for policy 0, policy_version 10 (0.0015) |
|
[2023-02-25 09:55:29,340][00973] Fps is (10 sec: 3686.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 49152. Throughput: 0: 369.8. Samples: 11094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 09:55:29,348][00973] Avg episode reward: [(0, '4.295')] |
|
[2023-02-25 09:55:34,340][00973] Fps is (10 sec: 3686.2, 60 sec: 1872.4, 300 sec: 1872.4). Total num frames: 65536. Throughput: 0: 487.9. Samples: 17078. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:55:34,345][00973] Avg episode reward: [(0, '4.595')] |
|
[2023-02-25 09:55:39,340][00973] Fps is (10 sec: 2867.2, 60 sec: 1945.6, 300 sec: 1945.6). Total num frames: 77824. Throughput: 0: 467.7. Samples: 18708. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 09:55:39,343][00973] Avg episode reward: [(0, '4.600')] |
|
[2023-02-25 09:55:40,648][11779] Updated weights for policy 0, policy_version 20 (0.0020) |
|
[2023-02-25 09:55:44,340][00973] Fps is (10 sec: 2048.1, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 86016. Throughput: 0: 478.1. Samples: 21516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:55:44,342][00973] Avg episode reward: [(0, '4.534')] |
|
[2023-02-25 09:55:49,340][00973] Fps is (10 sec: 2047.9, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 98304. Throughput: 0: 553.5. Samples: 24906. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:55:49,345][00973] Avg episode reward: [(0, '4.416')] |
|
[2023-02-25 09:55:54,340][00973] Fps is (10 sec: 3276.8, 60 sec: 2159.7, 300 sec: 2159.7). Total num frames: 118784. Throughput: 0: 616.2. Samples: 27728. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 09:55:54,346][00973] Avg episode reward: [(0, '4.252')] |
|
[2023-02-25 09:55:54,349][11765] Saving new best policy, reward=4.252! |
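
"Saving new best policy" fires whenever the average episode reward exceeds the best value seen so far, so the logged best rewards are strictly increasing. A minimal sketch of that bookkeeping (the file name and saved-state layout are assumptions):

```python
import torch

class BestPolicySaver:
    """Save model weights whenever avg episode reward beats the previous best."""

    def __init__(self, path="best_policy.pth"):
        self.best_reward = float("-inf")
        self.path = path

    def maybe_save(self, model, avg_reward):
        if avg_reward > self.best_reward:
            self.best_reward = avg_reward
            torch.save({"model": model.state_dict(), "reward": avg_reward}, self.path)
            print(f"Saving new best policy, reward={avg_reward:.3f}!")
```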
|
[2023-02-25 09:55:54,766][11779] Updated weights for policy 0, policy_version 30 (0.0029) |
|
[2023-02-25 09:55:59,346][00973] Fps is (10 sec: 4093.5, 60 sec: 2320.8, 300 sec: 2320.8). Total num frames: 139264. Throughput: 0: 721.5. Samples: 34302. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:55:59,349][00973] Avg episode reward: [(0, '4.381')] |
|
[2023-02-25 09:55:59,362][11765] Saving new best policy, reward=4.381! |
|
[2023-02-25 09:56:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 2525.9, 300 sec: 2331.6). Total num frames: 151552. Throughput: 0: 782.2. Samples: 38476. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 09:56:04,344][00973] Avg episode reward: [(0, '4.454')] |
|
[2023-02-25 09:56:04,351][11765] Saving new best policy, reward=4.454! |
|
[2023-02-25 09:56:07,654][11779] Updated weights for policy 0, policy_version 40 (0.0014) |
|
[2023-02-25 09:56:09,340][00973] Fps is (10 sec: 2868.9, 60 sec: 2798.9, 300 sec: 2399.1). Total num frames: 167936. Throughput: 0: 782.9. Samples: 40392. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 09:56:09,350][00973] Avg episode reward: [(0, '4.481')] |
|
[2023-02-25 09:56:09,363][11765] Saving new best policy, reward=4.481! |
|
[2023-02-25 09:56:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2512.2). Total num frames: 188416. Throughput: 0: 772.2. Samples: 45842. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 09:56:14,346][00973] Avg episode reward: [(0, '4.544')] |
|
[2023-02-25 09:56:14,348][11765] Saving new best policy, reward=4.544! |
|
[2023-02-25 09:56:18,071][11779] Updated weights for policy 0, policy_version 50 (0.0023) |
|
[2023-02-25 09:56:19,340][00973] Fps is (10 sec: 4096.2, 60 sec: 3276.8, 300 sec: 2611.2). Total num frames: 208896. Throughput: 0: 780.7. Samples: 52208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:56:19,342][00973] Avg episode reward: [(0, '4.530')] |
|
[2023-02-25 09:56:24,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2602.2). Total num frames: 221184. Throughput: 0: 799.7. Samples: 54694. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:56:24,343][00973] Avg episode reward: [(0, '4.405')] |
|
[2023-02-25 09:56:29,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3140.3, 300 sec: 2639.6). Total num frames: 237568. Throughput: 0: 830.4. Samples: 58882. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:56:29,346][00973] Avg episode reward: [(0, '4.463')] |
|
[2023-02-25 09:56:29,356][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000058_237568.pth... |
|
[2023-02-25 09:56:31,364][11779] Updated weights for policy 0, policy_version 60 (0.0027) |
|
[2023-02-25 09:56:34,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3208.5, 300 sec: 2716.3). Total num frames: 258048. Throughput: 0: 875.0. Samples: 64280. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:56:34,348][00973] Avg episode reward: [(0, '4.554')] |
|
[2023-02-25 09:56:34,351][11765] Saving new best policy, reward=4.554! |
|
[2023-02-25 09:56:39,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3345.1, 300 sec: 2785.3). Total num frames: 278528. Throughput: 0: 883.4. Samples: 67480. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 09:56:39,345][00973] Avg episode reward: [(0, '4.615')] |
|
[2023-02-25 09:56:39,358][11765] Saving new best policy, reward=4.615! |
|
[2023-02-25 09:56:41,504][11779] Updated weights for policy 0, policy_version 70 (0.0017) |
|
[2023-02-25 09:56:44,340][00973] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 2808.7). Total num frames: 294912. Throughput: 0: 860.0. Samples: 72996. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:56:44,343][00973] Avg episode reward: [(0, '4.596')] |
|
[2023-02-25 09:56:49,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 2792.7). Total num frames: 307200. Throughput: 0: 859.2. Samples: 77142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:56:49,345][00973] Avg episode reward: [(0, '4.593')] |
|
[2023-02-25 09:56:54,139][11779] Updated weights for policy 0, policy_version 80 (0.0018) |
|
[2023-02-25 09:56:54,341][00973] Fps is (10 sec: 3276.4, 60 sec: 3481.5, 300 sec: 2849.4). Total num frames: 327680. Throughput: 0: 870.5. Samples: 79564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 09:56:54,345][00973] Avg episode reward: [(0, '4.663')] |
|
[2023-02-25 09:56:54,348][11765] Saving new best policy, reward=4.663! |
|
[2023-02-25 09:56:59,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3482.0, 300 sec: 2901.3). Total num frames: 348160. Throughput: 0: 892.8. Samples: 86018. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:56:59,347][00973] Avg episode reward: [(0, '4.376')] |
|
[2023-02-25 09:57:04,340][00973] Fps is (10 sec: 3686.9, 60 sec: 3549.9, 300 sec: 2916.4). Total num frames: 364544. Throughput: 0: 870.4. Samples: 91376. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:57:04,342][00973] Avg episode reward: [(0, '4.325')] |
|
[2023-02-25 09:57:05,467][11779] Updated weights for policy 0, policy_version 90 (0.0016) |
|
[2023-02-25 09:57:09,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 2898.7). Total num frames: 376832. Throughput: 0: 860.7. Samples: 93424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:57:09,346][00973] Avg episode reward: [(0, '4.623')] |
|
[2023-02-25 09:57:14,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 2912.7). Total num frames: 393216. Throughput: 0: 865.8. Samples: 97842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:57:14,342][00973] Avg episode reward: [(0, '4.870')] |
|
[2023-02-25 09:57:14,381][11765] Saving new best policy, reward=4.870! |
|
[2023-02-25 09:57:17,272][11779] Updated weights for policy 0, policy_version 100 (0.0031) |
|
[2023-02-25 09:57:19,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 2984.2). Total num frames: 417792. Throughput: 0: 885.5. Samples: 104126. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:57:19,342][00973] Avg episode reward: [(0, '4.650')] |
|
[2023-02-25 09:57:24,345][00973] Fps is (10 sec: 4093.8, 60 sec: 3549.6, 300 sec: 2994.2). Total num frames: 434176. Throughput: 0: 887.2. Samples: 107410. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:57:24,348][00973] Avg episode reward: [(0, '4.451')] |
|
[2023-02-25 09:57:29,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 2976.4). Total num frames: 446464. Throughput: 0: 855.0. Samples: 111472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:57:29,346][00973] Avg episode reward: [(0, '4.453')] |
|
[2023-02-25 09:57:30,040][11779] Updated weights for policy 0, policy_version 110 (0.0019) |
|
[2023-02-25 09:57:34,340][00973] Fps is (10 sec: 2868.7, 60 sec: 3413.4, 300 sec: 2986.1). Total num frames: 462848. Throughput: 0: 866.1. Samples: 116116. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:57:34,347][00973] Avg episode reward: [(0, '4.505')] |
|
[2023-02-25 09:57:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3020.8). Total num frames: 483328. Throughput: 0: 883.0. Samples: 119296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 09:57:39,343][00973] Avg episode reward: [(0, '4.459')] |
|
[2023-02-25 09:57:40,431][11779] Updated weights for policy 0, policy_version 120 (0.0022) |
|
[2023-02-25 09:57:44,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3053.4). Total num frames: 503808. Throughput: 0: 878.0. Samples: 125526. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 09:57:44,346][00973] Avg episode reward: [(0, '4.495')] |
|
[2023-02-25 09:57:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3035.9). Total num frames: 516096. Throughput: 0: 848.8. Samples: 129570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:57:49,358][00973] Avg episode reward: [(0, '4.590')] |
|
[2023-02-25 09:57:53,883][11779] Updated weights for policy 0, policy_version 130 (0.0050) |
|
[2023-02-25 09:57:54,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.4, 300 sec: 3042.7). Total num frames: 532480. Throughput: 0: 849.2. Samples: 131640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 09:57:54,342][00973] Avg episode reward: [(0, '4.671')] |
|
[2023-02-25 09:57:59,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3072.0). Total num frames: 552960. Throughput: 0: 880.4. Samples: 137462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:57:59,343][00973] Avg episode reward: [(0, '4.739')] |
|
[2023-02-25 09:58:04,047][11779] Updated weights for policy 0, policy_version 140 (0.0030) |
|
[2023-02-25 09:58:04,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3099.7). Total num frames: 573440. Throughput: 0: 877.9. Samples: 143630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:58:04,345][00973] Avg episode reward: [(0, '4.714')] |
|
[2023-02-25 09:58:09,347][00973] Fps is (10 sec: 3274.4, 60 sec: 3481.2, 300 sec: 3082.7). Total num frames: 585728. Throughput: 0: 850.1. Samples: 145664. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:58:09,354][00973] Avg episode reward: [(0, '4.554')] |
|
[2023-02-25 09:58:14,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3087.8). Total num frames: 602112. Throughput: 0: 848.3. Samples: 149646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 09:58:14,343][00973] Avg episode reward: [(0, '4.442')] |
|
[2023-02-25 09:58:17,043][11779] Updated weights for policy 0, policy_version 150 (0.0028) |
|
[2023-02-25 09:58:19,340][00973] Fps is (10 sec: 3689.1, 60 sec: 3413.3, 300 sec: 3113.0). Total num frames: 622592. Throughput: 0: 882.1. Samples: 155812. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:58:19,342][00973] Avg episode reward: [(0, '4.321')] |
|
[2023-02-25 09:58:24,343][00973] Fps is (10 sec: 3685.0, 60 sec: 3413.4, 300 sec: 3116.9). Total num frames: 638976. Throughput: 0: 883.2. Samples: 159042. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 09:58:24,347][00973] Avg episode reward: [(0, '4.282')] |
|
[2023-02-25 09:58:29,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3413.3, 300 sec: 3101.3). Total num frames: 651264. Throughput: 0: 825.8. Samples: 162688. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 09:58:29,345][00973] Avg episode reward: [(0, '4.572')] |
|
[2023-02-25 09:58:29,359][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000159_651264.pth... |
|
[2023-02-25 09:58:30,403][11779] Updated weights for policy 0, policy_version 160 (0.0025) |
|
[2023-02-25 09:58:34,340][00973] Fps is (10 sec: 2458.5, 60 sec: 3345.1, 300 sec: 3086.3). Total num frames: 663552. Throughput: 0: 805.5. Samples: 165818. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0) |
|
[2023-02-25 09:58:34,345][00973] Avg episode reward: [(0, '4.461')] |
|
[2023-02-25 09:58:39,340][00973] Fps is (10 sec: 2457.7, 60 sec: 3208.5, 300 sec: 3072.0). Total num frames: 675840. Throughput: 0: 796.8. Samples: 167496. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 09:58:39,350][00973] Avg episode reward: [(0, '4.692')] |
|
[2023-02-25 09:58:44,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3140.2, 300 sec: 3076.5). Total num frames: 692224. Throughput: 0: 785.6. Samples: 172816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:58:44,348][00973] Avg episode reward: [(0, '4.526')] |
|
[2023-02-25 09:58:44,357][11779] Updated weights for policy 0, policy_version 170 (0.0019) |
|
[2023-02-25 09:58:49,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3116.5). Total num frames: 716800. Throughput: 0: 791.4. Samples: 179242. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:58:49,345][00973] Avg episode reward: [(0, '4.481')] |
|
[2023-02-25 09:58:54,340][00973] Fps is (10 sec: 3686.5, 60 sec: 3276.8, 300 sec: 3102.5). Total num frames: 729088. Throughput: 0: 798.3. Samples: 181582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:58:54,346][00973] Avg episode reward: [(0, '4.511')] |
|
[2023-02-25 09:58:56,297][11779] Updated weights for policy 0, policy_version 180 (0.0022) |
|
[2023-02-25 09:58:59,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3089.1). Total num frames: 741376. Throughput: 0: 800.3. Samples: 185658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 09:58:59,347][00973] Avg episode reward: [(0, '4.547')] |
|
[2023-02-25 09:59:04,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3126.3). Total num frames: 765952. Throughput: 0: 787.8. Samples: 191264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:59:04,348][00973] Avg episode reward: [(0, '4.664')] |
|
[2023-02-25 09:59:07,165][11779] Updated weights for policy 0, policy_version 190 (0.0018) |
|
[2023-02-25 09:59:09,340][00973] Fps is (10 sec: 4505.6, 60 sec: 3345.5, 300 sec: 3145.7). Total num frames: 786432. Throughput: 0: 786.1. Samples: 194414. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:59:09,342][00973] Avg episode reward: [(0, '4.829')] |
|
[2023-02-25 09:59:14,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3276.8, 300 sec: 3132.2). Total num frames: 798720. Throughput: 0: 824.4. Samples: 199786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:59:14,344][00973] Avg episode reward: [(0, '4.781')] |
|
[2023-02-25 09:59:19,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3135.0). Total num frames: 815104. Throughput: 0: 844.7. Samples: 203828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:59:19,347][00973] Avg episode reward: [(0, '4.849')] |
|
[2023-02-25 09:59:20,506][11779] Updated weights for policy 0, policy_version 200 (0.0019) |
|
[2023-02-25 09:59:24,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3208.7, 300 sec: 3137.7). Total num frames: 831488. Throughput: 0: 864.4. Samples: 206392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:59:24,348][00973] Avg episode reward: [(0, '5.087')] |
|
[2023-02-25 09:59:24,370][11765] Saving new best policy, reward=5.087! |
|
[2023-02-25 09:59:29,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3170.6). Total num frames: 856064. Throughput: 0: 886.2. Samples: 212694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:59:29,343][00973] Avg episode reward: [(0, '4.953')] |
|
[2023-02-25 09:59:30,370][11779] Updated weights for policy 0, policy_version 210 (0.0022) |
|
[2023-02-25 09:59:34,345][00973] Fps is (10 sec: 3684.6, 60 sec: 3413.1, 300 sec: 3157.6). Total num frames: 868352. Throughput: 0: 859.8. Samples: 217936. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-02-25 09:59:34,357][00973] Avg episode reward: [(0, '4.877')] |
|
[2023-02-25 09:59:39,340][00973] Fps is (10 sec: 2867.0, 60 sec: 3481.6, 300 sec: 3159.8). Total num frames: 884736. Throughput: 0: 852.0. Samples: 219922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 09:59:39,348][00973] Avg episode reward: [(0, '4.988')] |
|
[2023-02-25 09:59:43,655][11779] Updated weights for policy 0, policy_version 220 (0.0013) |
|
[2023-02-25 09:59:44,340][00973] Fps is (10 sec: 3278.4, 60 sec: 3481.6, 300 sec: 3161.8). Total num frames: 901120. Throughput: 0: 868.9. Samples: 224760. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 09:59:44,348][00973] Avg episode reward: [(0, '5.206')] |
|
[2023-02-25 09:59:44,352][11765] Saving new best policy, reward=5.206! |
|
[2023-02-25 09:59:49,340][00973] Fps is (10 sec: 3686.6, 60 sec: 3413.3, 300 sec: 3177.9). Total num frames: 921600. Throughput: 0: 883.6. Samples: 231028. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 09:59:49,345][00973] Avg episode reward: [(0, '5.193')] |
|
[2023-02-25 09:59:54,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3179.6). Total num frames: 937984. Throughput: 0: 881.6. Samples: 234084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 09:59:54,346][00973] Avg episode reward: [(0, '5.000')] |
|
[2023-02-25 09:59:54,394][11779] Updated weights for policy 0, policy_version 230 (0.0023) |
|
[2023-02-25 09:59:59,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3235.1). Total num frames: 954368. Throughput: 0: 852.5. Samples: 238148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 09:59:59,347][00973] Avg episode reward: [(0, '4.887')] |
|
[2023-02-25 10:00:04,342][00973] Fps is (10 sec: 3276.0, 60 sec: 3413.2, 300 sec: 3290.7). Total num frames: 970752. Throughput: 0: 869.6. Samples: 242962. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:00:04,345][00973] Avg episode reward: [(0, '4.917')] |
|
[2023-02-25 10:00:06,590][11779] Updated weights for policy 0, policy_version 240 (0.0021) |
|
[2023-02-25 10:00:09,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 995328. Throughput: 0: 884.8. Samples: 246208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:00:09,347][00973] Avg episode reward: [(0, '5.000')] |
|
[2023-02-25 10:00:14,340][00973] Fps is (10 sec: 4097.0, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 1011712. Throughput: 0: 880.3. Samples: 252308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:00:14,349][00973] Avg episode reward: [(0, '4.909')] |
|
[2023-02-25 10:00:18,740][11779] Updated weights for policy 0, policy_version 250 (0.0032) |
|
[2023-02-25 10:00:19,341][00973] Fps is (10 sec: 2866.8, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 1024000. Throughput: 0: 853.5. Samples: 256342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:00:19,347][00973] Avg episode reward: [(0, '4.980')] |
|
[2023-02-25 10:00:24,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3360.1). Total num frames: 1040384. Throughput: 0: 855.0. Samples: 258398. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 10:00:24,344][00973] Avg episode reward: [(0, '5.126')] |
|
[2023-02-25 10:00:29,340][00973] Fps is (10 sec: 3686.9, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1060864. Throughput: 0: 882.0. Samples: 264448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:00:29,342][00973] Avg episode reward: [(0, '5.087')] |
|
[2023-02-25 10:00:29,360][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000259_1060864.pth... |
|
[2023-02-25 10:00:29,478][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000058_237568.pth |
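
Periodic checkpoints are named checkpoint_<policy_version>_<env_frames>.pth, and only the newest few are kept: each save is followed by removal of the oldest file, as in the save/remove pair above (throughout this log, two checkpoints are retained at a time). A sketch of that rotation; the keep-count and state layout are assumptions:

```python
import glob
import os
import torch

def save_checkpoint(model, ckpt_dir, policy_version, env_frames, keep=2):
    """Write checkpoint_<version>_<frames>.pth and prune all but the newest `keep`."""
    name = f"checkpoint_{policy_version:09d}_{env_frames}.pth"  # e.g. checkpoint_000000259_1060864.pth
    torch.save({"model": model.state_dict()}, os.path.join(ckpt_dir, name))
    checkpoints = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for old in checkpoints[:-keep]:  # zero-padded versions sort lexicographically
        print(f"Removing {old}")
        os.remove(old)
```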
|
[2023-02-25 10:00:29,806][11779] Updated weights for policy 0, policy_version 260 (0.0029) |
|
[2023-02-25 10:00:34,343][00973] Fps is (10 sec: 4094.6, 60 sec: 3550.0, 300 sec: 3401.7). Total num frames: 1081344. Throughput: 0: 873.7. Samples: 270346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:00:34,348][00973] Avg episode reward: [(0, '4.896')] |
|
[2023-02-25 10:00:39,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1093632. Throughput: 0: 851.2. Samples: 272390. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:00:39,342][00973] Avg episode reward: [(0, '4.871')] |
|
[2023-02-25 10:00:43,350][11779] Updated weights for policy 0, policy_version 270 (0.0024) |
|
[2023-02-25 10:00:44,340][00973] Fps is (10 sec: 2868.1, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1110016. Throughput: 0: 851.6. Samples: 276472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:00:44,342][00973] Avg episode reward: [(0, '5.040')] |
|
[2023-02-25 10:00:49,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1130496. Throughput: 0: 886.2. Samples: 282838. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:00:49,342][00973] Avg episode reward: [(0, '5.012')] |
|
[2023-02-25 10:00:52,760][11779] Updated weights for policy 0, policy_version 280 (0.0013) |
|
[2023-02-25 10:00:54,341][00973] Fps is (10 sec: 4095.5, 60 sec: 3549.8, 300 sec: 3429.6). Total num frames: 1150976. Throughput: 0: 883.7. Samples: 285976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:00:54,345][00973] Avg episode reward: [(0, '5.111')] |
|
[2023-02-25 10:00:59,344][00973] Fps is (10 sec: 3275.5, 60 sec: 3481.4, 300 sec: 3429.5). Total num frames: 1163264. Throughput: 0: 852.8. Samples: 290688. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:00:59,346][00973] Avg episode reward: [(0, '5.119')] |
|
[2023-02-25 10:01:04,340][00973] Fps is (10 sec: 2867.5, 60 sec: 3481.7, 300 sec: 3429.5). Total num frames: 1179648. Throughput: 0: 855.1. Samples: 294822. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:01:04,343][00973] Avg episode reward: [(0, '5.469')] |
|
[2023-02-25 10:01:04,351][11765] Saving new best policy, reward=5.469! |
|
[2023-02-25 10:01:06,132][11779] Updated weights for policy 0, policy_version 290 (0.0013) |
|
[2023-02-25 10:01:09,340][00973] Fps is (10 sec: 3687.9, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1200128. Throughput: 0: 878.7. Samples: 297938. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:01:09,342][00973] Avg episode reward: [(0, '5.625')] |
|
[2023-02-25 10:01:09,356][11765] Saving new best policy, reward=5.625! |
|
[2023-02-25 10:01:14,343][00973] Fps is (10 sec: 3686.5, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1216512. Throughput: 0: 869.3. Samples: 303566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:01:14,347][00973] Avg episode reward: [(0, '6.109')] |
|
[2023-02-25 10:01:14,353][11765] Saving new best policy, reward=6.109! |
|
[2023-02-25 10:01:19,340][00973] Fps is (10 sec: 2457.5, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 1224704. Throughput: 0: 811.5. Samples: 306860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:01:19,343][00973] Avg episode reward: [(0, '5.947')] |
|
[2023-02-25 10:01:19,835][11779] Updated weights for policy 0, policy_version 300 (0.0016) |
|
[2023-02-25 10:01:24,340][00973] Fps is (10 sec: 2047.9, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1236992. Throughput: 0: 800.5. Samples: 308414. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 10:01:24,346][00973] Avg episode reward: [(0, '6.019')] |
|
[2023-02-25 10:01:29,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3374.0). Total num frames: 1253376. Throughput: 0: 787.7. Samples: 311920. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 10:01:29,342][00973] Avg episode reward: [(0, '6.018')] |
|
[2023-02-25 10:01:33,210][11779] Updated weights for policy 0, policy_version 310 (0.0022) |
|
[2023-02-25 10:01:34,340][00973] Fps is (10 sec: 3686.6, 60 sec: 3208.7, 300 sec: 3374.0). Total num frames: 1273856. Throughput: 0: 789.0. Samples: 318342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:01:34,347][00973] Avg episode reward: [(0, '5.850')] |
|
[2023-02-25 10:01:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1290240. Throughput: 0: 790.3. Samples: 321538. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:01:39,347][00973] Avg episode reward: [(0, '5.517')] |
|
[2023-02-25 10:01:44,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1306624. Throughput: 0: 785.4. Samples: 326026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:01:44,348][00973] Avg episode reward: [(0, '5.538')] |
|
[2023-02-25 10:01:45,539][11779] Updated weights for policy 0, policy_version 320 (0.0033) |
|
[2023-02-25 10:01:49,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3140.2, 300 sec: 3360.1). Total num frames: 1318912. Throughput: 0: 785.7. Samples: 330178. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-02-25 10:01:49,347][00973] Avg episode reward: [(0, '5.656')] |
|
[2023-02-25 10:01:54,344][00973] Fps is (10 sec: 3684.9, 60 sec: 3208.4, 300 sec: 3373.9). Total num frames: 1343488. Throughput: 0: 788.7. Samples: 333434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:01:54,347][00973] Avg episode reward: [(0, '5.996')] |
|
[2023-02-25 10:01:56,213][11779] Updated weights for policy 0, policy_version 330 (0.0017) |
|
[2023-02-25 10:01:59,340][00973] Fps is (10 sec: 4096.2, 60 sec: 3277.0, 300 sec: 3374.0). Total num frames: 1359872. Throughput: 0: 806.2. Samples: 339844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:01:59,345][00973] Avg episode reward: [(0, '5.944')] |
|
[2023-02-25 10:02:04,340][00973] Fps is (10 sec: 3278.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1376256. Throughput: 0: 829.7. Samples: 344194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:02:04,346][00973] Avg episode reward: [(0, '5.896')] |
|
[2023-02-25 10:02:09,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3374.0). Total num frames: 1388544. Throughput: 0: 839.6. Samples: 346196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:02:09,344][00973] Avg episode reward: [(0, '6.013')] |
|
[2023-02-25 10:02:09,734][11779] Updated weights for policy 0, policy_version 340 (0.0021) |
|
[2023-02-25 10:02:14,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3360.1). Total num frames: 1409024. Throughput: 0: 885.2. Samples: 351754. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:02:14,347][00973] Avg episode reward: [(0, '6.206')] |
|
[2023-02-25 10:02:14,351][11765] Saving new best policy, reward=6.206! |
|
[2023-02-25 10:02:19,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3413.4, 300 sec: 3374.1). Total num frames: 1429504. Throughput: 0: 883.3. Samples: 358090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:02:19,351][00973] Avg episode reward: [(0, '6.269')] |
|
[2023-02-25 10:02:19,371][11765] Saving new best policy, reward=6.269! |
|
[2023-02-25 10:02:19,855][11779] Updated weights for policy 0, policy_version 350 (0.0027) |
|
[2023-02-25 10:02:24,341][00973] Fps is (10 sec: 3686.0, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 1445888. Throughput: 0: 858.9. Samples: 360188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:02:24,345][00973] Avg episode reward: [(0, '6.210')] |
|
[2023-02-25 10:02:29,342][00973] Fps is (10 sec: 2866.6, 60 sec: 3413.2, 300 sec: 3374.0). Total num frames: 1458176. Throughput: 0: 845.3. Samples: 364064. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:02:29,350][00973] Avg episode reward: [(0, '6.184')] |
|
[2023-02-25 10:02:29,366][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000356_1458176.pth... |
|
[2023-02-25 10:02:29,561][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000159_651264.pth |
|
[2023-02-25 10:02:33,301][11779] Updated weights for policy 0, policy_version 360 (0.0036) |
|
[2023-02-25 10:02:34,340][00973] Fps is (10 sec: 3277.1, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1478656. Throughput: 0: 875.3. Samples: 369568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:02:34,342][00973] Avg episode reward: [(0, '6.513')] |
|
[2023-02-25 10:02:34,349][11765] Saving new best policy, reward=6.513! |
|
[2023-02-25 10:02:39,340][00973] Fps is (10 sec: 4096.8, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 1499136. Throughput: 0: 872.3. Samples: 372686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:02:39,348][00973] Avg episode reward: [(0, '7.097')] |
|
[2023-02-25 10:02:39,360][11765] Saving new best policy, reward=7.097! |
|
[2023-02-25 10:02:44,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1511424. Throughput: 0: 846.4. Samples: 377932. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:02:44,343][00973] Avg episode reward: [(0, '7.363')] |
|
[2023-02-25 10:02:44,350][11765] Saving new best policy, reward=7.363! |
|
[2023-02-25 10:02:44,656][11779] Updated weights for policy 0, policy_version 370 (0.0020) |
|
[2023-02-25 10:02:49,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 1527808. Throughput: 0: 837.4. Samples: 381878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:02:49,343][00973] Avg episode reward: [(0, '7.667')] |
|
[2023-02-25 10:02:49,358][11765] Saving new best policy, reward=7.667! |
|
[2023-02-25 10:02:54,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3345.3, 300 sec: 3360.1). Total num frames: 1544192. Throughput: 0: 849.9. Samples: 384440. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:02:54,345][00973] Avg episode reward: [(0, '7.314')] |
|
[2023-02-25 10:02:56,298][11779] Updated weights for policy 0, policy_version 380 (0.0033) |
|
[2023-02-25 10:02:59,342][00973] Fps is (10 sec: 4095.1, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 1568768. Throughput: 0: 869.4. Samples: 390880. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:02:59,349][00973] Avg episode reward: [(0, '7.921')] |
|
[2023-02-25 10:02:59,360][11765] Saving new best policy, reward=7.921! |
|
[2023-02-25 10:03:04,342][00973] Fps is (10 sec: 3685.5, 60 sec: 3413.2, 300 sec: 3374.1). Total num frames: 1581056. Throughput: 0: 843.9. Samples: 396066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:03:04,347][00973] Avg episode reward: [(0, '7.578')] |
|
[2023-02-25 10:03:09,119][11779] Updated weights for policy 0, policy_version 390 (0.0018) |
|
[2023-02-25 10:03:09,340][00973] Fps is (10 sec: 2867.9, 60 sec: 3481.6, 300 sec: 3374.0). Total num frames: 1597440. Throughput: 0: 843.3. Samples: 398136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:03:09,346][00973] Avg episode reward: [(0, '7.861')] |
|
[2023-02-25 10:03:14,340][00973] Fps is (10 sec: 3277.6, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 1613824. Throughput: 0: 862.5. Samples: 402876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:03:14,344][00973] Avg episode reward: [(0, '7.311')] |
|
[2023-02-25 10:03:19,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1634304. Throughput: 0: 880.8. Samples: 409202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:03:19,342][00973] Avg episode reward: [(0, '7.926')] |
|
[2023-02-25 10:03:19,355][11765] Saving new best policy, reward=7.926! |
|
[2023-02-25 10:03:19,628][11779] Updated weights for policy 0, policy_version 400 (0.0025) |
|
[2023-02-25 10:03:24,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.4, 300 sec: 3387.9). Total num frames: 1650688. Throughput: 0: 878.0. Samples: 412196. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:03:24,343][00973] Avg episode reward: [(0, '7.924')] |
|
[2023-02-25 10:03:29,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3401.8). Total num frames: 1667072. Throughput: 0: 856.2. Samples: 416462. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:03:29,347][00973] Avg episode reward: [(0, '8.465')] |
|
[2023-02-25 10:03:29,374][11765] Saving new best policy, reward=8.465! |
|
[2023-02-25 10:03:32,795][11779] Updated weights for policy 0, policy_version 410 (0.0030) |
|
[2023-02-25 10:03:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3415.6). Total num frames: 1683456. Throughput: 0: 873.6. Samples: 421192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:03:34,342][00973] Avg episode reward: [(0, '8.780')] |
|
[2023-02-25 10:03:34,346][11765] Saving new best policy, reward=8.780! |
|
[2023-02-25 10:03:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 1703936. Throughput: 0: 887.6. Samples: 424380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:03:39,346][00973] Avg episode reward: [(0, '8.629')] |
|
[2023-02-25 10:03:42,437][11779] Updated weights for policy 0, policy_version 420 (0.0014) |
|
[2023-02-25 10:03:44,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 1724416. Throughput: 0: 884.4. Samples: 430674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:03:44,343][00973] Avg episode reward: [(0, '8.427')] |
|
[2023-02-25 10:03:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 1736704. Throughput: 0: 858.4. Samples: 434692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:03:49,349][00973] Avg episode reward: [(0, '7.943')] |
|
[2023-02-25 10:03:54,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 1753088. Throughput: 0: 857.8. Samples: 436736. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:03:54,343][00973] Avg episode reward: [(0, '7.938')] |
|
[2023-02-25 10:03:55,675][11779] Updated weights for policy 0, policy_version 430 (0.0023) |
|
[2023-02-25 10:03:59,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3345.2, 300 sec: 3401.8). Total num frames: 1769472. Throughput: 0: 881.9. Samples: 442560. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:03:59,342][00973] Avg episode reward: [(0, '7.741')] |
|
[2023-02-25 10:04:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3413.5, 300 sec: 3387.9). Total num frames: 1785856. Throughput: 0: 831.1. Samples: 446602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:04:04,347][00973] Avg episode reward: [(0, '8.023')] |
|
[2023-02-25 10:04:09,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1794048. Throughput: 0: 800.3. Samples: 448208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:04:09,347][00973] Avg episode reward: [(0, '8.386')] |
|
[2023-02-25 10:04:11,356][11779] Updated weights for policy 0, policy_version 440 (0.0012) |
|
[2023-02-25 10:04:14,340][00973] Fps is (10 sec: 2457.5, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1810432. Throughput: 0: 789.1. Samples: 451970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:04:14,349][00973] Avg episode reward: [(0, '8.694')] |
|
[2023-02-25 10:04:19,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1830912. Throughput: 0: 804.6. Samples: 457398. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:04:19,343][00973] Avg episode reward: [(0, '9.099')] |
|
[2023-02-25 10:04:19,357][11765] Saving new best policy, reward=9.099! |
|
[2023-02-25 10:04:22,225][11779] Updated weights for policy 0, policy_version 450 (0.0018) |
|
[2023-02-25 10:04:24,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 1851392. Throughput: 0: 805.3. Samples: 460620. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:04:24,343][00973] Avg episode reward: [(0, '9.553')] |
|
[2023-02-25 10:04:24,350][11765] Saving new best policy, reward=9.553! |
|
[2023-02-25 10:04:29,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3345.0, 300 sec: 3387.9). Total num frames: 1867776. Throughput: 0: 793.9. Samples: 466402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:04:29,343][00973] Avg episode reward: [(0, '9.601')] |
|
[2023-02-25 10:04:29,351][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000456_1867776.pth... |
|
[2023-02-25 10:04:29,476][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000259_1060864.pth |
|
[2023-02-25 10:04:29,492][11765] Saving new best policy, reward=9.601! |
|
[2023-02-25 10:04:34,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1880064. Throughput: 0: 794.9. Samples: 470464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:04:34,346][00973] Avg episode reward: [(0, '9.125')] |
|
[2023-02-25 10:04:35,251][11779] Updated weights for policy 0, policy_version 460 (0.0030) |
|
[2023-02-25 10:04:39,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3208.5, 300 sec: 3374.0). Total num frames: 1896448. Throughput: 0: 796.4. Samples: 472572. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:04:39,348][00973] Avg episode reward: [(0, '9.435')] |
|
[2023-02-25 10:04:44,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 1921024. Throughput: 0: 814.8. Samples: 479224. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:04:44,342][00973] Avg episode reward: [(0, '9.517')] |
|
[2023-02-25 10:04:45,078][11779] Updated weights for policy 0, policy_version 470 (0.0024) |
|
[2023-02-25 10:04:49,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3345.1, 300 sec: 3387.9). Total num frames: 1937408. Throughput: 0: 847.1. Samples: 484720. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:04:49,342][00973] Avg episode reward: [(0, '10.231')] |
|
[2023-02-25 10:04:49,361][11765] Saving new best policy, reward=10.231! |
|
[2023-02-25 10:04:54,342][00973] Fps is (10 sec: 2866.6, 60 sec: 3276.7, 300 sec: 3374.0). Total num frames: 1949696. Throughput: 0: 854.1. Samples: 486644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:04:54,348][00973] Avg episode reward: [(0, '10.014')] |
|
[2023-02-25 10:04:58,419][11779] Updated weights for policy 0, policy_version 480 (0.0037) |
|
[2023-02-25 10:04:59,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3374.0). Total num frames: 1966080. Throughput: 0: 871.5. Samples: 491186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:04:59,343][00973] Avg episode reward: [(0, '9.786')] |
|
[2023-02-25 10:05:04,340][00973] Fps is (10 sec: 4096.9, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 1990656. Throughput: 0: 896.8. Samples: 497752. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:05:04,343][00973] Avg episode reward: [(0, '9.063')] |
|
[2023-02-25 10:05:08,283][11779] Updated weights for policy 0, policy_version 490 (0.0017) |
|
[2023-02-25 10:05:09,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3374.0). Total num frames: 2007040. Throughput: 0: 899.8. Samples: 501112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:05:09,345][00973] Avg episode reward: [(0, '9.233')] |
|
[2023-02-25 10:05:14,341][00973] Fps is (10 sec: 2866.8, 60 sec: 3481.5, 300 sec: 3374.0). Total num frames: 2019328. Throughput: 0: 865.1. Samples: 505332. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:05:14,344][00973] Avg episode reward: [(0, '9.084')] |
|
[2023-02-25 10:05:19,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 2039808. Throughput: 0: 877.6. Samples: 509954. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-02-25 10:05:19,342][00973] Avg episode reward: [(0, '9.008')] |
|
[2023-02-25 10:05:21,007][11779] Updated weights for policy 0, policy_version 500 (0.0019) |
|
[2023-02-25 10:05:24,340][00973] Fps is (10 sec: 4096.5, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 2060288. Throughput: 0: 904.4. Samples: 513270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:05:24,343][00973] Avg episode reward: [(0, '9.847')] |
|
[2023-02-25 10:05:29,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2080768. Throughput: 0: 897.9. Samples: 519628. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:05:29,346][00973] Avg episode reward: [(0, '10.980')] |
|
[2023-02-25 10:05:29,357][11765] Saving new best policy, reward=10.980! |
|
[2023-02-25 10:05:32,239][11779] Updated weights for policy 0, policy_version 510 (0.0013) |
|
[2023-02-25 10:05:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2093056. Throughput: 0: 869.3. Samples: 523838. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:05:34,344][00973] Avg episode reward: [(0, '11.295')] |
|
[2023-02-25 10:05:34,353][11765] Saving new best policy, reward=11.295! |
|
[2023-02-25 10:05:39,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2109440. Throughput: 0: 871.7. Samples: 525868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:05:39,342][00973] Avg episode reward: [(0, '11.503')] |
|
[2023-02-25 10:05:39,354][11765] Saving new best policy, reward=11.503! |
|
[2023-02-25 10:05:43,448][11779] Updated weights for policy 0, policy_version 520 (0.0034) |
|
[2023-02-25 10:05:44,342][00973] Fps is (10 sec: 3685.5, 60 sec: 3481.5, 300 sec: 3387.9). Total num frames: 2129920. Throughput: 0: 907.6. Samples: 532030. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:05:44,345][00973] Avg episode reward: [(0, '12.100')] |
|
[2023-02-25 10:05:44,406][11765] Saving new best policy, reward=12.100! |
|
[2023-02-25 10:05:49,343][00973] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 2150400. Throughput: 0: 898.4. Samples: 538178. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:05:49,346][00973] Avg episode reward: [(0, '11.840')] |
|
[2023-02-25 10:05:54,345][00973] Fps is (10 sec: 3275.8, 60 sec: 3549.7, 300 sec: 3387.9). Total num frames: 2162688. Throughput: 0: 867.1. Samples: 540134. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:05:54,350][00973] Avg episode reward: [(0, '11.884')] |
|
[2023-02-25 10:05:55,931][11779] Updated weights for policy 0, policy_version 530 (0.0015) |
|
[2023-02-25 10:05:59,342][00973] Fps is (10 sec: 2866.5, 60 sec: 3549.7, 300 sec: 3387.9). Total num frames: 2179072. Throughput: 0: 867.1. Samples: 544352. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:05:59,348][00973] Avg episode reward: [(0, '11.715')] |
|
[2023-02-25 10:06:04,340][00973] Fps is (10 sec: 4098.1, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 2203648. Throughput: 0: 905.4. Samples: 550696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:06:04,345][00973] Avg episode reward: [(0, '11.562')] |
|
[2023-02-25 10:06:06,200][11779] Updated weights for policy 0, policy_version 540 (0.0012) |
|
[2023-02-25 10:06:09,345][00973] Fps is (10 sec: 4094.8, 60 sec: 3549.6, 300 sec: 3401.7). Total num frames: 2220032. Throughput: 0: 904.0. Samples: 553956. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:06:09,357][00973] Avg episode reward: [(0, '10.078')] |
|
[2023-02-25 10:06:14,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3618.2, 300 sec: 3429.5). Total num frames: 2236416. Throughput: 0: 870.3. Samples: 558792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:06:14,347][00973] Avg episode reward: [(0, '10.272')] |
|
[2023-02-25 10:06:19,340][00973] Fps is (10 sec: 2868.7, 60 sec: 3481.6, 300 sec: 3429.5). Total num frames: 2248704. Throughput: 0: 868.2. Samples: 562906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:06:19,343][00973] Avg episode reward: [(0, '9.997')] |
|
[2023-02-25 10:06:19,442][11779] Updated weights for policy 0, policy_version 550 (0.0037) |
|
[2023-02-25 10:06:24,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2273280. Throughput: 0: 893.6. Samples: 566082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:06:24,343][00973] Avg episode reward: [(0, '9.779')] |
|
[2023-02-25 10:06:28,825][11779] Updated weights for policy 0, policy_version 560 (0.0012) |
|
[2023-02-25 10:06:29,340][00973] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 2293760. Throughput: 0: 906.0. Samples: 572800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:06:29,344][00973] Avg episode reward: [(0, '9.551')] |
|
[2023-02-25 10:06:29,366][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000560_2293760.pth... |
|
[2023-02-25 10:06:29,506][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000356_1458176.pth |
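Checkpoint filenames encode the policy version and the cumulative environment-frame count, so checkpoint_000000560_2293760.pth is policy version 560 at 2,293,760 frames, and each save is paired with the deletion of the oldest remaining checkpoint. A small sketch of that rotation, assuming a simple keep-newest-N policy (the helper and the limit are illustrative):

```python
import os
import re

CKPT_RE = re.compile(r"checkpoint_(\d+)_(\d+)\.pth$")

def prune_checkpoints(ckpt_dir, keep=2):
    """Delete all but the `keep` newest checkpoints, ordered by policy version."""
    ckpts = sorted(
        (f for f in os.listdir(ckpt_dir) if CKPT_RE.search(f)),
        key=lambda f: int(CKPT_RE.search(f).group(1)),
    )
    for old in ckpts[:-keep]:
        os.remove(os.path.join(ckpt_dir, old))  # e.g. the version-356 file above
```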
|
[2023-02-25 10:06:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2306048. Throughput: 0: 868.0. Samples: 577240. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:06:34,342][00973] Avg episode reward: [(0, '9.562')] |
|
[2023-02-25 10:06:39,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2322432. Throughput: 0: 870.1. Samples: 579282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:06:39,343][00973] Avg episode reward: [(0, '8.640')] |
|
[2023-02-25 10:06:42,392][11779] Updated weights for policy 0, policy_version 570 (0.0031) |
|
[2023-02-25 10:06:44,346][00973] Fps is (10 sec: 3274.7, 60 sec: 3481.4, 300 sec: 3457.2). Total num frames: 2338816. Throughput: 0: 886.9. Samples: 584264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:06:44,353][00973] Avg episode reward: [(0, '8.455')] |
|
[2023-02-25 10:06:49,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3415.7). Total num frames: 2351104. Throughput: 0: 836.6. Samples: 588344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:06:49,344][00973] Avg episode reward: [(0, '8.946')] |
|
[2023-02-25 10:06:54,340][00973] Fps is (10 sec: 2459.2, 60 sec: 3345.4, 300 sec: 3401.8). Total num frames: 2363392. Throughput: 0: 802.6. Samples: 590068. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:06:54,345][00973] Avg episode reward: [(0, '9.212')] |
|
[2023-02-25 10:06:58,114][11779] Updated weights for policy 0, policy_version 580 (0.0032) |
|
[2023-02-25 10:06:59,341][00973] Fps is (10 sec: 2457.2, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 2375680. Throughput: 0: 784.6. Samples: 594102. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:06:59,344][00973] Avg episode reward: [(0, '9.381')] |
|
[2023-02-25 10:07:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 2396160. Throughput: 0: 809.0. Samples: 599312. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:07:04,348][00973] Avg episode reward: [(0, '10.310')] |
|
[2023-02-25 10:07:08,658][11779] Updated weights for policy 0, policy_version 590 (0.0025) |
|
[2023-02-25 10:07:09,340][00973] Fps is (10 sec: 4096.5, 60 sec: 3277.1, 300 sec: 3415.6). Total num frames: 2416640. Throughput: 0: 810.2. Samples: 602542. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:07:09,343][00973] Avg episode reward: [(0, '10.403')] |
|
[2023-02-25 10:07:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 2433024. Throughput: 0: 791.2. Samples: 608406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:07:14,342][00973] Avg episode reward: [(0, '9.986')] |
|
[2023-02-25 10:07:19,341][00973] Fps is (10 sec: 3276.9, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 2449408. Throughput: 0: 784.2. Samples: 612528. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:07:19,350][00973] Avg episode reward: [(0, '9.112')] |
|
[2023-02-25 10:07:22,080][11779] Updated weights for policy 0, policy_version 600 (0.0022) |
|
[2023-02-25 10:07:24,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3208.5, 300 sec: 3415.7). Total num frames: 2465792. Throughput: 0: 784.8. Samples: 614600. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-02-25 10:07:24,343][00973] Avg episode reward: [(0, '8.909')] |
|
[2023-02-25 10:07:29,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 2486272. Throughput: 0: 816.9. Samples: 621018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:07:29,346][00973] Avg episode reward: [(0, '9.283')] |
|
[2023-02-25 10:07:31,657][11779] Updated weights for policy 0, policy_version 610 (0.0026) |
|
[2023-02-25 10:07:34,342][00973] Fps is (10 sec: 3685.6, 60 sec: 3276.7, 300 sec: 3401.7). Total num frames: 2502656. Throughput: 0: 849.6. Samples: 626576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:07:34,346][00973] Avg episode reward: [(0, '9.720')] |
|
[2023-02-25 10:07:39,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3415.7). Total num frames: 2519040. Throughput: 0: 856.4. Samples: 628604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:07:39,342][00973] Avg episode reward: [(0, '9.803')] |
|
[2023-02-25 10:07:44,340][00973] Fps is (10 sec: 3277.6, 60 sec: 3277.2, 300 sec: 3415.6). Total num frames: 2535424. Throughput: 0: 868.1. Samples: 633166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:07:44,345][00973] Avg episode reward: [(0, '10.261')] |
|
[2023-02-25 10:07:44,784][11779] Updated weights for policy 0, policy_version 620 (0.0012) |
|
[2023-02-25 10:07:49,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 2560000. Throughput: 0: 897.4. Samples: 639696. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:07:49,343][00973] Avg episode reward: [(0, '10.078')] |
|
[2023-02-25 10:07:54,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3415.7). Total num frames: 2576384. Throughput: 0: 898.4. Samples: 642970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:07:54,346][00973] Avg episode reward: [(0, '10.387')] |
|
[2023-02-25 10:07:55,282][11779] Updated weights for policy 0, policy_version 630 (0.0022) |
|
[2023-02-25 10:07:59,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3550.0, 300 sec: 3415.7). Total num frames: 2588672. Throughput: 0: 862.0. Samples: 647198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:07:59,349][00973] Avg episode reward: [(0, '10.726')] |
|
[2023-02-25 10:08:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 2609152. Throughput: 0: 875.1. Samples: 651906. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:08:04,343][00973] Avg episode reward: [(0, '12.357')] |
|
[2023-02-25 10:08:04,352][11765] Saving new best policy, reward=12.357! |
|
[2023-02-25 10:08:07,123][11779] Updated weights for policy 0, policy_version 640 (0.0018) |
|
[2023-02-25 10:08:09,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2629632. Throughput: 0: 899.2. Samples: 655066. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:08:09,343][00973] Avg episode reward: [(0, '12.903')] |
|
[2023-02-25 10:08:09,355][11765] Saving new best policy, reward=12.903! |
|
[2023-02-25 10:08:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 2646016. Throughput: 0: 903.2. Samples: 661664. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:08:14,343][00973] Avg episode reward: [(0, '13.720')] |
|
[2023-02-25 10:08:14,348][11765] Saving new best policy, reward=13.720! |
|
[2023-02-25 10:08:18,932][11779] Updated weights for policy 0, policy_version 650 (0.0012) |
|
[2023-02-25 10:08:19,347][00973] Fps is (10 sec: 3274.4, 60 sec: 3549.4, 300 sec: 3429.4). Total num frames: 2662400. Throughput: 0: 871.6. Samples: 665800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:08:19,350][00973] Avg episode reward: [(0, '13.213')] |
|
[2023-02-25 10:08:24,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3429.5). Total num frames: 2678784. Throughput: 0: 871.4. Samples: 667818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:08:24,347][00973] Avg episode reward: [(0, '13.421')] |
|
[2023-02-25 10:08:29,340][00973] Fps is (10 sec: 3689.1, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 2699264. Throughput: 0: 902.1. Samples: 673762. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:08:29,348][00973] Avg episode reward: [(0, '14.502')] |
|
[2023-02-25 10:08:29,364][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000659_2699264.pth... |
|
[2023-02-25 10:08:29,492][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000456_1867776.pth |
|
[2023-02-25 10:08:29,503][11765] Saving new best policy, reward=14.502! |
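"Saving new best policy" fires only when the averaged episode reward beats the best value seen so far, which is why no best-policy saves appear during the reward dip around 10:06-10:07 above. A minimal sketch of the comparison (class and method names are illustrative):

```python
class BestPolicyTracker:
    """Keep a separate snapshot of the highest-reward policy seen so far."""

    def __init__(self):
        self.best_reward = float("-inf")

    def maybe_save(self, avg_reward, save_fn):
        if avg_reward > self.best_reward:
            self.best_reward = avg_reward
            save_fn()  # write the best-policy snapshot
            return True
        return False

tracker = BestPolicyTracker()
print(tracker.maybe_save(14.502, lambda: None))  # True: new best
print(tracker.maybe_save(14.000, lambda: None))  # False: reward dipped, no save
```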
|
[2023-02-25 10:08:30,147][11779] Updated weights for policy 0, policy_version 660 (0.0024) |
|
[2023-02-25 10:08:34,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3550.0, 300 sec: 3429.5). Total num frames: 2715648. Throughput: 0: 891.2. Samples: 679802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:08:34,347][00973] Avg episode reward: [(0, '14.911')] |
|
[2023-02-25 10:08:34,350][11765] Saving new best policy, reward=14.911! |
|
[2023-02-25 10:08:39,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 2732032. Throughput: 0: 864.6. Samples: 681878. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:08:39,348][00973] Avg episode reward: [(0, '14.547')] |
|
[2023-02-25 10:08:43,403][11779] Updated weights for policy 0, policy_version 670 (0.0015) |
|
[2023-02-25 10:08:44,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 2744320. Throughput: 0: 861.2. Samples: 685954. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2023-02-25 10:08:44,343][00973] Avg episode reward: [(0, '15.188')] |
|
[2023-02-25 10:08:44,345][11765] Saving new best policy, reward=15.188! |
|
[2023-02-25 10:08:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 2764800. Throughput: 0: 886.5. Samples: 691798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:08:49,343][00973] Avg episode reward: [(0, '14.590')] |
|
[2023-02-25 10:08:53,259][11779] Updated weights for policy 0, policy_version 680 (0.0013) |
|
[2023-02-25 10:08:54,347][00973] Fps is (10 sec: 4093.0, 60 sec: 3481.2, 300 sec: 3443.3). Total num frames: 2785280. Throughput: 0: 889.0. Samples: 695076. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:08:54,354][00973] Avg episode reward: [(0, '14.000')] |
|
[2023-02-25 10:08:59,344][00973] Fps is (10 sec: 3684.8, 60 sec: 3549.6, 300 sec: 3443.4). Total num frames: 2801664. Throughput: 0: 856.4. Samples: 700208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:08:59,351][00973] Avg episode reward: [(0, '13.908')] |
|
[2023-02-25 10:09:04,340][00973] Fps is (10 sec: 3279.0, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 2818048. Throughput: 0: 858.8. Samples: 704438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:09:04,349][00973] Avg episode reward: [(0, '14.016')] |
|
[2023-02-25 10:09:06,400][11779] Updated weights for policy 0, policy_version 690 (0.0029) |
|
[2023-02-25 10:09:09,340][00973] Fps is (10 sec: 3688.0, 60 sec: 3481.6, 300 sec: 3485.1). Total num frames: 2838528. Throughput: 0: 878.8. Samples: 707362. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:09:09,343][00973] Avg episode reward: [(0, '14.066')] |
|
[2023-02-25 10:09:14,340][00973] Fps is (10 sec: 4096.3, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 2859008. Throughput: 0: 895.4. Samples: 714054. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:09:14,342][00973] Avg episode reward: [(0, '14.408')] |
|
[2023-02-25 10:09:16,039][11779] Updated weights for policy 0, policy_version 700 (0.0014) |
|
[2023-02-25 10:09:19,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3550.3, 300 sec: 3471.2). Total num frames: 2875392. Throughput: 0: 871.8. Samples: 719032. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:09:19,342][00973] Avg episode reward: [(0, '15.386')] |
|
[2023-02-25 10:09:19,355][11765] Saving new best policy, reward=15.386! |
|
[2023-02-25 10:09:24,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3457.3). Total num frames: 2887680. Throughput: 0: 870.6. Samples: 721056. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:09:24,345][00973] Avg episode reward: [(0, '14.425')] |
|
[2023-02-25 10:09:29,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3471.2). Total num frames: 2904064. Throughput: 0: 883.1. Samples: 725694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:09:29,343][00973] Avg episode reward: [(0, '13.731')] |
|
[2023-02-25 10:09:30,416][11779] Updated weights for policy 0, policy_version 710 (0.0038) |
|
[2023-02-25 10:09:34,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3457.3). Total num frames: 2916352. Throughput: 0: 844.8. Samples: 729816. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:09:34,346][00973] Avg episode reward: [(0, '13.368')] |
|
[2023-02-25 10:09:39,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 2928640. Throughput: 0: 813.0. Samples: 731656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:09:39,342][00973] Avg episode reward: [(0, '12.963')] |
|
[2023-02-25 10:09:44,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 2940928. Throughput: 0: 786.3. Samples: 735588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:09:44,346][00973] Avg episode reward: [(0, '11.955')] |
|
[2023-02-25 10:09:45,976][11779] Updated weights for policy 0, policy_version 720 (0.0026) |
|
[2023-02-25 10:09:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3429.6). Total num frames: 2961408. Throughput: 0: 799.3. Samples: 740404. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:09:49,342][00973] Avg episode reward: [(0, '12.376')] |
|
[2023-02-25 10:09:54,340][00973] Fps is (10 sec: 4095.9, 60 sec: 3277.2, 300 sec: 3443.4). Total num frames: 2981888. Throughput: 0: 808.0. Samples: 743720. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:09:54,343][00973] Avg episode reward: [(0, '13.819')] |
|
[2023-02-25 10:09:55,744][11779] Updated weights for policy 0, policy_version 730 (0.0016) |
|
[2023-02-25 10:09:59,347][00973] Fps is (10 sec: 3683.7, 60 sec: 3276.6, 300 sec: 3415.6). Total num frames: 2998272. Throughput: 0: 792.6. Samples: 749728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:09:59,350][00973] Avg episode reward: [(0, '15.726')] |
|
[2023-02-25 10:09:59,371][11765] Saving new best policy, reward=15.726! |
|
[2023-02-25 10:10:04,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 3014656. Throughput: 0: 771.5. Samples: 753748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:10:04,347][00973] Avg episode reward: [(0, '15.942')] |
|
[2023-02-25 10:10:04,350][11765] Saving new best policy, reward=15.942! |
|
[2023-02-25 10:10:09,324][11779] Updated weights for policy 0, policy_version 740 (0.0016) |
|
[2023-02-25 10:10:09,340][00973] Fps is (10 sec: 3279.2, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 3031040. Throughput: 0: 769.7. Samples: 755692. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:10:09,344][00973] Avg episode reward: [(0, '15.750')] |
|
[2023-02-25 10:10:14,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3429.5). Total num frames: 3051520. Throughput: 0: 800.8. Samples: 761728. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:10:14,345][00973] Avg episode reward: [(0, '14.744')] |
|
[2023-02-25 10:10:19,340][00973] Fps is (10 sec: 3686.2, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 3067904. Throughput: 0: 842.1. Samples: 767710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:10:19,344][00973] Avg episode reward: [(0, '14.873')] |
|
[2023-02-25 10:10:19,822][11779] Updated weights for policy 0, policy_version 750 (0.0025) |
|
[2023-02-25 10:10:24,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3084288. Throughput: 0: 848.3. Samples: 769830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:10:24,347][00973] Avg episode reward: [(0, '14.879')] |
|
[2023-02-25 10:10:29,340][00973] Fps is (10 sec: 3277.0, 60 sec: 3276.8, 300 sec: 3415.6). Total num frames: 3100672. Throughput: 0: 853.2. Samples: 773984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:10:29,342][00973] Avg episode reward: [(0, '15.624')] |
|
[2023-02-25 10:10:29,356][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000757_3100672.pth... |
|
[2023-02-25 10:10:29,478][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000560_2293760.pth |
|
[2023-02-25 10:10:32,117][11779] Updated weights for policy 0, policy_version 760 (0.0019) |
|
[2023-02-25 10:10:34,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3429.5). Total num frames: 3121152. Throughput: 0: 887.3. Samples: 780332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:10:34,343][00973] Avg episode reward: [(0, '16.201')] |
|
[2023-02-25 10:10:34,351][11765] Saving new best policy, reward=16.201! |
|
[2023-02-25 10:10:39,340][00973] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3429.6). Total num frames: 3141632. Throughput: 0: 883.6. Samples: 783482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:10:39,346][00973] Avg episode reward: [(0, '17.320')] |
|
[2023-02-25 10:10:39,357][11765] Saving new best policy, reward=17.320! |
|
[2023-02-25 10:10:43,766][11779] Updated weights for policy 0, policy_version 770 (0.0029) |
|
[2023-02-25 10:10:44,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 3153920. Throughput: 0: 851.6. Samples: 788044. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:10:44,344][00973] Avg episode reward: [(0, '16.805')] |
|
[2023-02-25 10:10:49,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3415.7). Total num frames: 3170304. Throughput: 0: 855.1. Samples: 792226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:10:49,342][00973] Avg episode reward: [(0, '16.849')] |
|
[2023-02-25 10:10:54,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3429.6). Total num frames: 3190784. Throughput: 0: 881.6. Samples: 795366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:10:54,343][00973] Avg episode reward: [(0, '16.498')] |
|
[2023-02-25 10:10:55,280][11779] Updated weights for policy 0, policy_version 780 (0.0024) |
|
[2023-02-25 10:10:59,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3550.3, 300 sec: 3415.7). Total num frames: 3211264. Throughput: 0: 892.1. Samples: 801874. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:10:59,346][00973] Avg episode reward: [(0, '17.555')] |
|
[2023-02-25 10:10:59,358][11765] Saving new best policy, reward=17.555! |
|
[2023-02-25 10:11:04,342][00973] Fps is (10 sec: 3276.1, 60 sec: 3481.5, 300 sec: 3401.8). Total num frames: 3223552. Throughput: 0: 861.4. Samples: 806476. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:11:04,344][00973] Avg episode reward: [(0, '16.913')] |
|
[2023-02-25 10:11:08,008][11779] Updated weights for policy 0, policy_version 790 (0.0041) |
|
[2023-02-25 10:11:09,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3413.3, 300 sec: 3387.9). Total num frames: 3235840. Throughput: 0: 860.0. Samples: 808528. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:11:09,346][00973] Avg episode reward: [(0, '17.433')] |
|
[2023-02-25 10:11:14,346][00973] Fps is (10 sec: 3684.9, 60 sec: 3481.2, 300 sec: 3429.5). Total num frames: 3260416. Throughput: 0: 893.1. Samples: 814180. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:11:14,348][00973] Avg episode reward: [(0, '17.639')] |
|
[2023-02-25 10:11:14,351][11765] Saving new best policy, reward=17.639! |
|
[2023-02-25 10:11:17,787][11779] Updated weights for policy 0, policy_version 800 (0.0019) |
|
[2023-02-25 10:11:19,340][00973] Fps is (10 sec: 4505.5, 60 sec: 3549.9, 300 sec: 3415.6). Total num frames: 3280896. Throughput: 0: 897.1. Samples: 820704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:11:19,343][00973] Avg episode reward: [(0, '17.091')] |
|
[2023-02-25 10:11:24,340][00973] Fps is (10 sec: 3688.8, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 3297280. Throughput: 0: 876.9. Samples: 822944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:11:24,348][00973] Avg episode reward: [(0, '18.097')] |
|
[2023-02-25 10:11:24,350][11765] Saving new best policy, reward=18.097! |
|
[2023-02-25 10:11:29,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3481.6, 300 sec: 3401.8). Total num frames: 3309568. Throughput: 0: 866.0. Samples: 827014. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-02-25 10:11:29,347][00973] Avg episode reward: [(0, '17.990')] |
|
[2023-02-25 10:11:31,160][11779] Updated weights for policy 0, policy_version 810 (0.0018) |
|
[2023-02-25 10:11:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3415.6). Total num frames: 3330048. Throughput: 0: 898.0. Samples: 832638. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:11:34,342][00973] Avg episode reward: [(0, '18.579')] |
|
[2023-02-25 10:11:34,345][11765] Saving new best policy, reward=18.579! |
|
[2023-02-25 10:11:39,340][00973] Fps is (10 sec: 4096.2, 60 sec: 3481.6, 300 sec: 3429.6). Total num frames: 3350528. Throughput: 0: 901.6. Samples: 835936. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:11:39,343][00973] Avg episode reward: [(0, '19.553')] |
|
[2023-02-25 10:11:39,351][11765] Saving new best policy, reward=19.553! |
|
[2023-02-25 10:11:40,926][11779] Updated weights for policy 0, policy_version 820 (0.0018) |
|
[2023-02-25 10:11:44,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3443.4). Total num frames: 3366912. Throughput: 0: 878.9. Samples: 841424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:11:44,344][00973] Avg episode reward: [(0, '19.930')] |
|
[2023-02-25 10:11:44,349][11765] Saving new best policy, reward=19.930! |
|
[2023-02-25 10:11:49,340][00973] Fps is (10 sec: 2867.3, 60 sec: 3481.6, 300 sec: 3443.4). Total num frames: 3379200. Throughput: 0: 870.5. Samples: 845646. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-02-25 10:11:49,348][00973] Avg episode reward: [(0, '19.825')] |
|
[2023-02-25 10:11:53,647][11779] Updated weights for policy 0, policy_version 830 (0.0019) |
|
[2023-02-25 10:11:54,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3471.2). Total num frames: 3399680. Throughput: 0: 882.7. Samples: 848250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:11:54,346][00973] Avg episode reward: [(0, '20.190')] |
|
[2023-02-25 10:11:54,351][11765] Saving new best policy, reward=20.190! |
|
[2023-02-25 10:11:59,340][00973] Fps is (10 sec: 4505.6, 60 sec: 3549.9, 300 sec: 3485.1). Total num frames: 3424256. Throughput: 0: 900.4. Samples: 854690. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:11:59,343][00973] Avg episode reward: [(0, '21.194')] |
|
[2023-02-25 10:11:59,358][11765] Saving new best policy, reward=21.194! |
|
[2023-02-25 10:12:04,341][00973] Fps is (10 sec: 3685.8, 60 sec: 3549.9, 300 sec: 3457.3). Total num frames: 3436544. Throughput: 0: 873.7. Samples: 860022. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:12:04,348][00973] Avg episode reward: [(0, '20.666')] |
|
[2023-02-25 10:12:04,701][11779] Updated weights for policy 0, policy_version 840 (0.0016) |
|
[2023-02-25 10:12:09,340][00973] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3457.3). Total num frames: 3452928. Throughput: 0: 866.5. Samples: 861938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:12:09,344][00973] Avg episode reward: [(0, '20.974')] |
|
[2023-02-25 10:12:14,346][00973] Fps is (10 sec: 3275.2, 60 sec: 3481.6, 300 sec: 3457.2). Total num frames: 3469312. Throughput: 0: 880.5. Samples: 866640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:12:14,349][00973] Avg episode reward: [(0, '21.331')] |
|
[2023-02-25 10:12:14,351][11765] Saving new best policy, reward=21.331! |
|
[2023-02-25 10:12:18,764][11779] Updated weights for policy 0, policy_version 850 (0.0045) |
|
[2023-02-25 10:12:19,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3443.4). Total num frames: 3481600. Throughput: 0: 847.3. Samples: 870768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:12:19,344][00973] Avg episode reward: [(0, '21.171')] |
|
[2023-02-25 10:12:24,341][00973] Fps is (10 sec: 2458.7, 60 sec: 3276.7, 300 sec: 3415.6). Total num frames: 3493888. Throughput: 0: 820.3. Samples: 872850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:12:24,344][00973] Avg episode reward: [(0, '22.179')] |
|
[2023-02-25 10:12:24,350][11765] Saving new best policy, reward=22.179! |
|
[2023-02-25 10:12:29,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3506176. Throughput: 0: 777.4. Samples: 876408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:12:29,345][00973] Avg episode reward: [(0, '22.592')] |
|
[2023-02-25 10:12:29,362][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth... |
|
[2023-02-25 10:12:29,535][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000659_2699264.pth |
|
[2023-02-25 10:12:29,549][11765] Saving new best policy, reward=22.592! |
|
[2023-02-25 10:12:34,105][11779] Updated weights for policy 0, policy_version 860 (0.0026) |
|
[2023-02-25 10:12:34,340][00973] Fps is (10 sec: 2867.6, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 3522560. Throughput: 0: 779.2. Samples: 880712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:12:34,345][00973] Avg episode reward: [(0, '23.969')] |
|
[2023-02-25 10:12:34,352][11765] Saving new best policy, reward=23.969! |
|
[2023-02-25 10:12:39,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.5, 300 sec: 3415.6). Total num frames: 3543040. Throughput: 0: 790.0. Samples: 883800. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:12:39,350][00973] Avg episode reward: [(0, '24.993')] |
|
[2023-02-25 10:12:39,361][11765] Saving new best policy, reward=24.993! |
|
[2023-02-25 10:12:44,341][00973] Fps is (10 sec: 3686.1, 60 sec: 3208.5, 300 sec: 3387.9). Total num frames: 3559424. Throughput: 0: 792.2. Samples: 890338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-02-25 10:12:44,344][00973] Avg episode reward: [(0, '24.266')] |
|
[2023-02-25 10:12:44,434][11779] Updated weights for policy 0, policy_version 870 (0.0012) |
|
[2023-02-25 10:12:49,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3387.9). Total num frames: 3575808. Throughput: 0: 764.7. Samples: 894432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:12:49,344][00973] Avg episode reward: [(0, '23.781')] |
|
[2023-02-25 10:12:54,340][00973] Fps is (10 sec: 3277.1, 60 sec: 3208.5, 300 sec: 3401.8). Total num frames: 3592192. Throughput: 0: 766.7. Samples: 896438. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:12:54,345][00973] Avg episode reward: [(0, '23.545')] |
|
[2023-02-25 10:12:57,340][11779] Updated weights for policy 0, policy_version 880 (0.0032) |
|
[2023-02-25 10:12:59,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3401.8). Total num frames: 3612672. Throughput: 0: 787.3. Samples: 902064. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:12:59,343][00973] Avg episode reward: [(0, '21.619')] |
|
[2023-02-25 10:13:04,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3208.6, 300 sec: 3387.9). Total num frames: 3629056. Throughput: 0: 836.9. Samples: 908428. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:13:04,343][00973] Avg episode reward: [(0, '21.183')] |
|
[2023-02-25 10:13:08,808][11779] Updated weights for policy 0, policy_version 890 (0.0025) |
|
[2023-02-25 10:13:09,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3387.9). Total num frames: 3645440. Throughput: 0: 836.3. Samples: 910482. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-02-25 10:13:09,344][00973] Avg episode reward: [(0, '20.958')] |
|
[2023-02-25 10:13:14,343][00973] Fps is (10 sec: 2866.2, 60 sec: 3140.4, 300 sec: 3374.0). Total num frames: 3657728. Throughput: 0: 849.6. Samples: 914642. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:13:14,350][00973] Avg episode reward: [(0, '21.649')] |
|
[2023-02-25 10:13:19,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 3682304. Throughput: 0: 885.2. Samples: 920544. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:13:19,343][00973] Avg episode reward: [(0, '22.709')] |
|
[2023-02-25 10:13:20,096][11779] Updated weights for policy 0, policy_version 900 (0.0025) |
|
[2023-02-25 10:13:24,341][00973] Fps is (10 sec: 4506.5, 60 sec: 3481.6, 300 sec: 3401.7). Total num frames: 3702784. Throughput: 0: 885.8. Samples: 923662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:13:24,348][00973] Avg episode reward: [(0, '22.951')] |
|
[2023-02-25 10:13:29,340][00973] Fps is (10 sec: 3276.7, 60 sec: 3481.6, 300 sec: 3387.9). Total num frames: 3715072. Throughput: 0: 852.8. Samples: 928712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:13:29,343][00973] Avg episode reward: [(0, '23.482')] |
|
[2023-02-25 10:13:33,434][11779] Updated weights for policy 0, policy_version 910 (0.0021) |
|
[2023-02-25 10:13:34,345][00973] Fps is (10 sec: 2456.6, 60 sec: 3413.0, 300 sec: 3373.9). Total num frames: 3727360. Throughput: 0: 845.8. Samples: 932496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:13:34,348][00973] Avg episode reward: [(0, '22.566')] |
|
[2023-02-25 10:13:39,340][00973] Fps is (10 sec: 3276.9, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 3747840. Throughput: 0: 853.8. Samples: 934860. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:13:39,345][00973] Avg episode reward: [(0, '23.062')] |
|
[2023-02-25 10:13:43,773][11779] Updated weights for policy 0, policy_version 920 (0.0026) |
|
[2023-02-25 10:13:44,340][00973] Fps is (10 sec: 4098.3, 60 sec: 3481.7, 300 sec: 3401.8). Total num frames: 3768320. Throughput: 0: 870.2. Samples: 941224. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:13:44,346][00973] Avg episode reward: [(0, '23.130')] |
|
[2023-02-25 10:13:49,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3388.0). Total num frames: 3784704. Throughput: 0: 840.5. Samples: 946250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:13:49,345][00973] Avg episode reward: [(0, '22.801')] |
|
[2023-02-25 10:13:54,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3796992. Throughput: 0: 837.6. Samples: 948172. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:13:54,342][00973] Avg episode reward: [(0, '23.083')] |
|
[2023-02-25 10:13:58,032][11779] Updated weights for policy 0, policy_version 930 (0.0015) |
|
[2023-02-25 10:13:59,340][00973] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3374.0). Total num frames: 3813376. Throughput: 0: 839.8. Samples: 952430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-02-25 10:13:59,343][00973] Avg episode reward: [(0, '22.552')] |
|
[2023-02-25 10:14:04,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3374.0). Total num frames: 3833856. Throughput: 0: 844.0. Samples: 958522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:14:04,346][00973] Avg episode reward: [(0, '22.678')] |
|
[2023-02-25 10:14:09,305][11779] Updated weights for policy 0, policy_version 940 (0.0015) |
|
[2023-02-25 10:14:09,340][00973] Fps is (10 sec: 3686.4, 60 sec: 3413.3, 300 sec: 3360.1). Total num frames: 3850240. Throughput: 0: 839.5. Samples: 961440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:14:09,346][00973] Avg episode reward: [(0, '22.876')] |
|
[2023-02-25 10:14:14,340][00973] Fps is (10 sec: 2867.0, 60 sec: 3413.5, 300 sec: 3346.2). Total num frames: 3862528. Throughput: 0: 807.3. Samples: 965042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-02-25 10:14:14,344][00973] Avg episode reward: [(0, '22.983')] |
|
[2023-02-25 10:14:19,340][00973] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 3346.2). Total num frames: 3874816. Throughput: 0: 810.6. Samples: 968970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:14:19,349][00973] Avg episode reward: [(0, '23.801')] |
|
[2023-02-25 10:14:22,617][11779] Updated weights for policy 0, policy_version 950 (0.0047) |
|
[2023-02-25 10:14:24,340][00973] Fps is (10 sec: 3277.0, 60 sec: 3208.6, 300 sec: 3360.1). Total num frames: 3895296. Throughput: 0: 830.4. Samples: 972226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:14:24,342][00973] Avg episode reward: [(0, '23.321')] |
|
[2023-02-25 10:14:29,342][00973] Fps is (10 sec: 4095.0, 60 sec: 3345.0, 300 sec: 3387.9). Total num frames: 3915776. Throughput: 0: 828.3. Samples: 978498. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:14:29,350][00973] Avg episode reward: [(0, '23.643')] |
|
[2023-02-25 10:14:29,362][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000956_3915776.pth... |
|
[2023-02-25 10:14:29,570][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000757_3100672.pth |
|
[2023-02-25 10:14:34,340][00973] Fps is (10 sec: 3276.8, 60 sec: 3345.4, 300 sec: 3387.9). Total num frames: 3928064. Throughput: 0: 814.8. Samples: 982916. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:14:34,342][00973] Avg episode reward: [(0, '23.400')] |
|
[2023-02-25 10:14:34,411][11779] Updated weights for policy 0, policy_version 960 (0.0019) |
|
[2023-02-25 10:14:39,340][00973] Fps is (10 sec: 2867.8, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3944448. Throughput: 0: 817.0. Samples: 984938. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:14:39,346][00973] Avg episode reward: [(0, '22.333')] |
|
[2023-02-25 10:14:44,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3276.8, 300 sec: 3401.8). Total num frames: 3964928. Throughput: 0: 841.7. Samples: 990306. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-02-25 10:14:44,348][00973] Avg episode reward: [(0, '20.671')] |
|
[2023-02-25 10:14:45,788][11779] Updated weights for policy 0, policy_version 970 (0.0026) |
|
[2023-02-25 10:14:49,340][00973] Fps is (10 sec: 4096.1, 60 sec: 3345.1, 300 sec: 3401.8). Total num frames: 3985408. Throughput: 0: 850.8. Samples: 996810. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:14:49,347][00973] Avg episode reward: [(0, '20.031')] |
|
[2023-02-25 10:14:54,340][00973] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3401.8). Total num frames: 4001792. Throughput: 0: 837.7. Samples: 999136. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-02-25 10:14:54,346][00973] Avg episode reward: [(0, '20.402')] |
|
[2023-02-25 10:14:55,613][11765] Stopping Batcher_0... |
|
[2023-02-25 10:14:55,614][11765] Loop batcher_evt_loop terminating... |
|
[2023-02-25 10:14:55,614][00973] Component Batcher_0 stopped! |
|
[2023-02-25 10:14:55,620][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-02-25 10:14:55,654][11779] Weights refcount: 2 0 |
|
[2023-02-25 10:14:55,670][11779] Stopping InferenceWorker_p0-w0... |
|
[2023-02-25 10:14:55,671][00973] Component InferenceWorker_p0-w0 stopped! |
|
[2023-02-25 10:14:55,672][11779] Loop inference_proc0-0_evt_loop terminating... |
|
[2023-02-25 10:14:55,703][11783] Stopping RolloutWorker_w4... |
|
[2023-02-25 10:14:55,701][11787] Stopping RolloutWorker_w6... |
|
[2023-02-25 10:14:55,704][00973] Component RolloutWorker_w6 stopped! |
|
[2023-02-25 10:14:55,706][00973] Component RolloutWorker_w4 stopped! |
|
[2023-02-25 10:14:55,705][11783] Loop rollout_proc4_evt_loop terminating... |
|
[2023-02-25 10:14:55,706][11787] Loop rollout_proc6_evt_loop terminating... |
|
[2023-02-25 10:14:55,729][00973] Component RolloutWorker_w0 stopped! |
|
[2023-02-25 10:14:55,728][11780] Stopping RolloutWorker_w0... |
|
[2023-02-25 10:14:55,746][11780] Loop rollout_proc0_evt_loop terminating... |
|
[2023-02-25 10:14:55,763][11782] Stopping RolloutWorker_w2... |
|
[2023-02-25 10:14:55,763][00973] Component RolloutWorker_w2 stopped! |
|
[2023-02-25 10:14:55,766][11782] Loop rollout_proc2_evt_loop terminating... |
|
[2023-02-25 10:14:55,783][00973] Component RolloutWorker_w3 stopped! |
|
[2023-02-25 10:14:55,786][11784] Stopping RolloutWorker_w3... |
|
[2023-02-25 10:14:55,788][11784] Loop rollout_proc3_evt_loop terminating... |
|
[2023-02-25 10:14:55,812][00973] Component RolloutWorker_w5 stopped! |
|
[2023-02-25 10:14:55,815][11785] Stopping RolloutWorker_w5... |
|
[2023-02-25 10:14:55,816][11785] Loop rollout_proc5_evt_loop terminating... |
|
[2023-02-25 10:14:55,846][00973] Component RolloutWorker_w7 stopped! |
|
[2023-02-25 10:14:55,848][11786] Stopping RolloutWorker_w7... |
|
[2023-02-25 10:14:55,852][11786] Loop rollout_proc7_evt_loop terminating... |
|
[2023-02-25 10:14:55,869][00973] Component RolloutWorker_w1 stopped! |
|
[2023-02-25 10:14:55,871][11781] Stopping RolloutWorker_w1... |
|
[2023-02-25 10:14:55,873][11781] Loop rollout_proc1_evt_loop terminating... |
|
[2023-02-25 10:14:55,893][11765] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth |
|
[2023-02-25 10:14:55,905][11765] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-02-25 10:14:56,195][00973] Component LearnerWorker_p0 stopped! |
|
[2023-02-25 10:14:56,198][00973] Waiting for process learner_proc0 to stop... |
|
[2023-02-25 10:14:56,202][11765] Stopping LearnerWorker_p0... |
|
[2023-02-25 10:14:56,202][11765] Loop learner_proc0_evt_loop terminating... |
|
[2023-02-25 10:14:58,355][00973] Waiting for process inference_proc0-0 to join... |
|
[2023-02-25 10:14:59,128][00973] Waiting for process rollout_proc0 to join... |
|
[2023-02-25 10:14:59,190][00973] Waiting for process rollout_proc1 to join... |
|
[2023-02-25 10:14:59,786][00973] Waiting for process rollout_proc2 to join... |
|
[2023-02-25 10:14:59,788][00973] Waiting for process rollout_proc3 to join... |
|
[2023-02-25 10:14:59,790][00973] Waiting for process rollout_proc4 to join... |
|
[2023-02-25 10:14:59,791][00973] Waiting for process rollout_proc5 to join... |
|
[2023-02-25 10:14:59,792][00973] Waiting for process rollout_proc6 to join... |
|
[2023-02-25 10:14:59,794][00973] Waiting for process rollout_proc7 to join... |
|
[2023-02-25 10:14:59,795][00973] Batcher 0 profile tree view: |
|
batching: 26.3925, releasing_batches: 0.0252 |
|
[2023-02-25 10:14:59,797][00973] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000

  wait_policy_total: 581.9217

update_model: 8.1537

  weight_update: 0.0023

one_step: 0.0024

  handle_policy_step: 558.6319

    deserialize: 15.7481, stack: 3.1351, obs_to_device_normalize: 119.7986, forward: 274.0754, send_messages: 27.8994

    prepare_outputs: 90.0917

      to_cpu: 56.2430
|
[2023-02-25 10:14:59,799][00973] Learner 0 profile tree view: |
|
misc: 0.0071, prepare_batch: 17.3469

train: 77.7597

  epoch_init: 0.0120, minibatch_init: 0.0061, losses_postprocess: 0.6025, kl_divergence: 0.5279, after_optimizer: 33.2662

  calculate_losses: 27.6484

    losses_init: 0.0036, forward_head: 1.8058, bptt_initial: 18.1834, tail: 1.1459, advantages_returns: 0.3229, losses: 3.4555

    bptt: 2.3611

      bptt_forward_core: 2.2454

  update: 15.0283

    clip: 1.4433
|
[2023-02-25 10:14:59,801][00973] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.4007, enqueue_policy_requests: 163.2068, env_step: 891.1734, overhead: 24.9476, complete_rollouts: 7.8290

save_policy_outputs: 22.3440

  split_output_tensors: 10.8092
|
[2023-02-25 10:14:59,802][00973] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3332, enqueue_policy_requests: 169.1143, env_step: 886.7080, overhead: 23.7999, complete_rollouts: 7.5443

save_policy_outputs: 21.8265

  split_output_tensors: 10.5381
|
[2023-02-25 10:14:59,806][00973] Loop Runner_EvtLoop terminating... |
|
[2023-02-25 10:14:59,808][00973] Runner profile tree view: |
|
main_loop: 1224.8012 |
|
[2023-02-25 10:14:59,810][00973] Collected {0: 4005888}, FPS: 3270.6 |
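As a sanity check, the final figure is consistent with the profiler output just above: 4,005,888 environment frames over a 1224.8-second main loop gives 4005888 / 1224.8 ≈ 3270.6 FPS, exactly the reported aggregate throughput.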
|
[2023-02-25 10:15:00,039][00973] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2023-02-25 10:15:00,040][00973] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2023-02-25 10:15:00,042][00973] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2023-02-25 10:15:00,043][00973] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2023-02-25 10:15:00,045][00973] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-02-25 10:15:00,047][00973] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2023-02-25 10:15:00,048][00973] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-02-25 10:15:00,049][00973] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2023-02-25 10:15:00,051][00973] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2023-02-25 10:15:00,052][00973] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2023-02-25 10:15:00,054][00973] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2023-02-25 10:15:00,055][00973] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2023-02-25 10:15:00,057][00973] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2023-02-25 10:15:00,058][00973] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2023-02-25 10:15:00,060][00973] Using frameskip 1 and render_action_repeat=4 for evaluation |
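The block above is the evaluation ("enjoy") entry point re-reading the saved training config and layering command-line arguments on top: keys that already exist (num_workers) are overridden, while keys absent from the saved file (no_render, save_video, ...) are added with a warning. A hedged sketch of that merge, with illustrative names:

```python
import json

def load_cfg_with_overrides(cfg_path, overrides, log=print):
    """Merge CLI overrides into a saved experiment config."""
    with open(cfg_path) as f:
        cfg = json.load(f)
    for key, value in overrides.items():
        if key in cfg:
            log(f"Overriding arg '{key}' with value {value} passed from command line")
        else:
            log(f"Adding new argument '{key}'={value} that is not in the saved config file!")
        cfg[key] = value
    return cfg

cfg = load_cfg_with_overrides(
    "/content/train_dir/default_experiment/config.json",
    {"num_workers": 1, "no_render": True},
)
```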
|
[2023-02-25 10:15:00,100][00973] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:15:00,104][00973] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 10:15:00,107][00973] RunningMeanStd input shape: (1,) |
|
[2023-02-25 10:15:00,127][00973] ConvEncoder: input_channels=3 |
|
[2023-02-25 10:15:00,777][00973] Conv encoder output size: 512 |
|
[2023-02-25 10:15:00,779][00973] Policy head output size: 512 |
|
[2023-02-25 10:15:03,825][00973] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-02-25 10:15:05,246][00973] Num frames 100... |
|
[2023-02-25 10:15:05,360][00973] Num frames 200... |
|
[2023-02-25 10:15:05,475][00973] Num frames 300... |
|
[2023-02-25 10:15:05,592][00973] Num frames 400... |
|
[2023-02-25 10:15:05,710][00973] Num frames 500... |
|
[2023-02-25 10:15:05,819][00973] Num frames 600... |
|
[2023-02-25 10:15:05,935][00973] Num frames 700... |
|
[2023-02-25 10:15:06,071][00973] Avg episode rewards: #0: 14.700, true rewards: #0: 7.700 |
|
[2023-02-25 10:15:06,072][00973] Avg episode reward: 14.700, avg true_objective: 7.700 |
|
[2023-02-25 10:15:06,110][00973] Num frames 800... |
|
[2023-02-25 10:15:06,227][00973] Num frames 900... |
|
[2023-02-25 10:15:06,345][00973] Num frames 1000... |
|
[2023-02-25 10:15:06,453][00973] Num frames 1100... |
|
[2023-02-25 10:15:06,568][00973] Num frames 1200... |
|
[2023-02-25 10:15:06,687][00973] Num frames 1300... |
|
[2023-02-25 10:15:06,801][00973] Num frames 1400... |
|
[2023-02-25 10:15:06,916][00973] Num frames 1500... |
|
[2023-02-25 10:15:07,026][00973] Num frames 1600... |
|
[2023-02-25 10:15:07,142][00973] Num frames 1700... |
|
[2023-02-25 10:15:07,263][00973] Num frames 1800... |
|
[2023-02-25 10:15:07,381][00973] Num frames 1900... |
|
[2023-02-25 10:15:07,462][00973] Avg episode rewards: #0: 20.110, true rewards: #0: 9.610 |
|
[2023-02-25 10:15:07,463][00973] Avg episode reward: 20.110, avg true_objective: 9.610 |
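These per-episode summaries are cumulative means over the episodes finished so far, which is why they move around: episode 1 scored 14.700, so the jump to a two-episode average of 20.110 implies an episode-2 reward of 2 * 20.110 - 14.700 = 25.520. The separate "true rewards" column tracks the raw objective before reward scaling/shaping. A one-line check of that inference:

```python
rewards = [14.700, 25.520]  # episode-2 value inferred from the two logged averages
print(round(sum(rewards) / len(rewards), 3))  # 20.11
```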
|
[2023-02-25 10:15:07,562][00973] Num frames 2000... |
|
[2023-02-25 10:15:07,695][00973] Num frames 2100... |
|
[2023-02-25 10:15:07,822][00973] Num frames 2200... |
|
[2023-02-25 10:15:07,949][00973] Num frames 2300... |
|
[2023-02-25 10:15:08,095][00973] Avg episode rewards: #0: 15.577, true rewards: #0: 7.910 |
|
[2023-02-25 10:15:08,096][00973] Avg episode reward: 15.577, avg true_objective: 7.910 |
|
[2023-02-25 10:15:08,132][00973] Num frames 2400... |
|
[2023-02-25 10:15:08,253][00973] Num frames 2500... |
|
[2023-02-25 10:15:08,368][00973] Num frames 2600... |
|
[2023-02-25 10:15:08,477][00973] Num frames 2700... |
|
[2023-02-25 10:15:08,588][00973] Num frames 2800... |
|
[2023-02-25 10:15:08,705][00973] Num frames 2900... |
|
[2023-02-25 10:15:08,820][00973] Num frames 3000... |
|
[2023-02-25 10:15:08,933][00973] Num frames 3100... |
|
[2023-02-25 10:15:09,042][00973] Num frames 3200... |
|
[2023-02-25 10:15:09,152][00973] Num frames 3300... |
|
[2023-02-25 10:15:09,271][00973] Num frames 3400... |
|
[2023-02-25 10:15:09,357][00973] Avg episode rewards: #0: 18.315, true rewards: #0: 8.565 |
|
[2023-02-25 10:15:09,358][00973] Avg episode reward: 18.315, avg true_objective: 8.565 |
|
[2023-02-25 10:15:09,451][00973] Num frames 3500... |
|
[2023-02-25 10:15:09,581][00973] Num frames 3600... |
|
[2023-02-25 10:15:09,752][00973] Num frames 3700... |
|
[2023-02-25 10:15:09,907][00973] Num frames 3800... |
|
[2023-02-25 10:15:10,070][00973] Num frames 3900... |
|
[2023-02-25 10:15:10,229][00973] Num frames 4000... |
|
[2023-02-25 10:15:10,383][00973] Num frames 4100... |
|
[2023-02-25 10:15:10,536][00973] Num frames 4200... |
|
[2023-02-25 10:15:10,695][00973] Num frames 4300... |
|
[2023-02-25 10:15:10,854][00973] Num frames 4400... |
|
[2023-02-25 10:15:11,008][00973] Num frames 4500... |
|
[2023-02-25 10:15:11,181][00973] Num frames 4600... |
|
[2023-02-25 10:15:11,338][00973] Num frames 4700... |
|
[2023-02-25 10:15:11,407][00973] Avg episode rewards: #0: 20.216, true rewards: #0: 9.416 |
|
[2023-02-25 10:15:11,409][00973] Avg episode reward: 20.216, avg true_objective: 9.416 |
|
[2023-02-25 10:15:11,557][00973] Num frames 4800... |
|
[2023-02-25 10:15:11,714][00973] Num frames 4900... |
|
[2023-02-25 10:15:11,878][00973] Num frames 5000... |
|
[2023-02-25 10:15:12,039][00973] Num frames 5100... |
|
[2023-02-25 10:15:12,197][00973] Num frames 5200... |
|
[2023-02-25 10:15:12,361][00973] Num frames 5300... |
|
[2023-02-25 10:15:12,525][00973] Avg episode rewards: #0: 18.947, true rewards: #0: 8.947 |
|
[2023-02-25 10:15:12,528][00973] Avg episode reward: 18.947, avg true_objective: 8.947 |
|
[2023-02-25 10:15:12,589][00973] Num frames 5400... |
|
[2023-02-25 10:15:12,745][00973] Num frames 5500... |
|
[2023-02-25 10:15:12,910][00973] Num frames 5600... |
|
[2023-02-25 10:15:13,071][00973] Num frames 5700... |
|
[2023-02-25 10:15:13,193][00973] Avg episode rewards: #0: 17.217, true rewards: #0: 8.217 |
|
[2023-02-25 10:15:13,196][00973] Avg episode reward: 17.217, avg true_objective: 8.217 |
|
[2023-02-25 10:15:13,255][00973] Num frames 5800... |
|
[2023-02-25 10:15:13,379][00973] Num frames 5900... |
|
[2023-02-25 10:15:13,499][00973] Num frames 6000... |
|
[2023-02-25 10:15:13,613][00973] Num frames 6100... |
|
[2023-02-25 10:15:13,728][00973] Num frames 6200... |
|
[2023-02-25 10:15:13,844][00973] Num frames 6300... |
|
[2023-02-25 10:15:14,021][00973] Avg episode rewards: #0: 17.115, true rewards: #0: 7.990 |
|
[2023-02-25 10:15:14,023][00973] Avg episode reward: 17.115, avg true_objective: 7.990 |
|
[2023-02-25 10:15:14,038][00973] Num frames 6400... |
|
[2023-02-25 10:15:14,151][00973] Num frames 6500... |
|
[2023-02-25 10:15:14,264][00973] Num frames 6600... |
|
[2023-02-25 10:15:14,378][00973] Num frames 6700... |
|
[2023-02-25 10:15:14,491][00973] Num frames 6800... |
|
[2023-02-25 10:15:14,606][00973] Num frames 6900... |
|
[2023-02-25 10:15:14,725][00973] Num frames 7000... |
|
[2023-02-25 10:15:14,835][00973] Num frames 7100... |
|
[2023-02-25 10:15:14,955][00973] Num frames 7200... |
|
[2023-02-25 10:15:15,065][00973] Num frames 7300... |
|
[2023-02-25 10:15:15,174][00973] Num frames 7400... |
|
[2023-02-25 10:15:15,295][00973] Num frames 7500... |
|
[2023-02-25 10:15:15,411][00973] Num frames 7600... |
|
[2023-02-25 10:15:15,526][00973] Num frames 7700... |
|
[2023-02-25 10:15:15,643][00973] Num frames 7800... |
|
[2023-02-25 10:15:15,779][00973] Avg episode rewards: #0: 19.302, true rewards: #0: 8.747 |
|
[2023-02-25 10:15:15,780][00973] Avg episode reward: 19.302, avg true_objective: 8.747 |
|
[2023-02-25 10:15:15,819][00973] Num frames 7900... |
|
[2023-02-25 10:15:15,943][00973] Num frames 8000... |
|
[2023-02-25 10:15:16,053][00973] Num frames 8100... |
|
[2023-02-25 10:15:16,177][00973] Num frames 8200... |
|
[2023-02-25 10:15:16,291][00973] Num frames 8300... |
|
[2023-02-25 10:15:16,404][00973] Num frames 8400... |
|
[2023-02-25 10:15:16,516][00973] Num frames 8500... |
|
[2023-02-25 10:15:16,634][00973] Num frames 8600... |
|
[2023-02-25 10:15:16,749][00973] Num frames 8700... |
|
[2023-02-25 10:15:16,864][00973] Num frames 8800... |
|
[2023-02-25 10:15:16,985][00973] Num frames 8900... |
|
[2023-02-25 10:15:17,102][00973] Num frames 9000... |
|
[2023-02-25 10:15:17,218][00973] Num frames 9100... |
|
[2023-02-25 10:15:17,330][00973] Num frames 9200... |
|
[2023-02-25 10:15:17,447][00973] Num frames 9300... |
|
[2023-02-25 10:15:17,568][00973] Num frames 9400... |
|
[2023-02-25 10:15:17,683][00973] Num frames 9500... |
|
[2023-02-25 10:15:17,793][00973] Num frames 9600... |
|
[2023-02-25 10:15:17,908][00973] Num frames 9700... |
|
[2023-02-25 10:15:18,029][00973] Num frames 9800... |
|
[2023-02-25 10:15:18,139][00973] Num frames 9900... |
|
[2023-02-25 10:15:18,276][00973] Avg episode rewards: #0: 23.572, true rewards: #0: 9.972 |
|
[2023-02-25 10:15:18,278][00973] Avg episode reward: 23.572, avg true_objective: 9.972 |
|
[2023-02-25 10:16:22,375][00973] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2023-02-25 10:26:19,083][22177] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2023-02-25 10:26:19,085][22177] Rollout worker 0 uses device cpu |
|
[2023-02-25 10:26:19,087][22177] Rollout worker 1 uses device cpu |
|
[2023-02-25 10:26:19,088][22177] Rollout worker 2 uses device cpu |
|
[2023-02-25 10:26:19,090][22177] Rollout worker 3 uses device cpu |
|
[2023-02-25 10:26:19,091][22177] Rollout worker 4 uses device cpu |
|
[2023-02-25 10:26:19,092][22177] Rollout worker 5 uses device cpu |
|
[2023-02-25 10:26:19,093][22177] Rollout worker 6 uses device cpu |
|
[2023-02-25 10:26:19,095][22177] Rollout worker 7 uses device cpu |
|
[2023-02-25 10:26:19,425][22177] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 10:26:19,429][22177] InferenceWorker_p0-w0: min num requests: 2 |
|
[2023-02-25 10:26:19,507][22177] Starting all processes... |
|
[2023-02-25 10:26:19,509][22177] Starting process learner_proc0 |
|
[2023-02-25 10:26:19,630][22177] Starting all processes... |
|
[2023-02-25 10:26:19,660][22177] Starting process inference_proc0-0 |
|
[2023-02-25 10:26:19,665][22177] Starting process rollout_proc2 |
|
[2023-02-25 10:26:19,665][22177] Starting process rollout_proc1 |
|
[2023-02-25 10:26:19,661][22177] Starting process rollout_proc0 |
|
[2023-02-25 10:26:19,666][22177] Starting process rollout_proc3 |
|
[2023-02-25 10:26:19,666][22177] Starting process rollout_proc4 |
|
[2023-02-25 10:26:19,666][22177] Starting process rollout_proc5 |
|
[2023-02-25 10:26:19,666][22177] Starting process rollout_proc6 |
|
[2023-02-25 10:26:19,666][22177] Starting process rollout_proc7 |
|
[2023-02-25 10:26:29,701][22567] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 10:26:29,701][22567] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2023-02-25 10:26:30,480][22581] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 10:26:30,480][22581] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2023-02-25 10:26:30,617][22589] Worker 7 uses CPU cores [1] |
|
[2023-02-25 10:26:31,124][22582] Worker 2 uses CPU cores [0] |
|
[2023-02-25 10:26:31,141][22586] Worker 5 uses CPU cores [1] |
|
[2023-02-25 10:26:31,239][22583] Worker 1 uses CPU cores [1] |
|
[2023-02-25 10:26:31,245][22584] Worker 0 uses CPU cores [0] |
|
[2023-02-25 10:26:31,279][22588] Worker 6 uses CPU cores [0] |
|
[2023-02-25 10:26:31,373][22587] Worker 4 uses CPU cores [0] |
|
[2023-02-25 10:26:31,421][22585] Worker 3 uses CPU cores [1] |
|
[2023-02-25 10:26:31,698][22581] Num visible devices: 1 |
|
[2023-02-25 10:26:31,705][22567] Num visible devices: 1 |
|
[2023-02-25 10:26:31,758][22567] Starting seed is not provided |
|
[2023-02-25 10:26:31,758][22567] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2023-02-25 10:26:31,759][22567] Initializing actor-critic model on device cuda:0 |
|
[2023-02-25 10:26:31,759][22567] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 10:26:31,775][22567] RunningMeanStd input shape: (1,) |
|
[2023-02-25 10:26:31,826][22567] ConvEncoder: input_channels=3 |
|
[2023-02-25 10:26:32,100][22567] Conv encoder output size: 512 |
|
[2023-02-25 10:26:32,102][22567] Policy head output size: 512 |
|
[2023-02-25 10:26:32,186][22567] Created Actor Critic model with architecture: |
|
[2023-02-25 10:26:32,192][22567] ActorCriticSharedWeights(

  (obs_normalizer): ObservationNormalizer(

    (running_mean_std): RunningMeanStdDictInPlace(

      (running_mean_std): ModuleDict(

        (obs): RunningMeanStdInPlace()

      )

    )

  )

  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)

  (encoder): VizdoomEncoder(

    (basic_encoder): ConvEncoder(

      (enc): RecursiveScriptModule(

        original_name=ConvEncoderImpl

        (conv_head): RecursiveScriptModule(

          original_name=Sequential

          (0): RecursiveScriptModule(original_name=Conv2d)

          (1): RecursiveScriptModule(original_name=ELU)

          (2): RecursiveScriptModule(original_name=Conv2d)

          (3): RecursiveScriptModule(original_name=ELU)

          (4): RecursiveScriptModule(original_name=Conv2d)

          (5): RecursiveScriptModule(original_name=ELU)

        )

        (mlp_layers): RecursiveScriptModule(

          original_name=Sequential

          (0): RecursiveScriptModule(original_name=Linear)

          (1): RecursiveScriptModule(original_name=ELU)

        )

      )

    )

  )

  (core): ModelCoreRNN(

    (core): GRU(512, 512)

  )

  (decoder): MlpDecoder(

    (mlp): Identity()

  )

  (critic_linear): Linear(in_features=512, out_features=1, bias=True)

  (action_parameterization): ActionParameterizationDefault(

    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)

  )

)
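The printed module tree corresponds to a compact recurrent actor-critic: a three-layer conv encoder flattens the (3, 72, 128) observation into a 512-dim embedding, a GRU(512, 512) core adds memory, and two linear heads emit the value estimate and the 5 action logits. A rough PyTorch equivalent is sketched below; the conv kernel sizes and strides are assumptions (the log shows layer types, not shapes), and the observation/returns normalizers are omitted.

```python
import torch
import torch.nn as nn

class TinyActorCritic(nn.Module):
    """Hand-written rough equivalent of the module tree above; not the
    actual Sample Factory classes, and conv shapes are assumptions."""

    def __init__(self, num_actions=5, hidden=512):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ELU(),
        )
        with torch.no_grad():  # infer the flattened size for (3, 72, 128) inputs
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).numel()
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        self.core = nn.GRU(hidden, hidden)
        self.critic_linear = nn.Linear(hidden, 1)
        self.distribution_linear = nn.Linear(hidden, num_actions)

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # sequence length 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

logits, value, _ = TinyActorCritic()(torch.zeros(4, 3, 72, 128))
print(logits.shape, value.shape)  # torch.Size([4, 5]) torch.Size([4, 1])
```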
|
[2023-02-25 10:26:39,404][22177] Heartbeat connected on Batcher_0 |
|
[2023-02-25 10:26:39,425][22177] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2023-02-25 10:26:39,440][22177] Heartbeat connected on RolloutWorker_w0 |
|
[2023-02-25 10:26:39,453][22177] Heartbeat connected on RolloutWorker_w1 |
|
[2023-02-25 10:26:39,464][22177] Heartbeat connected on RolloutWorker_w2 |
|
[2023-02-25 10:26:39,467][22177] Heartbeat connected on RolloutWorker_w3 |
|
[2023-02-25 10:26:39,478][22177] Heartbeat connected on RolloutWorker_w4 |
|
[2023-02-25 10:26:39,487][22177] Heartbeat connected on RolloutWorker_w5 |
|
[2023-02-25 10:26:39,499][22177] Heartbeat connected on RolloutWorker_w6 |
|
[2023-02-25 10:26:39,505][22177] Heartbeat connected on RolloutWorker_w7 |
|
[2023-02-25 10:26:39,702][22567] Using optimizer <class 'torch.optim.adam.Adam'> |
|
[2023-02-25 10:26:39,703][22567] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-02-25 10:26:39,759][22567] Loading model from checkpoint |
|
[2023-02-25 10:26:39,766][22567] Loaded experiment state at self.train_step=978, self.env_steps=4005888 |
|
[2023-02-25 10:26:39,767][22567] Initialized policy 0 weights for model version 978 |
|
[2023-02-25 10:26:39,772][22567] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
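The learner resumed from step 978 rather than starting fresh: both the policy weights and the Adam optimizer state come back from the .pth file. A rough sketch of what that resume amounts to; the checkpoint dict keys here are assumptions for illustration, not sample-factory's guaranteed format.

```python
import torch
from torch import nn, optim

def resume_from_checkpoint(path: str, model: nn.Module, optimizer: optim.Optimizer):
    """Restore weights, optimizer state, and step counters from a checkpoint.
    Dict keys ('model', 'optimizer', ...) are assumed for illustration."""
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    # The log reports self.train_step=978, self.env_steps=4005888 at this point.
    return checkpoint.get("train_step", 0), checkpoint.get("env_steps", 0)
```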
|
[2023-02-25 10:26:39,775][22177] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 10:26:39,781][22567] LearnerWorker_p0 finished initialization! |
|
[2023-02-25 10:26:39,782][22177] Heartbeat connected on LearnerWorker_p0 |
|
[2023-02-25 10:26:39,971][22581] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 10:26:39,972][22581] RunningMeanStd input shape: (1,) |
|
[2023-02-25 10:26:40,083][22581] ConvEncoder: input_channels=3 |
|
[2023-02-25 10:26:40,395][22581] Conv encoder output size: 512 |
|
[2023-02-25 10:26:40,396][22581] Policy head output size: 512 |
|
[2023-02-25 10:26:42,715][22177] Inference worker 0-0 is ready! |
|
[2023-02-25 10:26:42,718][22177] All inference workers are ready! Signal rollout workers to start! |
|
[2023-02-25 10:26:42,814][22589] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,816][22586] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,817][22585] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,818][22583] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,820][22587] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,822][22582] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,823][22584] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:26:42,824][22588] Doom resolution: 160x120, resize resolution: (128, 72) |
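Each rollout worker renders Doom at 160x120 and downscales to the (128, 72) observation the encoder expects. A sketch of that preprocessing step, using OpenCV as a stand-in for the env wrapper that actually does the resize:

```python
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Downscale a 120x160x3 Doom render to the logged observation size.
    Illustrative only; the real resize lives in the env wrapper stack."""
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)  # (w, h)
    # RunningMeanStd input shape is (3, 72, 128): channels-first for the encoder.
    return resized.transpose(2, 0, 1)
```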
|
[2023-02-25 10:26:43,623][22585] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:43,628][22586] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:43,973][22585] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:44,043][22587] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:44,046][22584] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:44,051][22588] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:44,749][22582] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:44,775][22177] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 10:26:44,993][22588] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:44,995][22584] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:45,034][22585] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:45,177][22586] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:45,203][22583] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:45,782][22585] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:46,802][22582] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:47,304][22584] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:47,324][22588] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:47,567][22587] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:47,877][22586] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:49,775][22177] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 9.4. Samples: 94. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 10:26:49,783][22177] Avg episode reward: [(0, '0.800')] |
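These periodic reports average throughput over trailing 10, 60, and 300 second windows, which is why the very first report shows nan: there are not yet two samples to difference. A minimal sketch of that windowed readout (assumed bookkeeping, not sample-factory's code):

```python
import time
from collections import deque

class FpsTracker:
    """Windowed frames-per-second readout, as in the 'Fps is (...)' lines."""

    def __init__(self):
        self.samples = deque()  # (timestamp, total_env_frames)

    def record(self, total_frames: int) -> None:
        now = time.time()
        self.samples.append((now, total_frames))
        while self.samples and now - self.samples[0][0] > 300:
            self.samples.popleft()  # keep only what the largest window needs

    def fps(self, window: float) -> float:
        now = time.time()
        in_window = [(t, f) for t, f in self.samples if now - t <= window]
        if len(in_window) < 2:
            return float("nan")  # matches the first report after startup
        (t0, f0), (t1, f1) = in_window[0], in_window[-1]
        return (f1 - f0) / (t1 - t0) if t1 > t0 else float("nan")
```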
|
[2023-02-25 10:26:49,800][22583] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:50,101][22589] Decorrelating experience for 0 frames... |
|
[2023-02-25 10:26:50,161][22584] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:50,196][22588] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:50,413][22582] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:50,517][22586] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:54,391][22587] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:54,512][22589] Decorrelating experience for 32 frames... |
|
[2023-02-25 10:26:54,777][22177] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 128.7. Samples: 1930. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2023-02-25 10:26:54,782][22177] Avg episode reward: [(0, '5.506')] |
|
[2023-02-25 10:26:54,830][22582] Decorrelating experience for 96 frames... |
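The "Decorrelating experience for N frames..." lines show each worker stepping its envs with throwaway actions for a different number of frames before collection starts, so the parallel trajectories do not begin in lockstep. A sketch of the idea (Gymnasium-style API; the step counts and exact logic are assumptions):

```python
import random
import gymnasium as gym

def decorrelate(env: gym.Env, max_frames: int = 96, report_every: int = 32):
    """Take a worker-specific number of random steps before real collection."""
    target = random.randrange(0, max_frames + 1, report_every)
    obs, _ = env.reset()
    for frame in range(target):
        if frame % report_every == 0:
            print(f"Decorrelating experience for {frame} frames...")
        obs, _, terminated, truncated, _ = env.step(env.action_space.sample())
        if terminated or truncated:
            obs, _ = env.reset()
    return obs
```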
|
[2023-02-25 10:26:55,502][22567] Signal inference workers to stop experience collection... |
|
[2023-02-25 10:26:55,520][22581] InferenceWorker_p0-w0: stopping experience collection |
|
[2023-02-25 10:26:56,006][22587] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:56,300][22583] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:56,354][22567] Signal inference workers to resume experience collection... |
|
[2023-02-25 10:26:56,356][22581] InferenceWorker_p0-w0: resuming experience collection |
|
[2023-02-25 10:26:56,362][22567] Stopping Batcher_0... |
|
[2023-02-25 10:26:56,363][22567] Loop batcher_evt_loop terminating... |
|
[2023-02-25 10:26:56,364][22177] Component Batcher_0 stopped! |
|
[2023-02-25 10:26:56,378][22177] Component RolloutWorker_w5 stopped! |
|
[2023-02-25 10:26:56,378][22586] Stopping RolloutWorker_w5... |
|
[2023-02-25 10:26:56,392][22586] Loop rollout_proc5_evt_loop terminating... |
|
[2023-02-25 10:26:56,400][22585] Stopping RolloutWorker_w3... |
|
[2023-02-25 10:26:56,401][22177] Component RolloutWorker_w3 stopped! |
|
[2023-02-25 10:26:56,401][22585] Loop rollout_proc3_evt_loop terminating... |
|
[2023-02-25 10:26:56,406][22177] Component RolloutWorker_w4 stopped! |
|
[2023-02-25 10:26:56,408][22587] Stopping RolloutWorker_w4... |
|
[2023-02-25 10:26:56,431][22177] Component RolloutWorker_w2 stopped! |
|
[2023-02-25 10:26:56,433][22582] Stopping RolloutWorker_w2... |
|
[2023-02-25 10:26:56,436][22587] Loop rollout_proc4_evt_loop terminating... |
|
[2023-02-25 10:26:56,438][22582] Loop rollout_proc2_evt_loop terminating... |
|
[2023-02-25 10:26:56,444][22177] Component RolloutWorker_w6 stopped! |
|
[2023-02-25 10:26:56,446][22588] Stopping RolloutWorker_w6... |
|
[2023-02-25 10:26:56,449][22588] Loop rollout_proc6_evt_loop terminating... |
|
[2023-02-25 10:26:56,452][22177] Component RolloutWorker_w0 stopped! |
|
[2023-02-25 10:26:56,455][22584] Stopping RolloutWorker_w0... |
|
[2023-02-25 10:26:56,458][22584] Loop rollout_proc0_evt_loop terminating... |
|
[2023-02-25 10:26:56,469][22589] Decorrelating experience for 64 frames... |
|
[2023-02-25 10:26:56,486][22581] Weights refcount: 2 0 |
|
[2023-02-25 10:26:56,497][22177] Component InferenceWorker_p0-w0 stopped! |
|
[2023-02-25 10:26:56,497][22581] Stopping InferenceWorker_p0-w0... |
|
[2023-02-25 10:26:56,502][22581] Loop inference_proc0-0_evt_loop terminating... |
|
[2023-02-25 10:26:58,478][22567] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2023-02-25 10:26:58,574][22567] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000956_3915776.pth |
|
[2023-02-25 10:26:58,587][22567] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
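The save/remove pair above is simple checkpoint rotation: write the newest checkpoint, then delete the oldest so only a fixed number stay on disk. A sketch of that policy (the keep count is an assumption):

```python
from pathlib import Path

import torch

def save_with_rotation(state: dict, ckpt_dir: str, train_step: int,
                       env_steps: int, keep_last: int = 2) -> None:
    """Write checkpoint_{train_step:09d}_{env_steps}.pth, then prune old ones."""
    directory = Path(ckpt_dir)
    torch.save(state, directory / f"checkpoint_{train_step:09d}_{env_steps}.pth")
    checkpoints = sorted(directory.glob("checkpoint_*.pth"))
    for old in checkpoints[:-keep_last]:
        old.unlink()  # e.g. checkpoint_000000956_3915776.pth above
```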
|
[2023-02-25 10:26:58,714][22177] Component LearnerWorker_p0 stopped! |
|
[2023-02-25 10:26:58,716][22567] Stopping LearnerWorker_p0... |
|
[2023-02-25 10:26:58,724][22567] Loop learner_proc0_evt_loop terminating... |
|
[2023-02-25 10:26:59,202][22583] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:59,531][22177] Component RolloutWorker_w1 stopped! |
|
[2023-02-25 10:26:59,539][22583] Stopping RolloutWorker_w1... |
|
[2023-02-25 10:26:59,540][22583] Loop rollout_proc1_evt_loop terminating... |
|
[2023-02-25 10:26:59,549][22589] Decorrelating experience for 96 frames... |
|
[2023-02-25 10:26:59,741][22177] Component RolloutWorker_w7 stopped! |
|
[2023-02-25 10:26:59,748][22177] Waiting for process learner_proc0 to stop... |
|
[2023-02-25 10:26:59,753][22589] Stopping RolloutWorker_w7... |
|
[2023-02-25 10:26:59,755][22589] Loop rollout_proc7_evt_loop terminating... |
|
[2023-02-25 10:27:00,072][22177] Waiting for process inference_proc0-0 to join... |
|
[2023-02-25 10:27:00,074][22177] Waiting for process rollout_proc0 to join... |
|
[2023-02-25 10:27:00,078][22177] Waiting for process rollout_proc1 to join... |
|
[2023-02-25 10:27:00,264][22177] Waiting for process rollout_proc2 to join... |
|
[2023-02-25 10:27:00,265][22177] Waiting for process rollout_proc3 to join... |
|
[2023-02-25 10:27:00,266][22177] Waiting for process rollout_proc4 to join... |
|
[2023-02-25 10:27:00,268][22177] Waiting for process rollout_proc5 to join... |
|
[2023-02-25 10:27:00,269][22177] Waiting for process rollout_proc6 to join... |
|
[2023-02-25 10:27:00,270][22177] Waiting for process rollout_proc7 to join... |
|
[2023-02-25 10:27:00,313][22177] Batcher 0 profile tree view: |
|
batching: 0.0368, releasing_batches: 0.0014 |
|
[2023-02-25 10:27:00,314][22177] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0052 |
|
  wait_policy_total: 8.4476 |
|
update_model: 0.0237 |
|
  weight_update: 0.0014 |
|
one_step: 0.0791 |
|
  handle_policy_step: 4.2077 |
|
    deserialize: 0.0563, stack: 0.0162, obs_to_device_normalize: 0.3851, forward: 3.3138, send_messages: 0.0713 |
|
    prepare_outputs: 0.2529 |
|
      to_cpu: 0.1335 |
|
[2023-02-25 10:27:00,317][22177] Learner 0 profile tree view: |
|
misc: 0.0000, prepare_batch: 5.5813 |
|
train: 0.6991 |
|
  epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0006, kl_divergence: 0.0006, after_optimizer: 0.0044 |
|
  calculate_losses: 0.1493 |
|
    losses_init: 0.0000, forward_head: 0.1165, bptt_initial: 0.0176, tail: 0.0018, advantages_returns: 0.0031, losses: 0.0031 |
|
    bptt: 0.0066 |
|
      bptt_forward_core: 0.0065 |
|
  update: 0.5433 |
|
    clip: 0.0039 |
|
[2023-02-25 10:27:00,320][22177] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.0010, enqueue_policy_requests: 1.0403, env_step: 3.2737, overhead: 0.1866, complete_rollouts: 0.0033 |
|
save_policy_outputs: 0.0888 |
|
  split_output_tensors: 0.0503 |
|
[2023-02-25 10:27:00,322][22177] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.0003, enqueue_policy_requests: 0.0005 |
|
[2023-02-25 10:27:00,325][22177] Loop Runner_EvtLoop terminating... |
|
[2023-02-25 10:27:00,327][22177] Runner profile tree view: |
|
main_loop: 40.8230 |
|
[2023-02-25 10:27:00,330][22177] Collected {0: 4014080}, FPS: 200.7 |
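The final summary is internally consistent: this run collected 4,014,080 - 4,005,888 = 8,192 new frames, and dividing by the runner's 40.823 s main loop reproduces the reported throughput.

```python
# Sanity check of the 'Collected {0: 4014080}, FPS: 200.7' line.
frames_collected = 4014080 - 4005888   # final env_steps minus resumed env_steps
main_loop_seconds = 40.8230            # from the Runner profile tree above
print(frames_collected / main_loop_seconds)  # ~200.7, matching the log
```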
|
[2023-02-25 10:27:00,364][22177] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2023-02-25 10:27:00,365][22177] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2023-02-25 10:27:00,367][22177] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2023-02-25 10:27:00,370][22177] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2023-02-25 10:27:00,372][22177] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-02-25 10:27:00,373][22177] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2023-02-25 10:27:00,374][22177] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-02-25 10:27:00,376][22177] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2023-02-25 10:27:00,377][22177] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2023-02-25 10:27:00,378][22177] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2023-02-25 10:27:00,379][22177] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2023-02-25 10:27:00,386][22177] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2023-02-25 10:27:00,388][22177] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2023-02-25 10:27:00,390][22177] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2023-02-25 10:27:00,392][22177] Using frameskip 1 and render_action_repeat=4 for evaluation |
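The block above shows the evaluation entry point rebuilding its configuration: start from the saved training config, override anything passed on the command line, and add evaluation-only arguments the saved file never had. A sketch of that merge, with logic assumed to match the log messages:

```python
import json

def load_eval_config(train_dir: str, cli_args: dict) -> dict:
    """Merge CLI arguments into the saved experiment config."""
    with open(f"{train_dir}/config.json") as f:
        cfg = json.load(f)
    for key, value in cli_args.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value} that is not in the saved config file!")
        cfg[key] = value
    return cfg

cfg = load_eval_config("/content/train_dir/default_experiment",
                       {"num_workers": 1, "no_render": True, "save_video": True})
```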
|
[2023-02-25 10:27:00,420][22177] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-02-25 10:27:00,423][22177] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 10:27:00,426][22177] RunningMeanStd input shape: (1,) |
|
[2023-02-25 10:27:00,441][22177] ConvEncoder: input_channels=3 |
|
[2023-02-25 10:27:01,082][22177] Conv encoder output size: 512 |
|
[2023-02-25 10:27:01,085][22177] Policy head output size: 512 |
|
[2023-02-25 10:27:03,413][22177] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2023-02-25 10:27:04,921][22177] Num frames 100... |
|
[2023-02-25 10:27:05,081][22177] Num frames 200... |
|
[2023-02-25 10:27:05,242][22177] Num frames 300... |
|
[2023-02-25 10:27:05,411][22177] Num frames 400... |
|
[2023-02-25 10:27:05,571][22177] Num frames 500... |
|
[2023-02-25 10:27:05,727][22177] Num frames 600... |
|
[2023-02-25 10:27:05,887][22177] Num frames 700... |
|
[2023-02-25 10:27:06,043][22177] Num frames 800... |
|
[2023-02-25 10:27:06,217][22177] Num frames 900... |
|
[2023-02-25 10:27:06,377][22177] Num frames 1000... |
|
[2023-02-25 10:27:06,537][22177] Num frames 1100... |
|
[2023-02-25 10:27:06,718][22177] Num frames 1200... |
|
[2023-02-25 10:27:06,883][22177] Num frames 1300... |
|
[2023-02-25 10:27:07,105][22177] Num frames 1400... |
|
[2023-02-25 10:27:07,390][22177] Num frames 1500... |
|
[2023-02-25 10:27:07,589][22177] Num frames 1600... |
|
[2023-02-25 10:27:07,758][22177] Num frames 1700... |
|
[2023-02-25 10:27:07,922][22177] Num frames 1800... |
|
[2023-02-25 10:27:08,071][22177] Num frames 1900... |
|
[2023-02-25 10:27:08,190][22177] Num frames 2000... |
|
[2023-02-25 10:27:08,305][22177] Num frames 2100... |
|
[2023-02-25 10:27:08,358][22177] Avg episode rewards: #0: 59.999, true rewards: #0: 21.000 |
|
[2023-02-25 10:27:08,359][22177] Avg episode reward: 59.999, avg true_objective: 21.000 |
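During evaluation, every finished episode updates two running means: the shaped reward the agent optimizes and the env's "true objective" (for health gathering, essentially how long the agent survives). A sketch of that bookkeeping (assumed, matching the printed format):

```python
episode_rewards: list[float] = []
true_objectives: list[float] = []

def on_episode_end(shaped_reward: float, true_objective: float) -> None:
    """Update the running averages printed after each evaluation episode."""
    episode_rewards.append(shaped_reward)
    true_objectives.append(true_objective)
    avg_r = sum(episode_rewards) / len(episode_rewards)
    avg_t = sum(true_objectives) / len(true_objectives)
    print(f"Avg episode rewards: #0: {avg_r:.3f}, true rewards: #0: {avg_t:.3f}")
```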
|
[2023-02-25 10:27:08,494][22177] Num frames 2200... |
|
[2023-02-25 10:27:08,602][22177] Num frames 2300... |
|
[2023-02-25 10:27:08,717][22177] Num frames 2400... |
|
[2023-02-25 10:27:08,828][22177] Num frames 2500... |
|
[2023-02-25 10:27:08,946][22177] Num frames 2600... |
|
[2023-02-25 10:27:09,063][22177] Num frames 2700... |
|
[2023-02-25 10:27:09,177][22177] Num frames 2800... |
|
[2023-02-25 10:27:09,259][22177] Avg episode rewards: #0: 38.109, true rewards: #0: 14.110 |
|
[2023-02-25 10:27:09,261][22177] Avg episode reward: 38.109, avg true_objective: 14.110 |
|
[2023-02-25 10:27:09,355][22177] Num frames 2900... |
|
[2023-02-25 10:27:09,483][22177] Num frames 3000... |
|
[2023-02-25 10:27:09,604][22177] Num frames 3100... |
|
[2023-02-25 10:27:09,724][22177] Num frames 3200... |
|
[2023-02-25 10:27:09,841][22177] Num frames 3300... |
|
[2023-02-25 10:27:09,954][22177] Num frames 3400... |
|
[2023-02-25 10:27:10,072][22177] Num frames 3500... |
|
[2023-02-25 10:27:10,195][22177] Num frames 3600... |
|
[2023-02-25 10:27:10,306][22177] Num frames 3700... |
|
[2023-02-25 10:27:10,423][22177] Num frames 3800... |
|
[2023-02-25 10:27:10,570][22177] Num frames 3900... |
|
[2023-02-25 10:27:10,696][22177] Num frames 4000... |
|
[2023-02-25 10:27:10,825][22177] Num frames 4100... |
|
[2023-02-25 10:27:10,942][22177] Num frames 4200... |
|
[2023-02-25 10:27:11,059][22177] Num frames 4300... |
|
[2023-02-25 10:27:11,180][22177] Avg episode rewards: #0: 37.523, true rewards: #0: 14.523 |
|
[2023-02-25 10:27:11,182][22177] Avg episode reward: 37.523, avg true_objective: 14.523 |
|
[2023-02-25 10:27:11,239][22177] Num frames 4400... |
|
[2023-02-25 10:27:11,362][22177] Num frames 4500... |
|
[2023-02-25 10:27:11,474][22177] Num frames 4600... |
|
[2023-02-25 10:27:11,597][22177] Num frames 4700... |
|
[2023-02-25 10:27:11,713][22177] Num frames 4800... |
|
[2023-02-25 10:27:11,830][22177] Num frames 4900... |
|
[2023-02-25 10:27:11,968][22177] Num frames 5000... |
|
[2023-02-25 10:27:12,174][22177] Num frames 5100... |
|
[2023-02-25 10:27:12,364][22177] Num frames 5200... |
|
[2023-02-25 10:27:12,579][22177] Num frames 5300... |
|
[2023-02-25 10:27:12,764][22177] Num frames 5400... |
|
[2023-02-25 10:27:12,957][22177] Num frames 5500... |
|
[2023-02-25 10:27:13,132][22177] Num frames 5600... |
|
[2023-02-25 10:27:13,327][22177] Num frames 5700... |
|
[2023-02-25 10:27:13,533][22177] Num frames 5800... |
|
[2023-02-25 10:27:13,709][22177] Num frames 5900... |
|
[2023-02-25 10:27:13,806][22177] Avg episode rewards: #0: 38.052, true rewards: #0: 14.802 |
|
[2023-02-25 10:27:13,810][22177] Avg episode reward: 38.052, avg true_objective: 14.802 |
|
[2023-02-25 10:27:13,943][22177] Num frames 6000... |
|
[2023-02-25 10:27:14,155][22177] Num frames 6100... |
|
[2023-02-25 10:27:14,370][22177] Num frames 6200... |
|
[2023-02-25 10:27:14,595][22177] Num frames 6300... |
|
[2023-02-25 10:27:14,766][22177] Num frames 6400... |
|
[2023-02-25 10:27:14,936][22177] Num frames 6500... |
|
[2023-02-25 10:27:15,148][22177] Num frames 6600... |
|
[2023-02-25 10:27:15,377][22177] Num frames 6700... |
|
[2023-02-25 10:27:15,572][22177] Num frames 6800... |
|
[2023-02-25 10:27:15,836][22177] Num frames 6900... |
|
[2023-02-25 10:27:16,032][22177] Num frames 7000... |
|
[2023-02-25 10:27:16,327][22177] Num frames 7100... |
|
[2023-02-25 10:27:16,518][22177] Num frames 7200... |
|
[2023-02-25 10:27:16,873][22177] Num frames 7300... |
|
[2023-02-25 10:27:17,109][22177] Num frames 7400... |
|
[2023-02-25 10:27:17,417][22177] Num frames 7500... |
|
[2023-02-25 10:27:17,615][22177] Num frames 7600... |
|
[2023-02-25 10:27:17,808][22177] Num frames 7700... |
|
[2023-02-25 10:27:18,005][22177] Num frames 7800... |
|
[2023-02-25 10:27:18,128][22177] Avg episode rewards: #0: 39.833, true rewards: #0: 15.634 |
|
[2023-02-25 10:27:18,130][22177] Avg episode reward: 39.833, avg true_objective: 15.634 |
|
[2023-02-25 10:27:18,459][22177] Num frames 7900... |
|
[2023-02-25 10:27:18,862][22177] Num frames 8000... |
|
[2023-02-25 10:27:19,074][22177] Num frames 8100... |
|
[2023-02-25 10:27:19,234][22177] Num frames 8200... |
|
[2023-02-25 10:27:19,397][22177] Num frames 8300... |
|
[2023-02-25 10:27:19,559][22177] Num frames 8400... |
|
[2023-02-25 10:27:19,716][22177] Num frames 8500... |
|
[2023-02-25 10:27:19,889][22177] Num frames 8600... |
|
[2023-02-25 10:27:20,049][22177] Num frames 8700... |
|
[2023-02-25 10:27:20,209][22177] Num frames 8800... |
|
[2023-02-25 10:27:20,373][22177] Num frames 8900... |
|
[2023-02-25 10:27:20,539][22177] Num frames 9000... |
|
[2023-02-25 10:27:20,700][22177] Num frames 9100... |
|
[2023-02-25 10:27:20,864][22177] Num frames 9200... |
|
[2023-02-25 10:27:21,030][22177] Num frames 9300... |
|
[2023-02-25 10:27:21,226][22177] Avg episode rewards: #0: 39.133, true rewards: #0: 15.633 |
|
[2023-02-25 10:27:21,229][22177] Avg episode reward: 39.133, avg true_objective: 15.633 |
|
[2023-02-25 10:27:21,267][22177] Num frames 9400... |
|
[2023-02-25 10:27:21,424][22177] Num frames 9500... |
|
[2023-02-25 10:27:21,588][22177] Num frames 9600... |
|
[2023-02-25 10:27:21,748][22177] Num frames 9700... |
|
[2023-02-25 10:27:21,867][22177] Num frames 9800... |
|
[2023-02-25 10:27:21,990][22177] Num frames 9900... |
|
[2023-02-25 10:27:22,107][22177] Num frames 10000... |
|
[2023-02-25 10:27:22,222][22177] Num frames 10100... |
|
[2023-02-25 10:27:22,334][22177] Num frames 10200... |
|
[2023-02-25 10:27:22,446][22177] Num frames 10300... |
|
[2023-02-25 10:27:22,559][22177] Num frames 10400... |
|
[2023-02-25 10:27:22,676][22177] Num frames 10500... |
|
[2023-02-25 10:27:22,792][22177] Num frames 10600... |
|
[2023-02-25 10:27:22,908][22177] Num frames 10700... |
|
[2023-02-25 10:27:23,029][22177] Num frames 10800... |
|
[2023-02-25 10:27:23,141][22177] Num frames 10900... |
|
[2023-02-25 10:27:23,254][22177] Num frames 11000... |
|
[2023-02-25 10:27:23,367][22177] Num frames 11100... |
|
[2023-02-25 10:27:23,483][22177] Num frames 11200... |
|
[2023-02-25 10:27:23,597][22177] Num frames 11300... |
|
[2023-02-25 10:27:23,714][22177] Num frames 11400... |
|
[2023-02-25 10:27:23,858][22177] Avg episode rewards: #0: 40.828, true rewards: #0: 16.400 |
|
[2023-02-25 10:27:23,860][22177] Avg episode reward: 40.828, avg true_objective: 16.400 |
|
[2023-02-25 10:27:23,889][22177] Num frames 11500... |
|
[2023-02-25 10:27:24,009][22177] Num frames 11600... |
|
[2023-02-25 10:27:24,119][22177] Num frames 11700... |
|
[2023-02-25 10:27:24,232][22177] Num frames 11800... |
|
[2023-02-25 10:27:24,391][22177] Avg episode rewards: #0: 36.495, true rewards: #0: 14.870 |
|
[2023-02-25 10:27:24,396][22177] Avg episode reward: 36.495, avg true_objective: 14.870 |
|
[2023-02-25 10:27:24,405][22177] Num frames 11900... |
|
[2023-02-25 10:27:24,529][22177] Num frames 12000... |
|
[2023-02-25 10:27:24,652][22177] Num frames 12100... |
|
[2023-02-25 10:27:24,765][22177] Num frames 12200... |
|
[2023-02-25 10:27:24,887][22177] Num frames 12300... |
|
[2023-02-25 10:27:25,009][22177] Num frames 12400... |
|
[2023-02-25 10:27:25,128][22177] Num frames 12500... |
|
[2023-02-25 10:27:25,240][22177] Num frames 12600... |
|
[2023-02-25 10:27:25,312][22177] Avg episode rewards: #0: 33.792, true rewards: #0: 14.014 |
|
[2023-02-25 10:27:25,313][22177] Avg episode reward: 33.792, avg true_objective: 14.014 |
|
[2023-02-25 10:27:25,420][22177] Num frames 12700... |
|
[2023-02-25 10:27:25,550][22177] Num frames 12800... |
|
[2023-02-25 10:27:25,672][22177] Num frames 12900... |
|
[2023-02-25 10:27:25,799][22177] Num frames 13000... |
|
[2023-02-25 10:27:25,920][22177] Num frames 13100... |
|
[2023-02-25 10:27:26,040][22177] Num frames 13200... |
|
[2023-02-25 10:27:26,158][22177] Num frames 13300... |
|
[2023-02-25 10:27:26,278][22177] Num frames 13400... |
|
[2023-02-25 10:27:26,388][22177] Num frames 13500... |
|
[2023-02-25 10:27:26,503][22177] Num frames 13600... |
|
[2023-02-25 10:27:26,615][22177] Num frames 13700... |
|
[2023-02-25 10:27:26,726][22177] Num frames 13800... |
|
[2023-02-25 10:27:26,850][22177] Num frames 13900... |
|
[2023-02-25 10:27:26,968][22177] Num frames 14000... |
|
[2023-02-25 10:27:27,080][22177] Num frames 14100... |
|
[2023-02-25 10:27:27,196][22177] Num frames 14200... |
|
[2023-02-25 10:27:27,267][22177] Avg episode rewards: #0: 34.513, true rewards: #0: 14.213 |
|
[2023-02-25 10:27:27,275][22177] Avg episode reward: 34.513, avg true_objective: 14.213 |
|
[2023-02-25 10:29:01,782][22177] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
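The replay is just the rendered evaluation frames written out as an mp4. One way to do it, sketched with imageio; the fps value is an assumption (VizDoom's native tic rate is 35):

```python
import imageio
import numpy as np

def save_replay(frames: list[np.ndarray], path: str, fps: int = 35) -> None:
    """Write collected RGB frames (H, W, 3, uint8) to an mp4 file."""
    imageio.mimwrite(path, frames, fps=fps)

# save_replay(collected_frames, "/content/train_dir/default_experiment/replay.mp4")
```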
|
[2023-02-25 10:29:02,433][22177] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2023-02-25 10:29:02,436][22177] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2023-02-25 10:29:02,439][22177] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2023-02-25 10:29:02,442][22177] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2023-02-25 10:29:02,444][22177] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-02-25 10:29:02,446][22177] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2023-02-25 10:29:02,449][22177] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2023-02-25 10:29:02,450][22177] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2023-02-25 10:29:02,452][22177] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2023-02-25 10:29:02,453][22177] Adding new argument 'hf_repository'='RegisGraptin/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2023-02-25 10:29:02,454][22177] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2023-02-25 10:29:02,456][22177] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2023-02-25 10:29:02,457][22177] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2023-02-25 10:29:02,458][22177] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2023-02-25 10:29:02,460][22177] Using frameskip 1 and render_action_repeat=4 for evaluation |
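This second evaluation pass runs with push_to_hub=True, so after rendering it uploads the experiment directory (checkpoints, config.json, replay.mp4) to the configured repository. A sketch of that upload using huggingface_hub directly; sample-factory wraps the same idea internally:

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in (HF_TOKEN or huggingface-cli login)
repo_id = "RegisGraptin/rl_course_vizdoom_health_gathering_supreme"
api.create_repo(repo_id, exist_ok=True)
api.upload_folder(
    folder_path="/content/train_dir/default_experiment",
    repo_id=repo_id,
    repo_type="model",
)
```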
|
[2023-02-25 10:29:02,485][22177] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-02-25 10:29:02,490][22177] RunningMeanStd input shape: (1,) |
|
[2023-02-25 10:29:02,513][22177] ConvEncoder: input_channels=3 |
|
[2023-02-25 10:29:02,600][22177] Conv encoder output size: 512 |
|
[2023-02-25 10:29:02,606][22177] Policy head output size: 512 |
|
[2023-02-25 10:29:02,653][22177] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000980_4014080.pth... |
|
[2023-02-25 10:29:03,375][22177] Num frames 100... |
|
[2023-02-25 10:29:03,530][22177] Num frames 200... |
|
[2023-02-25 10:29:03,639][22177] Num frames 300... |
|
[2023-02-25 10:29:03,746][22177] Num frames 400... |
|
[2023-02-25 10:29:03,859][22177] Num frames 500... |
|
[2023-02-25 10:29:03,971][22177] Num frames 600... |
|
[2023-02-25 10:29:04,036][22177] Avg episode rewards: #0: 9.080, true rewards: #0: 6.080 |
|
[2023-02-25 10:29:04,040][22177] Avg episode reward: 9.080, avg true_objective: 6.080 |
|
[2023-02-25 10:29:04,156][22177] Num frames 700... |
|
[2023-02-25 10:29:04,283][22177] Num frames 800... |
|
[2023-02-25 10:29:04,403][22177] Num frames 900... |
|
[2023-02-25 10:29:04,607][22177] Num frames 1000... |
|
[2023-02-25 10:29:04,720][22177] Num frames 1100... |
|
[2023-02-25 10:29:04,839][22177] Num frames 1200... |
|
[2023-02-25 10:29:04,956][22177] Num frames 1300... |
|
[2023-02-25 10:29:05,068][22177] Num frames 1400... |
|
[2023-02-25 10:29:05,187][22177] Num frames 1500... |
|
[2023-02-25 10:29:05,297][22177] Num frames 1600... |
|
[2023-02-25 10:29:05,430][22177] Num frames 1700... |
|
[2023-02-25 10:29:05,545][22177] Num frames 1800... |
|
[2023-02-25 10:29:05,676][22177] Num frames 1900... |
|
[2023-02-25 10:29:05,788][22177] Num frames 2000... |
|
[2023-02-25 10:29:05,921][22177] Num frames 2100... |
|
[2023-02-25 10:29:06,039][22177] Num frames 2200... |
|
[2023-02-25 10:29:06,155][22177] Num frames 2300... |
|
[2023-02-25 10:29:06,268][22177] Num frames 2400... |
|
[2023-02-25 10:29:06,378][22177] Num frames 2500... |
|
[2023-02-25 10:29:06,488][22177] Num frames 2600... |
|
[2023-02-25 10:29:06,603][22177] Num frames 2700... |
|
[2023-02-25 10:29:06,668][22177] Avg episode rewards: #0: 33.539, true rewards: #0: 13.540 |
|
[2023-02-25 10:29:06,670][22177] Avg episode reward: 33.539, avg true_objective: 13.540 |
|
[2023-02-25 10:29:06,774][22177] Num frames 2800... |
|
[2023-02-25 10:29:06,898][22177] Num frames 2900... |
|
[2023-02-25 10:29:07,014][22177] Num frames 3000... |
|
[2023-02-25 10:29:07,121][22177] Num frames 3100... |
|
[2023-02-25 10:29:07,229][22177] Num frames 3200... |
|
[2023-02-25 10:29:07,306][22177] Avg episode rewards: #0: 24.400, true rewards: #0: 10.733 |
|
[2023-02-25 10:29:07,309][22177] Avg episode reward: 24.400, avg true_objective: 10.733 |
|
[2023-02-25 10:29:07,407][22177] Num frames 3300... |
|
[2023-02-25 10:29:07,530][22177] Num frames 3400... |
|
[2023-02-25 10:29:07,648][22177] Num frames 3500... |
|
[2023-02-25 10:29:07,760][22177] Num frames 3600... |
|
[2023-02-25 10:29:07,876][22177] Num frames 3700... |
|
[2023-02-25 10:29:08,006][22177] Num frames 3800... |
|
[2023-02-25 10:29:08,120][22177] Num frames 3900... |
|
[2023-02-25 10:29:08,202][22177] Avg episode rewards: #0: 22.310, true rewards: #0: 9.810 |
|
[2023-02-25 10:29:08,206][22177] Avg episode reward: 22.310, avg true_objective: 9.810 |
|
[2023-02-25 10:29:08,296][22177] Num frames 4000... |
|
[2023-02-25 10:29:08,408][22177] Num frames 4100... |
|
[2023-02-25 10:29:08,535][22177] Num frames 4200... |
|
[2023-02-25 10:29:08,605][22177] Avg episode rewards: #0: 18.824, true rewards: #0: 8.424 |
|
[2023-02-25 10:29:08,607][22177] Avg episode reward: 18.824, avg true_objective: 8.424 |
|
[2023-02-25 10:29:08,717][22177] Num frames 4300... |
|
[2023-02-25 10:29:08,841][22177] Num frames 4400... |
|
[2023-02-25 10:29:08,976][22177] Num frames 4500... |
|
[2023-02-25 10:29:09,096][22177] Num frames 4600... |
|
[2023-02-25 10:29:09,210][22177] Num frames 4700... |
|
[2023-02-25 10:29:09,320][22177] Num frames 4800... |
|
[2023-02-25 10:29:09,436][22177] Num frames 4900... |
|
[2023-02-25 10:29:09,553][22177] Num frames 5000... |
|
[2023-02-25 10:29:09,667][22177] Num frames 5100... |
|
[2023-02-25 10:29:09,776][22177] Num frames 5200... |
|
[2023-02-25 10:29:09,890][22177] Num frames 5300... |
|
[2023-02-25 10:29:10,011][22177] Num frames 5400... |
|
[2023-02-25 10:29:10,127][22177] Num frames 5500... |
|
[2023-02-25 10:29:10,239][22177] Num frames 5600... |
|
[2023-02-25 10:29:10,362][22177] Num frames 5700... |
|
[2023-02-25 10:29:10,521][22177] Avg episode rewards: #0: 22.815, true rewards: #0: 9.648 |
|
[2023-02-25 10:29:10,524][22177] Avg episode reward: 22.815, avg true_objective: 9.648 |
|
[2023-02-25 10:29:10,539][22177] Num frames 5800... |
|
[2023-02-25 10:29:10,658][22177] Num frames 5900... |
|
[2023-02-25 10:29:10,782][22177] Num frames 6000... |
|
[2023-02-25 10:29:10,904][22177] Num frames 6100... |
|
[2023-02-25 10:29:11,030][22177] Num frames 6200... |
|
[2023-02-25 10:29:11,148][22177] Num frames 6300... |
|
[2023-02-25 10:29:11,259][22177] Num frames 6400... |
|
[2023-02-25 10:29:11,369][22177] Num frames 6500... |
|
[2023-02-25 10:29:11,479][22177] Num frames 6600... |
|
[2023-02-25 10:29:11,588][22177] Num frames 6700... |
|
[2023-02-25 10:29:11,700][22177] Avg episode rewards: #0: 22.780, true rewards: #0: 9.637 |
|
[2023-02-25 10:29:11,702][22177] Avg episode reward: 22.780, avg true_objective: 9.637 |
|
[2023-02-25 10:29:11,767][22177] Num frames 6800... |
|
[2023-02-25 10:29:11,885][22177] Num frames 6900... |
|
[2023-02-25 10:29:12,007][22177] Num frames 7000... |
|
[2023-02-25 10:29:12,129][22177] Num frames 7100... |
|
[2023-02-25 10:29:12,239][22177] Num frames 7200... |
|
[2023-02-25 10:29:12,353][22177] Num frames 7300... |
|
[2023-02-25 10:29:12,462][22177] Num frames 7400... |
|
[2023-02-25 10:29:12,580][22177] Num frames 7500... |
|
[2023-02-25 10:29:12,690][22177] Num frames 7600... |
|
[2023-02-25 10:29:12,853][22177] Num frames 7700... |
|
[2023-02-25 10:29:13,023][22177] Num frames 7800... |
|
[2023-02-25 10:29:13,184][22177] Num frames 7900... |
|
[2023-02-25 10:29:13,337][22177] Num frames 8000... |
|
[2023-02-25 10:29:13,495][22177] Num frames 8100... |
|
[2023-02-25 10:29:13,646][22177] Num frames 8200... |
|
[2023-02-25 10:29:13,800][22177] Num frames 8300... |
|
[2023-02-25 10:29:13,881][22177] Avg episode rewards: #0: 24.892, true rewards: #0: 10.392 |
|
[2023-02-25 10:29:13,884][22177] Avg episode reward: 24.892, avg true_objective: 10.392 |
|
[2023-02-25 10:29:14,029][22177] Num frames 8400... |
|
[2023-02-25 10:29:14,196][22177] Num frames 8500... |
|
[2023-02-25 10:29:14,359][22177] Num frames 8600... |
|
[2023-02-25 10:29:14,524][22177] Num frames 8700... |
|
[2023-02-25 10:29:14,681][22177] Num frames 8800... |
|
[2023-02-25 10:29:14,845][22177] Num frames 8900... |
|
[2023-02-25 10:29:15,023][22177] Num frames 9000... |
|
[2023-02-25 10:29:15,226][22177] Avg episode rewards: #0: 23.647, true rewards: #0: 10.091 |
|
[2023-02-25 10:29:15,228][22177] Avg episode reward: 23.647, avg true_objective: 10.091 |
|
[2023-02-25 10:29:15,265][22177] Num frames 9100... |
|
[2023-02-25 10:29:15,435][22177] Num frames 9200... |
|
[2023-02-25 10:29:15,606][22177] Num frames 9300... |
|
[2023-02-25 10:29:15,733][22177] Num frames 9400... |
|
[2023-02-25 10:29:15,848][22177] Num frames 9500... |
|
[2023-02-25 10:29:15,967][22177] Num frames 9600... |
|
[2023-02-25 10:29:16,088][22177] Num frames 9700... |
|
[2023-02-25 10:29:16,213][22177] Num frames 9800... |
|
[2023-02-25 10:29:16,329][22177] Num frames 9900... |
|
[2023-02-25 10:29:16,447][22177] Num frames 10000... |
|
[2023-02-25 10:29:16,564][22177] Num frames 10100... |
|
[2023-02-25 10:29:16,683][22177] Num frames 10200... |
|
[2023-02-25 10:29:16,795][22177] Num frames 10300... |
|
[2023-02-25 10:29:16,910][22177] Num frames 10400... |
|
[2023-02-25 10:29:17,027][22177] Num frames 10500... |
|
[2023-02-25 10:29:17,158][22177] Num frames 10600... |
|
[2023-02-25 10:29:17,282][22177] Num frames 10700... |
|
[2023-02-25 10:29:17,399][22177] Num frames 10800... |
|
[2023-02-25 10:29:17,476][22177] Avg episode rewards: #0: 25.919, true rewards: #0: 10.819 |
|
[2023-02-25 10:29:17,477][22177] Avg episode reward: 25.919, avg true_objective: 10.819 |
|
[2023-02-25 10:30:25,058][22177] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |