|
[2024-12-28 20:05:47,478][00626] Saving configuration to /content/train_dir/default_experiment/config.json... |
|
[2024-12-28 20:05:47,481][00626] Rollout worker 0 uses device cpu |
|
[2024-12-28 20:05:47,482][00626] Rollout worker 1 uses device cpu |
|
[2024-12-28 20:05:47,483][00626] Rollout worker 2 uses device cpu |
|
[2024-12-28 20:05:47,484][00626] Rollout worker 3 uses device cpu |
|
[2024-12-28 20:05:47,485][00626] Rollout worker 4 uses device cpu |
|
[2024-12-28 20:05:47,486][00626] Rollout worker 5 uses device cpu |
|
[2024-12-28 20:05:47,488][00626] Rollout worker 6 uses device cpu |
|
[2024-12-28 20:05:47,489][00626] Rollout worker 7 uses device cpu |
|
[2024-12-28 20:05:47,843][00626] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-12-28 20:05:47,847][00626] InferenceWorker_p0-w0: min num requests: 2 |
|
[2024-12-28 20:05:47,901][00626] Starting all processes... |
|
[2024-12-28 20:05:47,904][00626] Starting process learner_proc0 |
|
[2024-12-28 20:05:47,971][00626] Starting all processes... |
|
[2024-12-28 20:05:48,012][00626] Starting process inference_proc0-0 |
|
[2024-12-28 20:05:48,013][00626] Starting process rollout_proc0 |
|
[2024-12-28 20:05:48,015][00626] Starting process rollout_proc1 |
|
[2024-12-28 20:05:48,023][00626] Starting process rollout_proc2 |
|
[2024-12-28 20:05:48,023][00626] Starting process rollout_proc3 |
|
[2024-12-28 20:05:48,023][00626] Starting process rollout_proc4 |
|
[2024-12-28 20:05:48,023][00626] Starting process rollout_proc5 |
|
[2024-12-28 20:05:48,023][00626] Starting process rollout_proc6 |
|
[2024-12-28 20:05:48,023][00626] Starting process rollout_proc7 |
|
[2024-12-28 20:06:05,834][04114] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-12-28 20:06:05,839][04114] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
|
[2024-12-28 20:06:05,929][04114] Num visible devices: 1 |
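The lines above show standard CUDA device masking: the launcher sets `CUDA_VISIBLE_DEVICES` per process so each worker sees only its assigned GPU as `cuda:0`. A minimal illustration of that general CUDA/PyTorch behavior (not Sample Factory's own code):

```python
# Illustration of CUDA device masking (a general CUDA/PyTorch behavior,
# not Sample Factory-specific code). The variable must be set before
# CUDA is initialized in the process.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # physical GPU 0 becomes cuda:0

import torch
print(torch.cuda.device_count())  # prints 1, matching "Num visible devices: 1"
```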
|
[2024-12-28 20:06:05,934][04128] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-12-28 20:06:05,948][04128] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
|
[2024-12-28 20:06:05,989][04114] Starting seed is not provided |
|
[2024-12-28 20:06:05,990][04114] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-12-28 20:06:05,991][04114] Initializing actor-critic model on device cuda:0 |
|
[2024-12-28 20:06:05,992][04114] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-12-28 20:06:05,996][04114] RunningMeanStd input shape: (1,) |
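The two `RunningMeanStd` shapes above correspond to the image observation (3×72×128) and a scalar stream (returns normalization). A generic sketch of such a running-statistics normalizer, assuming the standard parallel mean/variance update; this is not Sample Factory's exact implementation:

```python
# Generic running mean/std normalizer sketch (assumption: the usual
# parallel-variance update used by RL observation normalizers).
import torch

class RunningMeanStd:
    def __init__(self, shape, eps=1e-4):
        self.mean = torch.zeros(shape)
        self.var = torch.ones(shape)
        self.count = eps  # small prior count to avoid division by zero

    def update(self, x):  # x: (batch, *shape)
        b_mean = x.mean(0)
        b_var = x.var(0, unbiased=False)
        b_count = x.shape[0]
        delta = b_mean - self.mean
        tot = self.count + b_count
        # combine batch statistics with the running statistics
        self.mean = self.mean + delta * b_count / tot
        m_a = self.var * self.count
        m_b = b_var * b_count
        self.var = (m_a + m_b + delta.pow(2) * self.count * b_count / tot) / tot
        self.count = tot

    def normalize(self, x):
        return (x - self.mean) / torch.sqrt(self.var + 1e-8)
```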
|
[2024-12-28 20:06:06,101][04128] Num visible devices: 1 |
|
[2024-12-28 20:06:06,153][04114] ConvEncoder: input_channels=3 |
|
[2024-12-28 20:06:06,237][04129] Worker 1 uses CPU cores [1] |
|
[2024-12-28 20:06:06,314][04133] Worker 4 uses CPU cores [0] |
|
[2024-12-28 20:06:06,362][04132] Worker 5 uses CPU cores [1] |
|
[2024-12-28 20:06:06,362][04130] Worker 2 uses CPU cores [0] |
|
[2024-12-28 20:06:06,380][04131] Worker 3 uses CPU cores [1] |
|
[2024-12-28 20:06:06,397][04135] Worker 7 uses CPU cores [1] |
|
[2024-12-28 20:06:06,420][04127] Worker 0 uses CPU cores [0] |
|
[2024-12-28 20:06:06,483][04134] Worker 6 uses CPU cores [0] |
|
[2024-12-28 20:06:06,545][04114] Conv encoder output size: 512 |
|
[2024-12-28 20:06:06,546][04114] Policy head output size: 512 |
|
[2024-12-28 20:06:06,601][04114] Created Actor Critic model with architecture: |
|
[2024-12-28 20:06:06,601][04114] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
|
[2024-12-28 20:06:07,011][04114] Using optimizer <class 'torch.optim.adam.Adam'> |
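Read together, the printout above describes a shared-weights actor-critic: a three-layer conv encoder with ELU activations feeding a 512-unit linear layer, a single GRU(512, 512) core, and two heads (a scalar critic and logits over 5 discrete actions), trained with Adam. A minimal PyTorch sketch of that shape; the kernel sizes, strides, and channel counts below are assumptions (the common Atari-style configuration), since the printout omits them:

```python
# Minimal sketch of the printed architecture, assuming Atari-style conv
# parameters (kernel sizes/strides/channels are not shown in the log).
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions: int = 5, hidden: int = 512):
        super().__init__()
        # (encoder): three Conv2d+ELU pairs, then a Linear+ELU head
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # flatten size for a (3, 72, 128) observation, computed dynamically
        with torch.no_grad():
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, hidden), nn.ELU())
        # (core): single-layer GRU, 512 -> 512
        self.core = nn.GRU(hidden, hidden)
        # (critic_linear) and (action_parameterization): value head and
        # logits over the 5 discrete Doom actions
        self.critic_linear = nn.Linear(hidden, 1)
        self.distribution_linear = nn.Linear(hidden, num_actions)

    def forward(self, obs, rnn_state=None):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # seq length 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = ActorCriticSketch()
optimizer = torch.optim.Adam(model.parameters())  # matches the optimizer line above
```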
|
[2024-12-28 20:06:07,810][00626] Heartbeat connected on Batcher_0 |
|
[2024-12-28 20:06:07,847][00626] Heartbeat connected on InferenceWorker_p0-w0 |
|
[2024-12-28 20:06:07,873][00626] Heartbeat connected on RolloutWorker_w0 |
|
[2024-12-28 20:06:07,876][00626] Heartbeat connected on RolloutWorker_w1 |
|
[2024-12-28 20:06:07,885][00626] Heartbeat connected on RolloutWorker_w2 |
|
[2024-12-28 20:06:07,886][00626] Heartbeat connected on RolloutWorker_w3 |
|
[2024-12-28 20:06:07,889][00626] Heartbeat connected on RolloutWorker_w4 |
|
[2024-12-28 20:06:07,894][00626] Heartbeat connected on RolloutWorker_w5 |
|
[2024-12-28 20:06:07,897][00626] Heartbeat connected on RolloutWorker_w6 |
|
[2024-12-28 20:06:07,902][00626] Heartbeat connected on RolloutWorker_w7 |
|
[2024-12-28 20:06:10,442][04114] No checkpoints found |
|
[2024-12-28 20:06:10,443][04114] Did not load from checkpoint, starting from scratch! |
|
[2024-12-28 20:06:10,443][04114] Initialized policy 0 weights for model version 0 |
|
[2024-12-28 20:06:10,447][04114] LearnerWorker_p0 finished initialization! |
|
[2024-12-28 20:06:10,450][04114] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
|
[2024-12-28 20:06:10,448][00626] Heartbeat connected on LearnerWorker_p0 |
|
[2024-12-28 20:06:10,639][04128] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-12-28 20:06:10,641][04128] RunningMeanStd input shape: (1,) |
|
[2024-12-28 20:06:10,655][04128] ConvEncoder: input_channels=3 |
|
[2024-12-28 20:06:10,756][04128] Conv encoder output size: 512 |
|
[2024-12-28 20:06:10,757][04128] Policy head output size: 512 |
|
[2024-12-28 20:06:10,813][00626] Inference worker 0-0 is ready! |
|
[2024-12-28 20:06:10,814][00626] All inference workers are ready! Signal rollout workers to start! |
|
[2024-12-28 20:06:11,016][04130] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,028][04133] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,027][04134] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,022][04127] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,028][04135] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,030][04131] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,033][04132] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:11,031][04129] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:06:12,423][04131] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:12,420][04135] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:12,422][04129] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:12,732][04127] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:12,733][04133] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:12,725][04130] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:12,730][04134] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:13,181][04132] Decorrelating experience for 0 frames... |
|
[2024-12-28 20:06:13,190][04131] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:13,824][04133] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:13,828][04130] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:13,827][04127] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:13,993][00626] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
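Each periodic status line like the one above reports throughput as frames per second averaged over sliding 10-, 60-, and 300-second windows; the first report is `nan` because no window contains two samples yet. A toy tracker illustrating the idea (an assumption about how the numbers are derived, not Sample Factory's actual code):

```python
# Toy windowed-FPS tracker (hypothetical sketch, not Sample Factory's code).
from collections import deque
import time

class FpsTracker:
    def __init__(self, windows=(10, 60, 300)):
        self.windows = windows
        self.samples = deque()  # (timestamp, total_frames) pairs

    def record(self, total_frames):
        now = time.time()
        self.samples.append((now, total_frames))
        # keep only samples inside the largest window
        while self.samples and now - self.samples[0][0] > max(self.windows):
            self.samples.popleft()

    def fps(self):
        now = time.time()
        out = {}
        for w in self.windows:
            inside = [(t, f) for t, f in self.samples if now - t <= w]
            if len(inside) < 2:
                out[w] = float("nan")  # matches the initial "nan" readings
            else:
                (t0, f0), (t1, f1) = inside[0], inside[-1]
                out[w] = (f1 - f0) / (t1 - t0) if t1 > t0 else float("nan")
        return out
```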
|
[2024-12-28 20:06:14,159][04132] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:14,168][04135] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:14,538][04130] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:14,946][04130] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:14,952][04131] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:15,408][04129] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:15,525][04133] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:16,532][04132] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:16,691][04134] Decorrelating experience for 32 frames... |
|
[2024-12-28 20:06:16,769][04131] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:17,282][04133] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:18,149][04129] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:18,993][00626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 2.4. Samples: 12. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-12-28 20:06:19,000][00626] Avg episode reward: [(0, '1.216')] |
|
[2024-12-28 20:06:19,724][04135] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:20,002][04127] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:20,088][04132] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:20,791][04134] Decorrelating experience for 64 frames... |
|
[2024-12-28 20:06:22,593][04129] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:23,999][00626] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 188.9. Samples: 1890. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
|
[2024-12-28 20:06:24,004][00626] Avg episode reward: [(0, '2.987')] |
|
[2024-12-28 20:06:24,714][04135] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:25,360][04127] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:25,493][04114] Signal inference workers to stop experience collection... |
|
[2024-12-28 20:06:25,507][04128] InferenceWorker_p0-w0: stopping experience collection |
|
[2024-12-28 20:06:25,639][04134] Decorrelating experience for 96 frames... |
|
[2024-12-28 20:06:27,613][04114] Signal inference workers to resume experience collection... |
|
[2024-12-28 20:06:27,613][04128] InferenceWorker_p0-w0: resuming experience collection |
|
[2024-12-28 20:06:28,993][00626] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 262.3. Samples: 3934. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
|
[2024-12-28 20:06:28,996][00626] Avg episode reward: [(0, '3.519')] |
|
[2024-12-28 20:06:33,994][00626] Fps is (10 sec: 3688.2, 60 sec: 1843.1, 300 sec: 1843.1). Total num frames: 36864. Throughput: 0: 382.1. Samples: 7642. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:06:33,997][00626] Avg episode reward: [(0, '4.087')] |
|
[2024-12-28 20:06:34,724][04128] Updated weights for policy 0, policy_version 10 (0.0019) |
|
[2024-12-28 20:06:38,993][00626] Fps is (10 sec: 3686.3, 60 sec: 1966.1, 300 sec: 1966.1). Total num frames: 49152. Throughput: 0: 508.1. Samples: 12702. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:06:38,996][00626] Avg episode reward: [(0, '4.404')] |
|
[2024-12-28 20:06:43,993][00626] Fps is (10 sec: 3277.2, 60 sec: 2321.1, 300 sec: 2321.1). Total num frames: 69632. Throughput: 0: 593.5. Samples: 17804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:06:43,995][00626] Avg episode reward: [(0, '4.536')] |
|
[2024-12-28 20:06:46,646][04128] Updated weights for policy 0, policy_version 20 (0.0014) |
|
[2024-12-28 20:06:48,994][00626] Fps is (10 sec: 4096.0, 60 sec: 2574.6, 300 sec: 2574.6). Total num frames: 90112. Throughput: 0: 606.5. Samples: 21228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:06:48,996][00626] Avg episode reward: [(0, '4.387')] |
|
[2024-12-28 20:06:53,993][00626] Fps is (10 sec: 4096.0, 60 sec: 2764.8, 300 sec: 2764.8). Total num frames: 110592. Throughput: 0: 686.6. Samples: 27464. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:06:53,997][00626] Avg episode reward: [(0, '4.546')] |
|
[2024-12-28 20:06:54,003][04114] Saving new best policy, reward=4.546! |
|
[2024-12-28 20:06:57,863][04128] Updated weights for policy 0, policy_version 30 (0.0021) |
|
[2024-12-28 20:06:58,993][00626] Fps is (10 sec: 3686.5, 60 sec: 2821.7, 300 sec: 2821.7). Total num frames: 126976. Throughput: 0: 718.3. Samples: 32322. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:06:58,996][00626] Avg episode reward: [(0, '4.510')] |
|
[2024-12-28 20:07:03,993][00626] Fps is (10 sec: 4095.9, 60 sec: 3031.0, 300 sec: 3031.0). Total num frames: 151552. Throughput: 0: 796.6. Samples: 35858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:07:03,997][00626] Avg episode reward: [(0, '4.382')] |
|
[2024-12-28 20:07:06,499][04128] Updated weights for policy 0, policy_version 40 (0.0020) |
|
[2024-12-28 20:07:08,993][00626] Fps is (10 sec: 4505.6, 60 sec: 3127.9, 300 sec: 3127.9). Total num frames: 172032. Throughput: 0: 911.7. Samples: 42912. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:07:08,996][00626] Avg episode reward: [(0, '4.586')] |
|
[2024-12-28 20:07:09,011][04114] Saving new best policy, reward=4.586! |
|
[2024-12-28 20:07:13,993][00626] Fps is (10 sec: 3276.9, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 184320. Throughput: 0: 956.4. Samples: 46974. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:07:14,002][00626] Avg episode reward: [(0, '4.583')] |
|
[2024-12-28 20:07:18,352][04128] Updated weights for policy 0, policy_version 50 (0.0029) |
|
[2024-12-28 20:07:18,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 3150.8). Total num frames: 204800. Throughput: 0: 939.6. Samples: 49922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:07:18,998][00626] Avg episode reward: [(0, '4.606')] |
|
[2024-12-28 20:07:19,007][04114] Saving new best policy, reward=4.606! |
|
[2024-12-28 20:07:23,993][00626] Fps is (10 sec: 4505.6, 60 sec: 3823.3, 300 sec: 3276.8). Total num frames: 229376. Throughput: 0: 984.0. Samples: 56984. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:07:23,995][00626] Avg episode reward: [(0, '4.493')] |
|
[2024-12-28 20:07:27,862][04128] Updated weights for policy 0, policy_version 60 (0.0014) |
|
[2024-12-28 20:07:28,993][00626] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3276.8). Total num frames: 245760. Throughput: 0: 995.2. Samples: 62590. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:07:28,996][00626] Avg episode reward: [(0, '4.267')] |
|
[2024-12-28 20:07:33,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3823.0, 300 sec: 3328.0). Total num frames: 266240. Throughput: 0: 970.1. Samples: 64884. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:07:33,998][00626] Avg episode reward: [(0, '4.282')] |
|
[2024-12-28 20:07:38,200][04128] Updated weights for policy 0, policy_version 70 (0.0018) |
|
[2024-12-28 20:07:38,993][00626] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3373.2). Total num frames: 286720. Throughput: 0: 987.6. Samples: 71908. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:07:38,999][00626] Avg episode reward: [(0, '4.478')] |
|
[2024-12-28 20:07:39,007][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000070_286720.pth... |
|
[2024-12-28 20:07:43,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3413.3). Total num frames: 307200. Throughput: 0: 1015.3. Samples: 78010. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:07:43,999][00626] Avg episode reward: [(0, '4.504')] |
|
[2024-12-28 20:07:48,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3406.1). Total num frames: 323584. Throughput: 0: 984.2. Samples: 80146. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:07:48,996][00626] Avg episode reward: [(0, '4.638')] |
|
[2024-12-28 20:07:49,002][04114] Saving new best policy, reward=4.638! |
|
[2024-12-28 20:07:49,731][04128] Updated weights for policy 0, policy_version 80 (0.0021) |
|
[2024-12-28 20:07:53,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3440.6). Total num frames: 344064. Throughput: 0: 961.3. Samples: 86170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:07:53,998][00626] Avg episode reward: [(0, '4.606')] |
|
[2024-12-28 20:07:58,514][04128] Updated weights for policy 0, policy_version 90 (0.0022) |
|
[2024-12-28 20:07:58,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3510.9). Total num frames: 368640. Throughput: 0: 1029.9. Samples: 93318. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:07:59,000][00626] Avg episode reward: [(0, '4.455')] |
|
[2024-12-28 20:08:03,994][00626] Fps is (10 sec: 4095.5, 60 sec: 3891.1, 300 sec: 3500.2). Total num frames: 385024. Throughput: 0: 1018.6. Samples: 95762. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2024-12-28 20:08:03,998][00626] Avg episode reward: [(0, '4.411')] |
|
[2024-12-28 20:08:08,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3526.1). Total num frames: 405504. Throughput: 0: 976.0. Samples: 100904. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:08:08,996][00626] Avg episode reward: [(0, '4.462')] |
|
[2024-12-28 20:08:09,598][04128] Updated weights for policy 0, policy_version 100 (0.0033) |
|
[2024-12-28 20:08:13,993][00626] Fps is (10 sec: 4096.5, 60 sec: 4027.7, 300 sec: 3549.9). Total num frames: 425984. Throughput: 0: 1006.2. Samples: 107868. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:08:13,995][00626] Avg episode reward: [(0, '4.414')] |
|
[2024-12-28 20:08:18,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3571.7). Total num frames: 446464. Throughput: 0: 1025.2. Samples: 111016. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:08:18,997][00626] Avg episode reward: [(0, '4.503')] |
|
[2024-12-28 20:08:19,749][04128] Updated weights for policy 0, policy_version 110 (0.0029) |
|
[2024-12-28 20:08:23,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3560.4). Total num frames: 462848. Throughput: 0: 962.4. Samples: 115214. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:08:23,997][00626] Avg episode reward: [(0, '4.496')] |
|
[2024-12-28 20:08:28,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3580.2). Total num frames: 483328. Throughput: 0: 973.5. Samples: 121816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:08:28,996][00626] Avg episode reward: [(0, '4.341')] |
|
[2024-12-28 20:08:30,100][04128] Updated weights for policy 0, policy_version 120 (0.0015) |
|
[2024-12-28 20:08:33,994][00626] Fps is (10 sec: 4505.1, 60 sec: 4027.7, 300 sec: 3627.9). Total num frames: 507904. Throughput: 0: 1004.1. Samples: 125330. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:08:33,997][00626] Avg episode reward: [(0, '4.491')] |
|
[2024-12-28 20:08:38,994][00626] Fps is (10 sec: 3686.2, 60 sec: 3891.2, 300 sec: 3587.5). Total num frames: 520192. Throughput: 0: 982.6. Samples: 130386. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:08:38,998][00626] Avg episode reward: [(0, '4.452')] |
|
[2024-12-28 20:08:41,468][04128] Updated weights for policy 0, policy_version 130 (0.0022) |
|
[2024-12-28 20:08:43,997][00626] Fps is (10 sec: 3275.8, 60 sec: 3890.9, 300 sec: 3604.4). Total num frames: 540672. Throughput: 0: 955.8. Samples: 136334. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:08:43,999][00626] Avg episode reward: [(0, '4.678')] |
|
[2024-12-28 20:08:44,009][04114] Saving new best policy, reward=4.678! |
|
[2024-12-28 20:08:48,993][00626] Fps is (10 sec: 4505.9, 60 sec: 4027.7, 300 sec: 3646.8). Total num frames: 565248. Throughput: 0: 978.6. Samples: 139798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:08:49,001][00626] Avg episode reward: [(0, '4.635')] |
|
[2024-12-28 20:08:50,464][04128] Updated weights for policy 0, policy_version 140 (0.0024) |
|
[2024-12-28 20:08:53,993][00626] Fps is (10 sec: 4097.7, 60 sec: 3959.5, 300 sec: 3635.2). Total num frames: 581632. Throughput: 0: 992.9. Samples: 145586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:08:53,998][00626] Avg episode reward: [(0, '4.420')] |
|
[2024-12-28 20:08:58,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3624.3). Total num frames: 598016. Throughput: 0: 949.4. Samples: 150592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:08:58,995][00626] Avg episode reward: [(0, '4.384')] |
|
[2024-12-28 20:09:01,728][04128] Updated weights for policy 0, policy_version 150 (0.0017) |
|
[2024-12-28 20:09:03,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3662.3). Total num frames: 622592. Throughput: 0: 961.2. Samples: 154270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:09:03,995][00626] Avg episode reward: [(0, '4.394')] |
|
[2024-12-28 20:09:08,997][00626] Fps is (10 sec: 4913.2, 60 sec: 4027.5, 300 sec: 3698.0). Total num frames: 647168. Throughput: 0: 1028.8. Samples: 161512. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:09:09,003][00626] Avg episode reward: [(0, '4.528')] |
|
[2024-12-28 20:09:11,666][04128] Updated weights for policy 0, policy_version 160 (0.0026) |
|
[2024-12-28 20:09:13,996][00626] Fps is (10 sec: 3685.5, 60 sec: 3891.1, 300 sec: 3663.6). Total num frames: 659456. Throughput: 0: 977.1. Samples: 165788. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-12-28 20:09:14,004][00626] Avg episode reward: [(0, '4.738')] |
|
[2024-12-28 20:09:14,007][04114] Saving new best policy, reward=4.738! |
|
[2024-12-28 20:09:18,993][00626] Fps is (10 sec: 3278.1, 60 sec: 3891.2, 300 sec: 3675.3). Total num frames: 679936. Throughput: 0: 971.8. Samples: 169058. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:09:18,997][00626] Avg episode reward: [(0, '4.797')] |
|
[2024-12-28 20:09:19,010][04114] Saving new best policy, reward=4.797! |
|
[2024-12-28 20:09:21,662][04128] Updated weights for policy 0, policy_version 170 (0.0019) |
|
[2024-12-28 20:09:23,993][00626] Fps is (10 sec: 4506.6, 60 sec: 4027.7, 300 sec: 3708.0). Total num frames: 704512. Throughput: 0: 1014.9. Samples: 176054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:09:23,998][00626] Avg episode reward: [(0, '4.691')] |
|
[2024-12-28 20:09:29,000][00626] Fps is (10 sec: 4093.2, 60 sec: 3959.0, 300 sec: 3696.8). Total num frames: 720896. Throughput: 0: 993.4. Samples: 181040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:09:29,005][00626] Avg episode reward: [(0, '4.501')] |
|
[2024-12-28 20:09:32,952][04128] Updated weights for policy 0, policy_version 180 (0.0032) |
|
[2024-12-28 20:09:33,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.3, 300 sec: 3706.9). Total num frames: 741376. Throughput: 0: 971.2. Samples: 183504. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:09:34,000][00626] Avg episode reward: [(0, '4.572')] |
|
[2024-12-28 20:09:38,993][00626] Fps is (10 sec: 4508.7, 60 sec: 4096.0, 300 sec: 3736.4). Total num frames: 765952. Throughput: 0: 1008.7. Samples: 190976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:09:39,000][00626] Avg episode reward: [(0, '4.640')] |
|
[2024-12-28 20:09:39,008][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000187_765952.pth... |
|
[2024-12-28 20:09:41,483][04128] Updated weights for policy 0, policy_version 190 (0.0028) |
|
[2024-12-28 20:09:43,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4028.0, 300 sec: 3725.4). Total num frames: 782336. Throughput: 0: 1027.8. Samples: 196842. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:09:43,999][00626] Avg episode reward: [(0, '4.966')] |
|
[2024-12-28 20:09:44,006][04114] Saving new best policy, reward=4.966! |
|
[2024-12-28 20:09:48,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3715.0). Total num frames: 798720. Throughput: 0: 990.7. Samples: 198852. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:09:48,995][00626] Avg episode reward: [(0, '4.919')] |
|
[2024-12-28 20:09:52,870][04128] Updated weights for policy 0, policy_version 200 (0.0031) |
|
[2024-12-28 20:09:53,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3742.3). Total num frames: 823296. Throughput: 0: 970.6. Samples: 205184. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:09:53,995][00626] Avg episode reward: [(0, '4.858')] |
|
[2024-12-28 20:09:58,994][00626] Fps is (10 sec: 4505.5, 60 sec: 4096.0, 300 sec: 3750.1). Total num frames: 843776. Throughput: 0: 1027.5. Samples: 212024. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:09:59,003][00626] Avg episode reward: [(0, '4.715')] |
|
[2024-12-28 20:10:03,564][04128] Updated weights for policy 0, policy_version 210 (0.0027) |
|
[2024-12-28 20:10:03,994][00626] Fps is (10 sec: 3686.2, 60 sec: 3959.4, 300 sec: 3739.8). Total num frames: 860160. Throughput: 0: 1004.2. Samples: 214246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:10:03,996][00626] Avg episode reward: [(0, '4.860')] |
|
[2024-12-28 20:10:08,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3891.5, 300 sec: 3747.4). Total num frames: 880640. Throughput: 0: 979.8. Samples: 220146. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:10:08,996][00626] Avg episode reward: [(0, '4.764')] |
|
[2024-12-28 20:10:12,687][04128] Updated weights for policy 0, policy_version 220 (0.0027) |
|
[2024-12-28 20:10:13,993][00626] Fps is (10 sec: 4505.9, 60 sec: 4096.2, 300 sec: 3771.7). Total num frames: 905216. Throughput: 0: 1025.1. Samples: 227164. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:10:13,996][00626] Avg episode reward: [(0, '4.680')] |
|
[2024-12-28 20:10:18,994][00626] Fps is (10 sec: 4095.7, 60 sec: 4027.7, 300 sec: 3761.6). Total num frames: 921600. Throughput: 0: 1030.5. Samples: 229878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:10:19,003][00626] Avg episode reward: [(0, '4.875')] |
|
[2024-12-28 20:10:23,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3751.9). Total num frames: 937984. Throughput: 0: 968.8. Samples: 234570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:10:23,996][00626] Avg episode reward: [(0, '5.033')] |
|
[2024-12-28 20:10:23,999][04114] Saving new best policy, reward=5.033! |
|
[2024-12-28 20:10:24,238][04128] Updated weights for policy 0, policy_version 230 (0.0015) |
|
[2024-12-28 20:10:28,994][00626] Fps is (10 sec: 4096.1, 60 sec: 4028.2, 300 sec: 3774.7). Total num frames: 962560. Throughput: 0: 996.4. Samples: 241682. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:10:28,996][00626] Avg episode reward: [(0, '5.156')] |
|
[2024-12-28 20:10:29,004][04114] Saving new best policy, reward=5.156! |
|
[2024-12-28 20:10:33,365][04128] Updated weights for policy 0, policy_version 240 (0.0028) |
|
[2024-12-28 20:10:33,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3780.9). Total num frames: 983040. Throughput: 0: 1030.2. Samples: 245212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:10:34,006][00626] Avg episode reward: [(0, '5.235')] |
|
[2024-12-28 20:10:34,016][04114] Saving new best policy, reward=5.235! |
|
[2024-12-28 20:10:38,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3771.4). Total num frames: 999424. Throughput: 0: 986.5. Samples: 249578. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:10:38,997][00626] Avg episode reward: [(0, '5.144')] |
|
[2024-12-28 20:10:43,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3777.4). Total num frames: 1019904. Throughput: 0: 982.4. Samples: 256234. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:10:44,000][00626] Avg episode reward: [(0, '4.976')] |
|
[2024-12-28 20:10:44,198][04128] Updated weights for policy 0, policy_version 250 (0.0036) |
|
[2024-12-28 20:10:48,996][00626] Fps is (10 sec: 4504.4, 60 sec: 4095.8, 300 sec: 3798.1). Total num frames: 1044480. Throughput: 0: 1010.2. Samples: 259708. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:10:48,998][00626] Avg episode reward: [(0, '4.778')] |
|
[2024-12-28 20:10:53,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3774.2). Total num frames: 1056768. Throughput: 0: 989.8. Samples: 264686. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:10:53,996][00626] Avg episode reward: [(0, '4.930')] |
|
[2024-12-28 20:10:55,797][04128] Updated weights for policy 0, policy_version 260 (0.0032) |
|
[2024-12-28 20:10:58,993][00626] Fps is (10 sec: 3277.7, 60 sec: 3891.2, 300 sec: 3779.8). Total num frames: 1077248. Throughput: 0: 962.9. Samples: 270496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:10:59,000][00626] Avg episode reward: [(0, '5.291')] |
|
[2024-12-28 20:10:59,009][04114] Saving new best policy, reward=5.291! |
|
[2024-12-28 20:11:03,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.8, 300 sec: 3799.4). Total num frames: 1101824. Throughput: 0: 982.5. Samples: 274088. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:11:04,000][00626] Avg episode reward: [(0, '5.100')] |
|
[2024-12-28 20:11:04,287][04128] Updated weights for policy 0, policy_version 270 (0.0029) |
|
[2024-12-28 20:11:08,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3804.4). Total num frames: 1122304. Throughput: 0: 1016.5. Samples: 280314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:11:08,996][00626] Avg episode reward: [(0, '5.108')] |
|
[2024-12-28 20:11:13,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 1138688. Throughput: 0: 967.4. Samples: 285214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:11:13,995][00626] Avg episode reward: [(0, '5.372')] |
|
[2024-12-28 20:11:14,001][04114] Saving new best policy, reward=5.372! |
|
[2024-12-28 20:11:15,721][04128] Updated weights for policy 0, policy_version 280 (0.0023) |
|
[2024-12-28 20:11:18,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3929.5). Total num frames: 1159168. Throughput: 0: 965.5. Samples: 288658. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:11:19,000][00626] Avg episode reward: [(0, '5.579')] |
|
[2024-12-28 20:11:19,008][04114] Saving new best policy, reward=5.579! |
|
[2024-12-28 20:11:23,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 1179648. Throughput: 0: 1020.2. Samples: 295488. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:11:23,999][00626] Avg episode reward: [(0, '5.863')] |
|
[2024-12-28 20:11:24,001][04114] Saving new best policy, reward=5.863! |
|
[2024-12-28 20:11:25,864][04128] Updated weights for policy 0, policy_version 290 (0.0028) |
|
[2024-12-28 20:11:28,995][00626] Fps is (10 sec: 3685.6, 60 sec: 3891.1, 300 sec: 3929.4). Total num frames: 1196032. Throughput: 0: 967.8. Samples: 299786. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:11:28,998][00626] Avg episode reward: [(0, '5.790')] |
|
[2024-12-28 20:11:33,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1220608. Throughput: 0: 967.7. Samples: 303252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:11:33,998][00626] Avg episode reward: [(0, '5.553')] |
|
[2024-12-28 20:11:35,483][04128] Updated weights for policy 0, policy_version 300 (0.0024) |
|
[2024-12-28 20:11:38,993][00626] Fps is (10 sec: 4916.2, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 1245184. Throughput: 0: 1024.2. Samples: 310774. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:11:39,001][00626] Avg episode reward: [(0, '5.791')] |
|
[2024-12-28 20:11:39,012][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000304_1245184.pth... |
|
[2024-12-28 20:11:39,136][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000070_286720.pth |
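The save/remove pair above shows a rolling checkpoint policy: each new `checkpoint_<version>_<frames>.pth` is written, then the oldest checkpoint beyond the retention limit is deleted, so only the most recent few survive (alongside the separate best-policy saves). A minimal sketch of that rotation; the function name and `keep_last` parameter are hypothetical, with the file-name format inferred from the log:

```python
# Hypothetical keep-last-N checkpoint rotation (not Sample Factory's code).
# Name format checkpoint_<9-digit version>_<frames>.pth inferred from the log.
import glob
import os
import torch

def save_with_rotation(model, ckpt_dir, policy_version, env_frames, keep_last=2):
    os.makedirs(ckpt_dir, exist_ok=True)
    name = f"checkpoint_{policy_version:09d}_{env_frames}.pth"
    torch.save(model.state_dict(), os.path.join(ckpt_dir, name))
    # lexicographic sort is chronological because the version is zero-padded
    ckpts = sorted(glob.glob(os.path.join(ckpt_dir, "checkpoint_*.pth")))
    for old in ckpts[:-keep_last]:
        os.remove(old)
```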
|
[2024-12-28 20:11:43,995][00626] Fps is (10 sec: 3685.9, 60 sec: 3959.4, 300 sec: 3957.1). Total num frames: 1257472. Throughput: 0: 1001.6. Samples: 315568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:11:44,001][00626] Avg episode reward: [(0, '5.923')] |
|
[2024-12-28 20:11:44,004][04114] Saving new best policy, reward=5.923! |
|
[2024-12-28 20:11:46,878][04128] Updated weights for policy 0, policy_version 310 (0.0022) |
|
[2024-12-28 20:11:48,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.4, 300 sec: 3957.2). Total num frames: 1277952. Throughput: 0: 975.5. Samples: 317986. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:11:48,999][00626] Avg episode reward: [(0, '5.767')] |
|
[2024-12-28 20:11:53,993][00626] Fps is (10 sec: 4506.2, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 1302528. Throughput: 0: 993.6. Samples: 325026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:11:53,998][00626] Avg episode reward: [(0, '5.770')] |
|
[2024-12-28 20:11:55,399][04128] Updated weights for policy 0, policy_version 320 (0.0021) |
|
[2024-12-28 20:11:58,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3957.2). Total num frames: 1318912. Throughput: 0: 1017.4. Samples: 330998. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:11:58,997][00626] Avg episode reward: [(0, '6.076')] |
|
[2024-12-28 20:11:59,018][04114] Saving new best policy, reward=6.076! |
|
[2024-12-28 20:12:03,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 1335296. Throughput: 0: 988.7. Samples: 333150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:12:03,996][00626] Avg episode reward: [(0, '6.219')] |
|
[2024-12-28 20:12:03,999][04114] Saving new best policy, reward=6.219! |
|
[2024-12-28 20:12:06,813][04128] Updated weights for policy 0, policy_version 330 (0.0023) |
|
[2024-12-28 20:12:08,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1359872. Throughput: 0: 983.3. Samples: 339736. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:09,001][00626] Avg episode reward: [(0, '6.870')] |
|
[2024-12-28 20:12:09,011][04114] Saving new best policy, reward=6.870! |
|
[2024-12-28 20:12:13,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1380352. Throughput: 0: 1036.6. Samples: 346432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:12:13,995][00626] Avg episode reward: [(0, '7.367')] |
|
[2024-12-28 20:12:13,999][04114] Saving new best policy, reward=7.367! |
|
[2024-12-28 20:12:17,395][04128] Updated weights for policy 0, policy_version 340 (0.0032) |
|
[2024-12-28 20:12:18,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 1396736. Throughput: 0: 1002.9. Samples: 348384. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:18,996][00626] Avg episode reward: [(0, '7.897')] |
|
[2024-12-28 20:12:19,007][04114] Saving new best policy, reward=7.897! |
|
[2024-12-28 20:12:23,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1417216. Throughput: 0: 957.2. Samples: 353850. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:23,999][00626] Avg episode reward: [(0, '7.238')] |
|
[2024-12-28 20:12:27,039][04128] Updated weights for policy 0, policy_version 350 (0.0021) |
|
[2024-12-28 20:12:28,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.1, 300 sec: 3984.9). Total num frames: 1441792. Throughput: 0: 1010.9. Samples: 361056. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:28,996][00626] Avg episode reward: [(0, '6.718')] |
|
[2024-12-28 20:12:33,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1458176. Throughput: 0: 1022.0. Samples: 363978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:33,998][00626] Avg episode reward: [(0, '7.215')] |
|
[2024-12-28 20:12:38,014][04128] Updated weights for policy 0, policy_version 360 (0.0020) |
|
[2024-12-28 20:12:38,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 1478656. Throughput: 0: 975.9. Samples: 368940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:39,001][00626] Avg episode reward: [(0, '8.068')] |
|
[2024-12-28 20:12:39,008][04114] Saving new best policy, reward=8.068! |
|
[2024-12-28 20:12:43,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 3984.9). Total num frames: 1499136. Throughput: 0: 996.9. Samples: 375858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:44,001][00626] Avg episode reward: [(0, '8.309')] |
|
[2024-12-28 20:12:44,004][04114] Saving new best policy, reward=8.309! |
|
[2024-12-28 20:12:47,013][04128] Updated weights for policy 0, policy_version 370 (0.0014) |
|
[2024-12-28 20:12:48,993][00626] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1519616. Throughput: 0: 1026.2. Samples: 379328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:48,996][00626] Avg episode reward: [(0, '8.581')] |
|
[2024-12-28 20:12:49,006][04114] Saving new best policy, reward=8.581! |
|
[2024-12-28 20:12:53,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 1531904. Throughput: 0: 974.5. Samples: 383588. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:54,000][00626] Avg episode reward: [(0, '8.399')] |
|
[2024-12-28 20:12:58,204][04128] Updated weights for policy 0, policy_version 380 (0.0019) |
|
[2024-12-28 20:12:58,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3971.1). Total num frames: 1556480. Throughput: 0: 975.6. Samples: 390334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:12:58,996][00626] Avg episode reward: [(0, '8.239')] |
|
[2024-12-28 20:13:03,993][00626] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 1581056. Throughput: 0: 1014.6. Samples: 394042. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:13:03,997][00626] Avg episode reward: [(0, '7.935')] |
|
[2024-12-28 20:13:08,223][04128] Updated weights for policy 0, policy_version 390 (0.0029) |
|
[2024-12-28 20:13:08,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1597440. Throughput: 0: 1016.3. Samples: 399584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:13:08,996][00626] Avg episode reward: [(0, '7.879')] |
|
[2024-12-28 20:13:13,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1617920. Throughput: 0: 980.3. Samples: 405168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:13:13,996][00626] Avg episode reward: [(0, '8.804')] |
|
[2024-12-28 20:13:14,002][04114] Saving new best policy, reward=8.804! |
|
[2024-12-28 20:13:18,114][04128] Updated weights for policy 0, policy_version 400 (0.0019) |
|
[2024-12-28 20:13:18,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1638400. Throughput: 0: 993.1. Samples: 408666. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:13:18,995][00626] Avg episode reward: [(0, '8.941')] |
|
[2024-12-28 20:13:19,027][04114] Saving new best policy, reward=8.941! |
|
[2024-12-28 20:13:23,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1658880. Throughput: 0: 1020.6. Samples: 414868. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:13:23,996][00626] Avg episode reward: [(0, '9.625')] |
|
[2024-12-28 20:13:23,999][04114] Saving new best policy, reward=9.625! |
|
[2024-12-28 20:13:28,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 1675264. Throughput: 0: 972.1. Samples: 419604. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:13:28,995][00626] Avg episode reward: [(0, '8.896')] |
|
[2024-12-28 20:13:29,503][04128] Updated weights for policy 0, policy_version 410 (0.0031) |
|
[2024-12-28 20:13:33,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 1699840. Throughput: 0: 976.1. Samples: 423252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:13:33,997][00626] Avg episode reward: [(0, '10.182')] |
|
[2024-12-28 20:13:34,002][04114] Saving new best policy, reward=10.182! |
|
[2024-12-28 20:13:37,781][04128] Updated weights for policy 0, policy_version 420 (0.0016) |
|
[2024-12-28 20:13:38,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3998.9). Total num frames: 1720320. Throughput: 0: 1042.7. Samples: 430508. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:13:39,001][00626] Avg episode reward: [(0, '10.937')] |
|
[2024-12-28 20:13:39,016][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000420_1720320.pth... |
|
[2024-12-28 20:13:39,168][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000187_765952.pth |
|
[2024-12-28 20:13:39,189][04114] Saving new best policy, reward=10.937! |
|
[2024-12-28 20:13:43,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 1736704. Throughput: 0: 983.8. Samples: 434606. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:13:44,003][00626] Avg episode reward: [(0, '10.956')] |
|
[2024-12-28 20:13:44,005][04114] Saving new best policy, reward=10.956! |
|
[2024-12-28 20:13:48,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1757184. Throughput: 0: 967.6. Samples: 437582. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:13:48,998][00626] Avg episode reward: [(0, '10.659')] |
|
[2024-12-28 20:13:49,706][04128] Updated weights for policy 0, policy_version 430 (0.0024) |
|
[2024-12-28 20:13:53,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4012.7). Total num frames: 1781760. Throughput: 0: 1000.0. Samples: 444582. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-12-28 20:13:53,998][00626] Avg episode reward: [(0, '9.644')] |
|
[2024-12-28 20:13:58,996][00626] Fps is (10 sec: 3685.5, 60 sec: 3959.3, 300 sec: 3971.0). Total num frames: 1794048. Throughput: 0: 989.5. Samples: 449698. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:13:58,998][00626] Avg episode reward: [(0, '10.527')] |
|
[2024-12-28 20:14:01,002][04128] Updated weights for policy 0, policy_version 440 (0.0014) |
|
[2024-12-28 20:14:03,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 1814528. Throughput: 0: 957.7. Samples: 451764. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:14:03,997][00626] Avg episode reward: [(0, '10.896')] |
|
[2024-12-28 20:14:08,993][00626] Fps is (10 sec: 4097.0, 60 sec: 3959.5, 300 sec: 3985.0). Total num frames: 1835008. Throughput: 0: 974.0. Samples: 458696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:08,997][00626] Avg episode reward: [(0, '12.336')] |
|
[2024-12-28 20:14:09,005][04114] Saving new best policy, reward=12.336! |
|
[2024-12-28 20:14:10,227][04128] Updated weights for policy 0, policy_version 450 (0.0032) |
|
[2024-12-28 20:14:13,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1855488. Throughput: 0: 1005.1. Samples: 464834. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:14:14,000][00626] Avg episode reward: [(0, '12.641')] |
|
[2024-12-28 20:14:14,005][04114] Saving new best policy, reward=12.641! |
|
[2024-12-28 20:14:18,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 1867776. Throughput: 0: 967.6. Samples: 466796. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:14:18,998][00626] Avg episode reward: [(0, '12.583')] |
|
[2024-12-28 20:14:21,767][04128] Updated weights for policy 0, policy_version 460 (0.0017) |
|
[2024-12-28 20:14:23,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3971.1). Total num frames: 1892352. Throughput: 0: 941.5. Samples: 472874. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:14:23,996][00626] Avg episode reward: [(0, '13.000')] |
|
[2024-12-28 20:14:24,001][04114] Saving new best policy, reward=13.000! |
|
[2024-12-28 20:14:28,993][00626] Fps is (10 sec: 4915.2, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 1916928. Throughput: 0: 1008.0. Samples: 479966. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:14:28,996][00626] Avg episode reward: [(0, '13.217')] |
|
[2024-12-28 20:14:29,015][04114] Saving new best policy, reward=13.217! |
|
[2024-12-28 20:14:31,323][04128] Updated weights for policy 0, policy_version 470 (0.0042) |
|
[2024-12-28 20:14:33,994][00626] Fps is (10 sec: 3686.1, 60 sec: 3822.9, 300 sec: 3943.3). Total num frames: 1929216. Throughput: 0: 991.2. Samples: 482188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:34,001][00626] Avg episode reward: [(0, '13.269')] |
|
[2024-12-28 20:14:34,092][04114] Saving new best policy, reward=13.269! |
|
[2024-12-28 20:14:38,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 1949696. Throughput: 0: 953.8. Samples: 487502. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:38,996][00626] Avg episode reward: [(0, '13.372')] |
|
[2024-12-28 20:14:39,009][04114] Saving new best policy, reward=13.372! |
|
[2024-12-28 20:14:41,651][04128] Updated weights for policy 0, policy_version 480 (0.0017) |
|
[2024-12-28 20:14:43,993][00626] Fps is (10 sec: 4506.0, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 1974272. Throughput: 0: 1001.3. Samples: 494754. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:43,995][00626] Avg episode reward: [(0, '15.290')] |
|
[2024-12-28 20:14:44,003][04114] Saving new best policy, reward=15.290! |
|
[2024-12-28 20:14:48,997][00626] Fps is (10 sec: 4504.0, 60 sec: 3959.2, 300 sec: 3971.0). Total num frames: 1994752. Throughput: 0: 1023.9. Samples: 497842. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:49,001][00626] Avg episode reward: [(0, '14.741')] |
|
[2024-12-28 20:14:53,195][04128] Updated weights for policy 0, policy_version 490 (0.0023) |
|
[2024-12-28 20:14:53,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3943.3). Total num frames: 2007040. Throughput: 0: 962.7. Samples: 502016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:53,996][00626] Avg episode reward: [(0, '14.893')] |
|
[2024-12-28 20:14:58,994][00626] Fps is (10 sec: 3687.6, 60 sec: 3959.6, 300 sec: 3971.0). Total num frames: 2031616. Throughput: 0: 982.9. Samples: 509066. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:14:58,996][00626] Avg episode reward: [(0, '15.406')] |
|
[2024-12-28 20:14:59,005][04114] Saving new best policy, reward=15.406! |
|
[2024-12-28 20:15:01,906][04128] Updated weights for policy 0, policy_version 500 (0.0043) |
|
[2024-12-28 20:15:03,993][00626] Fps is (10 sec: 4915.1, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 2056192. Throughput: 0: 1015.9. Samples: 512510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:15:03,999][00626] Avg episode reward: [(0, '14.496')] |
|
[2024-12-28 20:15:08,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2068480. Throughput: 0: 993.6. Samples: 517588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:15:09,001][00626] Avg episode reward: [(0, '15.092')] |
|
[2024-12-28 20:15:12,912][04128] Updated weights for policy 0, policy_version 510 (0.0018) |
|
[2024-12-28 20:15:13,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 2093056. Throughput: 0: 976.9. Samples: 523928. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:15:13,995][00626] Avg episode reward: [(0, '16.064')] |
|
[2024-12-28 20:15:13,998][04114] Saving new best policy, reward=16.064! |
|
[2024-12-28 20:15:18,993][00626] Fps is (10 sec: 4505.7, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 2113536. Throughput: 0: 1005.0. Samples: 527412. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:15:18,996][00626] Avg episode reward: [(0, '16.952')] |
|
[2024-12-28 20:15:19,007][04114] Saving new best policy, reward=16.952! |
|
[2024-12-28 20:15:23,054][04128] Updated weights for policy 0, policy_version 520 (0.0022) |
|
[2024-12-28 20:15:23,994][00626] Fps is (10 sec: 3686.0, 60 sec: 3959.4, 300 sec: 3957.1). Total num frames: 2129920. Throughput: 0: 1010.5. Samples: 532974. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:15:24,000][00626] Avg episode reward: [(0, '16.757')] |
|
[2024-12-28 20:15:28,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2150400. Throughput: 0: 965.7. Samples: 538212. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:15:29,000][00626] Avg episode reward: [(0, '16.662')] |
|
[2024-12-28 20:15:33,046][04128] Updated weights for policy 0, policy_version 530 (0.0031) |
|
[2024-12-28 20:15:33,993][00626] Fps is (10 sec: 4506.1, 60 sec: 4096.1, 300 sec: 3984.9). Total num frames: 2174976. Throughput: 0: 977.0. Samples: 541804. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:15:33,995][00626] Avg episode reward: [(0, '17.075')] |
|
[2024-12-28 20:15:33,998][04114] Saving new best policy, reward=17.075! |
|
[2024-12-28 20:15:38,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 2195456. Throughput: 0: 1038.5. Samples: 548750. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:15:38,999][00626] Avg episode reward: [(0, '17.111')] |
|
[2024-12-28 20:15:39,011][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000536_2195456.pth... |
|
[2024-12-28 20:15:39,179][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000304_1245184.pth |
|
[2024-12-28 20:15:39,208][04114] Saving new best policy, reward=17.111! |
|
[2024-12-28 20:15:43,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2207744. Throughput: 0: 978.7. Samples: 553106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:15:43,996][00626] Avg episode reward: [(0, '17.159')] |
|
[2024-12-28 20:15:43,999][04114] Saving new best policy, reward=17.159! |
|
[2024-12-28 20:15:44,246][04128] Updated weights for policy 0, policy_version 540 (0.0027) |
|
[2024-12-28 20:15:48,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.7, 300 sec: 3984.9). Total num frames: 2232320. Throughput: 0: 975.9. Samples: 556426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:15:49,000][00626] Avg episode reward: [(0, '19.266')] |
|
[2024-12-28 20:15:49,008][04114] Saving new best policy, reward=19.266! |
|
[2024-12-28 20:15:53,136][04128] Updated weights for policy 0, policy_version 550 (0.0023) |
|
[2024-12-28 20:15:53,994][00626] Fps is (10 sec: 4505.1, 60 sec: 4095.9, 300 sec: 3984.9). Total num frames: 2252800. Throughput: 0: 1016.7. Samples: 563342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:15:54,004][00626] Avg episode reward: [(0, '19.979')] |
|
[2024-12-28 20:15:54,008][04114] Saving new best policy, reward=19.979! |
|
[2024-12-28 20:15:58,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2269184. Throughput: 0: 978.2. Samples: 567946. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:15:58,998][00626] Avg episode reward: [(0, '18.984')] |
|
[2024-12-28 20:16:03,993][00626] Fps is (10 sec: 3686.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2289664. Throughput: 0: 961.8. Samples: 570692. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:16:03,996][00626] Avg episode reward: [(0, '19.771')] |
|
[2024-12-28 20:16:04,387][04128] Updated weights for policy 0, policy_version 560 (0.0058) |
|
[2024-12-28 20:16:08,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 2314240. Throughput: 0: 1002.6. Samples: 578090. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:16:09,001][00626] Avg episode reward: [(0, '19.146')] |
|
[2024-12-28 20:16:13,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 2330624. Throughput: 0: 1015.5. Samples: 583910. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:16:13,998][00626] Avg episode reward: [(0, '19.111')] |
|
[2024-12-28 20:16:14,150][04128] Updated weights for policy 0, policy_version 570 (0.0017) |
|
[2024-12-28 20:16:18,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2347008. Throughput: 0: 984.1. Samples: 586088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:16:18,995][00626] Avg episode reward: [(0, '19.849')] |
|
[2024-12-28 20:16:23,994][00626] Fps is (10 sec: 4095.9, 60 sec: 4027.8, 300 sec: 3984.9). Total num frames: 2371584. Throughput: 0: 977.0. Samples: 592716. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:16:23,999][00626] Avg episode reward: [(0, '20.483')] |
|
[2024-12-28 20:16:24,006][04114] Saving new best policy, reward=20.483! |
|
[2024-12-28 20:16:24,310][04128] Updated weights for policy 0, policy_version 580 (0.0020) |
|
[2024-12-28 20:16:28,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 2392064. Throughput: 0: 1025.9. Samples: 599272. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:16:29,002][00626] Avg episode reward: [(0, '19.946')] |
|
[2024-12-28 20:16:33,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3943.3). Total num frames: 2408448. Throughput: 0: 1000.3. Samples: 601438. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:16:34,001][00626] Avg episode reward: [(0, '18.692')] |
|
[2024-12-28 20:16:35,537][04128] Updated weights for policy 0, policy_version 590 (0.0027) |
|
[2024-12-28 20:16:38,996][00626] Fps is (10 sec: 4095.0, 60 sec: 3959.3, 300 sec: 3984.9). Total num frames: 2433024. Throughput: 0: 982.1. Samples: 607540. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:16:38,998][00626] Avg episode reward: [(0, '18.979')] |
|
[2024-12-28 20:16:43,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 2453504. Throughput: 0: 1039.4. Samples: 614718. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:16:43,995][00626] Avg episode reward: [(0, '19.356')] |
|
[2024-12-28 20:16:44,031][04128] Updated weights for policy 0, policy_version 600 (0.0017) |
|
[2024-12-28 20:16:48,993][00626] Fps is (10 sec: 3687.3, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2469888. Throughput: 0: 1034.5. Samples: 617246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:16:48,998][00626] Avg episode reward: [(0, '18.179')] |
|
[2024-12-28 20:16:53,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 2490368. Throughput: 0: 975.1. Samples: 621970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:16:53,995][00626] Avg episode reward: [(0, '17.926')] |
|
[2024-12-28 20:16:55,526][04128] Updated weights for policy 0, policy_version 610 (0.0031) |
|
[2024-12-28 20:16:58,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 2514944. Throughput: 0: 1007.1. Samples: 629228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:16:58,996][00626] Avg episode reward: [(0, '19.474')] |
|
[2024-12-28 20:17:03,994][00626] Fps is (10 sec: 4095.8, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 2531328. Throughput: 0: 1033.6. Samples: 632602. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:17:03,996][00626] Avg episode reward: [(0, '18.408')] |
|
[2024-12-28 20:17:05,850][04128] Updated weights for policy 0, policy_version 620 (0.0021) |
|
[2024-12-28 20:17:08,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2547712. Throughput: 0: 981.1. Samples: 636864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:17:09,001][00626] Avg episode reward: [(0, '19.123')] |
|
[2024-12-28 20:17:13,993][00626] Fps is (10 sec: 4096.2, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 2572288. Throughput: 0: 983.2. Samples: 643516. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:17:14,000][00626] Avg episode reward: [(0, '19.975')] |
|
[2024-12-28 20:17:15,703][04128] Updated weights for policy 0, policy_version 630 (0.0019) |
|
[2024-12-28 20:17:18,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 2592768. Throughput: 0: 1012.4. Samples: 646998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:17:18,998][00626] Avg episode reward: [(0, '21.305')] |
|
[2024-12-28 20:17:19,008][04114] Saving new best policy, reward=21.305! |
|
[2024-12-28 20:17:23,993][00626] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3957.1). Total num frames: 2609152. Throughput: 0: 990.3. Samples: 652102. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:17:23,996][00626] Avg episode reward: [(0, '21.794')] |
|
[2024-12-28 20:17:23,998][04114] Saving new best policy, reward=21.794! |
|
[2024-12-28 20:17:27,201][04128] Updated weights for policy 0, policy_version 640 (0.0024) |
|
[2024-12-28 20:17:28,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 2629632. Throughput: 0: 959.1. Samples: 657878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:17:28,996][00626] Avg episode reward: [(0, '22.450')] |
|
[2024-12-28 20:17:29,007][04114] Saving new best policy, reward=22.450! |
|
[2024-12-28 20:17:33,994][00626] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 2650112. Throughput: 0: 980.0. Samples: 661348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:17:33,998][00626] Avg episode reward: [(0, '20.894')] |
|
[2024-12-28 20:17:35,730][04128] Updated weights for policy 0, policy_version 650 (0.0023) |
|
[2024-12-28 20:17:38,993][00626] Fps is (10 sec: 4095.9, 60 sec: 3959.6, 300 sec: 3971.0). Total num frames: 2670592. Throughput: 0: 1018.8. Samples: 667818. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:17:38,998][00626] Avg episode reward: [(0, '22.157')] |
|
[2024-12-28 20:17:39,013][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000652_2670592.pth... |
|
[2024-12-28 20:17:39,176][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000420_1720320.pth |
|
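Periodic checkpoints are rotated, as the save/remove pair above shows: a new `checkpoint_<policy_version>_<env_frames>.pth` is written and the oldest checkpoint beyond the keep limit is deleted. A sketch of that rotation, with the naming scheme inferred from the log lines:

```python
from pathlib import Path
import torch

def save_and_rotate(model, ckpt_dir, policy_version, env_frames, keep_last=2):
    """Write checkpoint_<version>_<frames>.pth, then delete the oldest
    checkpoints beyond `keep_last`. Naming inferred from the log; the
    zero-padded version keeps lexicographic order == chronological order."""
    ckpt_dir = Path(ckpt_dir)
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    path = ckpt_dir / f"checkpoint_{policy_version:09d}_{env_frames}.pth"
    torch.save(model.state_dict(), path)
    for old in sorted(ckpt_dir.glob("checkpoint_*.pth"))[:-keep_last]:
        old.unlink()  # e.g. "Removing .../checkpoint_000000420_1720320.pth"
```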
[2024-12-28 20:17:43,993][00626] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2686976. Throughput: 0: 963.3. Samples: 672578. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:17:44,000][00626] Avg episode reward: [(0, '21.119')] |
|
[2024-12-28 20:17:47,107][04128] Updated weights for policy 0, policy_version 660 (0.0017) |
|
[2024-12-28 20:17:48,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 2711552. Throughput: 0: 967.5. Samples: 676138. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:17:48,996][00626] Avg episode reward: [(0, '19.914')] |
|
[2024-12-28 20:17:53,997][00626] Fps is (10 sec: 4503.8, 60 sec: 4027.5, 300 sec: 3984.9). Total num frames: 2732032. Throughput: 0: 1030.8. Samples: 683252. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:17:54,005][00626] Avg episode reward: [(0, '19.776')] |
|
[2024-12-28 20:17:57,508][04128] Updated weights for policy 0, policy_version 670 (0.0023) |
|
[2024-12-28 20:17:58,998][00626] Fps is (10 sec: 3275.2, 60 sec: 3822.6, 300 sec: 3943.2). Total num frames: 2744320. Throughput: 0: 979.4. Samples: 687596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:17:59,001][00626] Avg episode reward: [(0, '18.884')] |
|
[2024-12-28 20:18:03,993][00626] Fps is (10 sec: 3687.9, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 2768896. Throughput: 0: 970.2. Samples: 690656. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:18:03,997][00626] Avg episode reward: [(0, '19.020')] |
|
[2024-12-28 20:18:06,871][04128] Updated weights for policy 0, policy_version 680 (0.0017) |
|
[2024-12-28 20:18:08,993][00626] Fps is (10 sec: 4917.7, 60 sec: 4096.0, 300 sec: 3984.9). Total num frames: 2793472. Throughput: 0: 1022.3. Samples: 698106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:18:08,998][00626] Avg episode reward: [(0, '19.089')] |
|
[2024-12-28 20:18:13,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 2809856. Throughput: 0: 1010.1. Samples: 703334. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:18:14,000][00626] Avg episode reward: [(0, '19.797')] |
|
[2024-12-28 20:18:18,051][04128] Updated weights for policy 0, policy_version 690 (0.0024) |
|
[2024-12-28 20:18:18,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3957.2). Total num frames: 2826240. Throughput: 0: 983.7. Samples: 705616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:18:19,000][00626] Avg episode reward: [(0, '18.778')] |
|
[2024-12-28 20:18:23,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 2850816. Throughput: 0: 996.7. Samples: 712668. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-12-28 20:18:24,000][00626] Avg episode reward: [(0, '21.552')] |
|
[2024-12-28 20:18:26,754][04128] Updated weights for policy 0, policy_version 700 (0.0015) |
|
[2024-12-28 20:18:28,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3971.0). Total num frames: 2871296. Throughput: 0: 1027.8. Samples: 718830. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:18:28,996][00626] Avg episode reward: [(0, '20.152')] |
|
[2024-12-28 20:18:33,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2887680. Throughput: 0: 998.1. Samples: 721054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:18:33,998][00626] Avg episode reward: [(0, '19.737')] |
|
[2024-12-28 20:18:37,850][04128] Updated weights for policy 0, policy_version 710 (0.0030) |
|
[2024-12-28 20:18:38,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 2912256. Throughput: 0: 986.9. Samples: 727660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:18:38,998][00626] Avg episode reward: [(0, '18.714')] |
|
[2024-12-28 20:18:43,993][00626] Fps is (10 sec: 4915.2, 60 sec: 4164.3, 300 sec: 3998.8). Total num frames: 2936832. Throughput: 0: 1048.3. Samples: 734764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:18:44,002][00626] Avg episode reward: [(0, '16.857')] |
|
[2024-12-28 20:18:48,224][04128] Updated weights for policy 0, policy_version 720 (0.0047) |
|
[2024-12-28 20:18:48,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3957.2). Total num frames: 2949120. Throughput: 0: 1025.7. Samples: 736812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:18:48,996][00626] Avg episode reward: [(0, '16.981')] |
|
[2024-12-28 20:18:53,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3959.7, 300 sec: 3985.0). Total num frames: 2969600. Throughput: 0: 977.4. Samples: 742090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:18:54,000][00626] Avg episode reward: [(0, '16.920')] |
|
[2024-12-28 20:18:57,801][04128] Updated weights for policy 0, policy_version 730 (0.0021) |
|
[2024-12-28 20:18:58,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4164.6, 300 sec: 3998.8). Total num frames: 2994176. Throughput: 0: 1023.1. Samples: 749374. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:18:59,000][00626] Avg episode reward: [(0, '19.719')] |
|
[2024-12-28 20:19:03,994][00626] Fps is (10 sec: 4095.6, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3010560. Throughput: 0: 1041.6. Samples: 752488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:19:04,002][00626] Avg episode reward: [(0, '21.204')] |
|
[2024-12-28 20:19:08,848][04128] Updated weights for policy 0, policy_version 740 (0.0018) |
|
[2024-12-28 20:19:08,993][00626] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3031040. Throughput: 0: 986.4. Samples: 757058. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:19:09,000][00626] Avg episode reward: [(0, '23.582')] |
|
[2024-12-28 20:19:09,010][04114] Saving new best policy, reward=23.582! |
|
[2024-12-28 20:19:13,993][00626] Fps is (10 sec: 4096.4, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 3051520. Throughput: 0: 1008.4. Samples: 764210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:19:14,000][00626] Avg episode reward: [(0, '22.568')] |
|
[2024-12-28 20:19:17,778][04128] Updated weights for policy 0, policy_version 750 (0.0032) |
|
[2024-12-28 20:19:18,993][00626] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3072000. Throughput: 0: 1033.4. Samples: 767556. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:19:18,996][00626] Avg episode reward: [(0, '22.757')] |
|
[2024-12-28 20:19:23,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3088384. Throughput: 0: 988.9. Samples: 772160. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:19:24,000][00626] Avg episode reward: [(0, '21.608')] |
|
[2024-12-28 20:19:28,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3108864. Throughput: 0: 968.0. Samples: 778326. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:19:28,995][00626] Avg episode reward: [(0, '20.192')] |
|
[2024-12-28 20:19:29,288][04128] Updated weights for policy 0, policy_version 760 (0.0045) |
|
[2024-12-28 20:19:33,999][00626] Fps is (10 sec: 4503.0, 60 sec: 4095.6, 300 sec: 4012.6). Total num frames: 3133440. Throughput: 0: 1000.4. Samples: 781836. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:19:34,002][00626] Avg episode reward: [(0, '19.252')] |
|
[2024-12-28 20:19:38,993][00626] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3971.0). Total num frames: 3145728. Throughput: 0: 1001.3. Samples: 787148. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:19:38,997][00626] Avg episode reward: [(0, '21.131')] |
|
[2024-12-28 20:19:39,018][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000769_3149824.pth... |
|
[2024-12-28 20:19:39,215][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000536_2195456.pth |
|
[2024-12-28 20:19:40,608][04128] Updated weights for policy 0, policy_version 770 (0.0024) |
|
[2024-12-28 20:19:43,993][00626] Fps is (10 sec: 3278.8, 60 sec: 3822.9, 300 sec: 3971.1). Total num frames: 3166208. Throughput: 0: 960.4. Samples: 792590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:19:44,001][00626] Avg episode reward: [(0, '21.565')] |
|
[2024-12-28 20:19:48,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 3190784. Throughput: 0: 969.3. Samples: 796104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:19:48,996][00626] Avg episode reward: [(0, '23.181')] |
|
[2024-12-28 20:19:49,514][04128] Updated weights for policy 0, policy_version 780 (0.0019) |
|
[2024-12-28 20:19:53,994][00626] Fps is (10 sec: 4095.6, 60 sec: 3959.4, 300 sec: 3984.9). Total num frames: 3207168. Throughput: 0: 1008.2. Samples: 802426. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:19:53,999][00626] Avg episode reward: [(0, '23.409')] |
|
[2024-12-28 20:19:58,993][00626] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3957.2). Total num frames: 3223552. Throughput: 0: 952.6. Samples: 807076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:19:58,996][00626] Avg episode reward: [(0, '24.368')] |
|
[2024-12-28 20:19:59,003][04114] Saving new best policy, reward=24.368! |
|
[2024-12-28 20:20:01,069][04128] Updated weights for policy 0, policy_version 790 (0.0015) |
|
[2024-12-28 20:20:03,994][00626] Fps is (10 sec: 4096.3, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3248128. Throughput: 0: 956.3. Samples: 810588. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-12-28 20:20:03,998][00626] Avg episode reward: [(0, '23.616')] |
|
[2024-12-28 20:20:08,994][00626] Fps is (10 sec: 4914.7, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 3272704. Throughput: 0: 1013.0. Samples: 817744. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-12-28 20:20:09,001][00626] Avg episode reward: [(0, '22.182')] |
|
[2024-12-28 20:20:10,048][04128] Updated weights for policy 0, policy_version 800 (0.0017) |
|
[2024-12-28 20:20:13,998][00626] Fps is (10 sec: 3684.6, 60 sec: 3890.9, 300 sec: 3971.0). Total num frames: 3284992. Throughput: 0: 979.4. Samples: 822406. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:20:14,008][00626] Avg episode reward: [(0, '21.910')] |
|
[2024-12-28 20:20:18,993][00626] Fps is (10 sec: 3277.1, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 3305472. Throughput: 0: 967.0. Samples: 825346. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:20:18,995][00626] Avg episode reward: [(0, '23.173')] |
|
[2024-12-28 20:20:20,836][04128] Updated weights for policy 0, policy_version 810 (0.0027) |
|
[2024-12-28 20:20:23,993][00626] Fps is (10 sec: 4507.8, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 3330048. Throughput: 0: 1003.8. Samples: 832318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:20:23,996][00626] Avg episode reward: [(0, '22.868')] |
|
[2024-12-28 20:20:28,993][00626] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3346432. Throughput: 0: 1000.6. Samples: 837616. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:20:28,996][00626] Avg episode reward: [(0, '22.739')] |
|
[2024-12-28 20:20:32,224][04128] Updated weights for policy 0, policy_version 820 (0.0022) |
|
[2024-12-28 20:20:33,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3823.3, 300 sec: 3957.2). Total num frames: 3362816. Throughput: 0: 971.2. Samples: 839810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:20:33,996][00626] Avg episode reward: [(0, '23.195')] |
|
[2024-12-28 20:20:38,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3391488. Throughput: 0: 993.2. Samples: 847118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:20:38,996][00626] Avg episode reward: [(0, '21.989')] |
|
[2024-12-28 20:20:40,569][04128] Updated weights for policy 0, policy_version 830 (0.0020) |
|
[2024-12-28 20:20:43,993][00626] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3411968. Throughput: 0: 1036.1. Samples: 853700. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:20:44,002][00626] Avg episode reward: [(0, '20.648')] |
|
[2024-12-28 20:20:48,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3971.1). Total num frames: 3424256. Throughput: 0: 1004.4. Samples: 855786. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:20:49,001][00626] Avg episode reward: [(0, '20.337')] |
|
[2024-12-28 20:20:51,903][04128] Updated weights for policy 0, policy_version 840 (0.0034) |
|
[2024-12-28 20:20:53,993][00626] Fps is (10 sec: 3686.4, 60 sec: 4027.8, 300 sec: 3998.8). Total num frames: 3448832. Throughput: 0: 981.9. Samples: 861928. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:20:53,996][00626] Avg episode reward: [(0, '19.946')] |
|
[2024-12-28 20:20:58,994][00626] Fps is (10 sec: 4914.9, 60 sec: 4164.2, 300 sec: 4012.7). Total num frames: 3473408. Throughput: 0: 1034.5. Samples: 868954. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:20:58,997][00626] Avg episode reward: [(0, '20.701')] |
|
[2024-12-28 20:21:01,601][04128] Updated weights for policy 0, policy_version 850 (0.0017) |
|
[2024-12-28 20:21:03,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3971.0). Total num frames: 3485696. Throughput: 0: 1017.4. Samples: 871128. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:21:03,995][00626] Avg episode reward: [(0, '20.948')] |
|
[2024-12-28 20:21:08,993][00626] Fps is (10 sec: 3277.0, 60 sec: 3891.3, 300 sec: 3984.9). Total num frames: 3506176. Throughput: 0: 982.8. Samples: 876542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:21:08,999][00626] Avg episode reward: [(0, '21.855')] |
|
[2024-12-28 20:21:11,553][04128] Updated weights for policy 0, policy_version 860 (0.0035) |
|
[2024-12-28 20:21:13,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.3, 300 sec: 4012.7). Total num frames: 3530752. Throughput: 0: 1030.1. Samples: 883970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:21:14,000][00626] Avg episode reward: [(0, '22.375')] |
|
[2024-12-28 20:21:18,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3551232. Throughput: 0: 1051.5. Samples: 887128. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:21:18,996][00626] Avg episode reward: [(0, '22.687')] |
|
[2024-12-28 20:21:23,000][04128] Updated weights for policy 0, policy_version 870 (0.0019) |
|
[2024-12-28 20:21:23,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3567616. Throughput: 0: 985.2. Samples: 891450. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:21:23,999][00626] Avg episode reward: [(0, '22.426')] |
|
[2024-12-28 20:21:28,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3592192. Throughput: 0: 995.8. Samples: 898512. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:21:29,000][00626] Avg episode reward: [(0, '23.614')] |
|
[2024-12-28 20:21:31,493][04128] Updated weights for policy 0, policy_version 880 (0.0034) |
|
[2024-12-28 20:21:33,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 3998.8). Total num frames: 3612672. Throughput: 0: 1029.0. Samples: 902092. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-12-28 20:21:33,996][00626] Avg episode reward: [(0, '23.905')] |
|
[2024-12-28 20:21:38,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3629056. Throughput: 0: 1003.0. Samples: 907062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:21:38,996][00626] Avg episode reward: [(0, '24.459')] |
|
[2024-12-28 20:21:39,010][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000886_3629056.pth... |
|
[2024-12-28 20:21:39,171][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000652_2670592.pth |
|
[2024-12-28 20:21:39,184][04114] Saving new best policy, reward=24.459! |
|
[2024-12-28 20:21:42,691][04128] Updated weights for policy 0, policy_version 890 (0.0014) |
|
[2024-12-28 20:21:43,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3649536. Throughput: 0: 987.9. Samples: 913408. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:21:44,003][00626] Avg episode reward: [(0, '24.365')] |
|
[2024-12-28 20:21:48,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4164.3, 300 sec: 4012.7). Total num frames: 3674112. Throughput: 0: 1018.2. Samples: 916946. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:21:49,000][00626] Avg episode reward: [(0, '22.645')] |
|
[2024-12-28 20:21:52,226][04128] Updated weights for policy 0, policy_version 900 (0.0018) |
|
[2024-12-28 20:21:53,994][00626] Fps is (10 sec: 4095.9, 60 sec: 4027.7, 300 sec: 3984.9). Total num frames: 3690496. Throughput: 0: 1022.8. Samples: 922568. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-12-28 20:21:53,999][00626] Avg episode reward: [(0, '21.773')] |
|
[2024-12-28 20:21:58,993][00626] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3984.9). Total num frames: 3706880. Throughput: 0: 975.0. Samples: 927846. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:21:59,001][00626] Avg episode reward: [(0, '21.031')] |
|
[2024-12-28 20:22:02,598][04128] Updated weights for policy 0, policy_version 910 (0.0019) |
|
[2024-12-28 20:22:03,993][00626] Fps is (10 sec: 4096.1, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3731456. Throughput: 0: 984.7. Samples: 931440. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:22:03,996][00626] Avg episode reward: [(0, '21.402')] |
|
[2024-12-28 20:22:08,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3998.8). Total num frames: 3751936. Throughput: 0: 1041.4. Samples: 938312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:22:08,996][00626] Avg episode reward: [(0, '21.767')] |
|
[2024-12-28 20:22:13,748][04128] Updated weights for policy 0, policy_version 920 (0.0027) |
|
[2024-12-28 20:22:13,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3984.9). Total num frames: 3768320. Throughput: 0: 983.6. Samples: 942774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:22:14,000][00626] Avg episode reward: [(0, '22.844')] |
|
[2024-12-28 20:22:18,993][00626] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3788800. Throughput: 0: 980.9. Samples: 946232. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:22:18,997][00626] Avg episode reward: [(0, '22.591')] |
|
[2024-12-28 20:22:22,549][04128] Updated weights for policy 0, policy_version 930 (0.0021) |
|
[2024-12-28 20:22:23,993][00626] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3813376. Throughput: 0: 1027.2. Samples: 953288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:22:23,998][00626] Avg episode reward: [(0, '22.287')] |
|
[2024-12-28 20:22:28,998][00626] Fps is (10 sec: 4094.1, 60 sec: 3959.2, 300 sec: 3998.8). Total num frames: 3829760. Throughput: 0: 991.3. Samples: 958020. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:22:29,000][00626] Avg episode reward: [(0, '23.088')] |
|
[2024-12-28 20:22:33,711][04128] Updated weights for policy 0, policy_version 940 (0.0018) |
|
[2024-12-28 20:22:33,994][00626] Fps is (10 sec: 3686.3, 60 sec: 3959.5, 300 sec: 3998.8). Total num frames: 3850240. Throughput: 0: 975.2. Samples: 960832. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:22:34,000][00626] Avg episode reward: [(0, '22.697')] |
|
[2024-12-28 20:22:38,993][00626] Fps is (10 sec: 4507.7, 60 sec: 4096.0, 300 sec: 4026.6). Total num frames: 3874816. Throughput: 0: 1012.0. Samples: 968106. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:22:38,996][00626] Avg episode reward: [(0, '22.231')] |
|
[2024-12-28 20:22:42,863][04128] Updated weights for policy 0, policy_version 950 (0.0039) |
|
[2024-12-28 20:22:43,993][00626] Fps is (10 sec: 4096.1, 60 sec: 4027.7, 300 sec: 3998.8). Total num frames: 3891200. Throughput: 0: 1026.8. Samples: 974050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:22:43,997][00626] Avg episode reward: [(0, '21.991')] |
|
[2024-12-28 20:22:48,993][00626] Fps is (10 sec: 3276.7, 60 sec: 3891.2, 300 sec: 3985.0). Total num frames: 3907584. Throughput: 0: 993.9. Samples: 976164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-12-28 20:22:49,000][00626] Avg episode reward: [(0, '21.478')] |
|
[2024-12-28 20:22:53,460][04128] Updated weights for policy 0, policy_version 960 (0.0025) |
|
[2024-12-28 20:22:53,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4026.6). Total num frames: 3932160. Throughput: 0: 990.0. Samples: 982860. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-12-28 20:22:53,999][00626] Avg episode reward: [(0, '21.755')] |
|
[2024-12-28 20:22:58,993][00626] Fps is (10 sec: 4505.7, 60 sec: 4096.0, 300 sec: 4012.7). Total num frames: 3952640. Throughput: 0: 1036.4. Samples: 989414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:22:58,995][00626] Avg episode reward: [(0, '20.360')] |
|
[2024-12-28 20:23:03,996][00626] Fps is (10 sec: 3685.3, 60 sec: 3959.3, 300 sec: 3984.9). Total num frames: 3969024. Throughput: 0: 1007.3. Samples: 991562. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:23:03,999][00626] Avg episode reward: [(0, '20.409')] |
|
[2024-12-28 20:23:04,819][04128] Updated weights for policy 0, policy_version 970 (0.0030) |
|
[2024-12-28 20:23:08,993][00626] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 4012.7). Total num frames: 3993600. Throughput: 0: 984.5. Samples: 997592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-12-28 20:23:09,001][00626] Avg episode reward: [(0, '20.335')] |
|
[2024-12-28 20:23:11,361][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-12-28 20:23:11,372][04114] Stopping Batcher_0... |
|
[2024-12-28 20:23:11,372][04114] Loop batcher_evt_loop terminating... |
|
[2024-12-28 20:23:11,372][00626] Component Batcher_0 stopped! |
|
[2024-12-28 20:23:11,438][04128] Weights refcount: 2 0 |
|
[2024-12-28 20:23:11,441][00626] Component InferenceWorker_p0-w0 stopped! |
|
[2024-12-28 20:23:11,446][04128] Stopping InferenceWorker_p0-w0... |
|
[2024-12-28 20:23:11,446][04128] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-12-28 20:23:11,508][04114] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000769_3149824.pth |
|
[2024-12-28 20:23:11,518][04114] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-12-28 20:23:11,664][04130] Stopping RolloutWorker_w2... |
|
[2024-12-28 20:23:11,664][00626] Component RolloutWorker_w2 stopped! |
|
[2024-12-28 20:23:11,672][04130] Loop rollout_proc2_evt_loop terminating... |
|
[2024-12-28 20:23:11,705][00626] Component LearnerWorker_p0 stopped! |
|
[2024-12-28 20:23:11,711][04114] Stopping LearnerWorker_p0... |
|
[2024-12-28 20:23:11,712][04114] Loop learner_proc0_evt_loop terminating... |
|
[2024-12-28 20:23:11,728][04127] Stopping RolloutWorker_w0... |
|
[2024-12-28 20:23:11,728][00626] Component RolloutWorker_w0 stopped! |
|
[2024-12-28 20:23:11,731][04127] Loop rollout_proc0_evt_loop terminating... |
|
[2024-12-28 20:23:11,742][04133] Stopping RolloutWorker_w4... |
|
[2024-12-28 20:23:11,742][00626] Component RolloutWorker_w4 stopped! |
|
[2024-12-28 20:23:11,744][04133] Loop rollout_proc4_evt_loop terminating... |
|
[2024-12-28 20:23:11,763][04134] Stopping RolloutWorker_w6... |
|
[2024-12-28 20:23:11,763][00626] Component RolloutWorker_w6 stopped! |
|
[2024-12-28 20:23:11,765][04134] Loop rollout_proc6_evt_loop terminating... |
|
[2024-12-28 20:23:11,775][04132] Stopping RolloutWorker_w5... |
|
[2024-12-28 20:23:11,775][00626] Component RolloutWorker_w5 stopped! |
|
[2024-12-28 20:23:11,776][04132] Loop rollout_proc5_evt_loop terminating... |
|
[2024-12-28 20:23:11,828][04131] Stopping RolloutWorker_w3... |
|
[2024-12-28 20:23:11,828][00626] Component RolloutWorker_w3 stopped! |
|
[2024-12-28 20:23:11,834][04131] Loop rollout_proc3_evt_loop terminating... |
|
[2024-12-28 20:23:11,862][04129] Stopping RolloutWorker_w1... |
|
[2024-12-28 20:23:11,862][00626] Component RolloutWorker_w1 stopped! |
|
[2024-12-28 20:23:11,869][04135] Stopping RolloutWorker_w7... |
|
[2024-12-28 20:23:11,870][04129] Loop rollout_proc1_evt_loop terminating... |
|
[2024-12-28 20:23:11,870][04135] Loop rollout_proc7_evt_loop terminating... |
|
[2024-12-28 20:23:11,869][00626] Component RolloutWorker_w7 stopped! |
|
[2024-12-28 20:23:11,872][00626] Waiting for process learner_proc0 to stop... |
|
[2024-12-28 20:23:13,335][00626] Waiting for process inference_proc0-0 to join... |
|
[2024-12-28 20:23:13,340][00626] Waiting for process rollout_proc0 to join... |
|
[2024-12-28 20:23:15,580][00626] Waiting for process rollout_proc1 to join... |
|
[2024-12-28 20:23:15,585][00626] Waiting for process rollout_proc2 to join... |
|
[2024-12-28 20:23:15,593][00626] Waiting for process rollout_proc3 to join... |
|
[2024-12-28 20:23:15,597][00626] Waiting for process rollout_proc4 to join... |
|
[2024-12-28 20:23:15,601][00626] Waiting for process rollout_proc5 to join... |
|
[2024-12-28 20:23:15,604][00626] Waiting for process rollout_proc6 to join... |
|
[2024-12-28 20:23:15,609][00626] Waiting for process rollout_proc7 to join... |
|
[2024-12-28 20:23:15,612][00626] Batcher 0 profile tree view: |
|
batching: 27.5807, releasing_batches: 0.0291 |
|
[2024-12-28 20:23:15,615][00626] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
  wait_policy_total: 406.0289 |
|
update_model: 8.3759 |
|
  weight_update: 0.0019 |
|
one_step: 0.0186 |
|
  handle_policy_step: 563.2376 |
|
    deserialize: 14.2860, stack: 3.1107, obs_to_device_normalize: 121.2345, forward: 281.7099, send_messages: 27.9347 |
|
    prepare_outputs: 87.3239 |
|
      to_cpu: 53.3399 |
|
[2024-12-28 20:23:15,617][00626] Learner 0 profile tree view: |
|
misc: 0.0046, prepare_batch: 14.2795 |
|
train: 74.5448 |
|
  epoch_init: 0.0155, minibatch_init: 0.0073, losses_postprocess: 0.6940, kl_divergence: 0.6267, after_optimizer: 33.7139 |
|
  calculate_losses: 27.1371 |
|
    losses_init: 0.0035, forward_head: 1.3767, bptt_initial: 18.5136, tail: 1.0474, advantages_returns: 0.2445, losses: 3.8973 |
|
    bptt: 1.7427 |
|
      bptt_forward_core: 1.6690 |
|
  update: 11.7971 |
|
    clip: 0.8641 |
|
[2024-12-28 20:23:15,619][00626] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3469, enqueue_policy_requests: 94.8238, env_step: 794.5031, overhead: 12.3122, complete_rollouts: 6.4749 |
|
save_policy_outputs: 20.8073 |
|
  split_output_tensors: 8.3164 |
|
[2024-12-28 20:23:15,621][00626] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3279, enqueue_policy_requests: 97.5902, env_step: 790.2851, overhead: 12.5222, complete_rollouts: 6.9817 |
|
save_policy_outputs: 20.1893 |
|
  split_output_tensors: 8.0445 |
|
[2024-12-28 20:23:15,622][00626] Loop Runner_EvtLoop terminating... |
|
[2024-12-28 20:23:15,623][00626] Runner profile tree view: |
|
main_loop: 1047.7220 |
|
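The profile tree views above come from nested, named timing scopes: each line is the accumulated wall time of one scope, indented under its parent. A small sketch of how such a tree can be collected (illustrative only; Sample Factory's own Timing utility differs in detail):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class TimerTree:
    """Accumulate wall time per nested, named scope and print the result
    as an indented tree. Illustrative only; Sample Factory's own Timing
    utility differs in detail."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.stack = []

    @contextmanager
    def scope(self, name):
        self.stack.append(name)
        path = "/".join(self.stack)
        start = time.monotonic()
        try:
            yield
        finally:
            self.totals[path] += time.monotonic() - start
            self.stack.pop()

    def report(self):
        for path, total in sorted(self.totals.items()):
            depth = path.count("/")
            print(f"{'  ' * depth}{path.rsplit('/', 1)[-1]}: {total:.4f}")
```

Nesting scopes as `with t.scope("train"):` then `with t.scope("calculate_losses"):` inside it reproduces the indented layout of the Learner tree above.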
[2024-12-28 20:23:15,625][00626] Collected {0: 4005888}, FPS: 3823.4 |
|
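The closing summary is self-consistent: the overall FPS figure is just the total number of collected environment frames divided by the runner's `main_loop` wall time reported in the profile above.

```python
# Cross-check of the closing summary: overall FPS = total env frames / wall time.
total_frames = 4_005_888       # "Collected {0: 4005888}"
main_loop_seconds = 1047.7220  # Runner profile tree view, main_loop
print(f"{total_frames / main_loop_seconds:.1f}")  # -> 3823.4, matching the log
```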
[2024-12-28 20:23:16,202][00626] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-12-28 20:23:16,204][00626] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-12-28 20:23:16,205][00626] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-12-28 20:23:16,207][00626] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-12-28 20:23:16,208][00626] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-12-28 20:23:16,210][00626] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-12-28 20:23:16,211][00626] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-12-28 20:23:16,212][00626] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-12-28 20:23:16,214][00626] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-12-28 20:23:16,215][00626] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-12-28 20:23:16,220][00626] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-12-28 20:23:16,221][00626] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-12-28 20:23:16,222][00626] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-12-28 20:23:16,223][00626] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-12-28 20:23:16,224][00626] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
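Both evaluation runs rebuild their configuration the same way: the saved `config.json` is loaded, command-line values override matching keys, and keys missing from the saved file are added with a warning. A minimal sketch of that merge, mirroring the messages above:

```python
import json

def load_config_with_overrides(config_path, cli_args):
    """Sketch of the config merge logged above: start from the saved
    config.json, override keys given on the command line, and add keys
    that the saved file does not know about. Illustrative only."""
    with open(config_path) as f:
        cfg = json.load(f)
    for key, value in cli_args.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value!r} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg
```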
[2024-12-28 20:23:16,278][00626] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-12-28 20:23:16,283][00626] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-12-28 20:23:16,286][00626] RunningMeanStd input shape: (1,) |
|
[2024-12-28 20:23:16,312][00626] ConvEncoder: input_channels=3 |
|
[2024-12-28 20:23:16,473][00626] Conv encoder output size: 512 |
|
[2024-12-28 20:23:16,475][00626] Policy head output size: 512 |
|
[2024-12-28 20:23:16,823][00626] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-12-28 20:23:17,810][00626] Num frames 100... |
|
[2024-12-28 20:23:17,933][00626] Num frames 200... |
|
[2024-12-28 20:23:18,052][00626] Num frames 300... |
|
[2024-12-28 20:23:18,173][00626] Num frames 400... |
|
[2024-12-28 20:23:18,293][00626] Num frames 500... |
|
[2024-12-28 20:23:18,414][00626] Num frames 600... |
|
[2024-12-28 20:23:18,519][00626] Avg episode rewards: #0: 11.400, true rewards: #0: 6.400 |
|
[2024-12-28 20:23:18,521][00626] Avg episode reward: 11.400, avg true_objective: 6.400 |
|
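During evaluation, the two averages are running means over the episodes completed so far: the shaped reward the agent was trained on, and the "true" objective used for reporting. A sketch of the bookkeeping behind those lines (per-episode values are assumed inputs; the log prints only the means):

```python
episode_rewards, true_rewards = [], []

def on_episode_end(shaped_reward, true_objective):
    """Running means behind "Avg episode rewards: #0: ..., true rewards: #0: ..."."""
    episode_rewards.append(shaped_reward)
    true_rewards.append(true_objective)
    avg_r = sum(episode_rewards) / len(episode_rewards)
    avg_t = sum(true_rewards) / len(true_rewards)
    print(f"Avg episode rewards: #0: {avg_r:.3f}, true rewards: #0: {avg_t:.3f}")
```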
[2024-12-28 20:23:18,595][00626] Num frames 700... |
|
[2024-12-28 20:23:18,724][00626] Num frames 800... |
|
[2024-12-28 20:23:18,850][00626] Num frames 900... |
|
[2024-12-28 20:23:18,970][00626] Num frames 1000... |
|
[2024-12-28 20:23:19,094][00626] Num frames 1100... |
|
[2024-12-28 20:23:19,209][00626] Num frames 1200... |
|
[2024-12-28 20:23:19,332][00626] Num frames 1300... |
|
[2024-12-28 20:23:19,453][00626] Num frames 1400... |
|
[2024-12-28 20:23:19,577][00626] Num frames 1500... |
|
[2024-12-28 20:23:19,700][00626] Num frames 1600... |
|
[2024-12-28 20:23:19,836][00626] Num frames 1700... |
|
[2024-12-28 20:23:20,014][00626] Avg episode rewards: #0: 19.495, true rewards: #0: 8.995 |
|
[2024-12-28 20:23:20,016][00626] Avg episode reward: 19.495, avg true_objective: 8.995 |
|
[2024-12-28 20:23:20,021][00626] Num frames 1800... |
|
[2024-12-28 20:23:20,139][00626] Num frames 1900... |
|
[2024-12-28 20:23:20,257][00626] Num frames 2000... |
|
[2024-12-28 20:23:20,378][00626] Num frames 2100... |
|
[2024-12-28 20:23:20,496][00626] Num frames 2200... |
|
[2024-12-28 20:23:20,614][00626] Num frames 2300... |
|
[2024-12-28 20:23:20,733][00626] Num frames 2400... |
|
[2024-12-28 20:23:20,876][00626] Num frames 2500... |
|
[2024-12-28 20:23:20,995][00626] Num frames 2600... |
|
[2024-12-28 20:23:21,119][00626] Num frames 2700... |
|
[2024-12-28 20:23:21,247][00626] Num frames 2800... |
|
[2024-12-28 20:23:21,332][00626] Avg episode rewards: #0: 20.744, true rewards: #0: 9.410 |
|
[2024-12-28 20:23:21,333][00626] Avg episode reward: 20.744, avg true_objective: 9.410 |
|
[2024-12-28 20:23:21,429][00626] Num frames 2900... |
|
[2024-12-28 20:23:21,547][00626] Num frames 3000... |
|
[2024-12-28 20:23:21,670][00626] Num frames 3100... |
|
[2024-12-28 20:23:21,800][00626] Num frames 3200... |
|
[2024-12-28 20:23:21,924][00626] Num frames 3300... |
|
[2024-12-28 20:23:22,013][00626] Avg episode rewards: #0: 17.818, true rewards: #0: 8.317 |
|
[2024-12-28 20:23:22,015][00626] Avg episode reward: 17.818, avg true_objective: 8.317 |
|
[2024-12-28 20:23:22,107][00626] Num frames 3400... |
|
[2024-12-28 20:23:22,228][00626] Num frames 3500... |
|
[2024-12-28 20:23:22,349][00626] Num frames 3600... |
|
[2024-12-28 20:23:22,472][00626] Num frames 3700... |
|
[2024-12-28 20:23:22,587][00626] Num frames 3800... |
|
[2024-12-28 20:23:22,708][00626] Num frames 3900... |
|
[2024-12-28 20:23:22,843][00626] Num frames 4000... |
|
[2024-12-28 20:23:22,965][00626] Num frames 4100... |
|
[2024-12-28 20:23:23,088][00626] Num frames 4200... |
|
[2024-12-28 20:23:23,208][00626] Num frames 4300... |
|
[2024-12-28 20:23:23,331][00626] Num frames 4400... |
|
[2024-12-28 20:23:23,454][00626] Num frames 4500... |
|
[2024-12-28 20:23:23,573][00626] Num frames 4600... |
|
[2024-12-28 20:23:23,696][00626] Num frames 4700... |
|
[2024-12-28 20:23:23,820][00626] Num frames 4800... |
|
[2024-12-28 20:23:23,948][00626] Num frames 4900... |
|
[2024-12-28 20:23:24,070][00626] Num frames 5000... |
|
[2024-12-28 20:23:24,191][00626] Num frames 5100... |
|
[2024-12-28 20:23:24,315][00626] Num frames 5200... |
|
[2024-12-28 20:23:24,438][00626] Num frames 5300... |
|
[2024-12-28 20:23:24,560][00626] Num frames 5400... |
|
[2024-12-28 20:23:24,649][00626] Avg episode rewards: #0: 26.254, true rewards: #0: 10.854 |
|
[2024-12-28 20:23:24,652][00626] Avg episode reward: 26.254, avg true_objective: 10.854 |
|
[2024-12-28 20:23:24,739][00626] Num frames 5500... |
|
[2024-12-28 20:23:24,867][00626] Num frames 5600... |
|
[2024-12-28 20:23:25,001][00626] Num frames 5700... |
|
[2024-12-28 20:23:25,127][00626] Num frames 5800... |
|
[2024-12-28 20:23:25,249][00626] Num frames 5900... |
|
[2024-12-28 20:23:25,379][00626] Num frames 6000... |
|
[2024-12-28 20:23:25,501][00626] Num frames 6100... |
|
[2024-12-28 20:23:25,619][00626] Num frames 6200... |
|
[2024-12-28 20:23:25,738][00626] Num frames 6300... |
|
[2024-12-28 20:23:25,865][00626] Avg episode rewards: #0: 25.258, true rewards: #0: 10.592 |
|
[2024-12-28 20:23:25,867][00626] Avg episode reward: 25.258, avg true_objective: 10.592 |
|
[2024-12-28 20:23:25,929][00626] Num frames 6400... |
|
[2024-12-28 20:23:26,052][00626] Num frames 6500... |
|
[2024-12-28 20:23:26,177][00626] Num frames 6600... |
|
[2024-12-28 20:23:26,299][00626] Num frames 6700... |
|
[2024-12-28 20:23:26,423][00626] Num frames 6800... |
|
[2024-12-28 20:23:26,542][00626] Num frames 6900... |
|
[2024-12-28 20:23:26,657][00626] Num frames 7000... |
|
[2024-12-28 20:23:26,778][00626] Num frames 7100... |
|
[2024-12-28 20:23:26,904][00626] Num frames 7200... |
|
[2024-12-28 20:23:27,032][00626] Num frames 7300... |
|
[2024-12-28 20:23:27,151][00626] Num frames 7400... |
|
[2024-12-28 20:23:27,272][00626] Num frames 7500... |
|
[2024-12-28 20:23:27,392][00626] Num frames 7600... |
|
[2024-12-28 20:23:27,477][00626] Avg episode rewards: #0: 26.176, true rewards: #0: 10.890 |
|
[2024-12-28 20:23:27,481][00626] Avg episode reward: 26.176, avg true_objective: 10.890 |
|
[2024-12-28 20:23:27,613][00626] Num frames 7700... |
|
[2024-12-28 20:23:27,776][00626] Num frames 7800... |
|
[2024-12-28 20:23:27,952][00626] Num frames 7900... |
|
[2024-12-28 20:23:28,125][00626] Num frames 8000... |
|
[2024-12-28 20:23:28,290][00626] Num frames 8100... |
|
[2024-12-28 20:23:28,452][00626] Num frames 8200... |
|
[2024-12-28 20:23:28,616][00626] Num frames 8300... |
|
[2024-12-28 20:23:28,787][00626] Num frames 8400... |
|
[2024-12-28 20:23:28,957][00626] Num frames 8500... |
|
[2024-12-28 20:23:29,141][00626] Num frames 8600... |
|
[2024-12-28 20:23:29,307][00626] Num frames 8700... |
|
[2024-12-28 20:23:29,483][00626] Num frames 8800... |
|
[2024-12-28 20:23:29,652][00626] Num frames 8900... |
|
[2024-12-28 20:23:29,822][00626] Num frames 9000... |
|
[2024-12-28 20:23:30,000][00626] Num frames 9100... |
|
[2024-12-28 20:23:30,132][00626] Num frames 9200... |
|
[2024-12-28 20:23:30,251][00626] Num frames 9300... |
|
[2024-12-28 20:23:30,369][00626] Num frames 9400... |
|
[2024-12-28 20:23:30,491][00626] Num frames 9500... |
|
[2024-12-28 20:23:30,596][00626] Avg episode rewards: #0: 28.304, true rewards: #0: 11.929 |
|
[2024-12-28 20:23:30,598][00626] Avg episode reward: 28.304, avg true_objective: 11.929 |
|
[2024-12-28 20:23:30,666][00626] Num frames 9600... |
|
[2024-12-28 20:23:30,782][00626] Num frames 9700... |
|
[2024-12-28 20:23:30,910][00626] Num frames 9800... |
|
[2024-12-28 20:23:31,028][00626] Num frames 9900... |
|
[2024-12-28 20:23:31,158][00626] Num frames 10000... |
|
[2024-12-28 20:23:31,280][00626] Num frames 10100... |
|
[2024-12-28 20:23:31,400][00626] Num frames 10200... |
|
[2024-12-28 20:23:31,521][00626] Num frames 10300... |
|
[2024-12-28 20:23:31,647][00626] Avg episode rewards: #0: 26.955, true rewards: #0: 11.511 |
|
[2024-12-28 20:23:31,649][00626] Avg episode reward: 26.955, avg true_objective: 11.511 |
|
[2024-12-28 20:23:31,701][00626] Num frames 10400... |
|
[2024-12-28 20:23:31,828][00626] Num frames 10500... |
|
[2024-12-28 20:23:31,949][00626] Num frames 10600... |
|
[2024-12-28 20:23:32,071][00626] Num frames 10700... |
|
[2024-12-28 20:23:32,197][00626] Num frames 10800... |
|
[2024-12-28 20:23:32,263][00626] Avg episode rewards: #0: 24.808, true rewards: #0: 10.808 |
|
[2024-12-28 20:23:32,264][00626] Avg episode reward: 24.808, avg true_objective: 10.808 |
|
[2024-12-28 20:24:31,797][00626] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2024-12-28 20:26:53,017][00626] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-12-28 20:26:53,019][00626] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-12-28 20:26:53,021][00626] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-12-28 20:26:53,023][00626] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-12-28 20:26:53,025][00626] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-12-28 20:26:53,027][00626] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-12-28 20:26:53,028][00626] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2024-12-28 20:26:53,030][00626] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-12-28 20:26:53,032][00626] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2024-12-28 20:26:53,033][00626] Adding new argument 'hf_repository'='csabazs/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2024-12-28 20:26:53,034][00626] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-12-28 20:26:53,035][00626] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-12-28 20:26:53,036][00626] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-12-28 20:26:53,037][00626] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-12-28 20:26:53,038][00626] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-12-28 20:26:53,069][00626] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-12-28 20:26:53,072][00626] RunningMeanStd input shape: (1,) |
|
[2024-12-28 20:26:53,084][00626] ConvEncoder: input_channels=3 |
|
[2024-12-28 20:26:53,121][00626] Conv encoder output size: 512 |
|
[2024-12-28 20:26:53,122][00626] Policy head output size: 512 |
|
[2024-12-28 20:26:53,140][00626] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-12-28 20:26:53,556][00626] Num frames 100... |
|
[2024-12-28 20:26:53,674][00626] Num frames 200... |
|
[2024-12-28 20:26:53,795][00626] Num frames 300... |
|
[2024-12-28 20:26:53,946][00626] Num frames 400... |
|
[2024-12-28 20:26:54,064][00626] Num frames 500... |
|
[2024-12-28 20:26:54,178][00626] Avg episode rewards: #0: 9.470, true rewards: #0: 5.470 |
|
[2024-12-28 20:26:54,180][00626] Avg episode reward: 9.470, avg true_objective: 5.470 |
|
[2024-12-28 20:26:54,245][00626] Num frames 600... |
|
[2024-12-28 20:26:54,369][00626] Num frames 700... |
|
[2024-12-28 20:26:54,486][00626] Num frames 800... |
|
[2024-12-28 20:26:54,603][00626] Num frames 900... |
|
[2024-12-28 20:26:54,769][00626] Avg episode rewards: #0: 7.475, true rewards: #0: 4.975 |
|
[2024-12-28 20:26:54,771][00626] Avg episode reward: 7.475, avg true_objective: 4.975 |
|
[2024-12-28 20:26:54,780][00626] Num frames 1000... |
|
[2024-12-28 20:26:54,916][00626] Num frames 1100... |
|
[2024-12-28 20:26:55,040][00626] Num frames 1200... |
|
[2024-12-28 20:26:55,157][00626] Num frames 1300... |
|
[2024-12-28 20:26:55,279][00626] Num frames 1400... |
|
[2024-12-28 20:26:55,401][00626] Num frames 1500... |
|
[2024-12-28 20:26:55,517][00626] Num frames 1600... |
|
[2024-12-28 20:26:55,640][00626] Num frames 1700... |
|
[2024-12-28 20:26:55,758][00626] Num frames 1800... |
|
[2024-12-28 20:26:55,891][00626] Num frames 1900... |
|
[2024-12-28 20:26:55,958][00626] Avg episode rewards: #0: 10.683, true rewards: #0: 6.350 |
|
[2024-12-28 20:26:55,959][00626] Avg episode reward: 10.683, avg true_objective: 6.350 |
|
[2024-12-28 20:26:56,072][00626] Num frames 2000... |
|
[2024-12-28 20:26:56,193][00626] Num frames 2100... |
|
[2024-12-28 20:26:56,314][00626] Num frames 2200... |
|
[2024-12-28 20:26:56,434][00626] Num frames 2300... |
|
[2024-12-28 20:26:56,551][00626] Num frames 2400... |
|
[2024-12-28 20:26:56,667][00626] Num frames 2500... |
|
[2024-12-28 20:26:56,784][00626] Num frames 2600... |
|
[2024-12-28 20:26:56,920][00626] Num frames 2700... |
|
[2024-12-28 20:26:57,037][00626] Num frames 2800... |
|
[2024-12-28 20:26:57,133][00626] Avg episode rewards: #0: 13.083, true rewards: #0: 7.082 |
|
[2024-12-28 20:26:57,135][00626] Avg episode reward: 13.083, avg true_objective: 7.082 |
|
[2024-12-28 20:26:57,217][00626] Num frames 2900... |
|
[2024-12-28 20:26:57,335][00626] Num frames 3000... |
|
[2024-12-28 20:26:57,455][00626] Num frames 3100... |
|
[2024-12-28 20:26:57,571][00626] Num frames 3200... |
|
[2024-12-28 20:26:57,689][00626] Num frames 3300... |
|
[2024-12-28 20:26:57,824][00626] Num frames 3400... |
|
[2024-12-28 20:26:57,963][00626] Avg episode rewards: #0: 12.746, true rewards: #0: 6.946 |
|
[2024-12-28 20:26:57,965][00626] Avg episode reward: 12.746, avg true_objective: 6.946 |
|
[2024-12-28 20:26:57,999][00626] Num frames 3500... |
|
[2024-12-28 20:26:58,118][00626] Num frames 3600... |
|
[2024-12-28 20:26:58,235][00626] Num frames 3700... |
|
[2024-12-28 20:26:58,353][00626] Num frames 3800... |
|
[2024-12-28 20:26:58,475][00626] Num frames 3900... |
|
[2024-12-28 20:26:58,592][00626] Num frames 4000... |
|
[2024-12-28 20:26:58,709][00626] Num frames 4100... |
|
[2024-12-28 20:26:58,833][00626] Num frames 4200... |
|
[2024-12-28 20:26:58,963][00626] Num frames 4300... |
|
[2024-12-28 20:26:59,047][00626] Avg episode rewards: #0: 13.707, true rewards: #0: 7.207 |
|
[2024-12-28 20:26:59,048][00626] Avg episode reward: 13.707, avg true_objective: 7.207 |
|
[2024-12-28 20:26:59,142][00626] Num frames 4400... |
|
[2024-12-28 20:26:59,266][00626] Num frames 4500... |
|
[2024-12-28 20:26:59,389][00626] Num frames 4600... |
|
[2024-12-28 20:26:59,509][00626] Num frames 4700... |
|
[2024-12-28 20:26:59,628][00626] Num frames 4800... |
|
[2024-12-28 20:26:59,750][00626] Num frames 4900... |
|
[2024-12-28 20:26:59,916][00626] Num frames 5000... |
|
[2024-12-28 20:27:00,050][00626] Num frames 5100... |
|
[2024-12-28 20:27:00,174][00626] Num frames 5200... |
|
[2024-12-28 20:27:00,300][00626] Num frames 5300... |
|
[2024-12-28 20:27:00,424][00626] Num frames 5400... |
|
[2024-12-28 20:27:00,576][00626] Num frames 5500... |
|
[2024-12-28 20:27:00,664][00626] Avg episode rewards: #0: 15.893, true rewards: #0: 7.893 |
|
[2024-12-28 20:27:00,666][00626] Avg episode reward: 15.893, avg true_objective: 7.893 |
|
[2024-12-28 20:27:00,759][00626] Num frames 5600... |
|
[2024-12-28 20:27:00,884][00626] Num frames 5700... |
|
[2024-12-28 20:27:01,015][00626] Num frames 5800... |
|
[2024-12-28 20:27:01,135][00626] Num frames 5900... |
|
[2024-12-28 20:27:01,253][00626] Num frames 6000... |
|
[2024-12-28 20:27:01,377][00626] Num frames 6100... |
|
[2024-12-28 20:27:01,500][00626] Num frames 6200... |
|
[2024-12-28 20:27:01,624][00626] Num frames 6300... |
|
[2024-12-28 20:27:01,744][00626] Num frames 6400... |
|
[2024-12-28 20:27:01,872][00626] Num frames 6500... |
|
[2024-12-28 20:27:01,999][00626] Num frames 6600... |
|
[2024-12-28 20:27:02,136][00626] Num frames 6700... |
|
[2024-12-28 20:27:02,285][00626] Num frames 6800... |
|
[2024-12-28 20:27:02,450][00626] Num frames 6900... |
|
[2024-12-28 20:27:02,615][00626] Num frames 7000... |
|
[2024-12-28 20:27:02,773][00626] Num frames 7100... |
|
[2024-12-28 20:27:02,943][00626] Num frames 7200... |
|
[2024-12-28 20:27:03,118][00626] Num frames 7300... |
|
[2024-12-28 20:27:03,259][00626] Avg episode rewards: #0: 18.936, true rewards: #0: 9.186 |
|
[2024-12-28 20:27:03,261][00626] Avg episode reward: 18.936, avg true_objective: 9.186 |
|
[2024-12-28 20:27:03,349][00626] Num frames 7400... |
|
[2024-12-28 20:27:03,511][00626] Num frames 7500... |
|
[2024-12-28 20:27:03,684][00626] Num frames 7600... |
|
[2024-12-28 20:27:03,875][00626] Num frames 7700... |
|
[2024-12-28 20:27:04,049][00626] Num frames 7800... |
|
[2024-12-28 20:27:04,229][00626] Num frames 7900... |
|
[2024-12-28 20:27:04,402][00626] Num frames 8000... |
|
[2024-12-28 20:27:04,571][00626] Num frames 8100... |
|
[2024-12-28 20:27:04,745][00626] Avg episode rewards: #0: 18.535, true rewards: #0: 9.090 |
|
[2024-12-28 20:27:04,747][00626] Avg episode reward: 18.535, avg true_objective: 9.090 |
|
[2024-12-28 20:27:04,770][00626] Num frames 8200... |
|
[2024-12-28 20:27:04,890][00626] Num frames 8300... |
|
[2024-12-28 20:27:05,021][00626] Num frames 8400... |
|
[2024-12-28 20:27:05,136][00626] Num frames 8500... |
|
[2024-12-28 20:27:05,287][00626] Num frames 8600... |
|
[2024-12-28 20:27:05,412][00626] Num frames 8700... |
|
[2024-12-28 20:27:05,528][00626] Num frames 8800... |
|
[2024-12-28 20:27:05,645][00626] Num frames 8900... |
|
[2024-12-28 20:27:05,760][00626] Num frames 9000... |
|
[2024-12-28 20:27:05,891][00626] Num frames 9100... |
|
[2024-12-28 20:27:06,033][00626] Avg episode rewards: #0: 18.673, true rewards: #0: 9.173 |
|
[2024-12-28 20:27:06,034][00626] Avg episode reward: 18.673, avg true_objective: 9.173 |
|
[2024-12-28 20:27:56,964][00626] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
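With `push_to_hub=True` and `hf_repository` set, this second run saves the replay video and the experiment directory is then uploaded to the named repo. Sample Factory wraps the upload in its own helper; a generic equivalent with `huggingface_hub` (assuming the repo already exists) would be:

```python
from huggingface_hub import upload_folder

# Generic equivalent of the push implied by push_to_hub=True (assumes the
# repo already exists; Sample Factory uses its own helper around the Hub API).
upload_folder(
    repo_id="csabazs/rl_course_vizdoom_health_gathering_supreme",
    folder_path="/content/train_dir/default_experiment",
)
```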
|